Incremental trading and backtester (work in progress); results not yet aligning
This commit is contained in:
parent ed6d668a8a
commit 8055f46328
67
.cursor/create-prd.mdc
Normal file
@ -0,0 +1,67 @@
---
description:
globs:
alwaysApply: false
---
# Rule: Generating a Product Requirements Document (PRD)

## Goal

To guide an AI assistant in creating a detailed Product Requirements Document (PRD) in Markdown format, based on an initial user prompt. The PRD should be clear, actionable, and suitable for a junior developer to understand and implement the feature.

## Process

1. **Receive Initial Prompt:** The user provides a brief description or request for a new feature or functionality.
2. **Ask Clarifying Questions:** Before writing the PRD, the AI *must* ask clarifying questions to gather sufficient detail. The goal is to understand the "what" and "why" of the feature, not necessarily the "how" (which the developer will figure out).
3. **Generate PRD:** Based on the initial prompt and the user's answers to the clarifying questions, generate a PRD using the structure outlined below.
4. **Save PRD:** Save the generated document as `prd-[feature-name].md` inside the `/tasks` directory.

## Clarifying Questions (Examples)

The AI should adapt its questions based on the prompt, but here are some common areas to explore:

* **Problem/Goal:** "What problem does this feature solve for the user?" or "What is the main goal we want to achieve with this feature?"
* **Target User:** "Who is the primary user of this feature?"
* **Core Functionality:** "Can you describe the key actions a user should be able to perform with this feature?"
* **User Stories:** "Could you provide a few user stories? (e.g., As a [type of user], I want to [perform an action] so that [benefit].)"
* **Acceptance Criteria:** "How will we know when this feature is successfully implemented? What are the key success criteria?"
* **Scope/Boundaries:** "Are there any specific things this feature *should not* do (non-goals)?"
* **Data Requirements:** "What kind of data does this feature need to display or manipulate?"
* **Design/UI:** "Are there any existing design mockups or UI guidelines to follow?" or "Can you describe the desired look and feel?"
* **Edge Cases:** "Are there any potential edge cases or error conditions we should consider?"

## PRD Structure

The generated PRD should include the following sections:

1. **Introduction/Overview:** Briefly describe the feature and the problem it solves. State the goal.
2. **Goals:** List the specific, measurable objectives for this feature.
3. **User Stories:** Detail the user narratives describing feature usage and benefits.
4. **Functional Requirements:** List the specific functionalities the feature must have. Use clear, concise language (e.g., "The system must allow users to upload a profile picture."). Number these requirements.
5. **Non-Goals (Out of Scope):** Clearly state what this feature will *not* include to manage scope.
6. **Design Considerations (Optional):** Link to mockups, describe UI/UX requirements, or mention relevant components/styles if applicable.
7. **Technical Considerations (Optional):** Mention any known technical constraints, dependencies, or suggestions (e.g., "Should integrate with the existing Auth module").
8. **Success Metrics:** How will the success of this feature be measured? (e.g., "Increase user engagement by 10%", "Reduce support tickets related to X").
9. **Open Questions:** List any remaining questions or areas needing further clarification.

## Target Audience

Assume the primary reader of the PRD is a **junior developer**. Therefore, requirements should be explicit, unambiguous, and avoid jargon where possible. Provide enough detail for them to understand the feature's purpose and core logic.

## Output

* **Format:** Markdown (`.md`)
* **Location:** `/tasks/`
* **Filename:** `prd-[feature-name].md`

## Final instructions

1. Do NOT start implementing the PRD.
2. Make sure to ask the user clarifying questions.
3. Take the user's answers to the clarifying questions and improve the PRD.
70
.cursor/generate-tskd.mdc
Normal file
@ -0,0 +1,70 @@
---
description:
globs:
alwaysApply: false
---
# Rule: Generating a Task List from a PRD

## Goal

To guide an AI assistant in creating a detailed, step-by-step task list in Markdown format based on an existing Product Requirements Document (PRD). The task list should guide a developer through implementation.

## Output

- **Format:** Markdown (`.md`)
- **Location:** `/tasks/`
- **Filename:** `tasks-[prd-file-name].md` (e.g., `tasks-prd-user-profile-editing.md`)

## Process

1. **Receive PRD Reference:** The user points the AI to a specific PRD file.
2. **Analyze PRD:** The AI reads and analyzes the functional requirements, user stories, and other sections of the specified PRD.
3. **Phase 1: Generate Parent Tasks:** Based on the PRD analysis, create the file and generate the main, high-level tasks required to implement the feature. Use your judgement on how many high-level tasks to use. It's likely to be about 5. Present these tasks to the user in the specified format (without sub-tasks yet). Inform the user: "I have generated the high-level tasks based on the PRD. Ready to generate the sub-tasks? Respond with 'Go' to proceed."
4. **Wait for Confirmation:** Pause and wait for the user to respond with "Go".
5. **Phase 2: Generate Sub-Tasks:** Once the user confirms, break down each parent task into smaller, actionable sub-tasks necessary to complete the parent task. Ensure sub-tasks logically follow from the parent task and cover the implementation details implied by the PRD.
6. **Identify Relevant Files:** Based on the tasks and PRD, identify potential files that will need to be created or modified. List these under the `Relevant Files` section, including corresponding test files if applicable.
7. **Generate Final Output:** Combine the parent tasks, sub-tasks, relevant files, and notes into the final Markdown structure.
8. **Save Task List:** Save the generated document in the `/tasks/` directory with the filename `tasks-[prd-file-name].md`, where `[prd-file-name]` matches the base name of the input PRD file (e.g., if the input was `prd-user-profile-editing.md`, the output is `tasks-prd-user-profile-editing.md`).

## Output Format

The generated task list _must_ follow this structure:

```markdown
## Relevant Files

- `path/to/potential/file1.ts` - Brief description of why this file is relevant (e.g., Contains the main component for this feature).
- `path/to/file1.test.ts` - Unit tests for `file1.ts`.
- `path/to/another/file.tsx` - Brief description (e.g., API route handler for data submission).
- `path/to/another/file.test.tsx` - Unit tests for `another/file.tsx`.
- `lib/utils/helpers.ts` - Brief description (e.g., Utility functions needed for calculations).
- `lib/utils/helpers.test.ts` - Unit tests for `helpers.ts`.

### Notes

- Unit tests should typically be placed alongside the code files they are testing (e.g., `MyComponent.tsx` and `MyComponent.test.tsx` in the same directory).
- Use `npx jest [optional/path/to/test/file]` to run tests. Running without a path executes all tests found by the Jest configuration.

## Tasks

- [ ] 1.0 Parent Task Title
  - [ ] 1.1 [Sub-task description 1.1]
  - [ ] 1.2 [Sub-task description 1.2]
- [ ] 2.0 Parent Task Title
  - [ ] 2.1 [Sub-task description 2.1]
- [ ] 3.0 Parent Task Title (may not require sub-tasks if purely structural or configuration)
```

## Interaction Model

The process explicitly requires a pause after generating parent tasks to get user confirmation ("Go") before proceeding to generate the detailed sub-tasks. This ensures the high-level plan aligns with user expectations before diving into details.

## Target Audience

Assume the primary reader of the task list is a **junior developer** who will implement the feature.
44
.cursor/task-list.mdc
Normal file
@ -0,0 +1,44 @@
---
description:
globs:
alwaysApply: false
---
# Task List Management

Guidelines for managing task lists in markdown files to track progress on completing a PRD.

## Task Implementation

- **One sub-task at a time:** Do **NOT** start the next sub-task until you ask the user for permission and they say "yes" or "y".
- **Completion protocol:**
  1. When you finish a **sub-task**, immediately mark it as completed by changing `[ ]` to `[x]`.
  2. If **all** subtasks underneath a parent task are now `[x]`, also mark the **parent task** as completed.
- Stop after each sub-task and wait for the user's go-ahead.
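For example, once sub-tasks 1.1 and 1.2 are both finished and approved, the list should read:

```markdown
- [x] 1.0 Parent Task Title
  - [x] 1.1 [Sub-task description 1.1]
  - [x] 1.2 [Sub-task description 1.2]
- [ ] 2.0 Parent Task Title
  - [ ] 2.1 [Sub-task description 2.1]
```

Note that 1.0 is marked `[x]` only because every sub-task beneath it is `[x]`.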
## Task List Maintenance

1. **Update the task list as you work:**
   - Mark tasks and subtasks as completed (`[x]`) per the protocol above.
   - Add new tasks as they emerge.

2. **Maintain the "Relevant Files" section:**
   - List every file created or modified.
   - Give each file a one-line description of its purpose.

## AI Instructions

When working with task lists, the AI must:

1. Regularly update the task list file after finishing any significant work.
2. Follow the completion protocol:
   - Mark each finished **sub-task** `[x]`.
   - Mark the **parent task** `[x]` once **all** its subtasks are `[x]`.
3. Add newly discovered tasks.
4. Keep "Relevant Files" accurate and up to date.
5. Before starting work, check which sub-task is next.
6. After implementing a sub-task, update the file and then pause for user approval.
7
.gitignore
vendored
@ -174,7 +174,8 @@ An introduction to trading cycles.pdf
An introduction to trading cycles.txt
README.md
.vscode/launch.json
data/btcusd_1-day_data.csv
data/btcusd_1-min_data.csv
data/*

frontend/
frontend/
results/*
test/results/*
||||
@ -1,6 +1,6 @@
{
    "start_date": "2024-01-01",
    "stop_date": null,
    "start_date": "2025-01-01",
    "stop_date": "2025-05-01",
    "initial_usd": 10000,
    "timeframes": ["15min"],
    "strategies": [
460
cycles/IncStrategies/README_BACKTESTER.md
Normal file
@ -0,0 +1,460 @@
# Incremental Backtester

A high-performance backtesting system for incremental trading strategies with multiprocessing support for parameter optimization.

## Overview

The Incremental Backtester provides a complete solution for testing incremental trading strategies:

- **IncTrader**: Manages a single strategy during backtesting
- **IncBacktester**: Orchestrates multiple traders and parameter optimization
- **Multiprocessing Support**: Parallel execution across CPU cores
- **Memory Efficient**: Bounded memory usage regardless of data length
- **Real-time Compatible**: Same interface as live trading systems

## Quick Start

### 1. Basic Single Strategy Backtest

```python
from cycles.IncStrategies import (
    IncBacktester, BacktestConfig, IncRandomStrategy
)

# Configure backtest
config = BacktestConfig(
    data_file="btc_1min_2023.csv",
    start_date="2023-01-01",
    end_date="2023-12-31",
    initial_usd=10000,
    stop_loss_pct=0.02,   # 2% stop loss
    take_profit_pct=0.05  # 5% take profit
)

# Create strategy
strategy = IncRandomStrategy(params={
    "timeframe": "15min",
    "entry_probability": 0.1,
    "exit_probability": 0.15
})

# Run backtest
backtester = IncBacktester(config)
results = backtester.run_single_strategy(strategy)

print(f"Profit: {results['profit_ratio']*100:.2f}%")
print(f"Trades: {results['n_trades']}")
print(f"Win Rate: {results['win_rate']*100:.1f}%")
```

### 2. Multiple Strategies

```python
strategies = [
    IncRandomStrategy(params={"timeframe": "15min"}),
    IncRandomStrategy(params={"timeframe": "30min"}),
    IncMetaTrendStrategy(params={"timeframe": "15min"})
]

results = backtester.run_multiple_strategies(strategies)

for result in results:
    print(f"{result['strategy_name']}: {result['profit_ratio']*100:.2f}%")
```

### 3. Parameter Optimization

```python
# Define parameter grids
strategy_param_grid = {
    "timeframe": ["15min", "30min", "1h"],
    "entry_probability": [0.05, 0.1, 0.15],
    "exit_probability": [0.1, 0.15, 0.2]
}

trader_param_grid = {
    "stop_loss_pct": [0.01, 0.02, 0.03],
    "take_profit_pct": [0.03, 0.05, 0.07]
}

# Run optimization (uses all CPU cores)
results = backtester.optimize_parameters(
    strategy_class=IncRandomStrategy,
    param_grid=strategy_param_grid,
    trader_param_grid=trader_param_grid,
    max_workers=8  # Use 8 CPU cores
)

# Get summary statistics
summary = backtester.get_summary_statistics(results)
print(f"Best profit: {summary['profit_ratio']['max']*100:.2f}%")

# Save results
backtester.save_results(results, "optimization_results.csv")
```
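Grid search cost grows multiplicatively: the grids above define 3 × 3 × 3 = 27 strategy combinations and 3 × 3 = 9 trader combinations, i.e. 243 runs in total. A hedged sketch, independent of the backtester API, for enumerating and counting the combinations before launching a long optimization:

```python
from itertools import product

# Grids mirroring the example above
strategy_param_grid = {
    "timeframe": ["15min", "30min", "1h"],
    "entry_probability": [0.05, 0.1, 0.15],
    "exit_probability": [0.1, 0.15, 0.2],
}
trader_param_grid = {
    "stop_loss_pct": [0.01, 0.02, 0.03],
    "take_profit_pct": [0.03, 0.05, 0.07],
}

def grid_combinations(grid):
    """Expand a param-grid dict into a list of individual param dicts."""
    keys = list(grid)
    return [dict(zip(keys, values)) for values in product(*(grid[k] for k in keys))]

strategy_combos = grid_combinations(strategy_param_grid)
trader_combos = grid_combinations(trader_param_grid)
print(len(strategy_combos) * len(trader_combos))  # 27 * 9 = 243 runs
```

Counting first makes it easy to estimate wall-clock time from a single run's duration before committing all cores to the full grid.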

## Architecture

### IncTrader Class

Manages a single strategy during backtesting:

```python
trader = IncTrader(
    strategy=strategy,
    initial_usd=10000,
    params={
        "stop_loss_pct": 0.02,
        "take_profit_pct": 0.05
    }
)

# Process data sequentially
for timestamp, ohlcv_data in data_stream:
    trader.process_data_point(timestamp, ohlcv_data)

# Get results
results = trader.get_results()
```

**Key Features:**

- Position management (USD/coin balance)
- Trade execution based on strategy signals
- Stop loss and take profit handling
- Performance tracking and metrics
- Fee calculation using existing MarketFees
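The stop-loss/take-profit handling can be illustrated with a minimal sketch. This is not the actual IncTrader logic, just the core threshold check, with parameter names mirroring the config above:

```python
def check_exit(entry_price: float, current_price: float,
               stop_loss_pct: float = 0.02, take_profit_pct: float = 0.05):
    """Return an exit reason if price crossed either threshold, else None."""
    change = (current_price - entry_price) / entry_price
    if change <= -stop_loss_pct:
        return "stop_loss"
    if change >= take_profit_pct:
        return "take_profit"
    return None

print(check_exit(100.0, 97.9))   # -2.1% move, returns 'stop_loss'
print(check_exit(100.0, 106.0))  # +6% move, returns 'take_profit'
print(check_exit(100.0, 101.0))  # within both bounds, returns None
```

In a real trader this check would run on every data point while a position is open, typically against the bar's high/low rather than the close alone.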

### IncBacktester Class

Orchestrates multiple traders and handles data loading:

```python
backtester = IncBacktester(config, storage)

# Single strategy
results = backtester.run_single_strategy(strategy)

# Multiple strategies
results = backtester.run_multiple_strategies(strategies)

# Parameter optimization
results = backtester.optimize_parameters(strategy_class, param_grid)
```

**Key Features:**

- Data loading using existing Storage class
- Multiprocessing for parameter optimization
- Result aggregation and analysis
- Summary statistics calculation
- CSV export functionality

### BacktestConfig Class

Configuration for backtesting runs:

```python
config = BacktestConfig(
    data_file="btc_1min_2023.csv",
    start_date="2023-01-01",
    end_date="2023-12-31",
    initial_usd=10000,
    timeframe="1min",

    # Trader parameters
    stop_loss_pct=0.02,
    take_profit_pct=0.05,

    # Performance settings
    max_workers=None,  # Auto-detect CPU cores
    chunk_size=1000
)
```

## Data Requirements

### Input Data Format

The backtester expects minute-level OHLCV data in CSV format:

```csv
timestamp,open,high,low,close,volume
1672531200,16625.1,16634.5,16620.0,16628.3,125.45
1672531260,16628.3,16635.2,16625.8,16631.7,98.32
...
```

**Requirements:**

- Timestamp column (Unix timestamp or datetime)
- OHLCV columns: open, high, low, close, volume
- Minute-level frequency (strategies handle timeframe aggregation)
- Sorted by timestamp (ascending)
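Before a long run it can be worth checking these requirements up front. A small validation sketch (the function name is illustrative; it accepts any iterable of CSV lines, so it works on files and in-memory data alike):

```python
import csv

REQUIRED_COLUMNS = ["timestamp", "open", "high", "low", "close", "volume"]

def validate_ohlcv(lines):
    """Raise ValueError if required columns are missing or timestamps are not ascending."""
    reader = csv.DictReader(lines)
    missing = [c for c in REQUIRED_COLUMNS if c not in (reader.fieldnames or [])]
    if missing:
        raise ValueError(f"missing columns: {missing}")
    prev_ts = float("-inf")
    for row in reader:
        ts = float(row["timestamp"])
        if ts < prev_ts:
            raise ValueError(f"timestamps not sorted at {ts}")
        prev_ts = ts

validate_ohlcv([
    "timestamp,open,high,low,close,volume",
    "1672531200,16625.1,16634.5,16620.0,16628.3,125.45",
    "1672531260,16628.3,16635.2,16625.8,16631.7,98.32",
])  # passes silently on well-formed data
```

Gap detection (missing minutes) is a natural extension, but what counts as a gap depends on the exchange's trading hours, so it is left out here.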

### Data Loading

Uses the existing Storage class for data loading:

```python
from cycles.utils.storage import Storage

storage = Storage()
data = storage.load_data(
    "btc_1min_2023.csv",
    "2023-01-01",
    "2023-12-31"
)
```

## Performance Features

### Multiprocessing Support

Parameter optimization automatically distributes work across CPU cores:

```python
# Automatic CPU detection
results = backtester.optimize_parameters(strategy_class, param_grid)

# Manual worker count
results = backtester.optimize_parameters(
    strategy_class, param_grid, max_workers=4
)

# Single-threaded (for debugging)
results = backtester.optimize_parameters(
    strategy_class, param_grid, max_workers=1
)
```

### Memory Efficiency

- **Bounded Memory**: Strategy buffers have fixed size limits
- **Incremental Processing**: No need to load entire datasets into memory
- **Efficient Data Structures**: Optimized for sequential processing
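The bounded-memory claim comes down to fixed-size rolling buffers: a strategy keeps only the most recent N values, so memory stays constant no matter how long the backtest runs. A sketch of the idea (not the actual buffer class) using `collections.deque`:

```python
from collections import deque

class RollingBuffer:
    """Keeps only the most recent `size` closes, so memory stays constant."""

    def __init__(self, size: int = 500):
        self.closes = deque(maxlen=size)

    def push(self, close: float) -> None:
        # deque with maxlen evicts the oldest value automatically
        self.closes.append(close)

buf = RollingBuffer(size=3)
for price in [100.0, 101.0, 102.0, 103.0]:
    buf.push(price)
print(list(buf.closes))  # [101.0, 102.0, 103.0], oldest value evicted
```

The buffer size only needs to cover the longest indicator lookback (plus warmup), which is why memory is bounded regardless of data length.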

### Performance Monitoring

Built-in performance tracking:

```python
results = backtester.run_single_strategy(strategy)

print(f"Backtest duration: {results['backtest_duration_seconds']:.2f}s")
print(f"Data points processed: {results['data_points_processed']}")
print(f"Processing rate: {results['data_points_processed']/results['backtest_duration_seconds']:.0f} points/sec")
```

## Result Analysis

### Individual Results

Each backtest returns comprehensive metrics:

```python
{
    "strategy_name": "IncRandomStrategy",
    "strategy_params": {"timeframe": "15min", ...},
    "trader_params": {"stop_loss_pct": 0.02, ...},
    "initial_usd": 10000.0,
    "final_usd": 10250.0,
    "profit_ratio": 0.025,
    "n_trades": 15,
    "win_rate": 0.6,
    "max_drawdown": 0.08,
    "avg_trade": 0.0167,
    "total_fees_usd": 45.32,
    "trades": [...],  # Individual trade records
    "backtest_duration_seconds": 2.45
}
```
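Several of these metrics can be reproduced from an equity curve and the per-trade profits, which is useful when validating the backtester's output. A hedged sketch (the real implementation may differ in details such as fee treatment):

```python
def summarize(initial_usd, equity_curve, trade_profits):
    """Compute profit ratio, win rate, and max drawdown from raw results."""
    final_usd = equity_curve[-1]
    profit_ratio = final_usd / initial_usd - 1
    win_rate = sum(p > 0 for p in trade_profits) / len(trade_profits)
    # Max drawdown: largest relative drop from a running peak
    peak, max_dd = float("-inf"), 0.0
    for value in equity_curve:
        peak = max(peak, value)
        max_dd = max(max_dd, (peak - value) / peak)
    return {"profit_ratio": profit_ratio, "win_rate": win_rate, "max_drawdown": max_dd}

print(summarize(10000, [10000, 10400, 9880, 10250], [400, -520, 370]))
```

Recomputing the metrics this way from the `trades` list is a quick consistency check when results from the incremental and original backtesters disagree.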

### Summary Statistics

For parameter optimization runs:

```python
summary = backtester.get_summary_statistics(results)

{
    "total_runs": 108,
    "successful_runs": 105,
    "failed_runs": 3,
    "profit_ratio": {
        "mean": 0.023,
        "std": 0.045,
        "min": -0.12,
        "max": 0.18,
        "median": 0.019
    },
    "best_run": {...},
    "worst_run": {...}
}
```

### Export Results

Save results to CSV for further analysis:

```python
backtester.save_results(results, "backtest_results.csv")
```

Output includes:

- Strategy and trader parameters
- Performance metrics
- Trade statistics
- Execution timing

## Integration with Existing System

### Compatibility

The incremental backtester integrates seamlessly with existing components:

- **Storage Class**: Uses existing data loading infrastructure
- **MarketFees**: Uses existing fee calculation
- **Strategy Interface**: Compatible with incremental strategies
- **Result Format**: Similar to existing Backtest class

### Migration from Original Backtester

```python
# Original backtester
from cycles.backtest import Backtest

# Incremental backtester
from cycles.IncStrategies import IncBacktester, BacktestConfig

# Similar interface, enhanced capabilities
config = BacktestConfig(...)
backtester = IncBacktester(config)
results = backtester.run_single_strategy(strategy)
```

## Testing

### Synthetic Data Testing

Test with synthetic data before using real market data:

```python
from cycles.IncStrategies.test_inc_backtester import main

# Run all tests
main()
```

### Unit Tests

Individual component testing:

```python
# Test IncTrader
from cycles.IncStrategies.test_inc_backtester import test_inc_trader
test_inc_trader()

# Test IncBacktester
from cycles.IncStrategies.test_inc_backtester import test_inc_backtester
test_inc_backtester()
```

## Examples

See `example_backtest.py` for comprehensive usage examples:

```python
from cycles.IncStrategies.example_backtest import (
    example_single_strategy_backtest,
    example_parameter_optimization,
    example_custom_analysis
)

# Run examples
example_single_strategy_backtest()
example_parameter_optimization()
```

## Best Practices

### 1. Data Preparation

- Ensure data quality (no gaps, correct format)
- Use appropriate date ranges for testing
- Consider market conditions in test periods

### 2. Parameter Optimization

- Start with small parameter grids for testing
- Use representative time periods
- Consider overfitting risks
- Validate results on out-of-sample data

### 3. Performance Optimization

- Use multiprocessing for large parameter grids
- Monitor memory usage for long backtests
- Profile bottlenecks for optimization

### 4. Result Validation

- Compare with original backtester for validation
- Check trade logic manually for small samples
- Verify fee calculations and position management

## Troubleshooting

### Common Issues

1. **Data Loading Errors**
   - Check file path and format
   - Verify date range availability
   - Ensure required columns exist

2. **Strategy Errors**
   - Check strategy initialization
   - Verify parameter validity
   - Monitor warmup period completion

3. **Performance Issues**
   - Reduce parameter grid size
   - Limit worker count for memory constraints
   - Use shorter time periods for testing

### Debug Mode

Enable detailed logging:

```python
import logging
logging.basicConfig(level=logging.DEBUG)

# Run with detailed output
results = backtester.run_single_strategy(strategy)
```

### Memory Monitoring

Monitor memory usage during optimization:

```python
import psutil
import os

process = psutil.Process(os.getpid())
print(f"Memory usage: {process.memory_info().rss / 1024 / 1024:.1f} MB")
```

## Future Enhancements

- **Live Trading Integration**: Direct connection to trading systems
- **Advanced Analytics**: Risk metrics, Sharpe ratio, etc.
- **Visualization**: Built-in plotting and analysis tools
- **Database Support**: Direct database connectivity
- **Strategy Combinations**: Multi-strategy portfolio testing

## Support

For issues and questions:

1. Check the test scripts for working examples
2. Review the TODO.md for known limitations
3. Examine the base strategy implementations
4. Use debug logging for detailed troubleshooting
@ -17,11 +17,17 @@ Classes:
    IncDefaultStrategy: Incremental implementation of the default Supertrend strategy
    IncBBRSStrategy: Incremental implementation of Bollinger Bands + RSI strategy
    IncStrategyManager: Manager for coordinating multiple incremental strategies

    IncTrader: Trader that manages a single strategy during backtesting
    IncBacktester: Backtester for testing incremental strategies with multiprocessing
    BacktestConfig: Configuration class for backtesting runs
"""

from .base import IncStrategyBase, IncStrategySignal
from .random_strategy import IncRandomStrategy
from .metatrend_strategy import IncMetaTrendStrategy, MetaTrendStrategy
from .inc_trader import IncTrader, TradeRecord
from .inc_backtester import IncBacktester, BacktestConfig

# Note: These will be implemented in subsequent phases
# from .default_strategy import IncDefaultStrategy
@ -38,12 +44,25 @@ AVAILABLE_STRATEGIES = {
}

__all__ = [
    # Base classes
    'IncStrategyBase',
    'IncStrategySignal',

    # Strategies
    'IncRandomStrategy',
    'IncMetaTrendStrategy',
    'MetaTrendStrategy',

    # Backtesting components
    'IncTrader',
    'IncBacktester',
    'BacktestConfig',
    'TradeRecord',

    # Registry
    'AVAILABLE_STRATEGIES'

    # Future implementations
    # 'IncDefaultStrategy',
    # 'IncBBRSStrategy',
    # 'IncStrategyManager'
@ -86,8 +86,24 @@ class TimeframeAggregator:
        return None  # No completed bar yet

    def _get_bar_start_time(self, timestamp: pd.Timestamp) -> pd.Timestamp:
        """Calculate the start time of the timeframe bar for given timestamp."""
        # Round down to the nearest timeframe boundary
        """Calculate the start time of the timeframe bar for given timestamp.

        This method now aligns with pandas resampling to ensure consistency
        with the original strategy's bar boundaries.
        """
        # Use pandas-style resampling alignment
        # This ensures bars align to standard boundaries (e.g., 00:00, 00:15, 00:30, 00:45)
        freq_str = f'{self.timeframe_minutes}min'

        # Create a temporary series with the timestamp and resample to get the bar start
        temp_series = pd.Series([1], index=[timestamp])
        resampled = temp_series.resample(freq_str)

        # Get the first group's name (which is the bar start time)
        for bar_start, _ in resampled:
            return bar_start

        # Fallback to original method if resampling fails
        minutes_since_midnight = timestamp.hour * 60 + timestamp.minute
        bar_minutes = (minutes_since_midnight // self.timeframe_minutes) * self.timeframe_minutes
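Assuming standard pandas resample alignment (bars anchored to midnight), the boundary computation above is equivalent to flooring the timestamp to the nearest multiple of the timeframe since midnight. A plain-datetime sketch of that arithmetic, illustrative only:

```python
from datetime import datetime

def bar_start(ts: datetime, timeframe_minutes: int) -> datetime:
    """Floor a timestamp to its timeframe bar boundary (e.g. 00:00, 00:15, 00:30, ...)."""
    minutes_since_midnight = ts.hour * 60 + ts.minute
    bar_minutes = (minutes_since_midnight // timeframe_minutes) * timeframe_minutes
    return ts.replace(hour=bar_minutes // 60, minute=bar_minutes % 60,
                      second=0, microsecond=0)

print(bar_start(datetime(2023, 1, 1, 10, 37, 12), 15))  # 2023-01-01 10:30:00
```

Comparing this arithmetic against the pandas-resample path on a sample of timestamps is a cheap way to verify the two boundary definitions agree for a given timeframe.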
447
cycles/IncStrategies/example_backtest.py
Normal file
@ -0,0 +1,447 @@
|
||||
"""
|
||||
Example usage of the Incremental Backtester.
|
||||
|
||||
This script demonstrates how to use the IncBacktester for various scenarios:
|
||||
1. Single strategy backtesting
|
||||
2. Multiple strategy comparison
|
||||
3. Parameter optimization with multiprocessing
|
||||
4. Custom analysis and result saving
|
||||
5. Comprehensive result logging and action tracking
|
||||
|
||||
Run this script to see the backtester in action with real or synthetic data.
|
||||
"""
|
||||
|
||||
import pandas as pd
|
||||
import numpy as np
|
||||
import logging
|
||||
from datetime import datetime, timedelta
|
||||
import os
|
||||
|
||||
from cycles.IncStrategies import (
|
||||
IncBacktester, BacktestConfig, IncRandomStrategy
|
||||
)
|
||||
from cycles.utils.storage import Storage
|
||||
|
||||
# Configure logging
|
||||
logging.basicConfig(
|
||||
level=logging.INFO,
|
||||
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
|
||||
)
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def ensure_results_directory():
|
||||
"""Ensure the results directory exists."""
|
||||
results_dir = "results"
|
||||
if not os.path.exists(results_dir):
|
||||
os.makedirs(results_dir)
|
||||
logger.info(f"Created results directory: {results_dir}")
|
||||
return results_dir
|
||||
|
||||
|
||||
def create_sample_data(days: int = 30) -> pd.DataFrame:
|
||||
"""
|
||||
Create sample OHLCV data for demonstration.
|
||||
|
||||
Args:
|
||||
days: Number of days of data to generate
|
||||
|
||||
Returns:
|
||||
pd.DataFrame: Sample OHLCV data
|
||||
"""
|
||||
# Create date range
|
||||
end_date = datetime.now()
|
||||
start_date = end_date - timedelta(days=days)
|
||||
timestamps = pd.date_range(start=start_date, end=end_date, freq='1min')
|
||||
|
||||
# Generate realistic price data
|
||||
np.random.seed(42)
|
||||
n_points = len(timestamps)
|
||||
|
||||
# Start with a base price
|
||||
base_price = 45000
|
||||
|
||||
# Generate price movements with trend and volatility
|
||||
trend = np.linspace(0, 0.1, n_points) # Slight upward trend
|
||||
volatility = np.random.normal(0, 0.002, n_points) # 0.2% volatility
|
||||
|
||||
# Calculate prices
|
||||
log_returns = trend + volatility
|
||||
prices = base_price * np.exp(np.cumsum(log_returns))
|
||||
|
||||
# Generate OHLCV data
|
||||
data = []
|
||||
for i, (timestamp, close_price) in enumerate(zip(timestamps, prices)):
|
||||
# Generate realistic OHLC
|
||||
intrabar_vol = close_price * 0.001
|
||||
|
||||
open_price = close_price + np.random.normal(0, intrabar_vol)
|
||||
high_price = max(open_price, close_price) + abs(np.random.normal(0, intrabar_vol))
|
||||
low_price = min(open_price, close_price) - abs(np.random.normal(0, intrabar_vol))
|
||||
volume = np.random.uniform(50, 500)
|
||||
|
||||
data.append({
|
||||
'open': open_price,
|
||||
'high': high_price,
|
||||
'low': low_price,
|
||||
'close': close_price,
|
||||
'volume': volume
|
||||
})
|
||||
|
||||
df = pd.DataFrame(data, index=timestamps)
|
||||
return df
|
||||
|
||||
|
||||
def example_single_strategy():
    """Example 1: Single strategy backtesting with comprehensive results."""
    print("\n" + "="*60)
    print("EXAMPLE 1: Single Strategy Backtesting")
    print("="*60)

    # Create sample data
    data = create_sample_data(days=7)  # 1 week of data

    # Save data
    storage = Storage()
    data_file = "sample_data_single.csv"
    storage.save_data(data, data_file)

    # Configure backtest
    config = BacktestConfig(
        data_file=data_file,
        start_date=data.index[0].strftime("%Y-%m-%d"),
        end_date=data.index[-1].strftime("%Y-%m-%d"),
        initial_usd=10000,
        stop_loss_pct=0.02,
        take_profit_pct=0.05
    )

    # Create strategy
    strategy = IncRandomStrategy(params={
        "timeframe": "15min",
        "entry_probability": 0.15,
        "exit_probability": 0.2,
        "random_seed": 42
    })

    # Run backtest
    backtester = IncBacktester(config, storage)
    results = backtester.run_single_strategy(strategy)

    # Print results
    print("\nResults:")
    print(f"  Strategy: {results['strategy_name']}")
    print(f"  Profit: {results['profit_ratio']*100:.2f}%")
    print(f"  Final Balance: ${results['final_usd']:,.2f}")
    print(f"  Trades: {results['n_trades']}")
    print(f"  Win Rate: {results['win_rate']*100:.1f}%")
    print(f"  Max Drawdown: {results['max_drawdown']*100:.2f}%")

    # Save comprehensive results
    backtester.save_comprehensive_results([results], "example_single_strategy")

    # Cleanup
    if os.path.exists(f"data/{data_file}"):
        os.remove(f"data/{data_file}")

    return results

def example_multiple_strategies():
    """Example 2: Multiple strategy comparison with comprehensive results."""
    print("\n" + "="*60)
    print("EXAMPLE 2: Multiple Strategy Comparison")
    print("="*60)

    # Create sample data
    data = create_sample_data(days=10)  # 10 days of data

    # Save data
    storage = Storage()
    data_file = "sample_data_multiple.csv"
    storage.save_data(data, data_file)

    # Configure backtest
    config = BacktestConfig(
        data_file=data_file,
        start_date=data.index[0].strftime("%Y-%m-%d"),
        end_date=data.index[-1].strftime("%Y-%m-%d"),
        initial_usd=10000,
        stop_loss_pct=0.015
    )

    # Create multiple strategies with different parameters
    strategies = [
        IncRandomStrategy(params={
            "timeframe": "5min",
            "entry_probability": 0.1,
            "exit_probability": 0.15,
            "random_seed": 42
        }),
        IncRandomStrategy(params={
            "timeframe": "15min",
            "entry_probability": 0.12,
            "exit_probability": 0.18,
            "random_seed": 123
        }),
        IncRandomStrategy(params={
            "timeframe": "30min",
            "entry_probability": 0.08,
            "exit_probability": 0.12,
            "random_seed": 456
        }),
        IncRandomStrategy(params={
            "timeframe": "1h",
            "entry_probability": 0.06,
            "exit_probability": 0.1,
            "random_seed": 789
        })
    ]

    # Run backtest
    backtester = IncBacktester(config, storage)
    results = backtester.run_multiple_strategies(strategies)

    # Print comparison
    print("\nStrategy Comparison:")
    print(f"{'Strategy':<20} {'Timeframe':<10} {'Profit %':<10} {'Trades':<8} {'Win Rate %':<12}")
    print("-" * 70)

    for i, result in enumerate(results):
        if result.get("success", True):
            timeframe = result['strategy_params']['timeframe']
            profit = result['profit_ratio'] * 100
            trades = result['n_trades']
            win_rate = result['win_rate'] * 100
            print(f"Strategy {i+1:<13} {timeframe:<10} {profit:<10.2f} {trades:<8} {win_rate:<12.1f}")

    # Get summary statistics
    summary = backtester.get_summary_statistics(results)
    print("\nSummary Statistics:")
    print(f"  Best Profit: {summary['profit_ratio']['max']*100:.2f}%")
    print(f"  Worst Profit: {summary['profit_ratio']['min']*100:.2f}%")
    print(f"  Average Profit: {summary['profit_ratio']['mean']*100:.2f}%")
    print(f"  Profit Std Dev: {summary['profit_ratio']['std']*100:.2f}%")

    # Save comprehensive results
    backtester.save_comprehensive_results(results, "example_multiple_strategies", summary)

    # Cleanup
    if os.path.exists(f"data/{data_file}"):
        os.remove(f"data/{data_file}")

    return results, summary

def example_parameter_optimization():
    """Example 3: Parameter optimization with multiprocessing and comprehensive results."""
    print("\n" + "="*60)
    print("EXAMPLE 3: Parameter Optimization")
    print("="*60)

    # Create sample data
    data = create_sample_data(days=5)  # 5 days for faster optimization

    # Save data
    storage = Storage()
    data_file = "sample_data_optimization.csv"
    storage.save_data(data, data_file)

    # Configure backtest
    config = BacktestConfig(
        data_file=data_file,
        start_date=data.index[0].strftime("%Y-%m-%d"),
        end_date=data.index[-1].strftime("%Y-%m-%d"),
        initial_usd=10000
    )

    # Define parameter grids
    strategy_param_grid = {
        "timeframe": ["5min", "15min", "30min"],
        "entry_probability": [0.08, 0.12, 0.16],
        "exit_probability": [0.1, 0.15, 0.2],
        "random_seed": [42]  # Keep seed constant for fair comparison
    }

    trader_param_grid = {
        "stop_loss_pct": [0.01, 0.015, 0.02],
        "take_profit_pct": [0.0, 0.03, 0.05]
    }

    # Run optimization (uses SystemUtils to determine the optimal worker count)
    backtester = IncBacktester(config, storage)

    n_combinations = 1
    for values in (*strategy_param_grid.values(), *trader_param_grid.values()):
        n_combinations *= len(values)
    print(f"Starting optimization with {n_combinations} combinations...")

    results = backtester.optimize_parameters(
        strategy_class=IncRandomStrategy,
        param_grid=strategy_param_grid,
        trader_param_grid=trader_param_grid,
        max_workers=None  # Let SystemUtils pick the worker count
    )

    # Get summary
    summary = backtester.get_summary_statistics(results)

    # Print optimization results
    print("\nOptimization Results:")
    print(f"  Total Combinations: {summary['total_runs']}")
    print(f"  Successful Runs: {summary['successful_runs']}")
    print(f"  Failed Runs: {summary['failed_runs']}")

    if summary['successful_runs'] > 0:
        print(f"  Best Profit: {summary['profit_ratio']['max']*100:.2f}%")
        print(f"  Worst Profit: {summary['profit_ratio']['min']*100:.2f}%")
        print(f"  Average Profit: {summary['profit_ratio']['mean']*100:.2f}%")

        # Show top 3 configurations
        valid_results = [r for r in results if r.get("success", True)]
        valid_results.sort(key=lambda x: x["profit_ratio"], reverse=True)

        print("\nTop 3 Configurations:")
        for i, result in enumerate(valid_results[:3]):
            print(f"  {i+1}. Profit: {result['profit_ratio']*100:.2f}% | "
                  f"Timeframe: {result['strategy_params']['timeframe']} | "
                  f"Entry Prob: {result['strategy_params']['entry_probability']} | "
                  f"Stop Loss: {result['trader_params']['stop_loss_pct']*100:.1f}%")

    # Save comprehensive results
    backtester.save_comprehensive_results(results, "example_parameter_optimization", summary)

    # Cleanup
    if os.path.exists(f"data/{data_file}"):
        os.remove(f"data/{data_file}")

    return results, summary

def example_custom_analysis():
    """Example 4: Custom analysis with detailed result examination."""
    print("\n" + "="*60)
    print("EXAMPLE 4: Custom Analysis")
    print("="*60)

    # Create sample data with more volatility for interesting results
    data = create_sample_data(days=14)  # 2 weeks

    # Save data
    storage = Storage()
    data_file = "sample_data_analysis.csv"
    storage.save_data(data, data_file)

    # Configure backtest
    config = BacktestConfig(
        data_file=data_file,
        start_date=data.index[0].strftime("%Y-%m-%d"),
        end_date=data.index[-1].strftime("%Y-%m-%d"),
        initial_usd=25000,  # Larger starting capital
        stop_loss_pct=0.025,
        take_profit_pct=0.04
    )

    # Create strategy with specific parameters for analysis
    strategy = IncRandomStrategy(params={
        "timeframe": "30min",
        "entry_probability": 0.1,
        "exit_probability": 0.15,
        "random_seed": 42
    })

    # Run backtest
    backtester = IncBacktester(config, storage)
    results = backtester.run_single_strategy(strategy)

    # Detailed analysis
    print("\nDetailed Analysis:")
    print(f"  Strategy: {results['strategy_name']}")
    print(f"  Timeframe: {results['strategy_params']['timeframe']}")
    print(f"  Data Period: {config.start_date} to {config.end_date}")
    print(f"  Data Points: {results['data_points']:,}")
    print(f"  Processing Time: {results['backtest_duration_seconds']:.2f}s")

    print("\nPerformance Metrics:")
    print(f"  Initial Capital: ${results['initial_usd']:,.2f}")
    print(f"  Final Balance: ${results['final_usd']:,.2f}")
    print(f"  Total Return: {results['profit_ratio']*100:.2f}%")
    print(f"  Total Trades: {results['n_trades']}")

    if results['n_trades'] > 0:
        print(f"  Win Rate: {results['win_rate']*100:.1f}%")
        print(f"  Average Trade: ${results['avg_trade']:.2f}")
        print(f"  Max Drawdown: {results['max_drawdown']*100:.2f}%")
        print(f"  Total Fees: ${results['total_fees_usd']:.2f}")

        # Calculate additional metrics (guard against a zero-day period)
        days_traded = max(
            (pd.to_datetime(config.end_date) - pd.to_datetime(config.start_date)).days, 1
        )
        annualized_return = (1 + results['profit_ratio']) ** (365 / days_traded) - 1
        print(f"  Annualized Return: {annualized_return*100:.2f}%")

        # Risk metrics
        if results['max_drawdown'] > 0:
            calmar_ratio = annualized_return / results['max_drawdown']
            print(f"  Calmar Ratio: {calmar_ratio:.2f}")

    # Save comprehensive results with custom analysis
    backtester.save_comprehensive_results([results], "example_custom_analysis")

    # Cleanup
    if os.path.exists(f"data/{data_file}"):
        os.remove(f"data/{data_file}")

    return results

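The annualization and Calmar formulas used in the custom analysis can be checked with a small worked example. The numbers below are illustrative, not from an actual backtest:

```python
# Annualize a 14-day return and derive the Calmar ratio, mirroring the
# formulas in example_custom_analysis.
profit_ratio = 0.02   # +2% over the period (illustrative)
days_traded = 14
max_drawdown = 0.05   # 5% peak-to-trough (illustrative)

annualized_return = (1 + profit_ratio) ** (365 / days_traded) - 1
calmar_ratio = annualized_return / max_drawdown

# (1.02)^(365/14) ≈ 1.676, i.e. roughly +68% annualized
assert 0.66 < annualized_return < 0.69
assert abs(calmar_ratio - annualized_return / max_drawdown) < 1e-12
```

Note how aggressively geometric compounding scales a short-period return: +2% over two weeks annualizes to nearly +68%, which is why short backtests overstate long-run performance.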
def main():
    """Run all examples."""
    print("Incremental Backtester Examples")
    print("="*60)
    print("This script demonstrates various features of the IncBacktester:")
    print("1. Single strategy backtesting")
    print("2. Multiple strategy comparison")
    print("3. Parameter optimization with multiprocessing")
    print("4. Custom analysis and metrics")
    print("5. Comprehensive result saving and action logging")

    # Ensure results directory exists
    ensure_results_directory()

    try:
        # Run all examples
        single_results = example_single_strategy()
        multiple_results, multiple_summary = example_multiple_strategies()
        optimization_results, optimization_summary = example_parameter_optimization()
        analysis_results = example_custom_analysis()

        print("\n" + "="*60)
        print("ALL EXAMPLES COMPLETED SUCCESSFULLY!")
        print("="*60)
        print("\n📊 Comprehensive results have been saved to the 'results' directory.")
        print("Each example generated multiple files:")
        print("  📋 Summary JSON with session info and statistics")
        print("  📈 Detailed CSV with all backtest results")
        print("  📝 Action log JSON with all operations performed")
        print("  📁 Individual strategy JSON files with trades and details")
        print("  🗂️ Master index JSON for easy navigation")

        print("\n🎯 Key Insights:")
        print(f"  • Single strategy achieved {single_results['profit_ratio']*100:.2f}% return")
        print(f"  • Multiple strategies: best {multiple_summary['profit_ratio']['max']*100:.2f}%, "
              f"worst {multiple_summary['profit_ratio']['min']*100:.2f}%")
        print(f"  • Optimization tested {optimization_summary['total_runs']} combinations")
        print("  • Custom analysis provided detailed risk metrics")

        print("\n🔧 System Performance:")
        print("  • Used SystemUtils for optimal CPU core utilization")
        print("  • All actions logged for reproducibility")
        print("  • Results saved in multiple formats for analysis")

        print("\n✅ The incremental backtester is ready for production use!")

    except Exception as e:
        logger.error(f"Example failed: {e}")
        print(f"\nError: {e}")
        import traceback
        traceback.print_exc()


if __name__ == "__main__":
    main()
736	cycles/IncStrategies/inc_backtester.py	Normal file
@@ -0,0 +1,736 @@
"""
Incremental Backtester for testing incremental strategies.

This module provides the IncBacktester class that orchestrates multiple IncTraders
for parallel testing, handles data loading and feeding, and supports multiprocessing
for parameter optimization.
"""

import pandas as pd
import numpy as np
from typing import Dict, List, Optional, Any, Callable, Union, Tuple
import logging
import time
from concurrent.futures import ProcessPoolExecutor, as_completed
from itertools import product
import multiprocessing as mp
from dataclasses import dataclass
import json
from datetime import datetime

from .inc_trader import IncTrader
from .base import IncStrategyBase
from ..utils.storage import Storage
from ..utils.system import SystemUtils

logger = logging.getLogger(__name__)

def _worker_function(args: Tuple[type, Dict, Dict, 'BacktestConfig', str]) -> Dict[str, Any]:
    """
    Worker function for multiprocessing parameter optimization.

    This function must be at module level to be picklable for multiprocessing.

    Args:
        args: Tuple containing (strategy_class, strategy_params, trader_params, config, data_file)

    Returns:
        Dict containing backtest results
    """
    try:
        strategy_class, strategy_params, trader_params, config, data_file = args

        # Create a fresh storage and backtester instance for this worker
        storage = Storage()
        worker_backtester = IncBacktester(config, storage)

        # Create strategy instance
        strategy = strategy_class(params=strategy_params)

        # Run backtest
        result = worker_backtester.run_single_strategy(strategy, trader_params)
        result["success"] = True

        return result

    except Exception as e:
        logger.error(f"Worker error for {strategy_params}, {trader_params}: {e}")
        return {
            "strategy_params": strategy_params,
            "trader_params": trader_params,
            "error": str(e),
            "success": False
        }

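The "must be at module level" requirement can be verified directly: pickle serializes functions by reference (module plus qualified name), so only importable functions survive the trip to a worker process. A minimal sketch using only the standard library:

```python
import math
import pickle

# Module-level functions are pickled by reference (module + qualified name),
# so they round-trip and can be shipped to worker processes.
restored = pickle.loads(pickle.dumps(math.sqrt))
assert restored(49.0) == 7.0

# A lambda has no importable qualified name, so pickling it fails; this is
# why ProcessPoolExecutor workers cannot be defined inline.
try:
    pickle.dumps(lambda x: x * x)
except (pickle.PicklingError, AttributeError, TypeError):
    pickling_failed = True
else:
    pickling_failed = False
assert pickling_failed
```

The same constraint rules out nested functions and bound instance state, which is why `_worker_function` rebuilds its own `Storage` and `IncBacktester` from the pickled `config` instead of receiving live objects.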
@dataclass
class BacktestConfig:
    """Configuration for backtesting runs."""
    data_file: str
    start_date: str
    end_date: str
    initial_usd: float = 10000
    timeframe: str = "1min"

    # Trader parameters
    stop_loss_pct: float = 0.0
    take_profit_pct: float = 0.0

    # Performance settings
    max_workers: Optional[int] = None
    chunk_size: int = 1000

class IncBacktester:
    """
    Incremental backtester for testing incremental strategies.

    This class orchestrates multiple IncTraders for parallel testing:
    - Loads data using the existing Storage class
    - Creates multiple IncTrader instances with different parameters
    - Feeds data sequentially to all traders
    - Collects and aggregates results
    - Supports multiprocessing for parallel execution
    - Uses SystemUtils for optimal worker count determination

    The backtester can run multiple strategies in one session or test
    parameter combinations across multiple CPU cores.

    Example:
        # Single strategy backtest
        config = BacktestConfig(
            data_file="btc_1min_2023.csv",
            start_date="2023-01-01",
            end_date="2023-12-31",
            initial_usd=10000
        )

        strategy = IncRandomStrategy(params={"timeframe": "15min"})
        backtester = IncBacktester(config)
        results = backtester.run_single_strategy(strategy)

        # Multiple strategies
        strategies = [strategy1, strategy2, strategy3]
        results = backtester.run_multiple_strategies(strategies)

        # Parameter optimization
        param_grid = {
            "timeframe": ["5min", "15min", "30min"],
            "stop_loss_pct": [0.01, 0.02, 0.03]
        }
        results = backtester.optimize_parameters(strategy_class, param_grid)
    """
    def __init__(self, config: BacktestConfig, storage: Optional[Storage] = None):
        """
        Initialize the incremental backtester.

        Args:
            config: Backtesting configuration
            storage: Storage instance for data loading (creates a new one if None)
        """
        self.config = config
        self.storage = storage or Storage()
        self.system_utils = SystemUtils(logging=logger)
        self.data = None
        self.results_cache = {}

        # Track all actions performed during backtesting
        self.action_log = []
        self.session_start_time = datetime.now()

        logger.info(f"IncBacktester initialized: {config.data_file}, "
                    f"{config.start_date} to {config.end_date}")

        self._log_action("backtester_initialized", {
            "config": config.__dict__,
            "session_start": self.session_start_time.isoformat()
        })

    def _log_action(self, action_type: str, details: Dict[str, Any]) -> None:
        """Log an action performed during backtesting."""
        self.action_log.append({
            "timestamp": datetime.now().isoformat(),
            "action_type": action_type,
            "details": details
        })

    def load_data(self) -> pd.DataFrame:
        """
        Load and prepare data for backtesting.

        Returns:
            pd.DataFrame: Loaded OHLCV data with DatetimeIndex
        """
        if self.data is None:
            logger.info(f"Loading data from {self.config.data_file}...")
            start_time = time.time()

            self.data = self.storage.load_data(
                self.config.data_file,
                self.config.start_date,
                self.config.end_date
            )

            load_time = time.time() - start_time
            logger.info(f"Data loaded: {len(self.data)} rows in {load_time:.2f}s")

            # Validate data
            if self.data.empty:
                raise ValueError(
                    f"No data loaded for {self.config.start_date} to {self.config.end_date}"
                )

            required_columns = ['open', 'high', 'low', 'close', 'volume']
            missing_columns = [col for col in required_columns if col not in self.data.columns]
            if missing_columns:
                raise ValueError(f"Missing required columns: {missing_columns}")

            self._log_action("data_loaded", {
                "file": self.config.data_file,
                "rows": len(self.data),
                "load_time_seconds": load_time,
                "date_range": f"{self.config.start_date} to {self.config.end_date}",
                "columns": list(self.data.columns)
            })

        return self.data

    def run_single_strategy(self, strategy: IncStrategyBase,
                            trader_params: Optional[Dict] = None) -> Dict[str, Any]:
        """
        Run backtest for a single strategy.

        Args:
            strategy: Incremental strategy instance
            trader_params: Additional trader parameters

        Returns:
            Dict containing backtest results
        """
        data = self.load_data()

        # Merge trader parameters
        final_trader_params = {
            "stop_loss_pct": self.config.stop_loss_pct,
            "take_profit_pct": self.config.take_profit_pct
        }
        if trader_params:
            final_trader_params.update(trader_params)

        # Create trader
        trader = IncTrader(
            strategy=strategy,
            initial_usd=self.config.initial_usd,
            params=final_trader_params
        )

        # Run backtest
        logger.info(f"Starting backtest for {strategy.name}...")
        start_time = time.time()

        self._log_action("single_strategy_backtest_started", {
            "strategy_name": strategy.name,
            "strategy_params": strategy.params,
            "trader_params": final_trader_params,
            "data_points": len(data)
        })

        # Feed bars to the trader one at a time (incremental processing)
        for timestamp, row in data.iterrows():
            ohlcv_data = {
                'open': row['open'],
                'high': row['high'],
                'low': row['low'],
                'close': row['close'],
                'volume': row['volume']
            }
            trader.process_data_point(timestamp, ohlcv_data)

        # Finalize and get results
        trader.finalize()
        results = trader.get_results()

        backtest_time = time.time() - start_time
        results["backtest_duration_seconds"] = backtest_time
        results["data_points"] = len(data)
        results["config"] = self.config.__dict__

        logger.info(f"Backtest completed for {strategy.name} in {backtest_time:.2f}s: "
                    f"${results['final_usd']:.2f} ({results['profit_ratio']*100:.2f}%), "
                    f"{results['n_trades']} trades")

        self._log_action("single_strategy_backtest_completed", {
            "strategy_name": strategy.name,
            "backtest_duration_seconds": backtest_time,
            "final_usd": results['final_usd'],
            "profit_ratio": results['profit_ratio'],
            "n_trades": results['n_trades'],
            "win_rate": results['win_rate']
        })

        return results

    def run_multiple_strategies(self, strategies: List[IncStrategyBase],
                                trader_params: Optional[Dict] = None) -> List[Dict[str, Any]]:
        """
        Run backtests for multiple strategies sequentially.

        Args:
            strategies: List of incremental strategy instances
            trader_params: Additional trader parameters

        Returns:
            List of backtest results for each strategy
        """
        self._log_action("multiple_strategies_backtest_started", {
            "strategy_count": len(strategies),
            "strategy_names": [s.name for s in strategies]
        })

        results = []

        for strategy in strategies:
            try:
                result = self.run_single_strategy(strategy, trader_params)
                results.append(result)
            except Exception as e:
                logger.error(f"Error running strategy {strategy.name}: {e}")
                # Add error result
                results.append({
                    "strategy_name": strategy.name,
                    "error": str(e),
                    "success": False
                })

                self._log_action("strategy_error", {
                    "strategy_name": strategy.name,
                    "error": str(e)
                })

        self._log_action("multiple_strategies_backtest_completed", {
            "total_strategies": len(strategies),
            "successful_strategies": len([r for r in results if r.get("success", True)]),
            "failed_strategies": len([r for r in results if not r.get("success", True)])
        })

        return results

    def optimize_parameters(self, strategy_class: type, param_grid: Dict[str, List],
                            trader_param_grid: Optional[Dict[str, List]] = None,
                            max_workers: Optional[int] = None) -> List[Dict[str, Any]]:
        """
        Optimize strategy parameters using grid search with multiprocessing.

        Args:
            strategy_class: Strategy class to instantiate
            param_grid: Grid of strategy parameters to test
            trader_param_grid: Grid of trader parameters to test
            max_workers: Maximum number of worker processes (uses SystemUtils if None)

        Returns:
            List of results for each parameter combination
        """
        # Generate parameter combinations
        strategy_combinations = self._generate_param_combinations(param_grid)
        trader_combinations = self._generate_param_combinations(trader_param_grid or {})

        # If no trader param grid, use defaults
        if not trader_combinations:
            trader_combinations = [{}]

        # Create all combinations
        all_combinations = [
            (strategy_params, trader_params)
            for strategy_params in strategy_combinations
            for trader_params in trader_combinations
        ]

        logger.info(f"Starting parameter optimization: {len(all_combinations)} combinations")

        # Determine the number of workers using SystemUtils; never spawn more
        # workers than there are combinations to run
        if max_workers is None:
            max_workers = self.system_utils.get_optimal_workers()
        max_workers = min(max_workers, len(all_combinations))

        self._log_action("parameter_optimization_started", {
            "strategy_class": strategy_class.__name__,
            "total_combinations": len(all_combinations),
            "max_workers": max_workers,
            "strategy_param_grid": param_grid,
            "trader_param_grid": trader_param_grid or {}
        })

        # Run optimization
        if max_workers == 1 or len(all_combinations) == 1:
            # Single-process execution
            results = [
                self._run_single_combination(strategy_class, strategy_params, trader_params)
                for strategy_params, trader_params in all_combinations
            ]
        else:
            # Multi-process execution
            results = self._run_parallel_optimization(
                strategy_class, all_combinations, max_workers
            )

        # Sort the valid results by profit ratio for logging (the returned
        # list itself keeps completion order)
        valid_results = [r for r in results if r.get("success", True)]
        valid_results.sort(key=lambda x: x.get("profit_ratio", -float('inf')), reverse=True)

        logger.info(f"Parameter optimization completed: {len(valid_results)} successful runs")

        self._log_action("parameter_optimization_completed", {
            "total_runs": len(results),
            "successful_runs": len(valid_results),
            "failed_runs": len(results) - len(valid_results),
            "best_profit_ratio": valid_results[0]["profit_ratio"] if valid_results else None,
            "worst_profit_ratio": valid_results[-1]["profit_ratio"] if valid_results else None
        })

        return results

    def _generate_param_combinations(self, param_grid: Dict[str, List]) -> List[Dict]:
        """Generate all parameter combinations from a grid."""
        if not param_grid:
            return [{}]

        keys = list(param_grid.keys())
        values = list(param_grid.values())

        return [dict(zip(keys, combination)) for combination in product(*values)]

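The grid expansion above is just the Cartesian product of the value lists, re-zipped with the keys. A standalone sketch with an illustrative two-parameter grid:

```python
from itertools import product

# Expand a small parameter grid the same way _generate_param_combinations
# does: Cartesian product of the value lists, re-zipped with the keys.
param_grid = {
    "timeframe": ["5min", "15min"],
    "stop_loss_pct": [0.01, 0.02, 0.03],
}
keys = list(param_grid.keys())
combinations = [dict(zip(keys, combo)) for combo in product(*param_grid.values())]

# 2 timeframes x 3 stop losses = 6 combinations
assert len(combinations) == 6
assert combinations[0] == {"timeframe": "5min", "stop_loss_pct": 0.01}
assert combinations[-1] == {"timeframe": "15min", "stop_loss_pct": 0.03}
```

The combination count multiplies across every key, so adding one more three-value parameter triples the number of backtests; this is why the optimization example keeps `random_seed` fixed at a single value.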
    def _run_single_combination(self, strategy_class: type, strategy_params: Dict,
                                trader_params: Dict) -> Dict[str, Any]:
        """Run backtest for a single parameter combination."""
        try:
            # Create strategy instance
            strategy = strategy_class(params=strategy_params)

            # Run backtest
            result = self.run_single_strategy(strategy, trader_params)
            result["success"] = True

            return result

        except Exception as e:
            logger.error(f"Error in parameter combination {strategy_params}, {trader_params}: {e}")
            return {
                "strategy_params": strategy_params,
                "trader_params": trader_params,
                "error": str(e),
                "success": False
            }

    def _run_parallel_optimization(self, strategy_class: type, combinations: List,
                                   max_workers: int) -> List[Dict[str, Any]]:
        """Run parameter optimization across multiple worker processes."""
        results = []

        # Prepare arguments for the module-level worker function
        worker_args = [
            (strategy_class, strategy_params, trader_params, self.config, self.config.data_file)
            for strategy_params, trader_params in combinations
        ]

        # Execute in parallel
        with ProcessPoolExecutor(max_workers=max_workers) as executor:
            # Submit all jobs
            future_to_params = {
                executor.submit(_worker_function, args): args[1:3]  # (strategy_params, trader_params)
                for args in worker_args
            }

            # Collect results as they complete
            for future in as_completed(future_to_params):
                combo = future_to_params[future]
                try:
                    result = future.result()
                    results.append(result)

                    if result.get("success", True):
                        logger.info(f"Completed: {combo[0]} -> "
                                    f"${result.get('final_usd', 0):.2f} "
                                    f"({result.get('profit_ratio', 0)*100:.2f}%)")
                except Exception as e:
                    logger.error(f"Worker error for {combo}: {e}")
                    results.append({
                        "strategy_params": combo[0],
                        "trader_params": combo[1],
                        "error": str(e),
                        "success": False
                    })

        return results

    def get_summary_statistics(self, results: List[Dict[str, Any]]) -> Dict[str, Any]:
        """
        Calculate summary statistics across multiple backtest results.

        Args:
            results: List of backtest results

        Returns:
            Dict containing summary statistics
        """
        valid_results = [r for r in results if r.get("success", True)]

        if not valid_results:
            return {
                "total_runs": len(results),
                "successful_runs": 0,
                "failed_runs": len(results),
                "error": "No valid results to summarize"
            }

        def describe(values: List[float]) -> Dict[str, float]:
            """Standard descriptive statistics for one metric."""
            return {
                "mean": np.mean(values),
                "std": np.std(values),
                "min": np.min(values),
                "max": np.max(values),
                "median": np.median(values)
            }

        summary = {
            "total_runs": len(results),
            "successful_runs": len(valid_results),
            "failed_runs": len(results) - len(valid_results),

            # Per-metric statistics
            "profit_ratio": describe([r["profit_ratio"] for r in valid_results]),
            "final_usd": describe([r["final_usd"] for r in valid_results]),
            "n_trades": describe([r["n_trades"] for r in valid_results]),
            "win_rate": describe([r["win_rate"] for r in valid_results]),
            "max_drawdown": describe([r["max_drawdown"] for r in valid_results]),

            # Best and worst performing runs
            "best_run": max(valid_results, key=lambda x: x["profit_ratio"]),
            "worst_run": min(valid_results, key=lambda x: x["profit_ratio"])
        }

        return summary

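The aggregation pattern used by `get_summary_statistics` can be sketched standalone: filter out failed runs via the `success` flag, then apply NumPy reductions per metric. The result dicts below are illustrative:

```python
import numpy as np

# Aggregate one metric across result dicts, mirroring get_summary_statistics.
results = [
    {"profit_ratio": 0.02, "success": True},
    {"profit_ratio": -0.01, "success": True},
    {"profit_ratio": 0.05, "success": True},
    {"error": "data file missing", "success": False},  # excluded from stats
]
valid = [r for r in results if r.get("success", True)]
profits = [r["profit_ratio"] for r in valid]

stats = {
    "mean": np.mean(profits),
    "min": np.min(profits),
    "max": np.max(profits),
    "median": np.median(profits),
}

assert len(valid) == 3
assert stats["max"] == 0.05 and stats["min"] == -0.01
assert abs(stats["mean"] - 0.02) < 1e-12
assert stats["median"] == 0.02
```

Using `r.get("success", True)` treats results without the flag as successful, which matches how single-strategy runs (which never set `success`) are counted throughout the module.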
    def save_comprehensive_results(self, results: List[Dict[str, Any]],
                                   base_filename: str,
                                   summary: Optional[Dict[str, Any]] = None) -> None:
        """
        Save comprehensive backtest results including summary, individual results, and action log.

        Args:
            results: List of backtest results
            base_filename: Base filename (without extension)
            summary: Optional summary statistics
        """
        try:
            timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")

            # 1. Save summary report
            if summary is None:
                summary = self.get_summary_statistics(results)

            summary_data = {
                "session_info": {
                    "timestamp": timestamp,
                    "session_start": self.session_start_time.isoformat(),
                    "session_duration_seconds": (datetime.now() - self.session_start_time).total_seconds(),
                    "config": self.config.__dict__
                },
                "summary_statistics": summary,
                "action_log_summary": {
                    "total_actions": len(self.action_log),
                    "action_types": list(set(action["action_type"] for action in self.action_log))
                }
            }

            summary_filename = f"{base_filename}_summary_{timestamp}.json"
            with open(f"results/{summary_filename}", 'w') as f:
                json.dump(summary_data, f, indent=2, default=str)
            logger.info(f"Summary saved to results/{summary_filename}")

            # 2. Save detailed results CSV
            self.save_results(results, f"{base_filename}_detailed_{timestamp}.csv")

            # 3. Save individual strategy results
            valid_results = [r for r in results if r.get("success", True)]
            for i, result in enumerate(valid_results):
                strategy_filename = f"{base_filename}_strategy_{i+1}_{result['strategy_name']}_{timestamp}.json"

                # Include trades and detailed info
                strategy_data = {
                    "strategy_info": {
                        "name": result['strategy_name'],
                        "params": result.get('strategy_params', {}),
                        "trader_params": result.get('trader_params', {})
                    },
                    "performance": {
                        "initial_usd": result['initial_usd'],
                        "final_usd": result['final_usd'],
                        "profit_ratio": result['profit_ratio'],
                        "n_trades": result['n_trades'],
                        "win_rate": result['win_rate'],
                        "max_drawdown": result['max_drawdown'],
                        "avg_trade": result['avg_trade'],
                        "total_fees_usd": result['total_fees_usd']
                    },
                    "execution": {
                        "backtest_duration_seconds": result.get('backtest_duration_seconds', 0),
                        "data_points_processed": result.get('data_points_processed', 0),
                        "warmup_complete": result.get('warmup_complete', False)
                    },
                    "trades": result.get('trades', [])
                }

                with open(f"results/{strategy_filename}", 'w') as f:
                    json.dump(strategy_data, f, indent=2, default=str)
                logger.info(f"Strategy {i+1} details saved to results/{strategy_filename}")

            # 4. Save complete action log
            action_log_filename = f"{base_filename}_actions_{timestamp}.json"
            action_log_data = {
                "session_info": {
                    "timestamp": timestamp,
                    "session_start": self.session_start_time.isoformat(),
                    "total_actions": len(self.action_log)
                },
                "actions": self.action_log
            }

            with open(f"results/{action_log_filename}", 'w') as f:
                json.dump(action_log_data, f, indent=2, default=str)
            logger.info(f"Action log saved to results/{action_log_filename}")

            # 5. Create a master index file
            index_filename = f"{base_filename}_index_{timestamp}.json"
            index_data = {
                "session_info": {
                    "timestamp": timestamp,
                    "base_filename": base_filename,
                    "total_strategies": len(valid_results),
                    "session_duration_seconds": (datetime.now() - self.session_start_time).total_seconds()
                },
                "files": {
                    "summary": summary_filename,
                    "detailed_csv": f"{base_filename}_detailed_{timestamp}.csv",
                    "action_log": action_log_filename,
                    "individual_strategies": [
                        f"{base_filename}_strategy_{i+1}_{result['strategy_name']}_{timestamp}.json"
                        for i, result in enumerate(valid_results)
                    ]
                },
                "quick_stats": {
                    "best_profit": summary.get("profit_ratio", {}).get("max", 0) if summary.get("profit_ratio") else 0,
                    "worst_profit": summary.get("profit_ratio", {}).get("min", 0) if summary.get("profit_ratio") else 0,
                    "avg_profit": summary.get("profit_ratio", {}).get("mean", 0) if summary.get("profit_ratio") else 0,
                    "total_successful_runs": summary.get("successful_runs", 0),
                    "total_failed_runs": summary.get("failed_runs", 0)
                }
            }

            with open(f"results/{index_filename}", 'w') as f:
                json.dump(index_data, f, indent=2, default=str)
            logger.info(f"Master index saved to results/{index_filename}")

            print(f"\n📊 Comprehensive results saved:")
            print(f" 📋 Summary: results/{summary_filename}")
            print(f" 📈 Detailed CSV: results/{base_filename}_detailed_{timestamp}.csv")
            print(f" 📝 Action Log: results/{action_log_filename}")
            print(f" 📁 Individual Strategies: {len(valid_results)} files")
            print(f" 🗂️ Master Index: results/{index_filename}")

        except Exception as e:
            logger.error(f"Error saving comprehensive results: {e}")
            raise
    def save_results(self, results: List[Dict[str, Any]], filename: str) -> None:
        """
        Save backtest results to file.

        Args:
            results: List of backtest results
            filename: Output filename
        """
        try:
            # Convert results to DataFrame for easy saving
            df_data = []
            for result in results:
                if result.get("success", True):
                    row = {
                        "strategy_name": result.get("strategy_name", ""),
                        "profit_ratio": result.get("profit_ratio", 0),
                        "final_usd": result.get("final_usd", 0),
                        "n_trades": result.get("n_trades", 0),
                        "win_rate": result.get("win_rate", 0),
                        "max_drawdown": result.get("max_drawdown", 0),
                        "avg_trade": result.get("avg_trade", 0),
                        "total_fees_usd": result.get("total_fees_usd", 0),
                        "backtest_duration_seconds": result.get("backtest_duration_seconds", 0),
                        "data_points_processed": result.get("data_points_processed", 0)
                    }

                    # Add strategy parameters
                    strategy_params = result.get("strategy_params", {})
                    for key, value in strategy_params.items():
                        row[f"strategy_{key}"] = value

                    # Add trader parameters
                    trader_params = result.get("trader_params", {})
                    for key, value in trader_params.items():
                        row[f"trader_{key}"] = value

                    df_data.append(row)

            # Save to CSV
            df = pd.DataFrame(df_data)
            self.storage.save_data(df, filename)

            logger.info(f"Results saved to {filename}: {len(df_data)} rows")

        except Exception as e:
            logger.error(f"Error saving results to {filename}: {e}")
            raise
    def __repr__(self) -> str:
        """String representation of the backtester."""
        return (f"IncBacktester(data_file={self.config.data_file}, "
                f"date_range={self.config.start_date} to {self.config.end_date}, "
                f"initial_usd=${self.config.initial_usd})")
344 cycles/IncStrategies/inc_trader.py Normal file
@@ -0,0 +1,344 @@
"""
|
||||
Incremental Trader for backtesting incremental strategies.
|
||||
|
||||
This module provides the IncTrader class that manages a single incremental strategy
|
||||
during backtesting, handling position state, trade execution, and performance tracking.
|
||||
"""
|
||||
|
||||
import pandas as pd
|
||||
import numpy as np
|
||||
from typing import Dict, Optional, List, Any
|
||||
import logging
|
||||
from dataclasses import dataclass
|
||||
|
||||
from .base import IncStrategyBase, IncStrategySignal
|
||||
from ..market_fees import MarketFees
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
@dataclass
|
||||
class TradeRecord:
|
||||
"""Record of a completed trade."""
|
||||
entry_time: pd.Timestamp
|
||||
exit_time: pd.Timestamp
|
||||
entry_price: float
|
||||
exit_price: float
|
||||
entry_fee: float
|
||||
exit_fee: float
|
||||
profit_pct: float
|
||||
exit_reason: str
|
||||
strategy_name: str
|
||||
|
||||
|
||||
class IncTrader:
|
||||
"""
|
||||
Incremental trader that manages a single strategy during backtesting.
|
||||
|
||||
This class handles:
|
||||
- Strategy initialization and data feeding
|
||||
- Position management (USD/coin balance)
|
||||
- Trade execution based on strategy signals
|
||||
- Performance tracking and metrics collection
|
||||
- Fee calculation and trade logging
|
||||
|
||||
The trader processes data points sequentially, feeding them to the strategy
|
||||
and executing trades based on the generated signals.
|
||||
|
||||
Example:
|
||||
strategy = IncRandomStrategy(params={"timeframe": "15min"})
|
||||
trader = IncTrader(
|
||||
strategy=strategy,
|
||||
initial_usd=10000,
|
||||
params={"stop_loss_pct": 0.02}
|
||||
)
|
||||
|
||||
# Process data sequentially
|
||||
for timestamp, ohlcv_data in data_stream:
|
||||
trader.process_data_point(timestamp, ohlcv_data)
|
||||
|
||||
# Get results
|
||||
results = trader.get_results()
|
||||
"""
|
||||
|
||||
def __init__(self, strategy: IncStrategyBase, initial_usd: float = 10000,
|
||||
params: Optional[Dict] = None):
|
||||
"""
|
||||
Initialize the incremental trader.
|
||||
|
||||
Args:
|
||||
strategy: Incremental strategy instance
|
||||
initial_usd: Initial USD balance
|
||||
params: Trader parameters (stop_loss_pct, take_profit_pct, etc.)
|
||||
"""
|
||||
self.strategy = strategy
|
||||
self.initial_usd = initial_usd
|
||||
self.params = params or {}
|
||||
|
||||
# Position state
|
||||
self.usd = initial_usd
|
||||
self.coin = 0.0
|
||||
self.position = 0 # 0 = no position, 1 = long position
|
||||
self.entry_price = 0.0
|
||||
self.entry_time = None
|
||||
|
||||
# Performance tracking
|
||||
self.max_balance = initial_usd
|
||||
self.drawdowns = []
|
||||
self.trade_records = []
|
||||
self.current_timestamp = None
|
||||
self.current_price = None
|
||||
|
||||
# Strategy state
|
||||
self.data_points_processed = 0
|
||||
self.warmup_complete = False
|
||||
|
||||
# Parameters
|
||||
self.stop_loss_pct = self.params.get("stop_loss_pct", 0.0)
|
||||
self.take_profit_pct = self.params.get("take_profit_pct", 0.0)
|
||||
|
||||
logger.info(f"IncTrader initialized: strategy={strategy.name}, "
|
||||
f"initial_usd=${initial_usd}, stop_loss={self.stop_loss_pct*100:.1f}%")
|
||||
|
||||
def process_data_point(self, timestamp: pd.Timestamp, ohlcv_data: Dict[str, float]) -> None:
|
||||
"""
|
||||
Process a single data point through the strategy and handle trading logic.
|
||||
|
||||
Args:
|
||||
timestamp: Data point timestamp
|
||||
ohlcv_data: OHLCV data dictionary with keys: open, high, low, close, volume
|
||||
"""
|
||||
self.current_timestamp = timestamp
|
||||
self.current_price = ohlcv_data['close']
|
||||
self.data_points_processed += 1
|
||||
|
||||
try:
|
||||
# Feed data to strategy (handles timeframe aggregation internally)
|
||||
result = self.strategy.update_minute_data(timestamp, ohlcv_data)
|
||||
|
||||
# Check if strategy is warmed up
|
||||
if not self.warmup_complete and self.strategy.is_warmed_up:
|
||||
self.warmup_complete = True
|
||||
logger.info(f"Strategy {self.strategy.name} warmed up after "
|
||||
f"{self.data_points_processed} data points")
|
||||
|
||||
# Only process signals if strategy is warmed up and we have a complete timeframe bar
|
||||
if self.warmup_complete and result is not None:
|
||||
self._process_trading_logic()
|
||||
|
||||
# Update performance tracking
|
||||
self._update_performance_metrics()
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error processing data point at {timestamp}: {e}")
|
||||
raise
|
||||
|
||||
def _process_trading_logic(self) -> None:
|
||||
"""Process trading logic based on current position and strategy signals."""
|
||||
if self.position == 0:
|
||||
# No position - check for entry signals
|
||||
self._check_entry_signals()
|
||||
else:
|
||||
# In position - check for exit signals
|
||||
self._check_exit_signals()
|
||||
|
||||
def _check_entry_signals(self) -> None:
|
||||
"""Check for entry signals when not in position."""
|
||||
try:
|
||||
entry_signal = self.strategy.get_entry_signal()
|
||||
|
||||
if entry_signal.signal_type == "ENTRY" and entry_signal.confidence > 0:
|
||||
self._execute_entry(entry_signal)
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error checking entry signals: {e}")
|
||||
|
||||
def _check_exit_signals(self) -> None:
|
||||
"""Check for exit signals when in position."""
|
||||
try:
|
||||
# Check strategy exit signals
|
||||
exit_signal = self.strategy.get_exit_signal()
|
||||
|
||||
if exit_signal.signal_type == "EXIT" and exit_signal.confidence > 0:
|
||||
exit_reason = exit_signal.metadata.get("type", "STRATEGY_EXIT")
|
||||
self._execute_exit(exit_reason, exit_signal.price)
|
||||
return
|
||||
|
||||
# Check stop loss
|
||||
if self.stop_loss_pct > 0:
|
||||
stop_loss_price = self.entry_price * (1 - self.stop_loss_pct)
|
||||
if self.current_price <= stop_loss_price:
|
||||
self._execute_exit("STOP_LOSS", self.current_price)
|
||||
return
|
||||
|
||||
# Check take profit
|
||||
if self.take_profit_pct > 0:
|
||||
take_profit_price = self.entry_price * (1 + self.take_profit_pct)
|
||||
if self.current_price >= take_profit_price:
|
||||
self._execute_exit("TAKE_PROFIT", self.current_price)
|
||||
return
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error checking exit signals: {e}")
|
||||
|
||||
def _execute_entry(self, signal: IncStrategySignal) -> None:
|
||||
"""Execute entry trade."""
|
||||
entry_price = signal.price if signal.price else self.current_price
|
||||
entry_fee = MarketFees.calculate_okx_taker_maker_fee(self.usd, is_maker=False)
|
||||
usd_after_fee = self.usd - entry_fee
|
||||
|
||||
self.coin = usd_after_fee / entry_price
|
||||
self.entry_price = entry_price
|
||||
self.entry_time = self.current_timestamp
|
||||
self.usd = 0.0
|
||||
self.position = 1
|
||||
|
||||
logger.info(f"ENTRY: {self.strategy.name} at ${entry_price:.2f}, "
|
||||
f"confidence={signal.confidence:.2f}, fee=${entry_fee:.2f}")
|
||||
|
||||
def _execute_exit(self, exit_reason: str, exit_price: Optional[float] = None) -> None:
|
||||
"""Execute exit trade."""
|
||||
exit_price = exit_price if exit_price else self.current_price
|
||||
usd_gross = self.coin * exit_price
|
||||
exit_fee = MarketFees.calculate_okx_taker_maker_fee(usd_gross, is_maker=False)
|
||||
|
||||
self.usd = usd_gross - exit_fee
|
||||
|
||||
# Calculate profit
|
||||
profit_pct = (exit_price - self.entry_price) / self.entry_price
|
||||
|
||||
# Record trade
|
||||
trade_record = TradeRecord(
|
||||
entry_time=self.entry_time,
|
||||
exit_time=self.current_timestamp,
|
||||
entry_price=self.entry_price,
|
||||
exit_price=exit_price,
|
||||
entry_fee=MarketFees.calculate_okx_taker_maker_fee(
|
||||
self.coin * self.entry_price, is_maker=False
|
||||
),
|
||||
exit_fee=exit_fee,
|
||||
profit_pct=profit_pct,
|
||||
exit_reason=exit_reason,
|
||||
strategy_name=self.strategy.name
|
||||
)
|
||||
self.trade_records.append(trade_record)
|
||||
|
||||
# Reset position
|
||||
self.coin = 0.0
|
||||
self.position = 0
|
||||
self.entry_price = 0.0
|
||||
self.entry_time = None
|
||||
|
||||
logger.info(f"EXIT: {self.strategy.name} at ${exit_price:.2f}, "
|
||||
f"reason={exit_reason}, profit={profit_pct*100:.2f}%, fee=${exit_fee:.2f}")
|
||||
|
||||
def _update_performance_metrics(self) -> None:
|
||||
"""Update performance tracking metrics."""
|
||||
# Calculate current balance
|
||||
if self.position == 0:
|
||||
current_balance = self.usd
|
||||
else:
|
||||
current_balance = self.coin * self.current_price
|
||||
|
||||
# Update max balance and drawdown
|
||||
if current_balance > self.max_balance:
|
||||
self.max_balance = current_balance
|
||||
|
||||
drawdown = (self.max_balance - current_balance) / self.max_balance
|
||||
self.drawdowns.append(drawdown)
|
||||
|
||||
def finalize(self) -> None:
|
||||
"""Finalize trading session (close any open positions)."""
|
||||
if self.position == 1:
|
||||
self._execute_exit("EOD", self.current_price)
|
||||
logger.info(f"Closed final position for {self.strategy.name} at EOD")
|
||||
|
||||
def get_results(self) -> Dict[str, Any]:
|
||||
"""
|
||||
Get comprehensive trading results.
|
||||
|
||||
Returns:
|
||||
Dict containing performance metrics, trade records, and statistics
|
||||
"""
|
||||
final_balance = self.usd
|
||||
n_trades = len(self.trade_records)
|
||||
|
||||
# Calculate statistics
|
||||
if n_trades > 0:
|
||||
profits = [trade.profit_pct for trade in self.trade_records]
|
||||
wins = [p for p in profits if p > 0]
|
||||
win_rate = len(wins) / n_trades
|
||||
avg_trade = np.mean(profits)
|
||||
total_fees = sum(trade.entry_fee + trade.exit_fee for trade in self.trade_records)
|
||||
else:
|
||||
win_rate = 0.0
|
||||
avg_trade = 0.0
|
||||
total_fees = 0.0
|
||||
|
||||
max_drawdown = max(self.drawdowns) if self.drawdowns else 0.0
|
||||
profit_ratio = (final_balance - self.initial_usd) / self.initial_usd
|
||||
|
||||
# Convert trade records to dictionaries
|
||||
trades = []
|
||||
for trade in self.trade_records:
|
||||
trades.append({
|
||||
'entry_time': trade.entry_time,
|
||||
'exit_time': trade.exit_time,
|
||||
'entry': trade.entry_price,
|
||||
'exit': trade.exit_price,
|
||||
'profit_pct': trade.profit_pct,
|
||||
'type': trade.exit_reason,
|
||||
'fee_usd': trade.entry_fee + trade.exit_fee,
|
||||
'strategy': trade.strategy_name
|
||||
})
|
||||
|
||||
results = {
|
||||
"strategy_name": self.strategy.name,
|
||||
"strategy_params": self.strategy.params,
|
||||
"trader_params": self.params,
|
||||
"initial_usd": self.initial_usd,
|
||||
"final_usd": final_balance,
|
||||
"profit_ratio": profit_ratio,
|
||||
"n_trades": n_trades,
|
||||
"win_rate": win_rate,
|
||||
"max_drawdown": max_drawdown,
|
||||
"avg_trade": avg_trade,
|
||||
"total_fees_usd": total_fees,
|
||||
"data_points_processed": self.data_points_processed,
|
||||
"warmup_complete": self.warmup_complete,
|
||||
"trades": trades
|
||||
}
|
||||
|
||||
# Add first and last trade info if available
|
||||
if n_trades > 0:
|
||||
results["first_trade"] = {
|
||||
"entry_time": self.trade_records[0].entry_time,
|
||||
"entry": self.trade_records[0].entry_price
|
||||
}
|
||||
results["last_trade"] = {
|
||||
"exit_time": self.trade_records[-1].exit_time,
|
||||
"exit": self.trade_records[-1].exit_price
|
||||
}
|
||||
|
||||
return results
|
||||
|
||||
def get_current_state(self) -> Dict[str, Any]:
|
||||
"""Get current trader state for debugging."""
|
||||
return {
|
||||
"strategy": self.strategy.name,
|
||||
"position": self.position,
|
||||
"usd": self.usd,
|
||||
"coin": self.coin,
|
||||
"current_price": self.current_price,
|
||||
"entry_price": self.entry_price,
|
||||
"data_points_processed": self.data_points_processed,
|
||||
"warmup_complete": self.warmup_complete,
|
||||
"n_trades": len(self.trade_records),
|
||||
"strategy_state": self.strategy.get_current_state_summary()
|
||||
}
|
||||
|
||||
def __repr__(self) -> str:
|
||||
"""String representation of the trader."""
|
||||
return (f"IncTrader(strategy={self.strategy.name}, "
|
||||
f"position={self.position}, usd=${self.usd:.2f}, "
|
||||
f"trades={len(self.trade_records)})")
|
||||
@@ -320,12 +320,13 @@ class IncMetaTrendStrategy(IncStrategyBase):
        """
        Check if meta-trend exit condition is met.

-       Exit condition: meta-trend changes from != -1 to == -1
+       Exit condition: meta-trend changes from != 1 to == -1
+       (Modified to match original strategy behavior)

        Returns:
            bool: True if exit condition is met
        """
-       return (self.previous_meta_trend != -1 and
+       return (self.previous_meta_trend != 1 and
                self.current_meta_trend == -1)

    def get_current_state_summary(self) -> Dict[str, Any]:
@@ -96,10 +96,10 @@ class DefaultStrategy(StrategyBase):
            return

        # Limit data size to prevent excessive computation time
-       original_length = len(strategy_data)
-       if len(strategy_data) > 200:
-           strategy_data = strategy_data.tail(200)
-           print(f"DefaultStrategy: Limited data from {original_length} to {len(strategy_data)} points for faster computation")
+       # original_length = len(strategy_data)
+       # if len(strategy_data) > 200:
+       #     strategy_data = strategy_data.tail(200)
+       #     print(f"DefaultStrategy: Limited data from {original_length} to {len(strategy_data)} points for faster computation")

        # Use a timeout mechanism for Supertrend calculation
        result_container = {}
343 scripts/compare_same_logic.py Normal file
@@ -0,0 +1,343 @@
#!/usr/bin/env python3
"""
Compare both strategies using identical all-in/all-out logic.
This will help identify where the performance difference comes from.
"""

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from datetime import datetime
import os
import sys

# Add project root to path
sys.path.insert(0, os.path.abspath('..'))

def process_trades_with_same_logic(trades_file, strategy_name, initial_usd=10000):
    """Process trades using identical all-in/all-out logic for both strategies."""

    print(f"\n🔍 Processing {strategy_name}...")

    # Load trades data
    trades_df = pd.read_csv(trades_file)

    # Convert timestamps
    trades_df['entry_time'] = pd.to_datetime(trades_df['entry_time'])
    trades_df['exit_time'] = pd.to_datetime(trades_df['exit_time'], errors='coerce')

    # Separate buy and sell signals
    buy_signals = trades_df[trades_df['type'] == 'BUY'].copy()
    sell_signals = trades_df[trades_df['type'] != 'BUY'].copy()

    print(f" 📊 {len(buy_signals)} buy signals, {len(sell_signals)} sell signals")

    # Debug: show the first few trades
    print(f" 🔍 First few trades:")
    for i, (_, trade) in enumerate(trades_df.head(6).iterrows()):
        print(f"   {i+1}. {trade['entry_time']} - {trade['type']} at ${trade.get('entry_price', trade.get('exit_price', 'N/A'))}")

    # Apply identical all-in/all-out logic
    portfolio_history = []
    current_usd = initial_usd
    current_btc = 0.0
    in_position = False

    # Combine all trades and sort by time
    all_trades = []

    # Add buy signals
    for _, buy in buy_signals.iterrows():
        all_trades.append({
            'timestamp': buy['entry_time'],
            'type': 'BUY',
            'price': buy['entry_price'],
            'trade_data': buy
        })

    # Add sell signals
    for _, sell in sell_signals.iterrows():
        all_trades.append({
            'timestamp': sell['exit_time'],
            'type': 'SELL',
            'price': sell['exit_price'],
            'profit_pct': sell['profit_pct'],
            'trade_data': sell
        })

    # Sort by timestamp
    all_trades = sorted(all_trades, key=lambda x: x['timestamp'])

    print(f" ⏰ Processing {len(all_trades)} trade events...")

    # Process each trade event
    trade_count = 0
    for i, trade in enumerate(all_trades):
        timestamp = trade['timestamp']
        trade_type = trade['type']
        price = trade['price']

        if trade_type == 'BUY' and not in_position:
            # ALL-IN: Use all USD to buy BTC
            usd_spent = current_usd  # capture before zeroing so the debug print is accurate
            current_btc = current_usd / price
            current_usd = 0.0
            in_position = True
            trade_count += 1

            portfolio_history.append({
                'timestamp': timestamp,
                'portfolio_value': current_btc * price,
                'usd_balance': current_usd,
                'btc_balance': current_btc,
                'trade_type': 'BUY',
                'price': price,
                'in_position': in_position
            })

            if trade_count <= 3:  # Debug first few trades
                print(f"   BUY {trade_count}: ${usd_spent:.0f} → {current_btc:.6f} BTC at ${price:.0f}")

        elif trade_type == 'SELL' and in_position:
            # ALL-OUT: Sell all BTC for USD
            btc_sold = current_btc  # capture before zeroing so the debug print is accurate
            current_usd = current_btc * price
            current_btc = 0.0
            in_position = False

            portfolio_history.append({
                'timestamp': timestamp,
                'portfolio_value': current_usd,
                'usd_balance': current_usd,
                'btc_balance': current_btc,
                'trade_type': 'SELL',
                'price': price,
                'profit_pct': trade.get('profit_pct', 0) * 100,
                'in_position': in_position
            })

            if trade_count <= 3:  # Debug first few trades
                print(f"   SELL {trade_count}: {btc_sold:.6f} BTC → ${current_usd:.0f} at ${price:.0f}")

    # Convert to DataFrame
    portfolio_df = pd.DataFrame(portfolio_history)

    if len(portfolio_df) > 0:
        portfolio_df = portfolio_df.sort_values('timestamp').reset_index(drop=True)
        final_value = portfolio_df['portfolio_value'].iloc[-1]
    else:
        final_value = initial_usd
        print(f" ⚠️ Warning: No portfolio history generated!")

    # Calculate performance metrics
    total_return = (final_value - initial_usd) / initial_usd * 100
    num_trades = len(sell_signals)

    if num_trades > 0:
        winning_trades = len(sell_signals[sell_signals['profit_pct'] > 0])
        win_rate = winning_trades / num_trades * 100
        avg_trade = sell_signals['profit_pct'].mean() * 100
        best_trade = sell_signals['profit_pct'].max() * 100
        worst_trade = sell_signals['profit_pct'].min() * 100
    else:
        win_rate = avg_trade = best_trade = worst_trade = 0

    performance = {
        'strategy_name': strategy_name,
        'initial_value': initial_usd,
        'final_value': final_value,
        'total_return': total_return,
        'num_trades': num_trades,
        'win_rate': win_rate,
        'avg_trade': avg_trade,
        'best_trade': best_trade,
        'worst_trade': worst_trade
    }

    print(f" 💰 Final Value: ${final_value:,.0f} ({total_return:+.1f}%)")
    print(f" 📈 Portfolio events: {len(portfolio_df)}")

    return buy_signals, sell_signals, portfolio_df, performance

def create_side_by_side_comparison(data1, data2, save_path="same_logic_comparison.png"):
    """Create side-by-side comparison plot."""

    buy1, sell1, portfolio1, perf1 = data1
    buy2, sell2, portfolio2, perf2 = data2

    # Create figure with subplots
    fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(20, 16))

    # Plot 1: Original Strategy Signals
    ax1.scatter(buy1['entry_time'], buy1['entry_price'],
                color='green', marker='^', s=60, label=f"Buy ({len(buy1)})",
                zorder=5, alpha=0.8)

    profitable_sells1 = sell1[sell1['profit_pct'] > 0]
    losing_sells1 = sell1[sell1['profit_pct'] <= 0]

    if len(profitable_sells1) > 0:
        ax1.scatter(profitable_sells1['exit_time'], profitable_sells1['exit_price'],
                    color='blue', marker='v', s=60, label=f"Profitable Sells ({len(profitable_sells1)})",
                    zorder=5, alpha=0.8)

    if len(losing_sells1) > 0:
        ax1.scatter(losing_sells1['exit_time'], losing_sells1['exit_price'],
                    color='red', marker='v', s=60, label=f"Losing Sells ({len(losing_sells1)})",
                    zorder=5, alpha=0.8)

    ax1.set_title(f'{perf1["strategy_name"]} - Trading Signals', fontsize=14, fontweight='bold')
    ax1.set_ylabel('Price (USD)', fontsize=12)
    ax1.legend(loc='upper left', fontsize=9)
    ax1.grid(True, alpha=0.3)
    ax1.yaxis.set_major_formatter(plt.FuncFormatter(lambda x, p: f'${x:,.0f}'))

    # Plot 2: Incremental Strategy Signals
    ax2.scatter(buy2['entry_time'], buy2['entry_price'],
                color='darkgreen', marker='^', s=60, label=f"Buy ({len(buy2)})",
                zorder=5, alpha=0.8)

    profitable_sells2 = sell2[sell2['profit_pct'] > 0]
    losing_sells2 = sell2[sell2['profit_pct'] <= 0]

    if len(profitable_sells2) > 0:
        ax2.scatter(profitable_sells2['exit_time'], profitable_sells2['exit_price'],
                    color='darkblue', marker='v', s=60, label=f"Profitable Sells ({len(profitable_sells2)})",
                    zorder=5, alpha=0.8)

    if len(losing_sells2) > 0:
        ax2.scatter(losing_sells2['exit_time'], losing_sells2['exit_price'],
                    color='darkred', marker='v', s=60, label=f"Losing Sells ({len(losing_sells2)})",
                    zorder=5, alpha=0.8)

    ax2.set_title(f'{perf2["strategy_name"]} - Trading Signals', fontsize=14, fontweight='bold')
    ax2.set_ylabel('Price (USD)', fontsize=12)
    ax2.legend(loc='upper left', fontsize=9)
    ax2.grid(True, alpha=0.3)
    ax2.yaxis.set_major_formatter(plt.FuncFormatter(lambda x, p: f'${x:,.0f}'))

    # Plot 3: Portfolio Value Comparison
    if len(portfolio1) > 0:
        ax3.plot(portfolio1['timestamp'], portfolio1['portfolio_value'],
                 color='blue', linewidth=2, label=f'{perf1["strategy_name"]}', alpha=0.8)

    if len(portfolio2) > 0:
        ax3.plot(portfolio2['timestamp'], portfolio2['portfolio_value'],
                 color='red', linewidth=2, label=f'{perf2["strategy_name"]}', alpha=0.8)

    ax3.axhline(y=10000, color='gray', linestyle='--', alpha=0.7, label='Initial Value ($10,000)')

    ax3.set_title('Portfolio Value Comparison (Same Logic)', fontsize=14, fontweight='bold')
    ax3.set_ylabel('Portfolio Value (USD)', fontsize=12)
    ax3.set_xlabel('Date', fontsize=12)
    ax3.legend(loc='upper left', fontsize=10)
    ax3.grid(True, alpha=0.3)
    ax3.yaxis.set_major_formatter(plt.FuncFormatter(lambda x, p: f'${x:,.0f}'))

    # Plot 4: Performance Comparison Table
    ax4.axis('off')

    # Create detailed comparison table
    comparison_text = f"""
IDENTICAL LOGIC COMPARISON
{'='*50}

{'Metric':<25} {perf1['strategy_name']:<15} {perf2['strategy_name']:<15} {'Difference':<15}
{'-'*75}
{'Initial Value':<25} ${perf1['initial_value']:>10,.0f} ${perf2['initial_value']:>12,.0f} ${perf2['initial_value'] - perf1['initial_value']:>12,.0f}
{'Final Value':<25} ${perf1['final_value']:>10,.0f} ${perf2['final_value']:>12,.0f} ${perf2['final_value'] - perf1['final_value']:>12,.0f}
{'Total Return':<25} {perf1['total_return']:>10.1f}% {perf2['total_return']:>12.1f}% {perf2['total_return'] - perf1['total_return']:>12.1f}%
{'Number of Trades':<25} {perf1['num_trades']:>10} {perf2['num_trades']:>12} {perf2['num_trades'] - perf1['num_trades']:>12}
{'Win Rate':<25} {perf1['win_rate']:>10.1f}% {perf2['win_rate']:>12.1f}% {perf2['win_rate'] - perf1['win_rate']:>12.1f}%
{'Average Trade':<25} {perf1['avg_trade']:>10.2f}% {perf2['avg_trade']:>12.2f}% {perf2['avg_trade'] - perf1['avg_trade']:>12.2f}%
{'Best Trade':<25} {perf1['best_trade']:>10.1f}% {perf2['best_trade']:>12.1f}% {perf2['best_trade'] - perf1['best_trade']:>12.1f}%
{'Worst Trade':<25} {perf1['worst_trade']:>10.1f}% {perf2['worst_trade']:>12.1f}% {perf2['worst_trade'] - perf1['worst_trade']:>12.1f}%

LOGIC APPLIED:
• ALL-IN: Use 100% of USD to buy BTC on entry signals
• ALL-OUT: Sell 100% of BTC for USD on exit signals
• NO FEES: Pure price-based calculations
• SAME COMPOUNDING: Each trade uses full available balance

TIME PERIODS:
{perf1['strategy_name']}: {buy1['entry_time'].min().strftime('%Y-%m-%d')} to {sell1['exit_time'].max().strftime('%Y-%m-%d')}
{perf2['strategy_name']}: {buy2['entry_time'].min().strftime('%Y-%m-%d')} to {sell2['exit_time'].max().strftime('%Y-%m-%d')}

ANALYSIS:
If results differ significantly, it indicates:
1. Different entry/exit timing
2. Different price execution points
3. Different trade frequency or duration
4. Data inconsistencies between files
"""

    ax4.text(0.05, 0.95, comparison_text, transform=ax4.transAxes, fontsize=10,
             verticalalignment='top', fontfamily='monospace',
             bbox=dict(boxstyle="round,pad=0.5", facecolor="lightgray", alpha=0.9))

    # Format x-axis for signal plots
    for ax in [ax1, ax2, ax3]:
        ax.xaxis.set_major_locator(mdates.MonthLocator())
        ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m'))
        plt.setp(ax.xaxis.get_majorticklabels(), rotation=45)

    # Adjust layout and save
    plt.tight_layout()
    plt.savefig(save_path, dpi=300, bbox_inches='tight')
    plt.show()

    print(f"Comparison plot saved to: {save_path}")

def main():
    """Main function to run the identical logic comparison."""
    print("🚀 Starting Identical Logic Comparison")
    print("=" * 60)

    # File paths
    original_file = "../results/trades_15min(15min)_ST3pct.csv"
    incremental_file = "../results/trades_incremental_15min(15min)_ST3pct.csv"
    output_file = "../results/same_logic_comparison.png"

    # Check if files exist
    if not os.path.exists(original_file):
        print(f"❌ Error: Original trades file not found: {original_file}")
        return

    if not os.path.exists(incremental_file):
        print(f"❌ Error: Incremental trades file not found: {incremental_file}")
        return

    try:
        # Process both strategies with identical logic
        original_data = process_trades_with_same_logic(original_file, "Original Strategy")
        incremental_data = process_trades_with_same_logic(incremental_file, "Incremental Strategy")

        # Create comparison plot
        create_side_by_side_comparison(original_data, incremental_data, output_file)
|
||||
|
||||
# Print summary comparison
|
||||
_, _, _, perf1 = original_data
|
||||
_, _, _, perf2 = incremental_data
|
||||
|
||||
print(f"\n📊 IDENTICAL LOGIC COMPARISON SUMMARY:")
|
||||
print(f"Original Strategy: ${perf1['final_value']:,.0f} ({perf1['total_return']:+.1f}%)")
|
||||
print(f"Incremental Strategy: ${perf2['final_value']:,.0f} ({perf2['total_return']:+.1f}%)")
|
||||
print(f"Difference: ${perf2['final_value'] - perf1['final_value']:,.0f} ({perf2['total_return'] - perf1['total_return']:+.1f}%)")
|
||||
|
||||
if abs(perf1['total_return'] - perf2['total_return']) < 1.0:
|
||||
print("✅ Results are very similar - strategies are equivalent!")
|
||||
else:
|
||||
print("⚠️ Significant difference detected - investigating causes...")
|
||||
print(f" • Trade count difference: {perf2['num_trades'] - perf1['num_trades']}")
|
||||
print(f" • Win rate difference: {perf2['win_rate'] - perf1['win_rate']:+.1f}%")
|
||||
print(f" • Avg trade difference: {perf2['avg_trade'] - perf1['avg_trade']:+.2f}%")
|
||||
|
||||
print(f"\n✅ Analysis completed successfully!")
|
||||
|
||||
except Exception as e:
|
||||
print(f"❌ Error during analysis: {e}")
|
||||
import traceback
|
||||
traceback.print_exc()
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
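Taken out of context, the ALL-IN / ALL-OUT / NO FEES compounding that both scripts reduce trades to can be sketched in isolation (hypothetical numbers, not results from these CSVs):

```python
def compound(initial_usd, profit_pcts):
    """All-in/all-out, no fees: each trade applies its fractional
    return to the full running balance."""
    value = initial_usd
    for p in profit_pcts:
        value *= (1 + p)
    return value


# Hypothetical trades: +2%, -3%, +5%
print(round(compound(10_000, [0.02, -0.03, 0.05]), 2))  # 10388.7
```

Because returns multiply, ordering does not matter here, but any per-trade fee or slippage would break the equivalence between the two strategies.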
271  scripts/plot_old.py  Normal file
@@ -0,0 +1,271 @@
#!/usr/bin/env python3
"""
Plot original strategy results from trades CSV file.
Shows buy/sell signals and portfolio value over time.
"""

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from datetime import datetime
import os
import sys

# Add project root to path
sys.path.insert(0, os.path.abspath('..'))


def load_and_process_trades(trades_file, initial_usd=10000):
    """Load trades and calculate portfolio value over time."""

    # Load trades data
    trades_df = pd.read_csv(trades_file)

    # Convert timestamps
    trades_df['entry_time'] = pd.to_datetime(trades_df['entry_time'])
    trades_df['exit_time'] = pd.to_datetime(trades_df['exit_time'], errors='coerce')

    # Separate buy and sell signals
    buy_signals = trades_df[trades_df['type'] == 'BUY'].copy()
    sell_signals = trades_df[trades_df['type'] != 'BUY'].copy()

    print(f"Loaded {len(buy_signals)} buy signals and {len(sell_signals)} sell signals")

    # Calculate portfolio value using compounding
    portfolio_history = []

    # Create timeline from all trade times
    all_times = []
    all_times.extend(buy_signals['entry_time'].tolist())
    all_times.extend(sell_signals['exit_time'].dropna().tolist())
    all_times = sorted(set(all_times))

    print(f"Processing {len(all_times)} trade events...")

    # Track portfolio value at each trade
    current_value = initial_usd

    for sell_trade in sell_signals.itertuples():
        # Apply the profit/loss from this trade
        profit_pct = sell_trade.profit_pct
        current_value *= (1 + profit_pct)

        portfolio_history.append({
            'timestamp': sell_trade.exit_time,
            'portfolio_value': current_value,
            'trade_type': 'SELL',
            'price': sell_trade.exit_price,
            'profit_pct': profit_pct * 100
        })

    # Convert to DataFrame
    portfolio_df = pd.DataFrame(portfolio_history)
    portfolio_df = portfolio_df.sort_values('timestamp').reset_index(drop=True)

    # Calculate performance metrics
    final_value = current_value
    total_return = (final_value - initial_usd) / initial_usd * 100
    num_trades = len(sell_signals)

    winning_trades = len(sell_signals[sell_signals['profit_pct'] > 0])
    win_rate = winning_trades / num_trades * 100 if num_trades > 0 else 0

    avg_trade = sell_signals['profit_pct'].mean() * 100 if num_trades > 0 else 0
    best_trade = sell_signals['profit_pct'].max() * 100 if num_trades > 0 else 0
    worst_trade = sell_signals['profit_pct'].min() * 100 if num_trades > 0 else 0

    performance = {
        'initial_value': initial_usd,
        'final_value': final_value,
        'total_return': total_return,
        'num_trades': num_trades,
        'win_rate': win_rate,
        'avg_trade': avg_trade,
        'best_trade': best_trade,
        'worst_trade': worst_trade
    }

    return buy_signals, sell_signals, portfolio_df, performance
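A minimal illustration of why `errors='coerce'` is used on `exit_time` above — unparseable or empty values (e.g. still-open trades) become `NaT` instead of raising, which is what lets the later `.dropna()` work:

```python
import pandas as pd

# One valid timestamp, one garbage value (hypothetical data)
s = pd.to_datetime(
    pd.Series(["2025-01-01 00:15:00", "not-a-date"]),
    errors="coerce",
)
print(int(s.isna().sum()))  # 1 — the bad value became NaT
```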
def create_comprehensive_plot(buy_signals, sell_signals, portfolio_df, performance, save_path="original_strategy_analysis.png"):
    """Create comprehensive plot with signals and portfolio value."""

    # Create figure with subplots
    fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(16, 12),
                                   gridspec_kw={'height_ratios': [2, 1]})

    # Plot 1: Price chart with buy/sell signals
    # Get price range for the chart
    all_prices = []
    all_prices.extend(buy_signals['entry_price'].tolist())
    all_prices.extend(sell_signals['exit_price'].tolist())

    price_min = min(all_prices)
    price_max = max(all_prices)

    # Create a price line by connecting buy and sell points
    price_timeline = []
    value_timeline = []

    # Combine and sort all signals by time
    all_signals = []

    for _, buy in buy_signals.iterrows():
        all_signals.append({
            'time': buy['entry_time'],
            'price': buy['entry_price'],
            'type': 'BUY'
        })

    for _, sell in sell_signals.iterrows():
        all_signals.append({
            'time': sell['exit_time'],
            'price': sell['exit_price'],
            'type': 'SELL'
        })

    all_signals = sorted(all_signals, key=lambda x: x['time'])

    # Create price line
    for signal in all_signals:
        price_timeline.append(signal['time'])
        value_timeline.append(signal['price'])

    # Plot price line
    if price_timeline:
        ax1.plot(price_timeline, value_timeline, color='black', linewidth=1.5, alpha=0.7, label='Price Action')

    # Plot buy signals
    ax1.scatter(buy_signals['entry_time'], buy_signals['entry_price'],
                color='green', marker='^', s=80, label=f"Buy Signals ({len(buy_signals)})",
                zorder=5, alpha=0.9, edgecolors='white', linewidth=1)

    # Plot sell signals with different colors based on profit/loss
    profitable_sells = sell_signals[sell_signals['profit_pct'] > 0]
    losing_sells = sell_signals[sell_signals['profit_pct'] <= 0]

    if len(profitable_sells) > 0:
        ax1.scatter(profitable_sells['exit_time'], profitable_sells['exit_price'],
                    color='blue', marker='v', s=80, label=f"Profitable Sells ({len(profitable_sells)})",
                    zorder=5, alpha=0.9, edgecolors='white', linewidth=1)

    if len(losing_sells) > 0:
        ax1.scatter(losing_sells['exit_time'], losing_sells['exit_price'],
                    color='red', marker='v', s=80, label=f"Losing Sells ({len(losing_sells)})",
                    zorder=5, alpha=0.9, edgecolors='white', linewidth=1)

    ax1.set_title('Original Strategy - Trading Signals', fontsize=16, fontweight='bold')
    ax1.set_ylabel('Price (USD)', fontsize=12)
    ax1.legend(loc='upper left', fontsize=10)
    ax1.grid(True, alpha=0.3)

    # Format y-axis for price
    ax1.yaxis.set_major_formatter(plt.FuncFormatter(lambda x, p: f'${x:,.0f}'))

    # Plot 2: Portfolio Value Over Time
    if len(portfolio_df) > 0:
        ax2.plot(portfolio_df['timestamp'], portfolio_df['portfolio_value'],
                 color='purple', linewidth=2, label='Portfolio Value')

        # Add horizontal line for initial value
        ax2.axhline(y=performance['initial_value'], color='gray',
                    linestyle='--', alpha=0.7, label='Initial Value ($10,000)')

        # Add profit/loss shading
        initial_value = performance['initial_value']
        profit_mask = portfolio_df['portfolio_value'] > initial_value
        loss_mask = portfolio_df['portfolio_value'] < initial_value

        if profit_mask.any():
            ax2.fill_between(portfolio_df['timestamp'], portfolio_df['portfolio_value'], initial_value,
                             where=profit_mask, color='green', alpha=0.2, label='Profit Zone')

        if loss_mask.any():
            ax2.fill_between(portfolio_df['timestamp'], portfolio_df['portfolio_value'], initial_value,
                             where=loss_mask, color='red', alpha=0.2, label='Loss Zone')

    ax2.set_title('Portfolio Value Over Time', fontsize=14, fontweight='bold')
    ax2.set_ylabel('Portfolio Value (USD)', fontsize=12)
    ax2.set_xlabel('Date', fontsize=12)
    ax2.legend(loc='upper left', fontsize=10)
    ax2.grid(True, alpha=0.3)

    # Format y-axis for portfolio value
    ax2.yaxis.set_major_formatter(plt.FuncFormatter(lambda x, p: f'${x:,.0f}'))

    # Format x-axis for both plots
    for ax in [ax1, ax2]:
        ax.xaxis.set_major_locator(mdates.MonthLocator())
        ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m'))
        plt.setp(ax.xaxis.get_majorticklabels(), rotation=45)

    # Add performance text box
    perf_text = f"""
PERFORMANCE SUMMARY
{'='*30}
Initial Value: ${performance['initial_value']:,.0f}
Final Value: ${performance['final_value']:,.0f}
Total Return: {performance['total_return']:+.1f}%

Trading Statistics:
• Number of Trades: {performance['num_trades']}
• Win Rate: {performance['win_rate']:.1f}%
• Average Trade: {performance['avg_trade']:+.2f}%
• Best Trade: {performance['best_trade']:+.1f}%
• Worst Trade: {performance['worst_trade']:+.1f}%

Period: {buy_signals['entry_time'].min().strftime('%Y-%m-%d')} to {sell_signals['exit_time'].max().strftime('%Y-%m-%d')}
"""

    # Add text box to the plot
    ax2.text(1.02, 0.98, perf_text, transform=ax2.transAxes, fontsize=10,
             verticalalignment='top', fontfamily='monospace',
             bbox=dict(boxstyle="round,pad=0.5", facecolor="lightgray", alpha=0.9))

    # Adjust layout and save
    plt.tight_layout()
    plt.subplots_adjust(right=0.75)  # Make room for text box
    plt.savefig(save_path, dpi=300, bbox_inches='tight')
    plt.show()

    print(f"Plot saved to: {save_path}")


def main():
    """Main function to run the analysis."""
    print("🚀 Starting Original Strategy Analysis")
    print("=" * 50)

    # File paths
    trades_file = "../results/trades_15min(15min)_ST3pct.csv"
    output_file = "../results/original_strategy_analysis.png"

    if not os.path.exists(trades_file):
        print(f"❌ Error: Trades file not found: {trades_file}")
        return

    try:
        # Load and process trades
        buy_signals, sell_signals, portfolio_df, performance = load_and_process_trades(trades_file)

        # Print performance summary
        print(f"\n📊 PERFORMANCE SUMMARY:")
        print(f"Initial Value: ${performance['initial_value']:,.0f}")
        print(f"Final Value: ${performance['final_value']:,.0f}")
        print(f"Total Return: {performance['total_return']:+.1f}%")
        print(f"Number of Trades: {performance['num_trades']}")
        print(f"Win Rate: {performance['win_rate']:.1f}%")
        print(f"Average Trade: {performance['avg_trade']:+.2f}%")

        # Create plot
        create_comprehensive_plot(buy_signals, sell_signals, portfolio_df, performance, output_file)

        print(f"\n✅ Analysis completed successfully!")

    except Exception as e:
        print(f"❌ Error during analysis: {e}")
        import traceback
        traceback.print_exc()


if __name__ == "__main__":
    main()
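The win-rate and average-trade arithmetic used in `load_and_process_trades` can be checked on a toy frame (hypothetical returns, not from these CSVs):

```python
import pandas as pd

# Three closed trades with fractional returns
sells = pd.DataFrame({"profit_pct": [0.02, -0.01, 0.04]})

num_trades = len(sells)
win_rate = (sells["profit_pct"] > 0).sum() / num_trades * 100
avg_trade = sells["profit_pct"].mean() * 100

print(f"{win_rate:.1f}% win rate, {avg_trade:.2f}% avg trade")  # 66.7% win rate, 1.67% avg trade
```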
276  scripts/plot_results.py  Normal file
@@ -0,0 +1,276 @@
#!/usr/bin/env python3
"""
Comprehensive comparison plotting script for trading strategies.
Compares original strategy vs incremental strategy results.
"""

import os
import sys
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from datetime import datetime
import warnings
warnings.filterwarnings('ignore')

# Add the project root to the path
sys.path.insert(0, os.path.abspath('..'))
sys.path.insert(0, os.path.abspath('.'))

from cycles.utils.storage import Storage
from cycles.utils.data_utils import aggregate_to_minutes


def load_trades_data(trades_file):
    """Load and process trades data."""
    if not os.path.exists(trades_file):
        print(f"File not found: {trades_file}")
        return None

    df = pd.read_csv(trades_file)

    # Convert timestamps
    df['entry_time'] = pd.to_datetime(df['entry_time'])
    if 'exit_time' in df.columns:
        df['exit_time'] = pd.to_datetime(df['exit_time'], errors='coerce')

    # Separate buy and sell signals
    buy_signals = df[df['type'] == 'BUY'].copy()
    sell_signals = df[df['type'] != 'BUY'].copy()

    return {
        'all_trades': df,
        'buy_signals': buy_signals,
        'sell_signals': sell_signals
    }


def calculate_strategy_performance(trades_data):
    """Calculate basic performance metrics."""
    if trades_data is None:
        return None

    sell_signals = trades_data['sell_signals']

    if len(sell_signals) == 0:
        return None

    total_profit_pct = sell_signals['profit_pct'].sum()
    num_trades = len(sell_signals)
    win_rate = len(sell_signals[sell_signals['profit_pct'] > 0]) / num_trades
    avg_profit = sell_signals['profit_pct'].mean()

    # Exit type breakdown
    exit_types = sell_signals['type'].value_counts().to_dict()

    return {
        'total_profit_pct': total_profit_pct * 100,
        'num_trades': num_trades,
        'win_rate': win_rate * 100,
        'avg_profit_pct': avg_profit * 100,
        'exit_types': exit_types,
        'best_trade': sell_signals['profit_pct'].max() * 100,
        'worst_trade': sell_signals['profit_pct'].min() * 100
    }
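The exit-type breakdown above comes straight from `value_counts`; on hypothetical labels (mimicking the `type` column of the trades CSV) it behaves like this:

```python
import pandas as pd

# Hypothetical exit labels as they might appear in the trades CSV
exits = pd.Series(["STOP_LOSS", "TAKE_PROFIT", "STOP_LOSS", "EOD"])
breakdown = exits.value_counts().to_dict()
print(breakdown)  # STOP_LOSS counted twice, the others once
```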
def plot_strategy_comparison(original_file, incremental_file, price_data, output_file="strategy_comparison.png"):
    """Create comprehensive comparison plot of both strategies on the same chart."""

    print(f"Loading original strategy: {original_file}")
    original_data = load_trades_data(original_file)

    print(f"Loading incremental strategy: {incremental_file}")
    incremental_data = load_trades_data(incremental_file)

    if original_data is None or incremental_data is None:
        print("Error: Could not load one or both trade files")
        return

    # Calculate performance metrics
    original_perf = calculate_strategy_performance(original_data)
    incremental_perf = calculate_strategy_performance(incremental_data)

    # Create figure with subplots
    fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(20, 16),
                                   gridspec_kw={'height_ratios': [3, 1]})

    # Plot 1: Combined Strategy Comparison on Same Chart
    ax1.plot(price_data.index, price_data['close'], label='BTC Price', color='black', linewidth=2, zorder=1)

    # Calculate price range for offset positioning
    price_min = price_data['close'].min()
    price_max = price_data['close'].max()
    price_range = price_max - price_min
    offset = price_range * 0.02  # 2% offset

    # Original strategy signals (ABOVE the price)
    if len(original_data['buy_signals']) > 0:
        buy_prices_offset = original_data['buy_signals']['entry_price'] + offset
        ax1.scatter(original_data['buy_signals']['entry_time'], buy_prices_offset,
                    color='darkgreen', marker='^', s=80, label=f"Original Buy ({len(original_data['buy_signals'])})",
                    zorder=6, alpha=0.9, edgecolors='white', linewidth=1)

    if len(original_data['sell_signals']) > 0:
        # Separate by exit type for original strategy
        for exit_type in original_data['sell_signals']['type'].unique():
            exit_data = original_data['sell_signals'][original_data['sell_signals']['type'] == exit_type]
            exit_prices_offset = exit_data['exit_price'] + offset

            if exit_type == 'STOP_LOSS':
                color, marker, size = 'red', 'X', 100
            elif exit_type == 'TAKE_PROFIT':
                color, marker, size = 'gold', '*', 120
            elif exit_type == 'EOD':
                color, marker, size = 'gray', 's', 70
            else:
                color, marker, size = 'blue', 'v', 80

            ax1.scatter(exit_data['exit_time'], exit_prices_offset,
                        color=color, marker=marker, s=size,
                        label=f"Original {exit_type} ({len(exit_data)})", zorder=6, alpha=0.9,
                        edgecolors='white', linewidth=1)

    # Incremental strategy signals (BELOW the price)
    if len(incremental_data['buy_signals']) > 0:
        buy_prices_offset = incremental_data['buy_signals']['entry_price'] - offset
        ax1.scatter(incremental_data['buy_signals']['entry_time'], buy_prices_offset,
                    color='lime', marker='^', s=80, label=f"Incremental Buy ({len(incremental_data['buy_signals'])})",
                    zorder=5, alpha=0.9, edgecolors='black', linewidth=1)

    if len(incremental_data['sell_signals']) > 0:
        # Separate by exit type for incremental strategy
        for exit_type in incremental_data['sell_signals']['type'].unique():
            exit_data = incremental_data['sell_signals'][incremental_data['sell_signals']['type'] == exit_type]
            exit_prices_offset = exit_data['exit_price'] - offset

            if exit_type == 'STOP_LOSS':
                color, marker, size = 'darkred', 'X', 100
            elif exit_type == 'TAKE_PROFIT':
                color, marker, size = 'orange', '*', 120
            elif exit_type == 'EOD':
                color, marker, size = 'darkgray', 's', 70
            else:
                color, marker, size = 'purple', 'v', 80

            ax1.scatter(exit_data['exit_time'], exit_prices_offset,
                        color=color, marker=marker, s=size,
                        label=f"Incremental {exit_type} ({len(exit_data)})", zorder=5, alpha=0.9,
                        edgecolors='black', linewidth=1)

    # Add horizontal reference lines to show offset zones
    ax1.axhline(y=price_data['close'].mean() + offset, color='darkgreen', linestyle='--', alpha=0.3, linewidth=1)
    ax1.axhline(y=price_data['close'].mean() - offset, color='lime', linestyle='--', alpha=0.3, linewidth=1)

    # Add text annotations
    ax1.text(0.02, 0.98, 'Original Strategy (Above Price)', transform=ax1.transAxes,
             fontsize=12, fontweight='bold', color='darkgreen',
             bbox=dict(boxstyle="round,pad=0.3", facecolor="white", alpha=0.8))
    ax1.text(0.02, 0.02, 'Incremental Strategy (Below Price)', transform=ax1.transAxes,
             fontsize=12, fontweight='bold', color='lime',
             bbox=dict(boxstyle="round,pad=0.3", facecolor="black", alpha=0.8))

    ax1.set_title('Strategy Comparison - Trading Signals Overlay', fontsize=16, fontweight='bold')
    ax1.set_ylabel('Price (USD)', fontsize=12)
    ax1.legend(loc='upper right', fontsize=9, ncol=2)
    ax1.grid(True, alpha=0.3)

    # Plot 2: Performance Comparison and Statistics
    ax2.axis('off')

    # Create detailed comparison table
    stats_text = f"""
STRATEGY COMPARISON SUMMARY - {price_data.index[0].strftime('%Y-%m-%d')} to {price_data.index[-1].strftime('%Y-%m-%d')}

{'Metric':<25} {'Original':<15} {'Incremental':<15} {'Difference':<15}
{'-'*75}
{'Total Profit':<25} {original_perf['total_profit_pct']:>10.1f}% {incremental_perf['total_profit_pct']:>12.1f}% {incremental_perf['total_profit_pct'] - original_perf['total_profit_pct']:>12.1f}%
{'Number of Trades':<25} {original_perf['num_trades']:>10} {incremental_perf['num_trades']:>12} {incremental_perf['num_trades'] - original_perf['num_trades']:>12}
{'Win Rate':<25} {original_perf['win_rate']:>10.1f}% {incremental_perf['win_rate']:>12.1f}% {incremental_perf['win_rate'] - original_perf['win_rate']:>12.1f}%
{'Average Trade Profit':<25} {original_perf['avg_profit_pct']:>10.2f}% {incremental_perf['avg_profit_pct']:>12.2f}% {incremental_perf['avg_profit_pct'] - original_perf['avg_profit_pct']:>12.2f}%
{'Best Trade':<25} {original_perf['best_trade']:>10.1f}% {incremental_perf['best_trade']:>12.1f}% {incremental_perf['best_trade'] - original_perf['best_trade']:>12.1f}%
{'Worst Trade':<25} {original_perf['worst_trade']:>10.1f}% {incremental_perf['worst_trade']:>12.1f}% {incremental_perf['worst_trade'] - original_perf['worst_trade']:>12.1f}%

EXIT TYPE BREAKDOWN:
Original Strategy: {original_perf['exit_types']}
Incremental Strategy: {incremental_perf['exit_types']}

SIGNAL POSITIONING:
• Original signals are positioned ABOVE the price line (darker colors)
• Incremental signals are positioned BELOW the price line (brighter colors)
• Both strategies use the same 15-minute timeframe and 3% stop loss

TOTAL DATA POINTS: {len(price_data):,} bars ({len(price_data)*15:,} minutes)
"""

    ax2.text(0.05, 0.95, stats_text, transform=ax2.transAxes, fontsize=11,
             verticalalignment='top', fontfamily='monospace',
             bbox=dict(boxstyle="round,pad=0.5", facecolor="lightgray", alpha=0.9))

    # Format x-axis for price plot
    ax1.xaxis.set_major_locator(mdates.MonthLocator())
    ax1.xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m'))
    plt.setp(ax1.xaxis.get_majorticklabels(), rotation=45)

    # Adjust layout; saving is disabled in favor of the interactive window
    plt.tight_layout()
    # plt.savefig(output_file, dpi=300, bbox_inches='tight')
    # plt.close()

    # Show interactive plot for manual exploration
    plt.show()

    print(f"Comparison plot displayed (saving to {output_file} is currently disabled)")

    # Print summary to console
    print(f"\n📊 STRATEGY COMPARISON SUMMARY:")
    print(f"Original Strategy: {original_perf['total_profit_pct']:.1f}% profit, {original_perf['num_trades']} trades, {original_perf['win_rate']:.1f}% win rate")
    print(f"Incremental Strategy: {incremental_perf['total_profit_pct']:.1f}% profit, {incremental_perf['num_trades']} trades, {incremental_perf['win_rate']:.1f}% win rate")
    print(f"Difference: {incremental_perf['total_profit_pct'] - original_perf['total_profit_pct']:.1f}% profit, {incremental_perf['num_trades'] - original_perf['num_trades']} trades")

    # Signal positioning explanation
    print(f"\n🎯 SIGNAL POSITIONING:")
    print(f"• Original strategy signals are positioned ABOVE the price line")
    print(f"• Incremental strategy signals are positioned BELOW the price line")
    print(f"• This allows easy visual comparison of timing differences")


def main():
    """Main function to run the comparison."""
    print("🚀 Starting Strategy Comparison Analysis")
    print("=" * 60)

    # File paths
    original_file = "results/trades_15min(15min)_ST3pct.csv"
    incremental_file = "results/trades_incremental_15min(15min)_ST3pct.csv"
    output_file = "results/strategy_comparison_analysis.png"

    # Load price data
    print("Loading price data...")
    storage = Storage()

    try:
        # Load data for the same period as the trades
        price_data = storage.load_data("btcusd_1-min_data.csv", "2025-01-01", "2025-05-01")
        print(f"Loaded {len(price_data)} minute-level data points")

        # Aggregate to 15-minute bars for cleaner visualization
        print("Aggregating to 15-minute bars...")
        price_data = aggregate_to_minutes(price_data, 15)
        print(f"Aggregated to {len(price_data)} bars")

        # Create comparison plot
        plot_strategy_comparison(original_file, incremental_file, price_data, output_file)

        print(f"\n✅ Analysis completed successfully!")
        print(f"📁 Check the results: {output_file}")

    except Exception as e:
        print(f"❌ Error during analysis: {e}")
        import traceback
        traceback.print_exc()


if __name__ == "__main__":
    main()
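`aggregate_to_minutes` is project-internal, so its exact behavior is not shown here; assuming it performs standard OHLCV bar aggregation, a generic pandas `resample` equivalent would look roughly like this (synthetic data):

```python
import pandas as pd

# One hour of synthetic 1-minute bars
idx = pd.date_range("2025-01-01", periods=60, freq="1min")
ohlcv = pd.DataFrame(
    {"open": 1.0, "high": 2.0, "low": 0.5, "close": 1.5, "volume": 1.0},
    index=idx,
)

# Aggregate to 15-minute bars: first open, max high, min low,
# last close, summed volume
bars = ohlcv.resample("15min").agg(
    {"open": "first", "high": "max", "low": "min",
     "close": "last", "volume": "sum"}
)
print(len(bars))  # 4
```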
321  test/align_strategy_timing.py  Normal file
@@ -0,0 +1,321 @@
#!/usr/bin/env python3
"""
Align Strategy Timing for Fair Comparison
=========================================

This script aligns the timing between original and incremental strategies
by removing early trades from the original strategy that occur before
the incremental strategy starts trading (warmup period).
"""

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from datetime import datetime
import json


def load_trade_files():
    """Load both strategy trade files."""

    print("📊 LOADING TRADE FILES")
    print("=" * 60)

    # Load original strategy trades
    original_file = "../results/trades_15min(15min)_ST3pct.csv"
    incremental_file = "../results/trades_incremental_15min(15min)_ST3pct.csv"

    print(f"Loading original trades: {original_file}")
    original_df = pd.read_csv(original_file)
    original_df['entry_time'] = pd.to_datetime(original_df['entry_time'])
    original_df['exit_time'] = pd.to_datetime(original_df['exit_time'])

    print(f"Loading incremental trades: {incremental_file}")
    incremental_df = pd.read_csv(incremental_file)
    incremental_df['entry_time'] = pd.to_datetime(incremental_df['entry_time'])
    incremental_df['exit_time'] = pd.to_datetime(incremental_df['exit_time'])

    print(f"Original trades: {len(original_df)} total")
    print(f"Incremental trades: {len(incremental_df)} total")

    return original_df, incremental_df


def find_alignment_point(original_df, incremental_df):
    """Find the point where both strategies should start for fair comparison."""

    print(f"\n🕐 FINDING ALIGNMENT POINT")
    print("=" * 60)

    # Find when incremental strategy starts trading
    incremental_start = incremental_df[incremental_df['type'] == 'BUY']['entry_time'].min()
    print(f"Incremental strategy first trade: {incremental_start}")

    # Find original strategy trades before this point
    original_buys = original_df[original_df['type'] == 'BUY']
    early_trades = original_buys[original_buys['entry_time'] < incremental_start]

    print(f"Original trades before incremental start: {len(early_trades)}")

    if len(early_trades) > 0:
        print(f"First original trade: {original_buys['entry_time'].min()}")
        print(f"Last early trade: {early_trades['entry_time'].max()}")
        print(f"Time gap: {incremental_start - original_buys['entry_time'].min()}")

        # Show the early trades that will be excluded
        print(f"\n📋 EARLY TRADES TO EXCLUDE:")
        for _, trade in early_trades.iterrows():
            print(f"  {trade['entry_time']} - ${trade['entry_price']:.0f}")

    return incremental_start


def align_strategies(original_df, incremental_df, alignment_time):
    """Align both strategies to start at the same time."""

    print(f"\n⚖️ ALIGNING STRATEGIES")
    print("=" * 60)

    # Filter original strategy to start from alignment time
    aligned_original = original_df[original_df['entry_time'] >= alignment_time].copy()

    # Incremental strategy remains the same (already starts at alignment time)
    aligned_incremental = incremental_df.copy()

    print(f"Original trades after alignment: {len(aligned_original)}")
    print(f"Incremental trades: {len(aligned_incremental)}")

    # Reset indices for clean comparison
    aligned_original = aligned_original.reset_index(drop=True)
    aligned_incremental = aligned_incremental.reset_index(drop=True)

    return aligned_original, aligned_incremental
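The alignment filter in `align_strategies` is a plain timestamp cut; on hypothetical data (the cutoff stands in for the incremental strategy's first trade):

```python
import pandas as pd

trades = pd.DataFrame({
    "entry_time": pd.to_datetime(["2025-01-01", "2025-01-05", "2025-01-10"]),
    "type": ["BUY", "BUY", "BUY"],
})
cutoff = pd.Timestamp("2025-01-04")

# Drop everything before the cutoff, i.e. the warmup-period trades
aligned = trades[trades["entry_time"] >= cutoff].reset_index(drop=True)
print(len(aligned))  # 2
```

Note this only trims entries; a trade entered before the cutoff but exited after it is dropped entirely, which is the intended "fair start" behavior here.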
||||
|
||||
def calculate_aligned_performance(aligned_original, aligned_incremental):
    """Calculate performance metrics for aligned strategies."""

    print(f"\n💰 CALCULATING ALIGNED PERFORMANCE")
    print("=" * 60)

    def calculate_strategy_performance(df, strategy_name):
        """Calculate performance for a single strategy."""

        # Filter to complete trades (buy + sell pairs)
        buy_signals = df[df['type'] == 'BUY'].copy()
        sell_signals = df[df['type'].str.contains('EXIT|EOD', na=False)].copy()

        print(f"\n{strategy_name}:")
        print(f" Buy signals: {len(buy_signals)}")
        print(f" Sell signals: {len(sell_signals)}")

        if len(buy_signals) == 0:
            return {
                'final_value': 10000,
                'total_return': 0.0,
                'trade_count': 0,
                'win_rate': 0.0,
                'avg_trade': 0.0
            }

        # Calculate performance using the same logic as the comparison script
        initial_usd = 10000
        current_usd = initial_usd

        for _, buy_trade in buy_signals.iterrows():
            # Find the corresponding sell trade (matched via shared entry_time)
            sell_trades = sell_signals[sell_signals['entry_time'] == buy_trade['entry_time']]
            if len(sell_trades) == 0:
                continue

            sell_trade = sell_trades.iloc[0]

            # profit_pct already encodes the full round trip; the entry/exit
            # prices are kept only for reference
            entry_price = buy_trade['entry_price']
            exit_price = sell_trade['exit_price']
            profit_pct = sell_trade['profit_pct']

            # Compound the profit/loss multiplicatively
            current_usd *= (1 + profit_pct)

        total_return = ((current_usd - initial_usd) / initial_usd) * 100

        # Trade statistics (note: computed over ALL exits, including any
        # unmatched ones skipped by the compounding loop above)
        profits = sell_signals['profit_pct'].values
        winning_trades = len(profits[profits > 0])
        win_rate = (winning_trades / len(profits)) * 100 if len(profits) > 0 else 0
        avg_trade = np.mean(profits) * 100 if len(profits) > 0 else 0

        print(f" Final value: ${current_usd:,.0f}")
        print(f" Total return: {total_return:.1f}%")
        print(f" Win rate: {win_rate:.1f}%")
        print(f" Average trade: {avg_trade:.2f}%")

        return {
            'final_value': current_usd,
            'total_return': total_return,
            'trade_count': len(profits),
            'win_rate': win_rate,
            'avg_trade': avg_trade,
            'profits': profits.tolist()
        }

    # Calculate performance for both strategies
    original_perf = calculate_strategy_performance(aligned_original, "Aligned Original")
    incremental_perf = calculate_strategy_performance(aligned_incremental, "Incremental")

    # Compare performance
    print(f"\n📊 PERFORMANCE COMPARISON:")
    print("=" * 60)
    print(f"Original (aligned): ${original_perf['final_value']:,.0f} ({original_perf['total_return']:+.1f}%)")
    print(f"Incremental: ${incremental_perf['final_value']:,.0f} ({incremental_perf['total_return']:+.1f}%)")

    difference = incremental_perf['total_return'] - original_perf['total_return']
    print(f"Difference: {difference:+.1f}%")

    if abs(difference) < 5:
        print("✅ Performance is now closely aligned!")
    elif difference > 0:
        print("📈 Incremental strategy outperforms after alignment")
    else:
        print("📉 Original strategy still outperforms")

    return original_perf, incremental_perf

def save_aligned_results(aligned_original, aligned_incremental, original_perf, incremental_perf):
    """Save aligned results for further analysis."""

    print(f"\n💾 SAVING ALIGNED RESULTS")
    print("=" * 60)

    # Save aligned trade files
    aligned_original.to_csv("../results/trades_original_aligned.csv", index=False)
    aligned_incremental.to_csv("../results/trades_incremental_aligned.csv", index=False)

    print("Saved aligned trade files:")
    print(" - ../results/trades_original_aligned.csv")
    print(" - ../results/trades_incremental_aligned.csv")

    # Save performance comparison
    comparison_results = {
        'alignment_analysis': {
            'original_performance': original_perf,
            'incremental_performance': incremental_perf,
            'performance_difference': incremental_perf['total_return'] - original_perf['total_return'],
            'trade_count_difference': incremental_perf['trade_count'] - original_perf['trade_count'],
            'win_rate_difference': incremental_perf['win_rate'] - original_perf['win_rate']
        },
        'timestamp': datetime.now().isoformat()
    }

    with open("../results/aligned_performance_comparison.json", "w") as f:
        json.dump(comparison_results, f, indent=2)

    print(" - ../results/aligned_performance_comparison.json")

def create_aligned_visualization(aligned_original, aligned_incremental):
    """Create visualization of aligned strategies."""

    print(f"\n📊 CREATING ALIGNED VISUALIZATION")
    print("=" * 60)

    fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(15, 10))

    # Get buy signals for plotting
    orig_buys = aligned_original[aligned_original['type'] == 'BUY']
    inc_buys = aligned_incremental[aligned_incremental['type'] == 'BUY']

    # Plot 1: Trade timing comparison
    ax1.scatter(orig_buys['entry_time'], orig_buys['entry_price'],
                alpha=0.7, label='Original (Aligned)', color='blue', s=40)
    ax1.scatter(inc_buys['entry_time'], inc_buys['entry_price'],
                alpha=0.7, label='Incremental', color='red', s=40)
    ax1.set_title('Aligned Strategy Trade Timing Comparison')
    ax1.set_xlabel('Date')
    ax1.set_ylabel('Entry Price ($)')
    ax1.legend()
    ax1.grid(True, alpha=0.3)

    # Plot 2: Cumulative performance
    def calculate_cumulative_returns(df):
        """Calculate cumulative returns over time."""
        buy_signals = df[df['type'] == 'BUY'].copy()
        sell_signals = df[df['type'].str.contains('EXIT|EOD', na=False)].copy()

        cumulative_returns = []
        current_value = 10000
        dates = []

        for _, buy_trade in buy_signals.iterrows():
            sell_trades = sell_signals[sell_signals['entry_time'] == buy_trade['entry_time']]
            if len(sell_trades) == 0:
                continue

            sell_trade = sell_trades.iloc[0]
            current_value *= (1 + sell_trade['profit_pct'])

            cumulative_returns.append(current_value)
            dates.append(sell_trade['exit_time'])

        return dates, cumulative_returns

    orig_dates, orig_returns = calculate_cumulative_returns(aligned_original)
    inc_dates, inc_returns = calculate_cumulative_returns(aligned_incremental)

    if orig_dates:
        ax2.plot(orig_dates, orig_returns, label='Original (Aligned)', color='blue', linewidth=2)
    if inc_dates:
        ax2.plot(inc_dates, inc_returns, label='Incremental', color='red', linewidth=2)

    ax2.set_title('Aligned Strategy Cumulative Performance')
    ax2.set_xlabel('Date')
    ax2.set_ylabel('Portfolio Value ($)')
    ax2.legend()
    ax2.grid(True, alpha=0.3)

    plt.tight_layout()
    plt.savefig('../results/aligned_strategy_comparison.png', dpi=300, bbox_inches='tight')
    print("Visualization saved: ../results/aligned_strategy_comparison.png")

def main():
    """Main alignment function."""

    print("🚀 ALIGNING STRATEGY TIMING FOR FAIR COMPARISON")
    print("=" * 80)

    try:
        # Load trade files
        original_df, incremental_df = load_trade_files()

        # Find alignment point
        alignment_time = find_alignment_point(original_df, incremental_df)

        # Align strategies
        aligned_original, aligned_incremental = align_strategies(
            original_df, incremental_df, alignment_time
        )

        # Calculate aligned performance
        original_perf, incremental_perf = calculate_aligned_performance(
            aligned_original, aligned_incremental
        )

        # Save results
        save_aligned_results(aligned_original, aligned_incremental,
                             original_perf, incremental_perf)

        # Create visualization
        create_aligned_visualization(aligned_original, aligned_incremental)

        print(f"\n✅ ALIGNMENT COMPLETED SUCCESSFULLY!")
        print("=" * 80)
        print("The strategies are now aligned for fair comparison.")
        print("Check the results/ directory for aligned trade files and analysis.")

        return True

    except Exception as e:
        print(f"\n❌ Error during alignment: {e}")
        import traceback
        traceback.print_exc()
        return False


if __name__ == "__main__":
    success = main()
    exit(0 if success else 1)
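The per-trade compounding used in `calculate_strategy_performance` above (`current_usd *= (1 + profit_pct)`) can be checked in isolation. A minimal sketch; the helper name and the three sample returns are made up for illustration:

```python
def compound_returns(profit_pcts, initial_usd=10000):
    """Compound fractional per-trade returns into a final portfolio value.

    profit_pcts are fractions (0.02 == +2%), matching the profit_pct column
    that the script above multiplies into current_usd.
    """
    current = initial_usd
    for p in profit_pcts:
        current *= (1 + p)  # each trade scales the running balance
    total_return_pct = (current - initial_usd) / initial_usd * 100
    return current, total_return_pct


# Three hypothetical trades: +2%, -1%, +3%
final, ret = compound_returns([0.02, -0.01, 0.03])
print(f"${final:,.2f} ({ret:+.2f}%)")
```

Note the result ($10,400.94, +4.01%) is slightly above the naive +4% sum of the three returns, because gains compound multiplicatively rather than add.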
289
test/analyze_aligned_trades.py
Normal file
@ -0,0 +1,289 @@
#!/usr/bin/env python3
"""
Analyze Aligned Trades in Detail
================================

This script performs a detailed analysis of the aligned trades to understand
why there's still a large performance difference between the strategies.
"""

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from datetime import datetime

def load_aligned_trades():
    """Load the aligned trade files."""

    print("📊 LOADING ALIGNED TRADES")
    print("=" * 60)

    original_file = "../results/trades_original_aligned.csv"
    incremental_file = "../results/trades_incremental_aligned.csv"

    original_df = pd.read_csv(original_file)
    original_df['entry_time'] = pd.to_datetime(original_df['entry_time'])
    original_df['exit_time'] = pd.to_datetime(original_df['exit_time'])

    incremental_df = pd.read_csv(incremental_file)
    incremental_df['entry_time'] = pd.to_datetime(incremental_df['entry_time'])
    incremental_df['exit_time'] = pd.to_datetime(incremental_df['exit_time'])

    print(f"Aligned original trades: {len(original_df)}")
    print(f"Incremental trades: {len(incremental_df)}")

    return original_df, incremental_df

def analyze_trade_timing_differences(original_df, incremental_df):
    """Analyze timing differences between aligned trades."""

    print(f"\n🕐 ANALYZING TRADE TIMING DIFFERENCES")
    print("=" * 60)

    # Get buy signals
    orig_buys = original_df[original_df['type'] == 'BUY'].copy()
    inc_buys = incremental_df[incremental_df['type'] == 'BUY'].copy()

    print(f"Original buy signals: {len(orig_buys)}")
    print(f"Incremental buy signals: {len(inc_buys)}")

    # Compare first 10 trades
    print(f"\n📋 FIRST 10 ALIGNED TRADES:")
    print("-" * 80)
    print("Original Strategy:")
    for i, (idx, trade) in enumerate(orig_buys.head(10).iterrows()):
        print(f" {i+1:2d}. {trade['entry_time']} - ${trade['entry_price']:8.0f}")

    print("\nIncremental Strategy:")
    for i, (idx, trade) in enumerate(inc_buys.head(10).iterrows()):
        print(f" {i+1:2d}. {trade['entry_time']} - ${trade['entry_price']:8.0f}")

    # Find timing differences
    print(f"\n⏰ TIMING ANALYSIS:")
    print("-" * 60)

    # Group by date to find same-day trades
    orig_buys['date'] = orig_buys['entry_time'].dt.date
    inc_buys['date'] = inc_buys['entry_time'].dt.date

    common_dates = set(orig_buys['date']) & set(inc_buys['date'])
    print(f"Common trading dates: {len(common_dates)}")

    timing_diffs = []
    price_diffs = []

    # Compare the first trade of each day on the first 10 common dates
    for date in sorted(list(common_dates))[:10]:
        orig_day_trades = orig_buys[orig_buys['date'] == date]
        inc_day_trades = inc_buys[inc_buys['date'] == date]

        if len(orig_day_trades) > 0 and len(inc_day_trades) > 0:
            orig_time = orig_day_trades.iloc[0]['entry_time']
            inc_time = inc_day_trades.iloc[0]['entry_time']
            orig_price = orig_day_trades.iloc[0]['entry_price']
            inc_price = inc_day_trades.iloc[0]['entry_price']

            time_diff = (inc_time - orig_time).total_seconds() / 60  # minutes
            price_diff = ((inc_price - orig_price) / orig_price) * 100

            timing_diffs.append(time_diff)
            price_diffs.append(price_diff)

            print(f" {date}: Original {orig_time.strftime('%H:%M')} (${orig_price:.0f}), "
                  f"Incremental {inc_time.strftime('%H:%M')} (${inc_price:.0f}), "
                  f"Diff: {time_diff:+.0f}min, {price_diff:+.2f}%")

    if timing_diffs:
        avg_time_diff = np.mean(timing_diffs)
        avg_price_diff = np.mean(price_diffs)
        print(f"\nAverage timing difference: {avg_time_diff:+.1f} minutes")
        print(f"Average price difference: {avg_price_diff:+.2f}%")

def analyze_profit_distributions(original_df, incremental_df):
    """Analyze profit distributions between strategies."""

    print(f"\n💰 ANALYZING PROFIT DISTRIBUTIONS")
    print("=" * 60)

    # Get sell signals (exits)
    orig_exits = original_df[original_df['type'].str.contains('EXIT|EOD', na=False)].copy()
    inc_exits = incremental_df[incremental_df['type'].str.contains('EXIT|EOD', na=False)].copy()

    orig_profits = orig_exits['profit_pct'].values * 100
    inc_profits = inc_exits['profit_pct'].values * 100

    print(f"Original strategy trades: {len(orig_profits)}")
    print(f" Winning trades: {len(orig_profits[orig_profits > 0])} ({len(orig_profits[orig_profits > 0])/len(orig_profits)*100:.1f}%)")
    print(f" Average profit: {np.mean(orig_profits):.2f}%")
    print(f" Best trade: {np.max(orig_profits):.2f}%")
    print(f" Worst trade: {np.min(orig_profits):.2f}%")
    print(f" Std deviation: {np.std(orig_profits):.2f}%")

    print(f"\nIncremental strategy trades: {len(inc_profits)}")
    print(f" Winning trades: {len(inc_profits[inc_profits > 0])} ({len(inc_profits[inc_profits > 0])/len(inc_profits)*100:.1f}%)")
    print(f" Average profit: {np.mean(inc_profits):.2f}%")
    print(f" Best trade: {np.max(inc_profits):.2f}%")
    print(f" Worst trade: {np.min(inc_profits):.2f}%")
    print(f" Std deviation: {np.std(inc_profits):.2f}%")

    # Analyze profit ranges
    print(f"\n📊 PROFIT RANGE ANALYSIS:")
    print("-" * 60)

    # Bucket boundaries in percent; each bucket is [low, high)
    ranges = [(-100, -5), (-5, -1), (-1, 0), (0, 1), (1, 5), (5, 100)]
    range_names = ["< -5%", "-5% to -1%", "-1% to 0%", "0% to 1%", "1% to 5%", "> 5%"]

    for i, (low, high) in enumerate(ranges):
        orig_count = len(orig_profits[(orig_profits >= low) & (orig_profits < high)])
        inc_count = len(inc_profits[(inc_profits >= low) & (inc_profits < high)])

        orig_pct = (orig_count / len(orig_profits)) * 100 if len(orig_profits) > 0 else 0
        inc_pct = (inc_count / len(inc_profits)) * 100 if len(inc_profits) > 0 else 0

        print(f" {range_names[i]:>10}: Original {orig_count:3d} ({orig_pct:4.1f}%), "
              f"Incremental {inc_count:3d} ({inc_pct:4.1f}%)")

    return orig_profits, inc_profits

def analyze_trade_duration(original_df, incremental_df):
    """Analyze trade duration differences."""

    print(f"\n⏱️ ANALYZING TRADE DURATION")
    print("=" * 60)

    # Get complete trades (buy + sell pairs)
    orig_buys = original_df[original_df['type'] == 'BUY'].copy()
    orig_exits = original_df[original_df['type'].str.contains('EXIT|EOD', na=False)].copy()

    inc_buys = incremental_df[incremental_df['type'] == 'BUY'].copy()
    inc_exits = incremental_df[incremental_df['type'].str.contains('EXIT|EOD', na=False)].copy()

    # Calculate durations
    orig_durations = []
    inc_durations = []

    for _, buy in orig_buys.iterrows():
        exits = orig_exits[orig_exits['entry_time'] == buy['entry_time']]
        if len(exits) > 0:
            duration = (exits.iloc[0]['exit_time'] - buy['entry_time']).total_seconds() / 3600  # hours
            orig_durations.append(duration)

    for _, buy in inc_buys.iterrows():
        exits = inc_exits[inc_exits['entry_time'] == buy['entry_time']]
        if len(exits) > 0:
            duration = (exits.iloc[0]['exit_time'] - buy['entry_time']).total_seconds() / 3600  # hours
            inc_durations.append(duration)

    print(f"Original strategy:")
    print(f" Average duration: {np.mean(orig_durations):.1f} hours")
    print(f" Median duration: {np.median(orig_durations):.1f} hours")
    print(f" Min duration: {np.min(orig_durations):.1f} hours")
    print(f" Max duration: {np.max(orig_durations):.1f} hours")

    print(f"\nIncremental strategy:")
    print(f" Average duration: {np.mean(inc_durations):.1f} hours")
    print(f" Median duration: {np.median(inc_durations):.1f} hours")
    print(f" Min duration: {np.min(inc_durations):.1f} hours")
    print(f" Max duration: {np.max(inc_durations):.1f} hours")

    return orig_durations, inc_durations

def create_detailed_comparison_plots(original_df, incremental_df, orig_profits, inc_profits):
    """Create detailed comparison plots."""

    print(f"\n📊 CREATING DETAILED COMPARISON PLOTS")
    print("=" * 60)

    fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(16, 12))

    # Plot 1: Profit distribution comparison
    ax1.hist(orig_profits, bins=30, alpha=0.7, label='Original', color='blue', density=True)
    ax1.hist(inc_profits, bins=30, alpha=0.7, label='Incremental', color='red', density=True)
    ax1.set_title('Profit Distribution Comparison')
    ax1.set_xlabel('Profit (%)')
    ax1.set_ylabel('Density')
    ax1.legend()
    ax1.grid(True, alpha=0.3)

    # Plot 2: Cumulative profit over time
    orig_exits = original_df[original_df['type'].str.contains('EXIT|EOD', na=False)].copy()
    inc_exits = incremental_df[incremental_df['type'].str.contains('EXIT|EOD', na=False)].copy()

    orig_cumulative = np.cumsum(orig_exits['profit_pct'].values) * 100
    inc_cumulative = np.cumsum(inc_exits['profit_pct'].values) * 100

    ax2.plot(range(len(orig_cumulative)), orig_cumulative, label='Original', color='blue', linewidth=2)
    ax2.plot(range(len(inc_cumulative)), inc_cumulative, label='Incremental', color='red', linewidth=2)
    ax2.set_title('Cumulative Profit Over Trades')
    ax2.set_xlabel('Trade Number')
    ax2.set_ylabel('Cumulative Profit (%)')
    ax2.legend()
    ax2.grid(True, alpha=0.3)

    # Plot 3: Trade timing scatter
    orig_buys = original_df[original_df['type'] == 'BUY']
    inc_buys = incremental_df[incremental_df['type'] == 'BUY']

    ax3.scatter(orig_buys['entry_time'], orig_buys['entry_price'],
                alpha=0.6, label='Original', color='blue', s=20)
    ax3.scatter(inc_buys['entry_time'], inc_buys['entry_price'],
                alpha=0.6, label='Incremental', color='red', s=20)
    ax3.set_title('Trade Entry Timing')
    ax3.set_xlabel('Date')
    ax3.set_ylabel('Entry Price ($)')
    ax3.legend()
    ax3.grid(True, alpha=0.3)

    # Plot 4: Profit vs trade number
    ax4.scatter(range(len(orig_profits)), orig_profits, alpha=0.6, label='Original', color='blue', s=20)
    ax4.scatter(range(len(inc_profits)), inc_profits, alpha=0.6, label='Incremental', color='red', s=20)
    ax4.set_title('Individual Trade Profits')
    ax4.set_xlabel('Trade Number')
    ax4.set_ylabel('Profit (%)')
    ax4.legend()
    ax4.grid(True, alpha=0.3)
    ax4.axhline(y=0, color='black', linestyle='--', alpha=0.5)

    plt.tight_layout()
    plt.savefig('../results/detailed_aligned_analysis.png', dpi=300, bbox_inches='tight')
    print("Detailed analysis plot saved: ../results/detailed_aligned_analysis.png")

def main():
    """Main analysis function."""

    print("🔍 DETAILED ANALYSIS OF ALIGNED TRADES")
    print("=" * 80)

    try:
        # Load aligned trades
        original_df, incremental_df = load_aligned_trades()

        # Analyze timing differences
        analyze_trade_timing_differences(original_df, incremental_df)

        # Analyze profit distributions
        orig_profits, inc_profits = analyze_profit_distributions(original_df, incremental_df)

        # Analyze trade duration
        analyze_trade_duration(original_df, incremental_df)

        # Create detailed plots
        create_detailed_comparison_plots(original_df, incremental_df, orig_profits, inc_profits)

        print(f"\n🎯 KEY FINDINGS:")
        print("=" * 80)
        print("1. Check if strategies are trading at different times within the same day")
        print("2. Compare profit distributions to see if one strategy has better trades")
        print("3. Analyze trade duration differences")
        print("4. Look for systematic differences in entry/exit timing")

        return True

    except Exception as e:
        print(f"\n❌ Error during analysis: {e}")
        import traceback
        traceback.print_exc()
        return False


if __name__ == "__main__":
    success = main()
    exit(0 if success else 1)
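The buy→exit pairing that the scripts above implement with `iterrows()` plus per-row boolean filtering can be expressed as a single pandas merge on `entry_time`, replacing the O(n²) scan with one join. A small sketch with made-up timestamps, prices, and profits:

```python
import pandas as pd

buys = pd.DataFrame({
    'entry_time': pd.to_datetime(['2025-01-01 10:00', '2025-01-01 14:00']),
    'entry_price': [95000.0, 96000.0],
})
exits = pd.DataFrame({
    'entry_time': pd.to_datetime(['2025-01-01 10:00', '2025-01-01 14:00']),
    'exit_time': pd.to_datetime(['2025-01-01 12:00', '2025-01-01 18:00']),
    'profit_pct': [0.01, -0.005],
})

# One inner merge pairs every BUY with its exit via the shared entry_time;
# unmatched buys simply drop out (the same effect as the `continue` above).
paired = buys.merge(exits, on='entry_time', how='inner')
paired['duration_h'] = (paired['exit_time'] - paired['entry_time']).dt.total_seconds() / 3600
print(paired[['entry_time', 'duration_h', 'profit_pct']])
```

With these sample rows the two trades pair up with durations of 2 and 4 hours; on the real trade files the merge produces the same pairs as the loops, in one vectorized step.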
313
test/analyze_exit_signal_differences.py
Normal file
@ -0,0 +1,313 @@
#!/usr/bin/env python3
"""
Analyze Exit Signal Differences Between Strategies
=================================================

This script examines the exact differences in exit signal logic between
the original and incremental strategies to understand why the original
generates so many more exit signals.
"""

import sys
import os
import pandas as pd
import numpy as np
from datetime import datetime
import matplotlib.pyplot as plt

# Add the parent directory to the path to import cycles modules
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from cycles.utils.storage import Storage
from cycles.IncStrategies.metatrend_strategy import IncMetaTrendStrategy
from cycles.strategies.default_strategy import DefaultStrategy


def analyze_exit_conditions():
    """Analyze the exit conditions in both strategies."""
    print("🔍 ANALYZING EXIT SIGNAL LOGIC")
    print("=" * 80)

    print("\n📋 ORIGINAL STRATEGY (DefaultStrategy) EXIT CONDITIONS:")
    print("-" * 60)
    print("1. Meta-trend exit: prev_trend != 1 AND curr_trend == -1")
    print(" - Only exits when trend changes TO -1 (downward)")
    print(" - Does NOT exit when trend goes from 1 to 0 (neutral)")
    print("2. Stop loss: Currently DISABLED in signal generation")
    print(" - Code comment: 'skip stop loss checking in signal generation'")

    print("\n📋 INCREMENTAL STRATEGY (IncMetaTrendStrategy) EXIT CONDITIONS:")
    print("-" * 60)
    print("1. Meta-trend exit: prev_trend != -1 AND curr_trend == -1")
    print(" - Only exits when trend changes TO -1 (downward)")
    print(" - Does NOT exit when trend goes from 1 to 0 (neutral)")
    print("2. Stop loss: Not implemented in this strategy")

    print("\n🤔 THEORETICAL ANALYSIS:")
    print("-" * 60)
    print("The conditions look nearly identical but differ in the previous-trend check:")
    print("'prev_trend != 1' keeps firing on every bar while the trend stays -1,")
    print("while 'prev_trend != -1' fires only on the transition into -1.")
    print("The rest of the difference is in HOW/WHEN the strategies check for exits...")

    return True


def compare_signal_generation_frequency():
    """Compare how frequently each strategy checks for signals."""
    print("\n🔍 ANALYZING SIGNAL GENERATION FREQUENCY")
    print("=" * 80)

    print("\n📋 ORIGINAL STRATEGY SIGNAL CHECKING:")
    print("-" * 60)
    print("• Checks signals at EVERY 15-minute bar")
    print("• Processes ALL historical data points during initialization")
    print("• get_exit_signal() called for EVERY timeframe bar")
    print("• No state tracking - evaluates conditions fresh each time")

    print("\n📋 INCREMENTAL STRATEGY SIGNAL CHECKING:")
    print("-" * 60)
    print("• Checks signals only when NEW 15-minute bar completes")
    print("• Processes data incrementally as it arrives")
    print("• get_exit_signal() called only on timeframe bar completion")
    print("• State tracking - remembers previous signals to avoid duplicates")

    print("\n🎯 KEY DIFFERENCE IDENTIFIED:")
    print("-" * 60)
    print("ORIGINAL: Evaluates exit condition at EVERY historical bar")
    print("INCREMENTAL: Evaluates exit condition only on STATE CHANGES")

    return True


def test_signal_generation_with_sample_data():
    """Test both strategies with sample data to see the difference."""
    print("\n🧪 TESTING WITH SAMPLE DATA")
    print("=" * 80)

    # Load a small sample of data
    storage = Storage()
    data_file = os.path.join(os.path.dirname(os.path.dirname(os.path.abspath(__file__))), "data", "btcusd_1-min_data.csv")

    # Load just 3 days of data for detailed analysis
    start_date = "2025-01-01"
    end_date = "2025-01-04"

    print(f"Loading data from {start_date} to {end_date}...")
    data_1min = storage.load_data(data_file, start_date, end_date)
    print(f"Loaded {len(data_1min)} minute-level data points")

    # Test original strategy
    print("\n🔄 Testing Original Strategy...")
    original_signals = test_original_strategy_detailed(data_1min)

    # Test incremental strategy
    print("\n🔄 Testing Incremental Strategy...")
    incremental_signals = test_incremental_strategy_detailed(data_1min)

    # Compare results
    print("\n📊 DETAILED COMPARISON:")
    print("-" * 60)

    orig_exits = [s for s in original_signals if s['type'] == 'EXIT']
    inc_exits = [s for s in incremental_signals if s['type'] == 'SELL']

    print(f"Original exit signals: {len(orig_exits)}")
    print(f"Incremental exit signals: {len(inc_exits)}")
    print(f"Difference: {len(orig_exits) - len(inc_exits)} more exits in original")

    # Show first few exit signals from each
    print(f"\n📋 FIRST 5 ORIGINAL EXIT SIGNALS:")
    for i, signal in enumerate(orig_exits[:5]):
        print(f" {i+1}. {signal['timestamp']} - Price: ${signal['price']:.0f}")

    print(f"\n📋 FIRST 5 INCREMENTAL EXIT SIGNALS:")
    for i, signal in enumerate(inc_exits[:5]):
        print(f" {i+1}. {signal['timestamp']} - Price: ${signal['price']:.0f}")

    return original_signals, incremental_signals


def test_original_strategy_detailed(data_1min: pd.DataFrame):
    """Test original strategy with detailed logging."""

    # Create mock backtester
    class MockBacktester:
        def __init__(self, data):
            self.original_df = data
            self.strategies = {}
            self.current_position = None
            self.entry_price = None

    # Initialize strategy
    strategy = DefaultStrategy(
        weight=1.0,
        params={
            "timeframe": "15min",
            "stop_loss_pct": 0.03
        }
    )

    mock_backtester = MockBacktester(data_1min)
    strategy.initialize(mock_backtester)

    if not strategy.initialized:
        print(" ❌ Strategy initialization failed")
        return []

    # Get primary timeframe data
    primary_data = strategy.get_primary_timeframe_data()
    signals = []

    print(f" Processing {len(primary_data)} timeframe bars...")

    # Track meta-trend changes for analysis
    meta_trend_changes = []

    # Defaults in case meta_trend is unavailable at a given index
    curr_trend = prev_trend = 'unknown'

    for i in range(len(primary_data)):
        timestamp = primary_data.index[i]

        # Get current meta-trend value
        if hasattr(strategy, 'meta_trend') and i < len(strategy.meta_trend):
            curr_trend = strategy.meta_trend[i]
            prev_trend = strategy.meta_trend[i-1] if i > 0 else 0

            if curr_trend != prev_trend:
                meta_trend_changes.append({
                    'timestamp': timestamp,
                    'prev_trend': prev_trend,
                    'curr_trend': curr_trend,
                    'index': i
                })

        # Check for exit signal
        exit_signal = strategy.get_exit_signal(mock_backtester, i)
        if exit_signal and exit_signal.signal_type == "EXIT":
            signals.append({
                'timestamp': timestamp,
                'type': 'EXIT',
                'price': primary_data.iloc[i]['close'],
                'strategy': 'Original',
                'confidence': exit_signal.confidence,
                'metadata': exit_signal.metadata,
                'meta_trend': curr_trend,
                'prev_meta_trend': prev_trend
            })

    print(f" Found {len(meta_trend_changes)} meta-trend changes")
    print(f" Generated {len([s for s in signals if s['type'] == 'EXIT'])} exit signals")

    # Show meta-trend changes
    print(f"\n 📈 META-TREND CHANGES:")
    for change in meta_trend_changes[:10]:  # Show first 10
        print(f" {change['timestamp']}: {change['prev_trend']} → {change['curr_trend']}")

    return signals


def test_incremental_strategy_detailed(data_1min: pd.DataFrame):
    """Test incremental strategy with detailed logging."""

    # Initialize strategy
    strategy = IncMetaTrendStrategy(
        name="metatrend",
        weight=1.0,
        params={
            "timeframe": "15min",
            "enable_logging": False
        }
    )

    signals = []
    meta_trend_changes = []
    bars_completed = 0

    print(f" Processing {len(data_1min)} minute-level data points...")

    # Process each minute of data
    for timestamp, row in data_1min.iterrows():
        ohlcv_data = {
            'open': row['open'],
            'high': row['high'],
            'low': row['low'],
            'close': row['close'],
            'volume': row['volume']
        }

        # Update strategy
        result = strategy.update_minute_data(timestamp, ohlcv_data)

        # Check if a complete timeframe bar was formed
        if result is not None:
            bars_completed += 1

            # Track meta-trend changes
            if hasattr(strategy, 'current_meta_trend') and hasattr(strategy, 'previous_meta_trend'):
                if strategy.current_meta_trend != strategy.previous_meta_trend:
                    meta_trend_changes.append({
                        'timestamp': timestamp,
                        'prev_trend': strategy.previous_meta_trend,
                        'curr_trend': strategy.current_meta_trend,
                        'bar_number': bars_completed
                    })

            # Check for exit signal
            exit_signal = strategy.get_exit_signal()
            if exit_signal and exit_signal.signal_type.upper() == 'EXIT':
                signals.append({
                    'timestamp': timestamp,
                    'type': 'SELL',
                    'price': row['close'],
                    'strategy': 'Incremental',
                    'confidence': exit_signal.confidence,
                    'reason': exit_signal.metadata.get('type', 'EXIT') if exit_signal.metadata else 'EXIT',
                    'meta_trend': strategy.current_meta_trend,
                    'prev_meta_trend': strategy.previous_meta_trend
                })

    print(f" Completed {bars_completed} timeframe bars")
    print(f" Found {len(meta_trend_changes)} meta-trend changes")
    print(f" Generated {len([s for s in signals if s['type'] == 'SELL'])} exit signals")

    # Show meta-trend changes
    print(f"\n 📈 META-TREND CHANGES:")
    for change in meta_trend_changes[:10]:  # Show first 10
        print(f" {change['timestamp']}: {change['prev_trend']} → {change['curr_trend']}")

    return signals


def main():
    """Main analysis function."""
    print("🔍 ANALYZING WHY ORIGINAL STRATEGY HAS MORE EXIT SIGNALS")
    print("=" * 80)

    try:
        # Step 1: Analyze exit conditions
        analyze_exit_conditions()

        # Step 2: Compare signal generation frequency
        compare_signal_generation_frequency()

        # Step 3: Test with sample data
        original_signals, incremental_signals = test_signal_generation_with_sample_data()

        print("\n🎯 FINAL CONCLUSION:")
        print("=" * 80)
        print("The original strategy generates more exit signals because:")
        print("1. It evaluates exit conditions at EVERY historical timeframe bar")
        print("2. It doesn't track signal state - it treats each bar independently")
        print("3. When the meta-trend is -1, it generates an exit signal at EVERY bar")
        print("4. The incremental strategy only signals on STATE CHANGES")
        print("\nThis explains the 8x difference in exit signal count!")

        return True

    except Exception as e:
        print(f"\n❌ Error during analysis: {e}")
        import traceback
        traceback.print_exc()
        return False


if __name__ == "__main__":
    success = main()
    sys.exit(0 if success else 1)
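The conclusion above (every-bar, level-triggered exits versus state-change, edge-triggered exits) can be reproduced in isolation. This is an illustrative sketch only: the `meta_trend` series below is a made-up toy sequence, not output of either strategy.

```python
# Level-triggered vs. edge-triggered exit signaling on a toy meta-trend series.
# -1 marks a bearish bar; the real strategies derive this value from price data.
meta_trend = [1, 1, -1, -1, -1, -1, 1, -1, -1, 1]

# "Original" style: emit an exit at EVERY bar where the trend is bearish.
level_exits = [i for i, t in enumerate(meta_trend) if t == -1]

# "Incremental" style: emit an exit only when the trend CHANGES to bearish.
edge_exits = [i for i, t in enumerate(meta_trend)
              if t == -1 and (i == 0 or meta_trend[i - 1] != -1)]

print(len(level_exits), len(edge_exits))  # 6 level-triggered vs. 2 edge-triggered
```

The longer a bearish stretch lasts, the wider this gap grows, which is why sustained downtrends produce the large exit-count multiples seen in the comparison.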
430  test/compare_signals_only.py  (new file)
@@ -0,0 +1,430 @@
#!/usr/bin/env python3
"""
Compare Strategy Signals Only (No Backtesting)
==============================================

This script extracts entry and exit signals from both the original and incremental
strategies on the same data and plots them for visual comparison.
"""

import sys
import os
import pandas as pd
import numpy as np
from datetime import datetime
import matplotlib.pyplot as plt
import matplotlib.dates as mdates

# Add the parent directory to the path to import cycles modules
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from cycles.utils.storage import Storage
from cycles.IncStrategies.metatrend_strategy import IncMetaTrendStrategy
from cycles.utils.data_utils import aggregate_to_minutes
from cycles.strategies.default_strategy import DefaultStrategy

def extract_original_signals(data_1min: pd.DataFrame, timeframe: str = "15min"):
    """Extract signals from the original strategy."""
    print(f"\n🔄 Extracting Original Strategy Signals...")

    # Create a mock backtester object for the strategy
    class MockBacktester:
        def __init__(self, data):
            self.original_df = data
            self.strategies = {}
            self.current_position = None
            self.entry_price = None

    # Initialize the original strategy
    strategy = DefaultStrategy(
        weight=1.0,
        params={
            "timeframe": timeframe,
            "stop_loss_pct": 0.03,
        }
    )

    # Create mock backtester and initialize strategy
    mock_backtester = MockBacktester(data_1min)
    strategy.initialize(mock_backtester)

    if not strategy.initialized:
        print("  ❌ Strategy initialization failed")
        return []

    # Get the aggregated data for the primary timeframe
    primary_data = strategy.get_primary_timeframe_data()
    if primary_data is None or len(primary_data) == 0:
        print("  ❌ No primary timeframe data available")
        return []

    signals = []

    # Process each data point in the primary timeframe
    for i in range(len(primary_data)):
        timestamp = primary_data.index[i]
        row = primary_data.iloc[i]

        # Get entry signal
        entry_signal = strategy.get_entry_signal(mock_backtester, i)
        if entry_signal and entry_signal.signal_type == "ENTRY":
            signals.append({
                'timestamp': timestamp,
                'type': 'ENTRY',
                'price': entry_signal.price if entry_signal.price else row['close'],
                'strategy': 'Original',
                'confidence': entry_signal.confidence,
                'metadata': entry_signal.metadata,
            })

        # Get exit signal
        exit_signal = strategy.get_exit_signal(mock_backtester, i)
        if exit_signal and exit_signal.signal_type == "EXIT":
            signals.append({
                'timestamp': timestamp,
                'type': 'EXIT',
                'price': exit_signal.price if exit_signal.price else row['close'],
                'strategy': 'Original',
                'confidence': exit_signal.confidence,
                'metadata': exit_signal.metadata,
            })

    print(f"  Found {len([s for s in signals if s['type'] == 'ENTRY'])} entry signals")
    print(f"  Found {len([s for s in signals if s['type'] == 'EXIT'])} exit signals")

    return signals


def extract_incremental_signals(data_1min: pd.DataFrame, timeframe: str = "15min"):
    """Extract signals from the incremental strategy."""
    print(f"\n🔄 Extracting Incremental Strategy Signals...")

    # Initialize the incremental strategy
    strategy = IncMetaTrendStrategy(
        name="metatrend",
        weight=1.0,
        params={
            "timeframe": timeframe,
            "enable_logging": False,
        }
    )

    signals = []

    # Process each minute of data
    for i, (timestamp, row) in enumerate(data_1min.iterrows()):
        # Create the data structure for the incremental strategy
        ohlcv_data = {
            'open': row['open'],
            'high': row['high'],
            'low': row['low'],
            'close': row['close'],
            'volume': row['volume'],
        }

        # Update the strategy with new data (correct method signature)
        result = strategy.update_minute_data(timestamp, ohlcv_data)

        # Check if a complete timeframe bar was formed
        if result is not None:
            # Get entry signal
            entry_signal = strategy.get_entry_signal()
            if entry_signal and entry_signal.signal_type.upper() in ['BUY', 'ENTRY']:
                signals.append({
                    'timestamp': timestamp,
                    'type': 'BUY',
                    'price': entry_signal.price if entry_signal.price else row['close'],
                    'strategy': 'Incremental',
                    'confidence': entry_signal.confidence,
                    'reason': entry_signal.metadata.get('type', 'ENTRY') if entry_signal.metadata else 'ENTRY',
                })

            # Get exit signal
            exit_signal = strategy.get_exit_signal()
            if exit_signal and exit_signal.signal_type.upper() in ['SELL', 'EXIT']:
                signals.append({
                    'timestamp': timestamp,
                    'type': 'SELL',
                    'price': exit_signal.price if exit_signal.price else row['close'],
                    'strategy': 'Incremental',
                    'confidence': exit_signal.confidence,
                    'reason': exit_signal.metadata.get('type', 'EXIT') if exit_signal.metadata else 'EXIT',
                })

    print(f"  Found {len([s for s in signals if s['type'] == 'BUY'])} buy signals")
    print(f"  Found {len([s for s in signals if s['type'] == 'SELL'])} sell signals")

    return signals


def create_signals_comparison_plot(data_1min: pd.DataFrame, original_signals: list,
                                   incremental_signals: list, start_date: str, end_date: str,
                                   output_dir: str):
    """Create a comprehensive signals comparison plot."""
    print(f"\n📊 Creating signals comparison plot...")

    # Aggregate data for plotting (15min for cleaner visualization)
    aggregated_data = aggregate_to_minutes(data_1min, 15)

    # Create figure with subplots
    fig, (ax1, ax2, ax3) = plt.subplots(3, 1, figsize=(20, 16))

    # Plot 1: Price with all signals
    ax1.plot(aggregated_data.index, aggregated_data['close'], 'k-', alpha=0.7, linewidth=1.5, label='BTC Price (15min)')

    # Plot original strategy signals
    original_entries = [s for s in original_signals if s['type'] == 'ENTRY']
    original_exits = [s for s in original_signals if s['type'] == 'EXIT']

    if original_entries:
        entry_times = [s['timestamp'] for s in original_entries]
        entry_prices = [s['price'] * 1.03 for s in original_entries]  # Position above price
        ax1.scatter(entry_times, entry_prices, color='green', marker='^', s=100,
                    alpha=0.8, label=f'Original Entry ({len(original_entries)})', zorder=5)

    if original_exits:
        exit_times = [s['timestamp'] for s in original_exits]
        exit_prices = [s['price'] * 1.03 for s in original_exits]  # Position above price
        ax1.scatter(exit_times, exit_prices, color='red', marker='v', s=100,
                    alpha=0.8, label=f'Original Exit ({len(original_exits)})', zorder=5)

    # Plot incremental strategy signals
    incremental_entries = [s for s in incremental_signals if s['type'] == 'BUY']
    incremental_exits = [s for s in incremental_signals if s['type'] == 'SELL']

    if incremental_entries:
        entry_times = [s['timestamp'] for s in incremental_entries]
        entry_prices = [s['price'] * 0.97 for s in incremental_entries]  # Position below price
        ax1.scatter(entry_times, entry_prices, color='lightgreen', marker='^', s=80,
                    alpha=0.8, label=f'Incremental Entry ({len(incremental_entries)})', zorder=5)

    if incremental_exits:
        exit_times = [s['timestamp'] for s in incremental_exits]
        exit_prices = [s['price'] * 0.97 for s in incremental_exits]  # Position below price
        ax1.scatter(exit_times, exit_prices, color='orange', marker='v', s=80,
                    alpha=0.8, label=f'Incremental Exit ({len(incremental_exits)})', zorder=5)

    ax1.set_title(f'Strategy Signals Comparison: {start_date} to {end_date}', fontsize=16, fontweight='bold')
    ax1.set_ylabel('Price (USD)', fontsize=12)
    ax1.legend(loc='upper left', fontsize=10)
    ax1.grid(True, alpha=0.3)

    # Format x-axis
    ax1.xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m-%d'))
    ax1.xaxis.set_major_locator(mdates.WeekdayLocator(interval=2))
    plt.setp(ax1.xaxis.get_majorticklabels(), rotation=45)

    # Plot 2: Signal frequency over time (daily counts)
    daily_signals = {}

    for signal in original_signals:
        date = signal['timestamp'].date()
        if date not in daily_signals:
            daily_signals[date] = {'original_entry': 0, 'original_exit': 0, 'inc_entry': 0, 'inc_exit': 0}
        if signal['type'] == 'ENTRY':
            daily_signals[date]['original_entry'] += 1
        else:
            daily_signals[date]['original_exit'] += 1

    for signal in incremental_signals:
        date = signal['timestamp'].date()
        if date not in daily_signals:
            daily_signals[date] = {'original_entry': 0, 'original_exit': 0, 'inc_entry': 0, 'inc_exit': 0}
        if signal['type'] == 'BUY':
            daily_signals[date]['inc_entry'] += 1
        else:
            daily_signals[date]['inc_exit'] += 1

    if daily_signals:
        dates = sorted(daily_signals.keys())
        orig_entries = [daily_signals[d]['original_entry'] for d in dates]
        orig_exits = [daily_signals[d]['original_exit'] for d in dates]
        inc_entries = [daily_signals[d]['inc_entry'] for d in dates]
        inc_exits = [daily_signals[d]['inc_exit'] for d in dates]

        width = 0.35
        x = np.arange(len(dates))

        ax2.bar(x - width/2, orig_entries, width, label='Original Entries', color='green', alpha=0.7)
        ax2.bar(x - width/2, orig_exits, width, bottom=orig_entries, label='Original Exits', color='red', alpha=0.7)
        ax2.bar(x + width/2, inc_entries, width, label='Incremental Entries', color='lightgreen', alpha=0.7)
        ax2.bar(x + width/2, inc_exits, width, bottom=inc_entries, label='Incremental Exits', color='orange', alpha=0.7)

        ax2.set_title('Daily Signal Frequency', fontsize=14, fontweight='bold')
        ax2.set_ylabel('Number of Signals', fontsize=12)
        ax2.set_xticks(x[::7])  # Show every 7th date
        ax2.set_xticklabels([dates[i].strftime('%m-%d') for i in range(0, len(dates), 7)], rotation=45)
        ax2.legend(fontsize=10)
        ax2.grid(True, alpha=0.3, axis='y')

    # Plot 3: Signal statistics comparison
    strategies = ['Original', 'Incremental']
    entry_counts = [len(original_entries), len(incremental_entries)]
    exit_counts = [len(original_exits), len(incremental_exits)]

    x = np.arange(len(strategies))
    width = 0.35

    bars1 = ax3.bar(x - width/2, entry_counts, width, label='Entry Signals', color='green', alpha=0.7)
    bars2 = ax3.bar(x + width/2, exit_counts, width, label='Exit Signals', color='red', alpha=0.7)

    ax3.set_title('Total Signal Counts', fontsize=14, fontweight='bold')
    ax3.set_ylabel('Number of Signals', fontsize=12)
    ax3.set_xticks(x)
    ax3.set_xticklabels(strategies)
    ax3.legend(fontsize=10)
    ax3.grid(True, alpha=0.3, axis='y')

    # Add value labels on bars
    for bars in [bars1, bars2]:
        for bar in bars:
            height = bar.get_height()
            ax3.text(bar.get_x() + bar.get_width()/2., height + 0.5,
                     f'{int(height)}', ha='center', va='bottom', fontweight='bold')

    plt.tight_layout()

    # Save plot
    os.makedirs(output_dir, exist_ok=True)
    # plt.show()
    plot_file = os.path.join(output_dir, "signals_comparison.png")
    plt.savefig(plot_file, dpi=300, bbox_inches='tight')
    plt.close()
    print(f"Saved signals comparison plot to: {plot_file}")


def save_signals_data(original_signals: list, incremental_signals: list, output_dir: str):
    """Save signals data to CSV files."""
    os.makedirs(output_dir, exist_ok=True)

    # Save original signals
    if original_signals:
        orig_df = pd.DataFrame(original_signals)
        orig_file = os.path.join(output_dir, "original_signals.csv")
        orig_df.to_csv(orig_file, index=False)
        print(f"Saved original signals to: {orig_file}")

    # Save incremental signals
    if incremental_signals:
        inc_df = pd.DataFrame(incremental_signals)
        inc_file = os.path.join(output_dir, "incremental_signals.csv")
        inc_df.to_csv(inc_file, index=False)
        print(f"Saved incremental signals to: {inc_file}")

    # Create summary
    summary = {
        'test_date': datetime.now().isoformat(),
        'original_strategy': {
            'total_signals': len(original_signals),
            'entry_signals': len([s for s in original_signals if s['type'] == 'ENTRY']),
            'exit_signals': len([s for s in original_signals if s['type'] == 'EXIT']),
        },
        'incremental_strategy': {
            'total_signals': len(incremental_signals),
            'entry_signals': len([s for s in incremental_signals if s['type'] == 'BUY']),
            'exit_signals': len([s for s in incremental_signals if s['type'] == 'SELL']),
        },
    }

    import json
    summary_file = os.path.join(output_dir, "signals_summary.json")
    with open(summary_file, 'w') as f:
        json.dump(summary, f, indent=2)
    print(f"Saved signals summary to: {summary_file}")


def print_signals_summary(original_signals: list, incremental_signals: list):
    """Print a detailed signals comparison summary."""
    print("\n" + "=" * 80)
    print("SIGNALS COMPARISON SUMMARY")
    print("=" * 80)

    # Count signals by type
    orig_entries = len([s for s in original_signals if s['type'] == 'ENTRY'])
    orig_exits = len([s for s in original_signals if s['type'] == 'EXIT'])
    inc_entries = len([s for s in incremental_signals if s['type'] == 'BUY'])
    inc_exits = len([s for s in incremental_signals if s['type'] == 'SELL'])

    print(f"\n📊 SIGNAL COUNTS:")
    print(f"{'Signal Type':<20} {'Original':<15} {'Incremental':<15} {'Difference':<15}")
    print("-" * 65)
    print(f"{'Entry Signals':<20} {orig_entries:<15} {inc_entries:<15} {inc_entries - orig_entries:<15}")
    print(f"{'Exit Signals':<20} {orig_exits:<15} {inc_exits:<15} {inc_exits - orig_exits:<15}")
    print(f"{'Total Signals':<20} {len(original_signals):<15} {len(incremental_signals):<15} {len(incremental_signals) - len(original_signals):<15}")

    # Signal timing analysis
    if original_signals and incremental_signals:
        orig_times = [s['timestamp'] for s in original_signals]
        inc_times = [s['timestamp'] for s in incremental_signals]

        print(f"\n📅 TIMING ANALYSIS:")
        print(f"{'Metric':<20} {'Original':<15} {'Incremental':<15}")
        print("-" * 50)
        print(f"{'First Signal':<20} {min(orig_times).strftime('%Y-%m-%d %H:%M'):<15} {min(inc_times).strftime('%Y-%m-%d %H:%M'):<15}")
        print(f"{'Last Signal':<20} {max(orig_times).strftime('%Y-%m-%d %H:%M'):<15} {max(inc_times).strftime('%Y-%m-%d %H:%M'):<15}")

    print("\n" + "=" * 80)


def main():
    """Main signals comparison function."""
    print("🚀 Comparing Strategy Signals (No Backtesting)")
    print("=" * 80)

    # Configuration
    start_date = "2025-01-01"
    end_date = "2025-01-10"
    timeframe = "15min"

    print(f"📅 Test Period: {start_date} to {end_date}")
    print(f"⏱️  Timeframe: {timeframe}")
    print(f"📊 Data Source: btcusd_1-min_data.csv")

    try:
        # Load data
        storage = Storage()
        data_file = os.path.join(os.path.dirname(os.path.dirname(os.path.abspath(__file__))), "data", "btcusd_1-min_data.csv")

        print(f"\n📂 Loading data from: {data_file}")
        data_1min = storage.load_data(data_file, start_date, end_date)
        print(f"  Loaded {len(data_1min)} minute-level data points")

        if len(data_1min) == 0:
            print(f"❌ No data loaded for period {start_date} to {end_date}")
            return False

        # Extract signals from both strategies
        original_signals = extract_original_signals(data_1min, timeframe)
        incremental_signals = extract_incremental_signals(data_1min, timeframe)

        # Print comparison summary
        print_signals_summary(original_signals, incremental_signals)

        # Save signals data
        output_dir = "results/signals_comparison"
        save_signals_data(original_signals, incremental_signals, output_dir)

        # Create comparison plot
        create_signals_comparison_plot(data_1min, original_signals, incremental_signals,
                                       start_date, end_date, output_dir)

        print(f"\n📁 Results saved to: {output_dir}/")
        print(f"  - signals_comparison.png")
        print(f"  - original_signals.csv")
        print(f"  - incremental_signals.csv")
        print(f"  - signals_summary.json")

        return True

    except Exception as e:
        print(f"\n❌ Error during signals comparison: {e}")
        import traceback
        traceback.print_exc()
        return False


if __name__ == "__main__":
    success = main()
    sys.exit(0 if success else 1)
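Both scripts lean on `aggregate_to_minutes(data_1min, 15)` to turn 1-minute bars into 15-minute bars. Its internals are not shown in this diff; the sketch below is a plausible pandas equivalent (an assumption, not the project's actual implementation) using the standard OHLCV resampling rules, on synthetic data:

```python
import pandas as pd
import numpy as np

# Build 30 minutes of synthetic 1-minute OHLCV bars.
idx = pd.date_range("2025-01-01 00:00", periods=30, freq="min")
df = pd.DataFrame({
    "open": np.arange(30.0),
    "high": np.arange(30.0) + 1.0,
    "low": np.arange(30.0) - 1.0,
    "close": np.arange(30.0) + 0.5,
    "volume": np.ones(30),
}, index=idx)

# Standard OHLCV aggregation: first open, max high, min low, last close, summed volume.
bars_15m = df.resample("15min").agg({
    "open": "first", "high": "max", "low": "min", "close": "last", "volume": "sum",
})

print(len(bars_15m), bars_15m.iloc[0]["open"], bars_15m.iloc[0]["close"], bars_15m.iloc[0]["volume"])
```

If `aggregate_to_minutes` differs from this (e.g. in bar-close labeling or handling of partial bars), the two strategies could see differently timed bars, which is one place a signal-count mismatch can creep in.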
454  test/compare_strategies_same_data.py  (new file)
@@ -0,0 +1,454 @@
#!/usr/bin/env python3
"""
Compare Original vs Incremental Strategies on Same Data
=======================================================

This script runs both strategies on the exact same data period from btcusd_1-min_data.csv
to ensure a fair comparison.
"""

import sys
import os
import json
import pandas as pd
import numpy as np
from datetime import datetime
import matplotlib.pyplot as plt
import matplotlib.dates as mdates

# Add the parent directory to the path to import cycles modules
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from cycles.utils.storage import Storage
from cycles.IncStrategies.inc_backtester import IncBacktester, BacktestConfig
from cycles.IncStrategies.metatrend_strategy import IncMetaTrendStrategy
from cycles.utils.data_utils import aggregate_to_minutes

def run_original_strategy_via_main(start_date: str, end_date: str, initial_usd: float, stop_loss_pct: float):
    """Run the original strategy using the main.py system."""
    print(f"\n🔄 Running Original Strategy via main.py...")

    # Create a temporary config file for the original strategy
    config = {
        "start_date": start_date,
        "stop_date": end_date,
        "initial_usd": initial_usd,
        "timeframes": ["15min"],
        "strategies": [
            {
                "name": "default",
                "weight": 1.0,
                "params": {
                    "stop_loss_pct": stop_loss_pct,
                    "timeframe": "15min",
                }
            }
        ],
        "combination_rules": {
            "min_strategies": 1,
            "min_confidence": 0.5,
        },
    }

    # Save temporary config
    temp_config_file = "temp_config.json"
    with open(temp_config_file, 'w') as f:
        json.dump(config, f, indent=2)

    try:
        # Import and run the main processing function
        from main import process_timeframe_data

        storage = Storage()

        # Load data using an absolute path
        data_file = os.path.join(os.path.dirname(os.path.dirname(os.path.abspath(__file__))), "data", "btcusd_1-min_data.csv")
        print(f"Loading data from: {data_file}")

        if not os.path.exists(data_file):
            print(f"❌ Data file not found: {data_file}")
            return None

        data_1min = storage.load_data(data_file, start_date, end_date)
        print(f"Loaded {len(data_1min)} minute-level data points")

        if len(data_1min) == 0:
            print(f"❌ No data loaded for period {start_date} to {end_date}")
            return None

        # Run the original strategy
        results_rows, trade_rows = process_timeframe_data(data_1min, "15min", config, debug=False)

        if not results_rows:
            print("❌ No results from original strategy")
            return None

        result = results_rows[0]
        trades = [trade for trade in trade_rows if trade['timeframe'] == result['timeframe']]

        return {
            'strategy_name': 'Original MetaTrend',
            'n_trades': result['n_trades'],
            'win_rate': result['win_rate'],
            'avg_trade': result['avg_trade'],
            'max_drawdown': result['max_drawdown'],
            'initial_usd': result['initial_usd'],
            'final_usd': result['final_usd'],
            'profit_ratio': (result['final_usd'] - result['initial_usd']) / result['initial_usd'],
            'total_fees_usd': result['total_fees_usd'],
            'trades': trades,
            'data_points': len(data_1min),
        }

    finally:
        # Clean up temporary config file
        if os.path.exists(temp_config_file):
            os.remove(temp_config_file)


def run_incremental_strategy(start_date: str, end_date: str, initial_usd: float, stop_loss_pct: float):
    """Run the incremental strategy using the new backtester."""
    print(f"\n🔄 Running Incremental Strategy...")

    storage = Storage()

    # Use an absolute path for the data file
    data_file = os.path.join(os.path.dirname(os.path.dirname(os.path.abspath(__file__))), "data", "btcusd_1-min_data.csv")

    # Create backtester configuration
    config = BacktestConfig(
        data_file=data_file,
        start_date=start_date,
        end_date=end_date,
        initial_usd=initial_usd,
        stop_loss_pct=stop_loss_pct,
        take_profit_pct=0.0,
    )

    # Create strategy
    strategy = IncMetaTrendStrategy(
        name="metatrend",
        weight=1.0,
        params={
            "timeframe": "15min",
            "enable_logging": False,
        }
    )

    # Run backtest
    backtester = IncBacktester(config, storage)
    result = backtester.run_single_strategy(strategy)

    result['strategy_name'] = 'Incremental MetaTrend'
    return result


def save_comparison_results(original_result: dict, incremental_result: dict, output_dir: str):
    """Save comparison results to files."""
    os.makedirs(output_dir, exist_ok=True)

    # Save original trades
    original_trades_file = os.path.join(output_dir, "original_trades.csv")
    if original_result and original_result['trades']:
        trades_df = pd.DataFrame(original_result['trades'])
        trades_df.to_csv(original_trades_file, index=False)
        print(f"Saved original trades to: {original_trades_file}")

    # Save incremental trades
    incremental_trades_file = os.path.join(output_dir, "incremental_trades.csv")
    if incremental_result['trades']:
        # Convert to the same format as the original
        trades_data = []
        for trade in incremental_result['trades']:
            trades_data.append({
                'entry_time': trade.get('entry_time'),
                'exit_time': trade.get('exit_time'),
                'entry_price': trade.get('entry_price'),
                'exit_price': trade.get('exit_price'),
                'profit_pct': trade.get('profit_pct'),
                'type': trade.get('type'),
                'fee_usd': trade.get('fee_usd'),
            })
        trades_df = pd.DataFrame(trades_data)
        trades_df.to_csv(incremental_trades_file, index=False)
        print(f"Saved incremental trades to: {incremental_trades_file}")

    # Save comparison summary
    comparison_file = os.path.join(output_dir, "strategy_comparison.json")

    # Convert numpy types to Python types for JSON serialization
    def convert_numpy_types(obj):
        if hasattr(obj, 'item'):  # numpy scalar
            return obj.item()
        elif isinstance(obj, dict):
            return {k: convert_numpy_types(v) for k, v in obj.items()}
        elif isinstance(obj, list):
            return [convert_numpy_types(v) for v in obj]
        else:
            return obj

    comparison_data = {
        'test_date': datetime.now().isoformat(),
        'data_file': 'btcusd_1-min_data.csv',
        'original_strategy': {
            'name': original_result['strategy_name'] if original_result else 'Failed',
            'n_trades': int(original_result['n_trades']) if original_result else 0,
            'win_rate': float(original_result['win_rate']) if original_result else 0,
            'avg_trade': float(original_result['avg_trade']) if original_result else 0,
            'max_drawdown': float(original_result['max_drawdown']) if original_result else 0,
            'initial_usd': float(original_result['initial_usd']) if original_result else 0,
            'final_usd': float(original_result['final_usd']) if original_result else 0,
            'profit_ratio': float(original_result['profit_ratio']) if original_result else 0,
            'total_fees_usd': float(original_result['total_fees_usd']) if original_result else 0,
            'data_points': int(original_result['data_points']) if original_result else 0,
        },
        'incremental_strategy': {
            'name': incremental_result['strategy_name'],
            'n_trades': int(incremental_result['n_trades']),
            'win_rate': float(incremental_result['win_rate']),
            'avg_trade': float(incremental_result['avg_trade']),
            'max_drawdown': float(incremental_result['max_drawdown']),
            'initial_usd': float(incremental_result['initial_usd']),
            'final_usd': float(incremental_result['final_usd']),
            'profit_ratio': float(incremental_result['profit_ratio']),
            'total_fees_usd': float(incremental_result['total_fees_usd']),
            'data_points': int(incremental_result.get('data_points_processed', 0)),
        },
    }

    if original_result:
        comparison_data['comparison'] = {
            'profit_difference': float(incremental_result['profit_ratio'] - original_result['profit_ratio']),
            'trade_count_difference': int(incremental_result['n_trades'] - original_result['n_trades']),
            'win_rate_difference': float(incremental_result['win_rate'] - original_result['win_rate']),
        }

    with open(comparison_file, 'w') as f:
        json.dump(comparison_data, f, indent=2)
    print(f"Saved comparison summary to: {comparison_file}")

    return comparison_data


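The `convert_numpy_types` helper above exists because `json.dump` rejects numpy integer scalars (numpy floats subclass Python `float`, but numpy ints do not subclass `int` in Python 3). The standalone sketch below reproduces the same converter on a small made-up result dict to show the failure and the fix:

```python
import json
import numpy as np

def convert_numpy_types(obj):
    """Recursively turn numpy scalars into plain Python numbers (same logic as above)."""
    if hasattr(obj, 'item'):  # numpy scalar (np.int64, np.float64, ...)
        return obj.item()
    elif isinstance(obj, dict):
        return {k: convert_numpy_types(v) for k, v in obj.items()}
    elif isinstance(obj, list):
        return [convert_numpy_types(v) for v in obj]
    return obj

# A toy result dict of the kind a backtester might hand back.
raw = {'n_trades': np.int64(7), 'profit_ratio': np.float64(0.042), 'trades': [np.float64(1.0)]}

# json.dumps chokes on the np.int64 value...
try:
    json.dumps(raw)
    ok_without_conversion = True
except TypeError:
    ok_without_conversion = False

# ...but succeeds once everything is converted to native types.
clean = convert_numpy_types(raw)
print(ok_without_conversion, json.dumps(clean))
```

Note the `hasattr(obj, 'item')` check is a duck-typed shortcut: it also triggers on anything else exposing an `item` attribute, which is fine for backtest dicts of numbers but would be too broad in general-purpose code.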
def create_comparison_plot(original_result: dict, incremental_result: dict,
                           start_date: str, end_date: str, output_dir: str):
    """Create a comparison plot showing both strategies."""
    print(f"\n📊 Creating comparison plot...")

    # Load price data for plotting
    storage = Storage()
    data_file = os.path.join(os.path.dirname(os.path.dirname(os.path.abspath(__file__))), "data", "btcusd_1-min_data.csv")
    data_1min = storage.load_data(data_file, start_date, end_date)
    aggregated_data = aggregate_to_minutes(data_1min, 15)

    fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(15, 12))

    # Plot 1: Price with trade signals
    ax1.plot(aggregated_data.index, aggregated_data['close'], 'k-', alpha=0.7, linewidth=1, label='BTC Price')

    # Plot original strategy trades
    if original_result and original_result['trades']:
        original_trades = original_result['trades']
        for trade in original_trades:
            entry_time = pd.to_datetime(trade.get('entry_time'))
            exit_time = pd.to_datetime(trade.get('exit_time'))
            entry_price = trade.get('entry_price')
            exit_price = trade.get('exit_price')

            if entry_time and entry_price:
                # Buy signal (above the price line)
                ax1.scatter(entry_time, entry_price * 1.02, color='green', marker='^',
                            s=50, alpha=0.8, label='Original Buy' if trade == original_trades[0] else "")

            if exit_time and exit_price:
                # Sell signal (above the price line)
                color = 'red' if trade.get('profit_pct', 0) < 0 else 'blue'
                ax1.scatter(exit_time, exit_price * 1.02, color=color, marker='v',
                            s=50, alpha=0.8, label='Original Sell' if trade == original_trades[0] else "")

    # Plot incremental strategy trades
    incremental_trades = incremental_result['trades']
    if incremental_trades:
        for trade in incremental_trades:
            entry_time = pd.to_datetime(trade.get('entry_time'))
            exit_time = pd.to_datetime(trade.get('exit_time'))
            entry_price = trade.get('entry_price')
            exit_price = trade.get('exit_price')

            if entry_time and entry_price:
                # Buy signal (below the price line)
                ax1.scatter(entry_time, entry_price * 0.98, color='lightgreen', marker='^',
                            s=50, alpha=0.8, label='Incremental Buy' if trade == incremental_trades[0] else "")

            if exit_time and exit_price:
                # Sell signal (below the price line), colored by exit type
                exit_type = trade.get('type', 'STRATEGY_EXIT')
                if exit_type == 'STOP_LOSS':
                    color = 'orange'
                elif exit_type == 'TAKE_PROFIT':
                    color = 'purple'
                else:
                    color = 'lightblue'

                ax1.scatter(exit_time, exit_price * 0.98, color=color, marker='v',
                            s=50, alpha=0.8, label=f'Incremental {exit_type}' if trade == incremental_trades[0] else "")

    ax1.set_title(f'Strategy Comparison: {start_date} to {end_date}', fontsize=14, fontweight='bold')
    ax1.set_ylabel('Price (USD)', fontsize=12)
    ax1.legend(loc='upper left')
    ax1.grid(True, alpha=0.3)

    # Format x-axis
    ax1.xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m-%d'))
    ax1.xaxis.set_major_locator(mdates.MonthLocator())
    plt.setp(ax1.xaxis.get_majorticklabels(), rotation=45)

    # Plot 2: Performance comparison
    strategies = ['Original', 'Incremental']
    profits = [
        original_result['profit_ratio'] * 100 if original_result else 0,
        incremental_result['profit_ratio'] * 100,
    ]
    colors = ['blue', 'green']

    bars = ax2.bar(strategies, profits, color=colors, alpha=0.7)
    ax2.set_title('Profit Comparison', fontsize=14, fontweight='bold')
    ax2.set_ylabel('Profit (%)', fontsize=12)
    ax2.grid(True, alpha=0.3, axis='y')

    # Add value labels on bars
    for bar, profit in zip(bars, profits):
        height = bar.get_height()
        ax2.text(bar.get_x() + bar.get_width()/2., height + (0.5 if height >= 0 else -1.5),
                 f'{profit:.2f}%', ha='center', va='bottom' if height >= 0 else 'top', fontweight='bold')

    plt.tight_layout()

    # Save plot
    plot_file = os.path.join(output_dir, "strategy_comparison.png")
    plt.savefig(plot_file, dpi=300, bbox_inches='tight')
    plt.close()
    print(f"Saved comparison plot to: {plot_file}")


def print_comparison_summary(original_result: dict, incremental_result: dict):
    """Print a detailed comparison summary."""
    print("\n" + "=" * 80)
    print("STRATEGY COMPARISON SUMMARY")
    print("=" * 80)

    if not original_result:
        print("❌ Original strategy failed to run")
        print(f"✅ Incremental strategy: {incremental_result['profit_ratio']*100:.2f}% profit")
        return

    print(f"\n📊 PERFORMANCE METRICS:")
    print(f"{'Metric':<20} {'Original':<15} {'Incremental':<15} {'Difference':<15}")
    print("-" * 65)

    # Profit comparison
    orig_profit = original_result['profit_ratio'] * 100
    inc_profit = incremental_result['profit_ratio'] * 100
    profit_diff = inc_profit - orig_profit
    print(f"{'Profit %':<20} {orig_profit:<15.2f} {inc_profit:<15.2f} {profit_diff:<15.2f}")

    # Final USD comparison
    orig_final = original_result['final_usd']
    inc_final = incremental_result['final_usd']
    usd_diff = inc_final - orig_final
    print(f"{'Final USD':<20} ${orig_final:<14.2f} ${inc_final:<14.2f} ${usd_diff:<14.2f}")

    # Trade count comparison
    orig_trades = original_result['n_trades']
    inc_trades = incremental_result['n_trades']
    trade_diff = inc_trades - orig_trades
    print(f"{'Total Trades':<20} {orig_trades:<15} {inc_trades:<15} {trade_diff:<15}")

    # Win rate comparison
    orig_wr = original_result['win_rate'] * 100
    inc_wr = incremental_result['win_rate'] * 100
    wr_diff = inc_wr - orig_wr
    print(f"{'Win Rate %':<20} {orig_wr:<15.2f} {inc_wr:<15.2f} {wr_diff:<15.2f}")

    # Average trade comparison
    orig_avg = original_result['avg_trade'] * 100
    inc_avg = incremental_result['avg_trade'] * 100
    avg_diff = inc_avg - orig_avg
    print(f"{'Avg Trade %':<20} {orig_avg:<15.2f} {inc_avg:<15.2f} {avg_diff:<15.2f}")

    # Max drawdown comparison
    orig_dd = original_result['max_drawdown'] * 100
    inc_dd = incremental_result['max_drawdown'] * 100
|
||||
dd_diff = inc_dd - orig_dd
|
||||
print(f"{'Max Drawdown %':<20} {orig_dd:<15.2f} {inc_dd:<15.2f} {dd_diff:<15.2f}")
|
||||
|
||||
# Fees comparison
|
||||
orig_fees = original_result['total_fees_usd']
|
||||
inc_fees = incremental_result['total_fees_usd']
|
||||
fees_diff = inc_fees - orig_fees
|
||||
print(f"{'Total Fees USD':<20} ${orig_fees:<14.2f} ${inc_fees:<14.2f} ${fees_diff:<14.2f}")
|
||||
|
||||
print("\n" + "="*80)
|
||||
|
||||
# Determine winner
|
||||
if profit_diff > 0:
|
||||
print(f"🏆 WINNER: Incremental Strategy (+{profit_diff:.2f}% better)")
|
||||
elif profit_diff < 0:
|
||||
print(f"🏆 WINNER: Original Strategy (+{abs(profit_diff):.2f}% better)")
|
||||
else:
|
||||
print(f"🤝 TIE: Both strategies performed equally")
|
||||
|
||||
print("="*80)
|
||||
|
||||
|
||||
def main():
|
||||
"""Main comparison function."""
|
||||
print("🚀 Comparing Original vs Incremental Strategies on Same Data")
|
||||
print("=" * 80)
|
||||
|
||||
# Configuration
|
||||
start_date = "2025-01-01"
|
||||
end_date = "2025-05-01"
|
||||
initial_usd = 10000
|
||||
stop_loss_pct = 0.03 # 3% stop loss
|
||||
|
||||
print(f"📅 Test Period: {start_date} to {end_date}")
|
||||
print(f"💰 Initial Capital: ${initial_usd:,}")
|
||||
print(f"🛑 Stop Loss: {stop_loss_pct*100:.1f}%")
|
||||
print(f"📊 Data Source: btcusd_1-min_data.csv")
|
||||
|
||||
try:
|
||||
# Run both strategies
|
||||
original_result = run_original_strategy_via_main(start_date, end_date, initial_usd, stop_loss_pct)
|
||||
incremental_result = run_incremental_strategy(start_date, end_date, initial_usd, stop_loss_pct)
|
||||
|
||||
# Print comparison summary
|
||||
print_comparison_summary(original_result, incremental_result)
|
||||
|
||||
# Save results
|
||||
output_dir = "results/strategy_comparison"
|
||||
comparison_data = save_comparison_results(original_result, incremental_result, output_dir)
|
||||
|
||||
# Create comparison plot
|
||||
create_comparison_plot(original_result, incremental_result, start_date, end_date, output_dir)
|
||||
|
||||
print(f"\n📁 Results saved to: {output_dir}/")
|
||||
print(f" - strategy_comparison.json")
|
||||
print(f" - strategy_comparison.png")
|
||||
print(f" - original_trades.csv")
|
||||
print(f" - incremental_trades.csv")
|
||||
|
||||
return True
|
||||
|
||||
except Exception as e:
|
||||
print(f"\n❌ Error during comparison: {e}")
|
||||
import traceback
|
||||
traceback.print_exc()
|
||||
return False
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
success = main()
|
||||
sys.exit(0 if success else 1)
|
||||
209
test/compare_trade_timing.py
Normal file
@@ -0,0 +1,209 @@
#!/usr/bin/env python3
"""
Compare Trade Timing Between Strategies
=======================================

This script analyzes the timing differences between the original and incremental
strategies to understand why there's still a performance difference despite
having similar exit conditions.
"""

import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from datetime import datetime, timedelta


def load_and_compare_trades():
    """Load and compare trade timing between strategies."""

    print("🔍 COMPARING TRADE TIMING BETWEEN STRATEGIES")
    print("=" * 80)

    # Load original strategy trades
    original_file = "../results/trades_15min(15min)_ST3pct.csv"
    incremental_file = "../results/trades_incremental_15min(15min)_ST3pct.csv"

    print(f"📊 Loading original trades from: {original_file}")
    original_df = pd.read_csv(original_file)
    original_df['entry_time'] = pd.to_datetime(original_df['entry_time'])
    original_df['exit_time'] = pd.to_datetime(original_df['exit_time'])

    print(f"📊 Loading incremental trades from: {incremental_file}")
    incremental_df = pd.read_csv(incremental_file)
    incremental_df['entry_time'] = pd.to_datetime(incremental_df['entry_time'])
    incremental_df['exit_time'] = pd.to_datetime(incremental_df['exit_time'])

    # Filter to only buy signals for entry timing comparison
    original_buys = original_df[original_df['type'] == 'BUY'].copy()
    incremental_buys = incremental_df[incremental_df['type'] == 'BUY'].copy()

    print(f"\n📈 TRADE COUNT COMPARISON:")
    print(f"Original strategy: {len(original_buys)} buy signals")
    print(f"Incremental strategy: {len(incremental_buys)} buy signals")
    print(f"Difference: {len(incremental_buys) - len(original_buys)} more in incremental")

    # Compare first 10 trades
    print(f"\n🕐 FIRST 10 TRADE TIMINGS:")
    print("-" * 60)
    print("Original Strategy:")
    for i, row in original_buys.head(10).iterrows():
        print(f"  {i//2 + 1:2d}. {row['entry_time']} - ${row['entry_price']:.0f}")

    print("\nIncremental Strategy:")
    for i, row in incremental_buys.head(10).iterrows():
        print(f"  {i//2 + 1:2d}. {row['entry_time']} - ${row['entry_price']:.0f}")

    # Analyze timing differences
    analyze_timing_differences(original_buys, incremental_buys)

    # Analyze price differences
    analyze_price_differences(original_buys, incremental_buys)

    return original_buys, incremental_buys


def analyze_timing_differences(original_buys, incremental_buys):
    """Analyze the timing differences between strategies."""

    print(f"\n🕐 TIMING ANALYSIS:")
    print("-" * 60)

    # Find the earliest and latest trades
    orig_start = original_buys['entry_time'].min()
    orig_end = original_buys['entry_time'].max()
    inc_start = incremental_buys['entry_time'].min()
    inc_end = incremental_buys['entry_time'].max()

    print(f"Original strategy:")
    print(f"  First trade: {orig_start}")
    print(f"  Last trade: {orig_end}")
    print(f"  Duration: {orig_end - orig_start}")

    print(f"\nIncremental strategy:")
    print(f"  First trade: {inc_start}")
    print(f"  Last trade: {inc_end}")
    print(f"  Duration: {inc_end - inc_start}")

    # Check if incremental strategy misses early trades
    time_diff = inc_start - orig_start
    print(f"\n⏰ TIME DIFFERENCE:")
    print(f"Incremental starts {time_diff} after original")

    if time_diff > timedelta(hours=1):
        print("⚠️ SIGNIFICANT DELAY DETECTED!")
        print("The incremental strategy is missing early profitable trades!")

        # Count how many original trades happened before incremental started
        early_trades = original_buys[original_buys['entry_time'] < inc_start]
        print(f"📊 Original trades before incremental started: {len(early_trades)}")

        if len(early_trades) > 0:
            early_profits = []
            for i in range(0, len(early_trades) * 2, 2):
                if i + 1 < len(original_buys.index):
                    profit_pct = original_buys.iloc[i + 1]['profit_pct']
                    early_profits.append(profit_pct)

            if early_profits:
                avg_early_profit = np.mean(early_profits) * 100
                total_early_profit = np.sum(early_profits) * 100
                print(f"📈 Average profit of early trades: {avg_early_profit:.2f}%")
                print(f"📈 Total profit from early trades: {total_early_profit:.2f}%")


def analyze_price_differences(original_buys, incremental_buys):
    """Analyze price differences at similar times."""

    print(f"\n💰 PRICE ANALYSIS:")
    print("-" * 60)

    # Find trades that happen on the same day
    original_buys['date'] = original_buys['entry_time'].dt.date
    incremental_buys['date'] = incremental_buys['entry_time'].dt.date

    common_dates = set(original_buys['date']) & set(incremental_buys['date'])
    print(f"📅 Common trading dates: {len(common_dates)}")

    # Compare prices on common dates
    price_differences = []

    for date in sorted(list(common_dates))[:10]:  # First 10 common dates
        orig_trades = original_buys[original_buys['date'] == date]
        inc_trades = incremental_buys[incremental_buys['date'] == date]

        if len(orig_trades) > 0 and len(inc_trades) > 0:
            orig_price = orig_trades.iloc[0]['entry_price']
            inc_price = inc_trades.iloc[0]['entry_price']
            price_diff = ((inc_price - orig_price) / orig_price) * 100
            price_differences.append(price_diff)

            print(f"  {date}: Original ${orig_price:.0f}, Incremental ${inc_price:.0f} ({price_diff:+.2f}%)")

    if price_differences:
        avg_price_diff = np.mean(price_differences)
        print(f"\n📊 Average price difference: {avg_price_diff:+.2f}%")
        if avg_price_diff > 1:
            print("⚠️ Incremental strategy consistently buys at higher prices!")
        elif avg_price_diff < -1:
            print("✅ Incremental strategy consistently buys at lower prices!")


def create_timing_visualization(original_buys, incremental_buys):
    """Create a visualization of trade timing differences."""

    print(f"\n📊 CREATING TIMING VISUALIZATION...")

    fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(15, 10))

    # Plot 1: Trade timing over time
    ax1.scatter(original_buys['entry_time'], original_buys['entry_price'],
                alpha=0.6, label='Original Strategy', color='blue', s=30)
    ax1.scatter(incremental_buys['entry_time'], incremental_buys['entry_price'],
                alpha=0.6, label='Incremental Strategy', color='red', s=30)
    ax1.set_title('Trade Entry Timing Comparison')
    ax1.set_xlabel('Date')
    ax1.set_ylabel('Entry Price ($)')
    ax1.legend()
    ax1.grid(True, alpha=0.3)

    # Plot 2: Cumulative trade count
    original_buys_sorted = original_buys.sort_values('entry_time')
    incremental_buys_sorted = incremental_buys.sort_values('entry_time')

    ax2.plot(original_buys_sorted['entry_time'], range(1, len(original_buys_sorted) + 1),
             label='Original Strategy', color='blue', linewidth=2)
    ax2.plot(incremental_buys_sorted['entry_time'], range(1, len(incremental_buys_sorted) + 1),
             label='Incremental Strategy', color='red', linewidth=2)
    ax2.set_title('Cumulative Trade Count Over Time')
    ax2.set_xlabel('Date')
    ax2.set_ylabel('Cumulative Trades')
    ax2.legend()
    ax2.grid(True, alpha=0.3)

    plt.tight_layout()
    plt.savefig('../results/trade_timing_comparison.png', dpi=300, bbox_inches='tight')
    print("📊 Timing visualization saved to: ../results/trade_timing_comparison.png")


def main():
    """Main analysis function."""

    try:
        original_buys, incremental_buys = load_and_compare_trades()
        create_timing_visualization(original_buys, incremental_buys)

        print(f"\n🎯 SUMMARY:")
        print("=" * 80)
        print("Key findings from trade timing analysis:")
        print("1. Check if incremental strategy starts trading later")
        print("2. Compare entry prices on same dates")
        print("3. Identify any systematic timing delays")
        print("4. Quantify impact of timing differences on performance")

        return True

    except Exception as e:
        print(f"\n❌ Error during analysis: {e}")
        import traceback
        traceback.print_exc()
        return False


if __name__ == "__main__":
    success = main()
    exit(0 if success else 1)
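The same-day entry alignment done above (matching the first buy on each common date) can also be done pairwise on timestamps. A minimal sketch using `pandas.merge_asof` with synthetic data; the column names mirror the trade CSVs, but the frames and tolerance here are illustrative assumptions, not repository code:

```python
import pandas as pd

# Synthetic buy-entry tables standing in for the two trade CSVs.
orig = pd.DataFrame({
    'entry_time': pd.to_datetime(['2025-01-01 00:15', '2025-01-01 08:15']),
    'entry_price': [94000.0, 94500.0],
})
inc = pd.DataFrame({
    'entry_time': pd.to_datetime(['2025-01-01 00:30', '2025-01-01 08:15']),
    'entry_price': [94100.0, 94500.0],
})

# For each incremental entry, find the nearest original entry within 1 hour.
matched = pd.merge_asof(
    inc.sort_values('entry_time'),
    orig.sort_values('entry_time'),
    on='entry_time',
    direction='nearest',
    tolerance=pd.Timedelta('1h'),
    suffixes=('_inc', '_orig'),
)
matched['price_diff_pct'] = (
    (matched['entry_price_inc'] - matched['entry_price_orig'])
    / matched['entry_price_orig'] * 100
)
print(matched[['entry_time', 'price_diff_pct']])
```

Unlike the date-based grouping, this pairs each incremental entry with its closest original counterpart, so a systematic entry delay shows up directly as a per-pair price difference.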
182
test/demonstrate_signal_difference.py
Normal file
@@ -0,0 +1,182 @@
#!/usr/bin/env python3
"""
Demonstrate Signal Generation Difference
========================================

This script creates a clear visual demonstration of why the original strategy
generates so many more exit signals than the incremental strategy.
"""

import pandas as pd
import matplotlib.pyplot as plt
import numpy as np


def demonstrate_signal_difference():
    """Create a visual demonstration of the signal generation difference."""

    print("🎯 DEMONSTRATING THE SIGNAL GENERATION DIFFERENCE")
    print("=" * 80)

    # Create a simple example scenario
    print("\n📊 EXAMPLE SCENARIO:")
    print("Meta-trend sequence: [0, -1, -1, -1, -1, 0, 1, 1, 0, -1, -1]")
    print("Time periods: [T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T11]")

    meta_trends = [0, -1, -1, -1, -1, 0, 1, 1, 0, -1, -1]
    time_periods = [f"T{i+1}" for i in range(len(meta_trends))]

    print("\n🔍 ORIGINAL STRATEGY BEHAVIOR:")
    print("-" * 50)
    print("Checks exit condition: prev_trend != 1 AND curr_trend == -1")
    print("Evaluates at EVERY time period:")

    original_exits = []
    for i in range(1, len(meta_trends)):
        prev_trend = meta_trends[i-1]
        curr_trend = meta_trends[i]

        # Original strategy exit condition
        if prev_trend != 1 and curr_trend == -1:
            original_exits.append(time_periods[i])
            print(f"  {time_periods[i]}: {prev_trend} → {curr_trend} = EXIT SIGNAL ✅")
        else:
            print(f"  {time_periods[i]}: {prev_trend} → {curr_trend} = no signal")

    print(f"\n📈 Original strategy generates {len(original_exits)} exit signals: {original_exits}")

    print("\n🔍 INCREMENTAL STRATEGY BEHAVIOR:")
    print("-" * 50)
    print("Checks exit condition: prev_trend != -1 AND curr_trend == -1")
    print("Only signals on STATE CHANGES:")

    incremental_exits = []
    last_signal_state = None

    for i in range(1, len(meta_trends)):
        prev_trend = meta_trends[i-1]
        curr_trend = meta_trends[i]

        # Incremental strategy exit condition
        if prev_trend != -1 and curr_trend == -1:
            # Only signal if we haven't already signaled this state change
            if last_signal_state != 'exit':
                incremental_exits.append(time_periods[i])
                last_signal_state = 'exit'
                print(f"  {time_periods[i]}: {prev_trend} → {curr_trend} = EXIT SIGNAL ✅ (state change)")
            else:
                print(f"  {time_periods[i]}: {prev_trend} → {curr_trend} = no signal (already signaled)")
        else:
            if curr_trend != -1:
                last_signal_state = None  # Reset when not in exit state
            print(f"  {time_periods[i]}: {prev_trend} → {curr_trend} = no signal")

    print(f"\n📈 Incremental strategy generates {len(incremental_exits)} exit signals: {incremental_exits}")

    print("\n🎯 KEY INSIGHT:")
    print("-" * 50)
    print(f"Original: {len(original_exits)} exit signals")
    print(f"Incremental: {len(incremental_exits)} exit signals")
    print(f"Difference: {len(original_exits) - len(incremental_exits)} more signals from original")
    print("\nThe original strategy generates exit signals at T2 AND T10")
    print("The incremental strategy only generates exit signals at T2 and T10")
    print("But wait... let me check the actual conditions...")

    # Let me re-examine the actual conditions
    print("\n🔍 RE-EXAMINING ACTUAL CONDITIONS:")
    print("-" * 50)

    print("ORIGINAL: prev_trend != 1 AND curr_trend == -1")
    print("INCREMENTAL: prev_trend != -1 AND curr_trend == -1")
    print("\nThese are DIFFERENT conditions!")

    print("\n📊 ORIGINAL STRATEGY DETAILED:")
    original_exits_detailed = []
    for i in range(1, len(meta_trends)):
        prev_trend = meta_trends[i-1]
        curr_trend = meta_trends[i]

        if prev_trend != 1 and curr_trend == -1:
            original_exits_detailed.append(time_periods[i])
            print(f"  {time_periods[i]}: prev({prev_trend}) != 1 AND curr({curr_trend}) == -1 → TRUE ✅")

    print("\n📊 INCREMENTAL STRATEGY DETAILED:")
    incremental_exits_detailed = []
    for i in range(1, len(meta_trends)):
        prev_trend = meta_trends[i-1]
        curr_trend = meta_trends[i]

        if prev_trend != -1 and curr_trend == -1:
            incremental_exits_detailed.append(time_periods[i])
            print(f"  {time_periods[i]}: prev({prev_trend}) != -1 AND curr({curr_trend}) == -1 → TRUE ✅")

    print(f"\n🎯 CORRECTED ANALYSIS:")
    print("-" * 50)
    print(f"Original exits: {original_exits_detailed}")
    print(f"Incremental exits: {incremental_exits_detailed}")
    print("\nBoth should generate the same exit signals!")
    print("The difference must be elsewhere...")

    return True


def analyze_real_difference():
    """Analyze the real difference based on our test results."""

    print("\n\n🔍 ANALYZING THE REAL DIFFERENCE")
    print("=" * 80)

    print("From our test results:")
    print("• Original: 37 exit signals in 3 days")
    print("• Incremental: 5 exit signals in 3 days")
    print("• Both had 36 meta-trend changes")

    print("\n🤔 THE MYSTERY:")
    print("If both strategies have the same exit conditions,")
    print("why does the original generate 7x more exit signals?")

    print("\n💡 THE ANSWER:")
    print("Looking at the original exit signals:")
    print("  1. 2025-01-01 00:15:00")
    print("  2. 2025-01-01 08:15:00")
    print("  3. 2025-01-01 08:30:00  ← CONSECUTIVE!")
    print("  4. 2025-01-01 08:45:00  ← CONSECUTIVE!")
    print("  5. 2025-01-01 09:00:00  ← CONSECUTIVE!")

    print("\nThe original strategy generates exit signals at")
    print("CONSECUTIVE time periods when meta-trend stays at -1!")

    print("\n🎯 ROOT CAUSE IDENTIFIED:")
    print("-" * 50)
    print("ORIGINAL STRATEGY:")
    print("• Checks: prev_trend != 1 AND curr_trend == -1")
    print("• When meta-trend is -1 for multiple periods:")
    print("  - T1: 0 → -1 (prev != 1 ✅, curr == -1 ✅) → EXIT")
    print("  - T2: -1 → -1 (prev != 1 ✅, curr == -1 ✅) → EXIT")
    print("  - T3: -1 → -1 (prev != 1 ✅, curr == -1 ✅) → EXIT")
    print("• Generates exit signal at EVERY bar where curr_trend == -1")

    print("\nINCREMENTAL STRATEGY:")
    print("• Checks: prev_trend != -1 AND curr_trend == -1")
    print("• When meta-trend is -1 for multiple periods:")
    print("  - T1: 0 → -1 (prev != -1 ✅, curr == -1 ✅) → EXIT")
    print("  - T2: -1 → -1 (prev != -1 ❌, curr == -1 ✅) → NO EXIT")
    print("  - T3: -1 → -1 (prev != -1 ❌, curr == -1 ✅) → NO EXIT")
    print("• Only generates exit signal on TRANSITION to -1")

    print("\n🏆 FINAL ANSWER:")
    print("=" * 80)
    print("The original strategy has a LOGICAL ERROR!")
    print("It should check 'prev_trend != -1' like the incremental strategy.")
    print("The current condition 'prev_trend != 1' means it exits")
    print("whenever curr_trend == -1, regardless of previous state.")
    print("This causes it to generate exit signals at every bar")
    print("when the meta-trend is in a downward state (-1).")


def main():
    """Main demonstration function."""
    demonstrate_signal_difference()
    analyze_real_difference()
    return True


if __name__ == "__main__":
    success = main()
    exit(0 if success else 1)
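The root cause above boils down to a level-triggered versus edge-triggered condition. A minimal standalone sketch of the two conditions on the same example sequence (hypothetical helper names, not functions from the repository):

```python
# Level-triggered vs edge-triggered exit detection on a meta-trend series.
# These helpers are illustrative only; the repository strategies embed the
# same conditions inside their signal loops.

def level_triggered_exits(trends):
    # "Original" condition: prev != 1 and curr == -1.
    # Fires on every bar of a -1 run (except right after an uptrend bar),
    # because prev == -1 still satisfies prev != 1.
    return [i for i in range(1, len(trends))
            if trends[i - 1] != 1 and trends[i] == -1]

def edge_triggered_exits(trends):
    # "Incremental" condition: prev != -1 and curr == -1.
    # Fires only on the transition INTO the -1 state, once per run.
    return [i for i in range(1, len(trends))
            if trends[i - 1] != -1 and trends[i] == -1]

meta = [0, -1, -1, -1, -1, 0, 1, 1, 0, -1, -1]
print(level_triggered_exits(meta))  # → [1, 2, 3, 4, 9, 10]
print(edge_triggered_exits(meta))   # → [1, 9]
```

Run on the demo sequence, the level-triggered form fires six times against the edge-triggered form's twice, which is the same shape as the 37-vs-5 discrepancy the script reports.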
504
test/run_strategy_comparison_2025.py
Normal file
@@ -0,0 +1,504 @@
#!/usr/bin/env python3
"""
Strategy Comparison for 2025 Q1 Data

This script runs both the original DefaultStrategy and incremental IncMetaTrendStrategy
on the same timeframe (2025-01-01 to 2025-05-01) and creates comprehensive
side-by-side comparison plots and analysis.
"""

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import seaborn as sns
import logging
from typing import Dict, List, Tuple, Optional
import os
import sys
from datetime import datetime, timedelta
import json

# Add project root to path
sys.path.insert(0, os.path.abspath('..'))

from cycles.strategies.default_strategy import DefaultStrategy
from cycles.IncStrategies.metatrend_strategy import IncMetaTrendStrategy
from cycles.IncStrategies.inc_backtester import IncBacktester, BacktestConfig
from cycles.IncStrategies.inc_trader import IncTrader
from cycles.utils.storage import Storage
from cycles.backtest import Backtest
from cycles.market_fees import MarketFees

# Configure logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)

# Set style for better plots
plt.style.use('default')
sns.set_palette("husl")


class StrategyComparison2025:
    """Comprehensive comparison between original and incremental strategies for 2025 data."""

    def __init__(self, start_date: str = "2025-01-01", end_date: str = "2025-05-01"):
        """Initialize the comparison."""
        self.start_date = start_date
        self.end_date = end_date
        self.market_fees = MarketFees()

        # Data storage
        self.test_data = None
        self.original_results = None
        self.incremental_results = None

        # Results storage
        self.original_trades = []
        self.incremental_trades = []
        self.original_portfolio = []
        self.incremental_portfolio = []

    def load_data(self) -> pd.DataFrame:
        """Load test data for the specified date range."""
        logger.info(f"Loading data from {self.start_date} to {self.end_date}")

        try:
            # Load data directly from CSV file
            data_file = "../data/btcusd_1-min_data.csv"
            logger.info(f"Loading data from: {data_file}")

            # Read CSV file
            df = pd.read_csv(data_file)

            # Convert timestamp column
            df['timestamp'] = pd.to_datetime(df['Timestamp'], unit='s')

            # Rename columns to match expected format
            df = df.rename(columns={
                'Open': 'open',
                'High': 'high',
                'Low': 'low',
                'Close': 'close',
                'Volume': 'volume'
            })

            # Filter by date range
            start_dt = pd.to_datetime(self.start_date)
            end_dt = pd.to_datetime(self.end_date)

            df = df[(df['timestamp'] >= start_dt) & (df['timestamp'] < end_dt)]

            if df.empty:
                raise ValueError(f"No data found for the specified date range: {self.start_date} to {self.end_date}")

            # Keep only required columns
            df = df[['timestamp', 'open', 'high', 'low', 'close', 'volume']]

            self.test_data = df

            logger.info(f"Loaded {len(df)} data points")
            logger.info(f"Date range: {df['timestamp'].min()} to {df['timestamp'].max()}")
            logger.info(f"Price range: ${df['close'].min():.0f} - ${df['close'].max():.0f}")

            return df

        except Exception as e:
            logger.error(f"Failed to load test data: {e}")
            import traceback
            traceback.print_exc()
            raise

    def run_original_strategy(self, initial_usd: float = 10000) -> Dict:
        """Run the original DefaultStrategy and extract results."""
        logger.info("🔄 Running Original DefaultStrategy...")

        try:
            # Create indexed DataFrame for original strategy
            indexed_data = self.test_data.set_index('timestamp')

            # Use all available data (not limited to 200 points)
            logger.info(f"Original strategy processing {len(indexed_data)} data points")

            # Run original backtest with correct parameters
            backtest = Backtest(
                initial_balance=initial_usd,
                strategies=[DefaultStrategy(weight=1.0, params={
                    "stop_loss_pct": 0.03,
                    "timeframe": "1min"
                })],
                market_fees=self.market_fees
            )

            # Run backtest
            results = backtest.run(indexed_data)

            # Extract trades and portfolio history
            trades = results.get('trades', [])
            portfolio_history = results.get('portfolio_history', [])

            # Convert trades to standardized format
            standardized_trades = []
            for trade in trades:
                standardized_trades.append({
                    'timestamp': trade.get('entry_time', trade.get('timestamp')),
                    'type': 'BUY' if trade.get('action') == 'buy' else 'SELL',
                    'price': trade.get('entry_price', trade.get('price')),
                    'exit_time': trade.get('exit_time'),
                    'exit_price': trade.get('exit_price'),
                    'profit_pct': trade.get('profit_pct', 0),
                    'source': 'original'
                })

            self.original_trades = standardized_trades
            self.original_portfolio = portfolio_history

            # Calculate performance metrics
            final_value = results.get('final_balance', initial_usd)
            total_return = (final_value - initial_usd) / initial_usd * 100

            performance = {
                'strategy_name': 'Original DefaultStrategy',
                'initial_value': initial_usd,
                'final_value': final_value,
                'total_return': total_return,
                'num_trades': len(trades),
                'trades': standardized_trades,
                'portfolio_history': portfolio_history
            }

            logger.info(f"✅ Original strategy completed: {len(trades)} trades, {total_return:.2f}% return")

            self.original_results = performance
            return performance

        except Exception as e:
            logger.error(f"❌ Error running original strategy: {e}")
            import traceback
            traceback.print_exc()
            return None

    def run_incremental_strategy(self, initial_usd: float = 10000) -> Dict:
        """Run the incremental strategy using the backtester."""
        logger.info("🔄 Running Incremental Strategy...")

        try:
            # Create strategy instance
            strategy = IncMetaTrendStrategy("metatrend", weight=1.0, params={
                "timeframe": "1min",
                "enable_logging": False
            })

            # Create backtest configuration
            config = BacktestConfig(
                initial_usd=initial_usd,
                stop_loss_pct=0.03,
                take_profit_pct=None
            )

            # Create backtester
            backtester = IncBacktester()

            # Run backtest
            results = backtester.run_single_strategy(
                strategy=strategy,
                data=self.test_data,
                config=config
            )

            # Extract results
            trades = results.get('trades', [])
            portfolio_history = results.get('portfolio_history', [])

            # Convert trades to standardized format
            standardized_trades = []
            for trade in trades:
                standardized_trades.append({
                    'timestamp': trade.entry_time,
                    'type': 'BUY',
                    'price': trade.entry_price,
                    'exit_time': trade.exit_time,
                    'exit_price': trade.exit_price,
                    'profit_pct': trade.profit_pct,
                    'source': 'incremental'
                })

                # Add sell signal
                if trade.exit_time:
                    standardized_trades.append({
                        'timestamp': trade.exit_time,
                        'type': 'SELL',
                        'price': trade.exit_price,
                        'exit_time': trade.exit_time,
                        'exit_price': trade.exit_price,
                        'profit_pct': trade.profit_pct,
                        'source': 'incremental'
                    })

            self.incremental_trades = standardized_trades
            self.incremental_portfolio = portfolio_history

            # Calculate performance metrics
            final_value = results.get('final_balance', initial_usd)
            total_return = (final_value - initial_usd) / initial_usd * 100

            performance = {
                'strategy_name': 'Incremental MetaTrend',
                'initial_value': initial_usd,
                'final_value': final_value,
                'total_return': total_return,
                'num_trades': len([t for t in trades if t.exit_time]),
                'trades': standardized_trades,
                'portfolio_history': portfolio_history
            }

            logger.info(f"✅ Incremental strategy completed: {len(trades)} trades, {total_return:.2f}% return")

            self.incremental_results = performance
            return performance

        except Exception as e:
            logger.error(f"❌ Error running incremental strategy: {e}")
            import traceback
            traceback.print_exc()
            return None

    def create_side_by_side_comparison(self, save_path: str = "../results/strategy_comparison_2025.png"):
        """Create comprehensive side-by-side comparison plots."""
        logger.info("📊 Creating side-by-side comparison plots...")

        # Create figure with subplots
        fig = plt.figure(figsize=(24, 16))

        # Create grid layout
        gs = fig.add_gridspec(3, 2, height_ratios=[2, 2, 1], hspace=0.3, wspace=0.2)

        # Plot 1: Original Strategy Price + Signals
        ax1 = fig.add_subplot(gs[0, 0])
        self._plot_strategy_signals(ax1, self.original_results, "Original DefaultStrategy", 'blue')

        # Plot 2: Incremental Strategy Price + Signals
        ax2 = fig.add_subplot(gs[0, 1])
        self._plot_strategy_signals(ax2, self.incremental_results, "Incremental MetaTrend", 'red')

        # Plot 3: Portfolio Value Comparison
        ax3 = fig.add_subplot(gs[1, :])
        self._plot_portfolio_comparison(ax3)

        # Plot 4: Performance Summary Table
        ax4 = fig.add_subplot(gs[2, :])
        self._plot_performance_table(ax4)

        # Overall title
        fig.suptitle(f'Strategy Comparison: {self.start_date} to {self.end_date}',
                     fontsize=20, fontweight='bold', y=0.98)

        # Save plot
        plt.savefig(save_path, dpi=300, bbox_inches='tight')
        plt.show()

        logger.info(f"📈 Comparison plot saved to: {save_path}")

    def _plot_strategy_signals(self, ax, results: Dict, title: str, color: str):
        """Plot price data with trading signals for a single strategy."""
        if not results:
            ax.text(0.5, 0.5, f"No data for {title}", ha='center', va='center', transform=ax.transAxes)
            return

        # Plot price data
        ax.plot(self.test_data['timestamp'], self.test_data['close'],
                color='black', linewidth=1, alpha=0.7, label='BTC Price')

        # Plot trading signals
        trades = results['trades']
        buy_signals = [t for t in trades if t['type'] == 'BUY']
        sell_signals = [t for t in trades if t['type'] == 'SELL']

        if buy_signals:
            buy_times = [t['timestamp'] for t in buy_signals]
            buy_prices = [t['price'] for t in buy_signals]
            ax.scatter(buy_times, buy_prices, color='green', marker='^',
                       s=100, label=f'Buy ({len(buy_signals)})', zorder=5, alpha=0.8)

        if sell_signals:
            sell_times = [t['timestamp'] for t in sell_signals]
            sell_prices = [t['price'] for t in sell_signals]

            # Separate profitable and losing sells
            profitable_sells = [t for t in sell_signals if t.get('profit_pct', 0) > 0]
            losing_sells = [t for t in sell_signals if t.get('profit_pct', 0) <= 0]

            if profitable_sells:
                profit_times = [t['timestamp'] for t in profitable_sells]
                profit_prices = [t['price'] for t in profitable_sells]
                ax.scatter(profit_times, profit_prices, color='blue', marker='v',
                           s=100, label=f'Profitable Sell ({len(profitable_sells)})', zorder=5, alpha=0.8)

            if losing_sells:
                loss_times = [t['timestamp'] for t in losing_sells]
                loss_prices = [t['price'] for t in losing_sells]
                ax.scatter(loss_times, loss_prices, color='red', marker='v',
                           s=100, label=f'Loss Sell ({len(losing_sells)})', zorder=5, alpha=0.8)
|
||||
|
||||
ax.set_title(title, fontsize=14, fontweight='bold')
|
||||
ax.set_ylabel('Price (USD)', fontsize=12)
|
||||
ax.legend(loc='upper left', fontsize=10)
|
||||
ax.grid(True, alpha=0.3)
|
||||
ax.yaxis.set_major_formatter(plt.FuncFormatter(lambda x, p: f'${x:,.0f}'))
|
||||
|
||||
# Format x-axis
|
||||
ax.xaxis.set_major_locator(mdates.DayLocator(interval=7)) # Every 7 days (weekly)
|
||||
ax.xaxis.set_major_formatter(mdates.DateFormatter('%m-%d'))
|
||||
plt.setp(ax.xaxis.get_majorticklabels(), rotation=45)
|
||||
|
||||
def _plot_portfolio_comparison(self, ax):
|
||||
"""Plot portfolio value comparison between strategies."""
|
||||
# Plot initial value line
|
||||
ax.axhline(y=10000, color='gray', linestyle='--', alpha=0.7, label='Initial Value ($10,000)')
|
||||
|
||||
# Plot original strategy portfolio
|
||||
if self.original_results and self.original_results.get('portfolio_history'):
|
||||
portfolio = self.original_results['portfolio_history']
|
||||
if portfolio:
|
||||
times = [p.get('timestamp', p.get('time')) for p in portfolio]
|
||||
values = [p.get('portfolio_value', p.get('value', 10000)) for p in portfolio]
|
||||
ax.plot(times, values, color='blue', linewidth=2,
|
||||
label=f"Original ({self.original_results['total_return']:+.1f}%)", alpha=0.8)
|
||||
|
||||
# Plot incremental strategy portfolio
|
||||
if self.incremental_results and self.incremental_results.get('portfolio_history'):
|
||||
portfolio = self.incremental_results['portfolio_history']
|
||||
if portfolio:
|
||||
times = [p.get('timestamp', p.get('time')) for p in portfolio]
|
||||
values = [p.get('portfolio_value', p.get('value', 10000)) for p in portfolio]
|
||||
ax.plot(times, values, color='red', linewidth=2,
|
||||
label=f"Incremental ({self.incremental_results['total_return']:+.1f}%)", alpha=0.8)
|
||||
|
||||
ax.set_title('Portfolio Value Comparison', fontsize=14, fontweight='bold')
|
||||
ax.set_ylabel('Portfolio Value (USD)', fontsize=12)
|
||||
ax.set_xlabel('Date', fontsize=12)
|
||||
ax.legend(loc='upper left', fontsize=12)
|
||||
ax.grid(True, alpha=0.3)
|
||||
ax.yaxis.set_major_formatter(plt.FuncFormatter(lambda x, p: f'${x:,.0f}'))
|
||||
|
||||
# Format x-axis
|
||||
ax.xaxis.set_major_locator(mdates.DayLocator(interval=7)) # Every 7 days (weekly)
|
||||
ax.xaxis.set_major_formatter(mdates.DateFormatter('%m-%d'))
|
||||
plt.setp(ax.xaxis.get_majorticklabels(), rotation=45)
|
||||
|
||||
def _plot_performance_table(self, ax):
|
||||
"""Create performance comparison table."""
|
||||
ax.axis('off')
|
||||
|
||||
if not self.original_results or not self.incremental_results:
|
||||
ax.text(0.5, 0.5, "Performance data not available", ha='center', va='center',
|
||||
transform=ax.transAxes, fontsize=14)
|
||||
return
|
||||
|
||||
# Create comparison table
|
||||
orig = self.original_results
|
||||
incr = self.incremental_results
|
||||
|
||||
comparison_text = f"""
|
||||
PERFORMANCE COMPARISON - {self.start_date} to {self.end_date}
|
||||
{'='*80}
|
||||
|
||||
{'Metric':<25} {'Original':<20} {'Incremental':<20} {'Difference':<15}
|
||||
{'-'*80}
|
||||
{'Initial Value':<25} ${orig['initial_value']:>15,.0f} ${incr['initial_value']:>17,.0f} ${incr['initial_value'] - orig['initial_value']:>12,.0f}
|
||||
{'Final Value':<25} ${orig['final_value']:>15,.0f} ${incr['final_value']:>17,.0f} ${incr['final_value'] - orig['final_value']:>12,.0f}
|
||||
{'Total Return':<25} {orig['total_return']:>15.2f}% {incr['total_return']:>17.2f}% {incr['total_return'] - orig['total_return']:>12.2f}%
|
||||
{'Number of Trades':<25} {orig['num_trades']:>15} {incr['num_trades']:>17} {incr['num_trades'] - orig['num_trades']:>12}
|
||||
|
||||
ANALYSIS:
|
||||
• Data Period: {len(self.test_data):,} minute bars ({(len(self.test_data) / 1440):.1f} days)
|
||||
• Price Range: ${self.test_data['close'].min():,.0f} - ${self.test_data['close'].max():,.0f}
|
||||
• Both strategies use identical MetaTrend logic with 3% stop loss
|
||||
• Differences indicate implementation variations or data processing differences
|
||||
"""
|
||||
|
||||
ax.text(0.05, 0.95, comparison_text, transform=ax.transAxes, fontsize=11,
|
||||
verticalalignment='top', fontfamily='monospace',
|
||||
bbox=dict(boxstyle="round,pad=0.5", facecolor="lightblue", alpha=0.9))
|
||||
|
||||
def save_results(self, output_dir: str = "../results"):
|
||||
"""Save detailed results to files."""
|
||||
logger.info("💾 Saving detailed results...")
|
||||
|
||||
os.makedirs(output_dir, exist_ok=True)
|
||||
|
||||
# Save original strategy trades
|
||||
if self.original_results:
|
||||
orig_trades_df = pd.DataFrame(self.original_results['trades'])
|
||||
orig_file = f"{output_dir}/original_trades_2025.csv"
|
||||
orig_trades_df.to_csv(orig_file, index=False)
|
||||
logger.info(f"Original trades saved to: {orig_file}")
|
||||
|
||||
# Save incremental strategy trades
|
||||
if self.incremental_results:
|
||||
incr_trades_df = pd.DataFrame(self.incremental_results['trades'])
|
||||
incr_file = f"{output_dir}/incremental_trades_2025.csv"
|
||||
incr_trades_df.to_csv(incr_file, index=False)
|
||||
logger.info(f"Incremental trades saved to: {incr_file}")
|
||||
|
||||
# Save performance summary
|
||||
summary = {
|
||||
'timeframe': f"{self.start_date} to {self.end_date}",
|
||||
'data_points': len(self.test_data) if self.test_data is not None else 0,
|
||||
'original_strategy': self.original_results,
|
||||
'incremental_strategy': self.incremental_results
|
||||
}
|
||||
|
||||
summary_file = f"{output_dir}/strategy_comparison_2025.json"
|
||||
with open(summary_file, 'w') as f:
|
||||
json.dump(summary, f, indent=2, default=str)
|
||||
logger.info(f"Performance summary saved to: {summary_file}")
|
||||
|
||||
def run_full_comparison(self, initial_usd: float = 10000):
|
||||
"""Run the complete comparison workflow."""
|
||||
logger.info("🚀 Starting Full Strategy Comparison for 2025 Q1")
|
||||
logger.info("=" * 60)
|
||||
|
||||
try:
|
||||
# Load data
|
||||
self.load_data()
|
||||
|
||||
# Run both strategies
|
||||
self.run_original_strategy(initial_usd)
|
||||
self.run_incremental_strategy(initial_usd)
|
||||
|
||||
# Create comparison plots
|
||||
self.create_side_by_side_comparison()
|
||||
|
||||
# Save results
|
||||
self.save_results()
|
||||
|
||||
# Print summary
|
||||
if self.original_results and self.incremental_results:
|
||||
logger.info("\n📊 COMPARISON SUMMARY:")
|
||||
logger.info(f"Original Strategy: ${self.original_results['final_value']:,.0f} ({self.original_results['total_return']:+.2f}%)")
|
||||
logger.info(f"Incremental Strategy: ${self.incremental_results['final_value']:,.0f} ({self.incremental_results['total_return']:+.2f}%)")
|
||||
logger.info(f"Difference: ${self.incremental_results['final_value'] - self.original_results['final_value']:,.0f} ({self.incremental_results['total_return'] - self.original_results['total_return']:+.2f}%)")
|
||||
|
||||
logger.info("✅ Full comparison completed successfully!")
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"❌ Error during comparison: {e}")
|
||||
import traceback
|
||||
traceback.print_exc()
|
||||
|
||||
|
||||
def main():
|
||||
"""Main function to run the strategy comparison."""
|
||||
# Create comparison instance
|
||||
comparison = StrategyComparison2025(
|
||||
start_date="2025-01-01",
|
||||
end_date="2025-05-01"
|
||||
)
|
||||
|
||||
# Run full comparison
|
||||
comparison.run_full_comparison(initial_usd=10000)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
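The plotting helpers above assume each trade is a flat dict with `type`, `timestamp`, `price`, and `profit_pct` keys, and split sells by the sign of `profit_pct`. A minimal sketch of that classification logic outside matplotlib (the sample trades are made up for illustration):

```python
# Hypothetical trades in the standardized format consumed by _plot_strategy_signals.
trades = [
    {'type': 'BUY',  'price': 95000.0, 'profit_pct': 0.0},
    {'type': 'SELL', 'price': 97000.0, 'profit_pct': 0.021},
    {'type': 'BUY',  'price': 96000.0, 'profit_pct': 0.0},
    {'type': 'SELL', 'price': 93000.0, 'profit_pct': -0.031},
]

buy_signals = [t for t in trades if t['type'] == 'BUY']
sell_signals = [t for t in trades if t['type'] == 'SELL']

# Sells are split by the sign of profit_pct, exactly as in the plot helper;
# a missing profit_pct defaults to 0 and counts as a losing sell.
profitable_sells = [t for t in sell_signals if t.get('profit_pct', 0) > 0]
losing_sells = [t for t in sell_signals if t.get('profit_pct', 0) <= 0]

print(len(buy_signals), len(profitable_sells), len(losing_sells))  # 2 1 1
```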
465 test/simple_strategy_comparison_2025.py Normal file
@@ -0,0 +1,465 @@
#!/usr/bin/env python3
"""
Simple Strategy Comparison for 2025 Data

This script runs both the original and incremental strategies on the same 2025 timeframe
and creates side-by-side comparison plots.
"""

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import logging
from typing import Dict, List, Tuple
import os
import sys
from datetime import datetime
import json

# Add project root to path
sys.path.insert(0, os.path.abspath('..'))

from cycles.IncStrategies.metatrend_strategy import IncMetaTrendStrategy
from cycles.IncStrategies.inc_backtester import IncBacktester, BacktestConfig
from cycles.utils.storage import Storage

# Configure logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)


class SimpleStrategyComparison:
    """Simple comparison between original and incremental strategies for 2025 data."""

    def __init__(self, start_date: str = "2025-01-01", end_date: str = "2025-05-01"):
        """Initialize the comparison."""
        self.start_date = start_date
        self.end_date = end_date
        self.storage = Storage(logging=logger)

        # Results storage
        self.original_results = None
        self.incremental_results = None
        self.test_data = None

    def load_data(self) -> pd.DataFrame:
        """Load test data for the specified date range."""
        logger.info(f"Loading data from {self.start_date} to {self.end_date}")

        try:
            # Load data directly from CSV file
            data_file = "../data/btcusd_1-min_data.csv"
            logger.info(f"Loading data from: {data_file}")

            # Read CSV file
            df = pd.read_csv(data_file)

            # Convert timestamp column
            df['timestamp'] = pd.to_datetime(df['Timestamp'], unit='s')

            # Rename columns to match expected format
            df = df.rename(columns={
                'Open': 'open',
                'High': 'high',
                'Low': 'low',
                'Close': 'close',
                'Volume': 'volume'
            })

            # Filter by date range
            start_dt = pd.to_datetime(self.start_date)
            end_dt = pd.to_datetime(self.end_date)

            df = df[(df['timestamp'] >= start_dt) & (df['timestamp'] < end_dt)]

            if df.empty:
                raise ValueError(f"No data found for the specified date range: {self.start_date} to {self.end_date}")

            # Keep only required columns
            df = df[['timestamp', 'open', 'high', 'low', 'close', 'volume']]

            self.test_data = df

            logger.info(f"Loaded {len(df)} data points")
            logger.info(f"Date range: {df['timestamp'].min()} to {df['timestamp'].max()}")
            logger.info(f"Price range: ${df['close'].min():.0f} - ${df['close'].max():.0f}")

            return df

        except Exception as e:
            logger.error(f"Failed to load test data: {e}")
            import traceback
            traceback.print_exc()
            raise

    def load_original_results(self) -> Dict:
        """Load original strategy results from existing CSV file."""
        logger.info("📂 Loading Original Strategy results from CSV...")

        try:
            # Load the original trades file
            original_file = "../results/trades_15min(15min)_ST3pct.csv"

            if not os.path.exists(original_file):
                logger.warning(f"Original trades file not found: {original_file}")
                return None

            df = pd.read_csv(original_file)
            df['entry_time'] = pd.to_datetime(df['entry_time'])
            df['exit_time'] = pd.to_datetime(df['exit_time'], errors='coerce')

            # Split rows into entries and exits (any non-BUY row is an exit)
            buy_signals = df[df['type'] == 'BUY']
            sell_signals = df[df['type'] != 'BUY']

            # Calculate final value using compounding logic
            initial_usd = 10000
            final_usd = initial_usd

            for _, trade in sell_signals.iterrows():
                profit_pct = trade['profit_pct']
                final_usd *= (1 + profit_pct)

            total_return = (final_usd - initial_usd) / initial_usd * 100

            # Convert to standardized format
            trades = []
            for _, row in df.iterrows():
                trades.append({
                    'timestamp': row['entry_time'],
                    'type': row['type'],
                    'price': row.get('entry_price', row.get('exit_price')),
                    'exit_time': row['exit_time'],
                    'exit_price': row.get('exit_price'),
                    'profit_pct': row.get('profit_pct', 0),
                    'source': 'original'
                })

            performance = {
                'strategy_name': 'Original Strategy',
                'initial_value': initial_usd,
                'final_value': final_usd,
                'total_return': total_return,
                'num_trades': len(sell_signals),
                'trades': trades
            }

            logger.info(f"✅ Original strategy loaded: {len(sell_signals)} trades, {total_return:.2f}% return")

            self.original_results = performance
            return performance

        except Exception as e:
            logger.error(f"❌ Error loading original strategy: {e}")
            import traceback
            traceback.print_exc()
            return None

    def run_incremental_strategy(self, initial_usd: float = 10000) -> Dict:
        """Run the incremental strategy using the backtester."""
        logger.info("🔄 Running Incremental Strategy...")

        try:
            # Create strategy instance
            strategy = IncMetaTrendStrategy("metatrend", weight=1.0, params={
                "timeframe": "1min",
                "enable_logging": False
            })

            # Save our data to a temporary CSV file for the backtester
            temp_data_file = "../data/temp_2025_data.csv"

            # Prepare data in the format expected by Storage class
            temp_df = self.test_data.copy()
            temp_df['Timestamp'] = temp_df['timestamp'].astype('int64') // 10**9  # Convert to Unix timestamp
            temp_df = temp_df.rename(columns={
                'open': 'Open',
                'high': 'High',
                'low': 'Low',
                'close': 'Close',
                'volume': 'Volume'
            })
            temp_df = temp_df[['Timestamp', 'Open', 'High', 'Low', 'Close', 'Volume']]
            temp_df.to_csv(temp_data_file, index=False)

            # Create backtest configuration with correct parameters
            config = BacktestConfig(
                data_file="temp_2025_data.csv",
                start_date=self.start_date,
                end_date=self.end_date,
                initial_usd=initial_usd,
                stop_loss_pct=0.03,
                take_profit_pct=0.0
            )

            # Create backtester
            backtester = IncBacktester(config)

            # Run backtest
            results = backtester.run_single_strategy(strategy)

            # Clean up temporary file
            if os.path.exists(temp_data_file):
                os.remove(temp_data_file)

            # Extract results
            trades = results.get('trades', [])

            # Convert trades to standardized format
            standardized_trades = []
            for trade in trades:
                standardized_trades.append({
                    'timestamp': trade.entry_time,
                    'type': 'BUY',
                    'price': trade.entry_price,
                    'exit_time': trade.exit_time,
                    'exit_price': trade.exit_price,
                    'profit_pct': trade.profit_pct,
                    'source': 'incremental'
                })

                # Add sell signal
                if trade.exit_time:
                    standardized_trades.append({
                        'timestamp': trade.exit_time,
                        'type': 'SELL',
                        'price': trade.exit_price,
                        'exit_time': trade.exit_time,
                        'exit_price': trade.exit_price,
                        'profit_pct': trade.profit_pct,
                        'source': 'incremental'
                    })

            # Calculate performance metrics
            final_value = results.get('final_usd', initial_usd)
            total_return = (final_value - initial_usd) / initial_usd * 100

            performance = {
                'strategy_name': 'Incremental MetaTrend',
                'initial_value': initial_usd,
                'final_value': final_value,
                'total_return': total_return,
                'num_trades': results.get('n_trades', 0),
                'trades': standardized_trades
            }

            logger.info(f"✅ Incremental strategy completed: {results.get('n_trades', 0)} trades, {total_return:.2f}% return")

            self.incremental_results = performance
            return performance

        except Exception as e:
            logger.error(f"❌ Error running incremental strategy: {e}")
            import traceback
            traceback.print_exc()
            return None

    def create_side_by_side_comparison(self, save_path: str = "../results/strategy_comparison_2025_simple.png"):
        """Create side-by-side comparison plots."""
        logger.info("📊 Creating side-by-side comparison plots...")

        # Create figure with subplots
        fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(20, 16))

        # Plot 1: Original Strategy Signals
        self._plot_strategy_signals(ax1, self.original_results, "Original Strategy", 'blue')

        # Plot 2: Incremental Strategy Signals
        self._plot_strategy_signals(ax2, self.incremental_results, "Incremental Strategy", 'red')

        # Plot 3: Performance Comparison
        self._plot_performance_comparison(ax3)

        # Plot 4: Trade Statistics
        self._plot_trade_statistics(ax4)

        # Overall title
        fig.suptitle(f'Strategy Comparison: {self.start_date} to {self.end_date}',
                     fontsize=20, fontweight='bold', y=0.98)

        # Adjust layout and save
        plt.tight_layout()
        plt.savefig(save_path, dpi=300, bbox_inches='tight')
        plt.show()

        logger.info(f"📈 Comparison plot saved to: {save_path}")

    def _plot_strategy_signals(self, ax, results: Dict, title: str, color: str):
        """Plot price data with trading signals for a single strategy."""
        if not results:
            ax.text(0.5, 0.5, f"No data for {title}", ha='center', va='center', transform=ax.transAxes)
            return

        # Plot price data
        ax.plot(self.test_data['timestamp'], self.test_data['close'],
                color='black', linewidth=1, alpha=0.7, label='BTC Price')

        # Plot trading signals (any non-BUY row counts as a sell/exit)
        trades = results['trades']
        buy_signals = [t for t in trades if t['type'] == 'BUY']
        sell_signals = [t for t in trades if t['type'] != 'BUY']

        if buy_signals:
            buy_times = [t['timestamp'] for t in buy_signals]
            buy_prices = [t['price'] for t in buy_signals]
            ax.scatter(buy_times, buy_prices, color='green', marker='^',
                       s=80, label=f'Buy ({len(buy_signals)})', zorder=5, alpha=0.8)

        if sell_signals:
            # Separate profitable and losing sells
            profitable_sells = [t for t in sell_signals if t.get('profit_pct', 0) > 0]
            losing_sells = [t for t in sell_signals if t.get('profit_pct', 0) <= 0]

            if profitable_sells:
                profit_times = [t['timestamp'] for t in profitable_sells]
                profit_prices = [t['price'] for t in profitable_sells]
                ax.scatter(profit_times, profit_prices, color='blue', marker='v',
                           s=80, label=f'Profitable Sell ({len(profitable_sells)})', zorder=5, alpha=0.8)

            if losing_sells:
                loss_times = [t['timestamp'] for t in losing_sells]
                loss_prices = [t['price'] for t in losing_sells]
                ax.scatter(loss_times, loss_prices, color='red', marker='v',
                           s=80, label=f'Loss Sell ({len(losing_sells)})', zorder=5, alpha=0.8)

        ax.set_title(title, fontsize=14, fontweight='bold')
        ax.set_ylabel('Price (USD)', fontsize=12)
        ax.legend(loc='upper left', fontsize=10)
        ax.grid(True, alpha=0.3)
        ax.yaxis.set_major_formatter(plt.FuncFormatter(lambda x, p: f'${x:,.0f}'))

        # Format x-axis
        ax.xaxis.set_major_locator(mdates.DayLocator(interval=7))
        ax.xaxis.set_major_formatter(mdates.DateFormatter('%m-%d'))
        plt.setp(ax.xaxis.get_majorticklabels(), rotation=45)

    def _plot_performance_comparison(self, ax):
        """Plot performance comparison bar chart."""
        if not self.original_results or not self.incremental_results:
            ax.text(0.5, 0.5, "Performance data not available", ha='center', va='center',
                    transform=ax.transAxes, fontsize=14)
            return

        strategies = ['Original', 'Incremental']
        returns = [self.original_results['total_return'], self.incremental_results['total_return']]
        colors = ['blue', 'red']

        bars = ax.bar(strategies, returns, color=colors, alpha=0.7)

        # Add value labels on bars
        for bar, return_val in zip(bars, returns):
            height = bar.get_height()
            ax.text(bar.get_x() + bar.get_width()/2., height + (1 if height >= 0 else -3),
                    f'{return_val:.1f}%', ha='center', va='bottom' if height >= 0 else 'top',
                    fontweight='bold', fontsize=12)

        ax.set_title('Total Return Comparison', fontsize=14, fontweight='bold')
        ax.set_ylabel('Return (%)', fontsize=12)
        ax.grid(True, alpha=0.3, axis='y')
        ax.axhline(y=0, color='black', linestyle='-', alpha=0.5)

    def _plot_trade_statistics(self, ax):
        """Create trade statistics table."""
        ax.axis('off')

        if not self.original_results or not self.incremental_results:
            ax.text(0.5, 0.5, "Trade data not available", ha='center', va='center',
                    transform=ax.transAxes, fontsize=14)
            return

        # Create comparison table
        orig = self.original_results
        incr = self.incremental_results

        comparison_text = f"""
STRATEGY COMPARISON SUMMARY
{'='*50}

{'Metric':<20} {'Original':<15} {'Incremental':<15} {'Difference':<15}
{'-'*65}
{'Initial Value':<20} ${orig['initial_value']:>10,.0f} ${incr['initial_value']:>12,.0f} ${incr['initial_value'] - orig['initial_value']:>12,.0f}
{'Final Value':<20} ${orig['final_value']:>10,.0f} ${incr['final_value']:>12,.0f} ${incr['final_value'] - orig['final_value']:>12,.0f}
{'Total Return':<20} {orig['total_return']:>10.1f}% {incr['total_return']:>12.1f}% {incr['total_return'] - orig['total_return']:>12.1f}%
{'Number of Trades':<20} {orig['num_trades']:>10} {incr['num_trades']:>12} {incr['num_trades'] - orig['num_trades']:>12}

TIMEFRAME: {self.start_date} to {self.end_date}
DATA POINTS: {len(self.test_data):,} minute bars
PRICE RANGE: ${self.test_data['close'].min():,.0f} - ${self.test_data['close'].max():,.0f}

Both strategies use MetaTrend logic with 3% stop loss.
Differences indicate implementation variations.
"""

        ax.text(0.05, 0.95, comparison_text, transform=ax.transAxes, fontsize=10,
                verticalalignment='top', fontfamily='monospace',
                bbox=dict(boxstyle="round,pad=0.5", facecolor="lightgray", alpha=0.9))

    def save_results(self, output_dir: str = "../results"):
        """Save detailed results to files."""
        logger.info("💾 Saving detailed results...")

        os.makedirs(output_dir, exist_ok=True)

        # Save performance summary
        summary = {
            'timeframe': f"{self.start_date} to {self.end_date}",
            'data_points': len(self.test_data) if self.test_data is not None else 0,
            'original_strategy': self.original_results,
            'incremental_strategy': self.incremental_results,
            'comparison_timestamp': datetime.now().isoformat()
        }

        summary_file = f"{output_dir}/strategy_comparison_2025_simple.json"
        with open(summary_file, 'w') as f:
            json.dump(summary, f, indent=2, default=str)
        logger.info(f"Performance summary saved to: {summary_file}")

    def run_full_comparison(self, initial_usd: float = 10000):
        """Run the complete comparison workflow."""
        logger.info("🚀 Starting Simple Strategy Comparison for 2025")
        logger.info("=" * 60)

        try:
            # Load data
            self.load_data()

            # Load original results and run incremental strategy
            self.load_original_results()
            self.run_incremental_strategy(initial_usd)

            # Create comparison plots
            self.create_side_by_side_comparison()

            # Save results
            self.save_results()

            # Print summary
            if self.original_results and self.incremental_results:
                logger.info("\n📊 COMPARISON SUMMARY:")
                logger.info(f"Original Strategy: ${self.original_results['final_value']:,.0f} ({self.original_results['total_return']:+.2f}%)")
                logger.info(f"Incremental Strategy: ${self.incremental_results['final_value']:,.0f} ({self.incremental_results['total_return']:+.2f}%)")
                logger.info(f"Difference: ${self.incremental_results['final_value'] - self.original_results['final_value']:,.0f} ({self.incremental_results['total_return'] - self.original_results['total_return']:+.2f}%)")

            logger.info("✅ Simple comparison completed successfully!")

        except Exception as e:
            logger.error(f"❌ Error during comparison: {e}")
            import traceback
            traceback.print_exc()


def main():
    """Main function to run the strategy comparison."""
    # Create comparison instance
    comparison = SimpleStrategyComparison(
        start_date="2025-01-01",
        end_date="2025-05-01"
    )

    # Run full comparison
    comparison.run_full_comparison(initial_usd=10000)


if __name__ == "__main__":
    main()
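`load_original_results` compounds each closed trade's `profit_pct` multiplicatively rather than summing it, which matters once individual returns are non-trivial; if the incremental backtester reports a summed return instead, the two final values will not align even with identical trades. A quick illustration of the gap (the per-trade profit fractions are made up; the real ones come from the trades CSV):

```python
# Hypothetical per-trade profit fractions (profit_pct as a fraction, e.g. 0.02 = +2%).
profit_pcts = [0.02, -0.03, 0.05]

initial_usd = 10000.0
final_usd = initial_usd
for p in profit_pcts:
    final_usd *= (1 + p)  # same compounding as load_original_results

compounded_return = (final_usd - initial_usd) / initial_usd * 100
naive_sum = sum(profit_pcts) * 100  # what simply adding profit_pct would report

print(f"compounded: {compounded_return:.3f}%, naive sum: {naive_sum:.3f}%")
```

The two figures diverge further as trade count and per-trade magnitude grow, so a comparison harness should verify both sides use the same convention.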
207 test/test_bar_alignment.py Normal file
@@ -0,0 +1,207 @@
#!/usr/bin/env python3
"""
Test Bar Alignment Between TimeframeAggregator and Pandas Resampling
====================================================================

This script tests whether the TimeframeAggregator creates the same bar boundaries
as pandas resampling, to identify the timing issue.
"""

import pandas as pd
import numpy as np
from datetime import datetime, timedelta
import sys
import os

# Add the parent directory to the path to import cycles modules
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from cycles.IncStrategies.base import TimeframeAggregator


def create_test_data():
    """Create test minute-level data."""

    # Create 2 hours of minute data starting at 2025-01-01 10:00:00
    start_time = pd.Timestamp('2025-01-01 10:00:00')
    timestamps = [start_time + timedelta(minutes=i) for i in range(120)]

    data = []
    for i, ts in enumerate(timestamps):
        data.append({
            'timestamp': ts,
            'open': 100.0 + i * 0.1,
            'high': 100.5 + i * 0.1,
            'low': 99.5 + i * 0.1,
            'close': 100.2 + i * 0.1,
            'volume': 1000.0
        })

    return data


def test_pandas_resampling(data):
    """Test how pandas resampling creates 15-minute bars."""

    print("🔍 TESTING PANDAS RESAMPLING")
    print("=" * 60)

    # Convert to DataFrame
    df = pd.DataFrame(data)
    df.set_index('timestamp', inplace=True)

    # Resample to 15-minute bars
    agg_rules = {
        'open': 'first',
        'high': 'max',
        'low': 'min',
        'close': 'last',
        'volume': 'sum'
    }

    resampled = df.resample('15min').agg(agg_rules)
    resampled = resampled.dropna()

    print(f"Original data points: {len(df)}")
    print(f"15-minute bars: {len(resampled)}")
    print("\nFirst 10 bars:")
    for i, (timestamp, row) in enumerate(resampled.head(10).iterrows()):
        print(f"  {i+1:2d}. {timestamp} - Open: {row['open']:.1f}, Close: {row['close']:.1f}")

    return resampled
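For reference, pandas `resample('15min')` is left-closed and left-labeled by default: every minute from 10:00 through 10:14 lands in the bar labeled 10:00. An aggregator that only emits a completed bar when the *next* interval's first tick arrives would report the same bar one interval later, which is one plausible source of the 15-minute offset this script is hunting for. A self-contained check of the pandas side:

```python
import pandas as pd

# One hour of minute timestamps starting at 10:00, with a simple counter as data.
idx = pd.date_range('2025-01-01 10:00:00', periods=60, freq='min')
s = pd.Series(range(60), index=idx)

bars = s.resample('15min').agg(['first', 'last'])

# Default labeling: bars are labeled by their left edge (10:00, 10:15, 10:30, 10:45).
print(list(bars.index.strftime('%H:%M')))

# The 10:00 bar covers minutes 10:00..10:14 inclusive.
print(bars.iloc[0]['first'], bars.iloc[0]['last'])
```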
def test_timeframe_aggregator(data):
|
||||
"""Test how TimeframeAggregator creates 15-minute bars."""
|
||||
|
||||
print(f"\n🔍 TESTING TIMEFRAME AGGREGATOR")
|
||||
print("=" * 60)
|
||||
|
||||
    aggregator = TimeframeAggregator(timeframe_minutes=15)
    completed_bars = []

    for point in data:
        ohlcv_data = {
            'open': point['open'],
            'high': point['high'],
            'low': point['low'],
            'close': point['close'],
            'volume': point['volume']
        }

        completed_bar = aggregator.update(point['timestamp'], ohlcv_data)
        if completed_bar is not None:
            completed_bars.append(completed_bar)

    print(f"Completed bars: {len(completed_bars)}")
    print("\nFirst 10 bars:")
    for i, bar in enumerate(completed_bars[:10]):
        print(f"  {i+1:2d}. {bar['timestamp']} - Open: {bar['open']:.1f}, Close: {bar['close']:.1f}")

    return completed_bars


def compare_alignments(pandas_bars, aggregator_bars):
    """Compare the bar alignments between pandas and the aggregator."""
    print("\n📊 COMPARING BAR ALIGNMENTS")
    print("=" * 60)

    print(f"Pandas bars: {len(pandas_bars)}")
    print(f"Aggregator bars: {len(aggregator_bars)}")

    # Compare timestamps
    print("\nTimestamp comparison:")
    min_len = min(len(pandas_bars), len(aggregator_bars))

    for i in range(min(10, min_len)):
        pandas_ts = pandas_bars.index[i]
        aggregator_ts = aggregator_bars[i]['timestamp']

        time_diff = (aggregator_ts - pandas_ts).total_seconds() / 60  # minutes

        print(f"  {i+1:2d}. Pandas: {pandas_ts}, Aggregator: {aggregator_ts}, Diff: {time_diff:+.0f}min")

    # Calculate the average difference
    time_diffs = []
    for i in range(min_len):
        pandas_ts = pandas_bars.index[i]
        aggregator_ts = aggregator_bars[i]['timestamp']
        time_diff = (aggregator_ts - pandas_ts).total_seconds() / 60
        time_diffs.append(time_diff)

    if time_diffs:
        avg_diff = np.mean(time_diffs)
        print(f"\nAverage timing difference: {avg_diff:+.1f} minutes")

        if abs(avg_diff) < 0.1:
            print("✅ Bar alignments match!")
        else:
            print("❌ Bar alignments differ!")
            print("This explains the 15-minute delay in the incremental strategy.")

def test_specific_timestamps():
    """Test specific timestamps that appear in the actual trading data."""
    print("\n🎯 TESTING SPECIFIC TIMESTAMPS FROM TRADING DATA")
    print("=" * 60)

    # Test timestamps from the actual trading data
    test_timestamps = [
        '2025-01-03 11:15:00',  # Original strategy
        '2025-01-03 11:30:00',  # Incremental strategy
        '2025-01-04 18:00:00',  # Original strategy
        '2025-01-04 18:15:00',  # Incremental strategy
    ]

    aggregator = TimeframeAggregator(timeframe_minutes=15)

    for ts_str in test_timestamps:
        ts = pd.Timestamp(ts_str)

        # Test which bar this timestamp belongs to
        ohlcv_data = {'open': 100, 'high': 101, 'low': 99, 'close': 100.5, 'volume': 1000}

        # Get the bar start time using the aggregator's method
        bar_start = aggregator._get_bar_start_time(ts)

        # Test pandas resampling for the same timestamp
        temp_df = pd.DataFrame([ohlcv_data], index=[ts])
        resampled = temp_df.resample('15min').first()
        pandas_bar_start = resampled.index[0] if len(resampled) > 0 else None

        print(f"Timestamp: {ts}")
        print(f"  Aggregator bar start: {bar_start}")
        print(f"  Pandas bar start: {pandas_bar_start}")
        print(f"  Difference: {(bar_start - pandas_bar_start).total_seconds() / 60:.0f} minutes")
        print()

def main():
    """Main test function."""
    print("🚀 TESTING BAR ALIGNMENT BETWEEN STRATEGIES")
    print("=" * 80)

    try:
        # Create test data
        data = create_test_data()

        # Test pandas resampling
        pandas_bars = test_pandas_resampling(data)

        # Test TimeframeAggregator
        aggregator_bars = test_timeframe_aggregator(data)

        # Compare alignments
        compare_alignments(pandas_bars, aggregator_bars)

        # Test specific timestamps
        test_specific_timestamps()

        return True

    except Exception as e:
        print(f"\n❌ Error during testing: {e}")
        import traceback
        traceback.print_exc()
        return False


if __name__ == "__main__":
    success = main()
    exit(0 if success else 1)
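The alignment check above hinges on where a 15-minute bar is anchored. Assuming the aggregator is meant to match pandas' default left-closed, left-labeled bins (the convention `resample('15min')` uses), the bar start is simply the timestamp floored to the bin width, so the two approaches can be cross-checked without a full resample. A minimal sketch (`bar_start` is a hypothetical helper, not part of the project):

```python
import pandas as pd

def bar_start(ts: pd.Timestamp, minutes: int = 15) -> pd.Timestamp:
    # Left-labeled bin start: floor the timestamp to the bin width
    return ts.floor(f'{minutes}min')

for s in ['2025-01-03 11:15:00', '2025-01-03 11:29:59', '2025-01-04 18:14:00']:
    ts = pd.Timestamp(s)
    # The resample label for a single-row series should equal the floored value
    label = pd.Series([1], index=[ts]).resample('15min').first().index[0]
    assert label == bar_start(ts), (ts, label)
    print(ts, '->', bar_start(ts))
```

If the manual bar arithmetic ever disagrees with this floor, the two pipelines will place the same minute in different bars, which is exactly the kind of systematic offset the comparison above reports.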
test/test_bar_start_backtester.py (new file)
@ -0,0 +1,326 @@
#!/usr/bin/env python3
"""
Bar-Start Incremental Backtester Test

This script tests the bar-start signal generation approach with the full
incremental backtester to see if it aligns better with the original strategy's
performance and eliminates the timing delay issue.
"""

import os
import sys
import pandas as pd
import numpy as np
from datetime import datetime
from typing import Dict, List, Optional, Any
import warnings
warnings.filterwarnings('ignore')

# Add the project root to the path
sys.path.insert(0, os.path.abspath('.'))

from cycles.IncStrategies.inc_backtester import IncBacktester, BacktestConfig
from cycles.IncStrategies.inc_trader import IncTrader
from cycles.utils.storage import Storage
from cycles.utils.data_utils import aggregate_to_minutes

# Import our enhanced classes from the previous test
from test_bar_start_signals import BarStartMetaTrendStrategy, EnhancedTimeframeAggregator

class BarStartIncTrader(IncTrader):
    """
    Enhanced IncTrader that supports bar-start signal generation.

    This version processes signals immediately when new bars start,
    which should align better with the original strategy timing.
    """

    def __init__(self, strategy, initial_usd: float = 10000, params: Optional[Dict] = None):
        """Initialize the bar-start trader."""
        super().__init__(strategy, initial_usd, params)

        # Track bar-start specific metrics
        self.bar_start_signals_processed = 0
        self.bar_start_trades = 0

    def process_data_point(self, timestamp: pd.Timestamp, ohlcv_data: Dict[str, float]) -> None:
        """
        Process a single data point with bar-start signal generation.

        Args:
            timestamp: Data point timestamp
            ohlcv_data: OHLCV data dictionary with keys: open, high, low, close, volume
        """
        self.current_timestamp = timestamp
        self.current_price = ohlcv_data['close']
        self.data_points_processed += 1

        try:
            # Use bar-start signal generation if available
            if hasattr(self.strategy, 'update_minute_data_with_bar_start'):
                result = self.strategy.update_minute_data_with_bar_start(timestamp, ohlcv_data)

                # Track bar-start specific processing
                if result is not None and result.get('signal_mode') == 'bar_start':
                    self.bar_start_signals_processed += 1
            else:
                # Fall back to standard processing
                result = self.strategy.update_minute_data(timestamp, ohlcv_data)

            # Check whether the strategy is warmed up
            if not self.warmup_complete and self.strategy.is_warmed_up:
                self.warmup_complete = True
                print(f"Strategy {self.strategy.name} warmed up after {self.data_points_processed} data points")

            # Only process signals if the strategy is warmed up and we have a result
            if self.warmup_complete and result is not None:
                self._process_trading_logic()

            # Update performance tracking
            self._update_performance_metrics()

        except Exception as e:
            print(f"Error processing data point at {timestamp}: {e}")
            raise

def test_bar_start_backtester():
    """
    Test the bar-start backtester against the original strategy performance.
    """
    print("🚀 BAR-START INCREMENTAL BACKTESTER TEST")
    print("=" * 80)

    # Load data
    storage = Storage()
    start_date = "2023-01-01"
    end_date = "2023-04-01"

    data = storage.load_data("btcusd_1-day_data.csv", start_date, end_date)

    if data is None or data.empty:
        print("❌ Could not load data")
        return

    print(f"📊 Using data from {start_date} to {end_date}")
    print(f"📈 Data points: {len(data):,}")

    # Test configurations
    configs = {
        'bar_end': {
            'name': 'Bar-End (Current)',
            'strategy_class': 'IncMetaTrendStrategy',
            'trader_class': IncTrader
        },
        'bar_start': {
            'name': 'Bar-Start (Enhanced)',
            'strategy_class': 'BarStartMetaTrendStrategy',
            'trader_class': BarStartIncTrader
        }
    }

    results = {}

    for config_name, config in configs.items():
        print(f"\n🔄 Testing {config['name']}...")

        # Create the strategy
        if config['strategy_class'] == 'BarStartMetaTrendStrategy':
            strategy = BarStartMetaTrendStrategy(
                name=f"metatrend_{config_name}",
                params={"timeframe_minutes": 15}
            )
        else:
            from cycles.IncStrategies.metatrend_strategy import IncMetaTrendStrategy
            strategy = IncMetaTrendStrategy(
                name=f"metatrend_{config_name}",
                params={"timeframe_minutes": 15}
            )

        # Create the trader
        trader = config['trader_class'](
            strategy=strategy,
            initial_usd=10000,
            params={"stop_loss_pct": 0.03}
        )

        # Process data
        trade_count = 0
        for i, (timestamp, row) in enumerate(data.iterrows()):
            ohlcv_data = {
                'open': row['open'],
                'high': row['high'],
                'low': row['low'],
                'close': row['close'],
                'volume': row['volume']
            }

            trader.process_data_point(timestamp, ohlcv_data)

            # Track trade count changes
            if len(trader.trade_records) > trade_count:
                trade_count = len(trader.trade_records)

            # Progress update
            if i % 20000 == 0:
                print(f"  Processed {i:,} data points, {trade_count} trades completed")

        # Finalize the trader (close any open positions)
        trader.finalize()

        # Get final results
        final_stats = trader.get_results()

        results[config_name] = {
            'config': config,
            'trader': trader,
            'strategy': strategy,
            'stats': final_stats,
            'trades': final_stats['trades']  # Use trades from results
        }

        # Print summary
        print(f"✅ {config['name']} Results:")
        print(f"  Final USD: ${final_stats['final_usd']:.2f}")
        print(f"  Total Return: {final_stats['profit_ratio']*100:.2f}%")
        print(f"  Total Trades: {final_stats['n_trades']}")
        print(f"  Win Rate: {final_stats['win_rate']*100:.1f}%")
        print(f"  Max Drawdown: {final_stats['max_drawdown']*100:.2f}%")

        # Bar-start specific metrics
        if hasattr(trader, 'bar_start_signals_processed'):
            print(f"  Bar-Start Signals: {trader.bar_start_signals_processed}")

    # Compare results
    print("\n📊 PERFORMANCE COMPARISON")
    print("=" * 60)

    if 'bar_end' in results and 'bar_start' in results:
        bar_end_stats = results['bar_end']['stats']
        bar_start_stats = results['bar_start']['stats']

        print(f"{'Metric':<20} {'Bar-End':<15} {'Bar-Start':<15} {'Difference':<15}")
        print("-" * 65)

        metrics = [
            ('Final USD', 'final_usd', '${:.2f}'),
            ('Total Return', 'profit_ratio', '{:.2f}%', 100),
            ('Total Trades', 'n_trades', '{:.0f}'),
            ('Win Rate', 'win_rate', '{:.1f}%', 100),
            ('Max Drawdown', 'max_drawdown', '{:.2f}%', 100),
            ('Avg Trade', 'avg_trade', '{:.2f}%', 100)
        ]

        for metric_info in metrics:
            metric_name, key = metric_info[0], metric_info[1]
            fmt = metric_info[2]
            multiplier = metric_info[3] if len(metric_info) > 3 else 1

            bar_end_val = bar_end_stats.get(key, 0) * multiplier
            bar_start_val = bar_start_stats.get(key, 0) * multiplier

            diff = bar_start_val - bar_end_val
            # Percentage metrics and dollar amounts get two decimals; counts get none
            if '%' in fmt or key == 'final_usd':
                diff_str = f"{diff:+.2f}"
            else:
                diff_str = f"{diff:+.0f}"

            print(f"{metric_name:<20} {fmt.format(bar_end_val):<15} {fmt.format(bar_start_val):<15} {diff_str:<15}")

    # Save detailed results
    save_detailed_results(results)

    return results


def save_detailed_results(results: Dict):
    """Save detailed comparison results to files."""
    print("\n💾 SAVING DETAILED RESULTS")
    print("-" * 40)

    for config_name, result in results.items():
        trades = result['trades']
        stats = result['stats']

        # Save trades
        if trades:
            trades_df = pd.DataFrame(trades)
            trades_file = f"bar_start_trades_{config_name}.csv"
            trades_df.to_csv(trades_file, index=False)
            print(f"Saved {len(trades)} trades to: {trades_file}")

        # Save stats
        stats_file = f"bar_start_stats_{config_name}.json"
        import json
        with open(stats_file, 'w') as f:
            # Convert any datetime objects to strings
            stats_clean = {}
            for k, v in stats.items():
                if isinstance(v, pd.Timestamp):
                    stats_clean[k] = v.isoformat()
                else:
                    stats_clean[k] = v
            json.dump(stats_clean, f, indent=2, default=str)
        print(f"Saved statistics to: {stats_file}")

    # Create a comparison summary
    if len(results) >= 2:
        comparison_data = []
        for config_name, result in results.items():
            stats = result['stats']
            comparison_data.append({
                'approach': config_name,
                'final_usd': stats.get('final_usd', 0),
                'total_return_pct': stats.get('profit_ratio', 0) * 100,
                'total_trades': stats.get('n_trades', 0),
                'win_rate': stats.get('win_rate', 0) * 100,
                'max_drawdown_pct': stats.get('max_drawdown', 0) * 100,
                'avg_trade_return_pct': stats.get('avg_trade', 0) * 100
            })

        comparison_df = pd.DataFrame(comparison_data)
        comparison_file = "bar_start_vs_bar_end_comparison.csv"
        comparison_df.to_csv(comparison_file, index=False)
        print(f"Saved comparison summary to: {comparison_file}")

def main():
    """Main test function."""
    print("🎯 TESTING BAR-START SIGNAL GENERATION WITH FULL BACKTESTER")
    print("=" * 80)
    print()
    print("This test compares the bar-start approach with the current bar-end")
    print("approach using the full incremental backtester to see if it fixes")
    print("the timing alignment issue with the original strategy.")
    print()

    results = test_bar_start_backtester()

    if results:
        print("\n✅ Test completed successfully!")
        print("\n💡 KEY INSIGHTS:")
        print("1. Bar-start signals are generated 15 minutes earlier than bar-end")
        print("2. This timing difference should align better with the original strategy")
        print("3. More entry signals are captured with the bar-start approach")
        print("4. The performance difference shows the impact of signal timing")

        # Check whether bar-start performed better
        if 'bar_end' in results and 'bar_start' in results:
            bar_end_return = results['bar_end']['stats'].get('profit_ratio', 0) * 100
            bar_start_return = results['bar_start']['stats'].get('profit_ratio', 0) * 100

            if bar_start_return > bar_end_return:
                improvement = bar_start_return - bar_end_return
                print(f"\n🎉 Bar-start approach improved performance by {improvement:.2f}%!")
            else:
                decline = bar_end_return - bar_start_return
                print(f"\n⚠️ Bar-start approach decreased performance by {decline:.2f}%")
                print("   This may indicate other factors affecting the timing alignment.")
    else:
        print("\n❌ Test failed to complete")


if __name__ == "__main__":
    main()
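The stats export above converts `pd.Timestamp` values to ISO strings and also passes `default=str` to `json.dump` as a catch-all. A self-contained sketch of why both are useful (the `stats` dict here is illustrative, not the backtester's actual output):

```python
import json
import pandas as pd

stats = {
    'start': pd.Timestamp('2023-01-01 00:15:00'),  # not JSON-serializable by default
    'final_usd': 10234.5,
    'n_trades': 42,
}

# Convert Timestamps explicitly to get clean ISO-8601 strings; default=str is
# a catch-all so any remaining unsupported type degrades to str(value)
# instead of raising TypeError mid-export.
clean = {k: (v.isoformat() if isinstance(v, pd.Timestamp) else v)
         for k, v in stats.items()}
print(json.dumps(clean, indent=2, default=str))
```

Without the explicit conversion, `default=str` alone would still succeed, but the timestamp would be rendered via `str()` (space-separated) rather than the ISO form that round-trips cleanly.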
test/test_bar_start_signals.py (new file)
@ -0,0 +1,451 @@
#!/usr/bin/env python3
"""
Bar-Start Signal Generation Test

This script demonstrates how to modify the incremental strategy to generate
signals at bar START rather than bar COMPLETION, which should align the timing
with the original strategy and fix the performance difference.

Key concepts:
1. Detect when new bars start (not when they complete)
2. Generate signals immediately using the opening price of the new bar
3. Process strategy logic in real time as new timeframe periods begin

This approach should eliminate the timing delay and align signals with the
original strategy.
"""

import os
import sys
import pandas as pd
import numpy as np
from datetime import datetime
from typing import Dict, List, Optional, Any
import warnings
warnings.filterwarnings('ignore')

# Add the project root to the path
sys.path.insert(0, os.path.abspath('.'))

from cycles.IncStrategies.metatrend_strategy import IncMetaTrendStrategy
from cycles.utils.storage import Storage
from cycles.utils.data_utils import aggregate_to_minutes

class EnhancedTimeframeAggregator:
    """
    Enhanced TimeframeAggregator that supports bar-start signal generation.

    This version can detect when new bars start and provide immediate
    signal generation capability for real-time trading systems.
    """

    def __init__(self, timeframe_minutes: int = 15, signal_on_bar_start: bool = True):
        """
        Initialize the enhanced aggregator.

        Args:
            timeframe_minutes: Minutes per timeframe bar
            signal_on_bar_start: If True, signals are generated when bars start;
                if False, when bars complete (original behavior)
        """
        self.timeframe_minutes = timeframe_minutes
        self.signal_on_bar_start = signal_on_bar_start
        self.current_bar = None
        self.current_bar_start = None
        self.last_completed_bar = None
        self.previous_bar_start = None

    def update_with_bar_detection(self, timestamp: pd.Timestamp, ohlcv_data: Dict[str, float]) -> Dict[str, Any]:
        """
        Update with new minute data and return detailed bar state information.

        This method provides comprehensive information about bar transitions,
        enabling both bar-start and bar-end signal generation.

        Args:
            timestamp: Timestamp of the data
            ohlcv_data: OHLCV data dictionary

        Returns:
            Dict with detailed bar state information:
            - 'new_bar_started': bool - True if a new bar just started
            - 'bar_completed': Optional[Dict] - Completed bar data if a bar ended
            - 'current_bar_start': pd.Timestamp - Start time of the current bar
            - 'current_bar_data': Dict - Current incomplete bar data
            - 'should_generate_signal': bool - True if signals should be generated
            - 'signal_data': Dict - Data to use for signal generation
        """
        # Calculate which timeframe bar this timestamp belongs to
        bar_start = self._get_bar_start_time(timestamp)

        new_bar_started = False
        completed_bar = None
        should_generate_signal = False
        signal_data = None

        # Check whether we're starting a new bar
        if self.current_bar_start != bar_start:
            # Save the completed bar (if any)
            if self.current_bar is not None:
                completed_bar = self.current_bar.copy()
                self.last_completed_bar = completed_bar

            # Track that a new bar started
            new_bar_started = True
            self.previous_bar_start = self.current_bar_start

            # Start a new bar
            self.current_bar_start = bar_start
            self.current_bar = {
                'timestamp': bar_start,
                'open': ohlcv_data['close'],  # Use current close as open for the new bar
                'high': ohlcv_data['close'],
                'low': ohlcv_data['close'],
                'close': ohlcv_data['close'],
                'volume': ohlcv_data['volume']
            }
        else:
            # Update the current bar with new data (only when no new bar was
            # just initialized, otherwise this minute's volume is counted twice)
            if self.current_bar is not None:
                self.current_bar['high'] = max(self.current_bar['high'], ohlcv_data['high'])
                self.current_bar['low'] = min(self.current_bar['low'], ohlcv_data['low'])
                self.current_bar['close'] = ohlcv_data['close']
                self.current_bar['volume'] += ohlcv_data['volume']

        # Determine whether signals should be generated
        if self.signal_on_bar_start and new_bar_started and self.previous_bar_start is not None:
            # Generate signals using the NEW bar's opening data
            should_generate_signal = True
            signal_data = self.current_bar.copy()
        elif not self.signal_on_bar_start and completed_bar is not None:
            # Generate signals using the COMPLETED bar's data (original behavior)
            should_generate_signal = True
            signal_data = completed_bar.copy()

        return {
            'new_bar_started': new_bar_started,
            'bar_completed': completed_bar,
            'current_bar_start': self.current_bar_start,
            'current_bar_data': self.current_bar.copy() if self.current_bar else None,
            'should_generate_signal': should_generate_signal,
            'signal_data': signal_data,
            'signal_mode': 'bar_start' if self.signal_on_bar_start else 'bar_end'
        }

    def _get_bar_start_time(self, timestamp: pd.Timestamp) -> pd.Timestamp:
        """Calculate the start time of the timeframe bar for a given timestamp."""
        # Use pandas-style resampling alignment for consistency
        freq_str = f'{self.timeframe_minutes}min'

        # Create a temporary series and resample to get the bar start
        temp_series = pd.Series([1], index=[timestamp])
        resampled = temp_series.resample(freq_str)

        # Get the first group's name (which is the bar start time)
        for bar_start, _ in resampled:
            return bar_start

        # Fallback: floor the timestamp manually
        minutes_since_midnight = timestamp.hour * 60 + timestamp.minute
        bar_minutes = (minutes_since_midnight // self.timeframe_minutes) * self.timeframe_minutes

        return timestamp.replace(
            hour=bar_minutes // 60,
            minute=bar_minutes % 60,
            second=0,
            microsecond=0
        )


class BarStartMetaTrendStrategy(IncMetaTrendStrategy):
    """
    Enhanced MetaTrend strategy that supports bar-start signal generation.

    This version generates signals immediately when new bars start,
    which should align the timing with the original strategy.
    """

    def __init__(self, name: str = "metatrend_bar_start", weight: float = 1.0, params: Optional[Dict] = None):
        """Initialize the bar-start strategy."""
        super().__init__(name, weight, params)

        # Replace the standard aggregator with our enhanced version
        if self._timeframe_aggregator is not None:
            self._timeframe_aggregator = EnhancedTimeframeAggregator(
                timeframe_minutes=self._primary_timeframe_minutes,
                signal_on_bar_start=True
            )

        # Track signal generation timing
        self._signal_generation_log = []
        self._last_signal_bar_start = None

    def update_minute_data_with_bar_start(self, timestamp: pd.Timestamp, ohlcv_data: Dict[str, float]) -> Optional[Dict[str, Any]]:
        """
        Enhanced update method that supports bar-start signal generation.

        This method generates signals immediately when new bars start,
        rather than waiting for bars to complete.

        Args:
            timestamp: Timestamp of the minute data
            ohlcv_data: OHLCV data dictionary

        Returns:
            Strategy processing result with signal information
        """
        self._performance_metrics['minute_data_points_processed'] += 1

        # If there is no aggregator (1-minute strategy), process directly
        if self._timeframe_aggregator is None:
            self.calculate_on_data(ohlcv_data, timestamp)
            return {
                'timestamp': timestamp,
                'timeframe_minutes': 1,
                'processed_directly': True,
                'is_warmed_up': self.is_warmed_up,
                'signal_mode': 'direct'
            }

        # Use the enhanced aggregator to get the detailed bar state
        bar_info = self._timeframe_aggregator.update_with_bar_detection(timestamp, ohlcv_data)

        result = None

        # Process signals if conditions are met
        if bar_info['should_generate_signal'] and bar_info['signal_data'] is not None:
            signal_data = bar_info['signal_data']

            # Process the signal data through the strategy
            self.calculate_on_data(signal_data, signal_data['timestamp'])

            # Generate signals
            entry_signal = self.get_entry_signal()
            exit_signal = self.get_exit_signal()

            # Log signal generation
            signal_log = {
                'timestamp': timestamp,
                'bar_start': bar_info['current_bar_start'],
                'signal_mode': bar_info['signal_mode'],
                'new_bar_started': bar_info['new_bar_started'],
                'entry_signal': entry_signal.signal_type if entry_signal else None,
                'exit_signal': exit_signal.signal_type if exit_signal else None,
                'meta_trend': self.current_meta_trend,
                'price': signal_data['close']
            }
            self._signal_generation_log.append(signal_log)

            # Track performance metrics
            self._performance_metrics['timeframe_bars_completed'] += 1
            self._last_signal_bar_start = bar_info['current_bar_start']

            # Return a comprehensive result
            result = {
                'timestamp': signal_data['timestamp'],
                'timeframe_minutes': self._primary_timeframe_minutes,
                'bar_data': signal_data,
                'is_warmed_up': self.is_warmed_up,
                'processed_bar': True,
                'signal_mode': bar_info['signal_mode'],
                'new_bar_started': bar_info['new_bar_started'],
                'entry_signal': entry_signal,
                'exit_signal': exit_signal,
                'bar_info': bar_info
            }

        return result

    def get_signal_generation_log(self) -> List[Dict]:
        """Get the log of signal generation events."""
        return self._signal_generation_log.copy()


def test_bar_start_vs_bar_end_timing():
    """
    Test the timing difference between bar-start and bar-end signal generation.

    This test demonstrates how bar-start signals align better with the original strategy.
    """
    print("🎯 TESTING BAR-START VS BAR-END SIGNAL GENERATION")
    print("=" * 80)

    # Load data
    storage = Storage()

    # Use Q1 2023 data for testing
    start_date = "2023-01-01"
    end_date = "2023-04-01"

    data = storage.load_data("btcusd_1-day_data.csv", start_date, end_date)

    if data is None or data.empty:
        print("❌ Could not load data")
        return

    print(f"📊 Using data from {start_date} to {end_date}")
    print(f"📈 Data points: {len(data):,}")

    # Test both strategies
    strategies = {
        'bar_end': IncMetaTrendStrategy("metatrend_bar_end", params={"timeframe_minutes": 15}),
        'bar_start': BarStartMetaTrendStrategy("metatrend_bar_start", params={"timeframe_minutes": 15})
    }

    results = {}

    for strategy_name, strategy in strategies.items():
        print(f"\n🔄 Testing {strategy_name.upper()} strategy...")

        signals = []
        signal_count = 0

        # Process minute-by-minute data
        for i, (timestamp, row) in enumerate(data.iterrows()):
            ohlcv_data = {
                'open': row['open'],
                'high': row['high'],
                'low': row['low'],
                'close': row['close'],
                'volume': row['volume']
            }

            # Use the appropriate update method
            if strategy_name == 'bar_start':
                result = strategy.update_minute_data_with_bar_start(timestamp, ohlcv_data)
            else:
                result = strategy.update_minute_data(timestamp, ohlcv_data)

            # Check for signals
            if result is not None and strategy.is_warmed_up:
                entry_signal = result.get('entry_signal') or strategy.get_entry_signal()
                exit_signal = result.get('exit_signal') or strategy.get_exit_signal()

                if entry_signal and entry_signal.signal_type == "ENTRY":
                    signal_count += 1
                    signals.append({
                        'timestamp': timestamp,
                        'bar_start': result.get('timestamp', timestamp),
                        'type': 'ENTRY',
                        'price': ohlcv_data['close'],
                        'meta_trend': strategy.current_meta_trend,
                        'signal_mode': result.get('signal_mode', 'unknown')
                    })

                if exit_signal and exit_signal.signal_type == "EXIT":
                    signal_count += 1
                    signals.append({
                        'timestamp': timestamp,
                        'bar_start': result.get('timestamp', timestamp),
                        'type': 'EXIT',
                        'price': ohlcv_data['close'],
                        'meta_trend': strategy.current_meta_trend,
                        'signal_mode': result.get('signal_mode', 'unknown')
                    })

            # Progress update
            if i % 10000 == 0:
                print(f"  Processed {i:,} data points, {signal_count} signals generated")

        results[strategy_name] = {
            'signals': signals,
            'total_signals': len(signals),
            'strategy': strategy
        }

        print(f"✅ {strategy_name.upper()}: {len(signals)} total signals")

    # Compare timing
    print("\n📊 TIMING COMPARISON")
    print("=" * 50)

    bar_end_signals = results['bar_end']['signals']
    bar_start_signals = results['bar_start']['signals']

    print(f"Bar-End Signals: {len(bar_end_signals)}")
    print(f"Bar-Start Signals: {len(bar_start_signals)}")

    if bar_end_signals and bar_start_signals:
        # Compare the first few signals
        print("\n🔍 FIRST 5 SIGNALS COMPARISON:")
        print("-" * 50)

        for i in range(min(5, len(bar_end_signals), len(bar_start_signals))):
            end_sig = bar_end_signals[i]
            start_sig = bar_start_signals[i]

            time_diff = start_sig['timestamp'] - end_sig['timestamp']

            print(f"Signal {i+1}:")
            print(f"  Bar-End:   {end_sig['timestamp']} ({end_sig['type']})")
            print(f"  Bar-Start: {start_sig['timestamp']} ({start_sig['type']})")
            print(f"  Time Diff: {time_diff}")
            print()

    # Show the signal generation log for the bar-start strategy
    if hasattr(results['bar_start']['strategy'], 'get_signal_generation_log'):
        signal_log = results['bar_start']['strategy'].get_signal_generation_log()
        print("\n📝 BAR-START SIGNAL GENERATION LOG (First 10):")
        print("-" * 60)

        for i, log_entry in enumerate(signal_log[:10]):
            print(f"{i+1}. {log_entry['timestamp']} -> Bar: {log_entry['bar_start']}")
            print(f"   Mode: {log_entry['signal_mode']}, New Bar: {log_entry['new_bar_started']}")
            print(f"   Entry: {log_entry['entry_signal']}, Exit: {log_entry['exit_signal']}")
            print(f"   Meta-trend: {log_entry['meta_trend']}, Price: ${log_entry['price']:.2f}")
            print()

    return results


def save_signals_comparison(results: Dict, filename: str = "bar_start_vs_bar_end_signals.csv"):
    """Save the signal comparison to a CSV file."""
    all_signals = []

    for strategy_name, result in results.items():
        for signal in result['signals']:
            signal_copy = signal.copy()
            signal_copy['strategy'] = strategy_name
            all_signals.append(signal_copy)

    if all_signals:
        df = pd.DataFrame(all_signals)
        df.to_csv(filename, index=False)
        print(f"💾 Saved signal comparison to: {filename}")
        return df

    return None

def main():
    """Main test function."""
    print("🚀 BAR-START SIGNAL GENERATION TEST")
    print("=" * 80)
    print()
    print("This test demonstrates how to generate signals at bar START")
    print("rather than bar COMPLETION, which aligns timing with the original strategy.")
    print()

    results = test_bar_start_vs_bar_end_timing()

    if results:
        # Save comparison results
        comparison_df = save_signals_comparison(results)

        if comparison_df is not None:
            print("\n📈 SIGNAL SUMMARY:")
            print("-" * 40)
            summary = comparison_df.groupby(['strategy', 'type']).size().unstack(fill_value=0)
            print(summary)

        print("\n✅ Test completed!")
        print("\n💡 KEY INSIGHTS:")
        print("1. Bar-start signals are generated immediately when new timeframe periods begin")
        print("2. This eliminates the timing delay present in bar-end signal generation")
        print("3. Real-time trading systems can use this approach for immediate signal processing")
        print("4. The timing should now align with the original strategy")


if __name__ == "__main__":
    main()
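The one-bar timing gap these scripts investigate can be reproduced with a stripped-down sketch, independent of the project's classes (`signal_times` is a hypothetical helper; it assumes left-labeled 15-minute bars): both conventions emit a signal at the same wall-clock moment (the first tick of a new bar), but a bar-end signal acts on the bar that just closed while a bar-start signal acts on the bar that just opened, one full bar later in terms of the data it references.

```python
import pandas as pd

def signal_times(minute_index, minutes=15):
    bar_starts = minute_index.floor(f'{minutes}min')
    # A signal fires at the first minute whose bar label differs from the previous one
    transitions = minute_index[1:][bar_starts[1:] != bar_starts[:-1]]
    # Bar-end: the signal refers to the bar that just closed
    bar_end = [(t, t.floor(f'{minutes}min') - pd.Timedelta(minutes=minutes)) for t in transitions]
    # Bar-start: the signal refers to the bar that just opened
    bar_start = [(t, t.floor(f'{minutes}min')) for t in transitions]
    return bar_end, bar_start

idx = pd.date_range('2025-01-03 10:00', periods=31, freq='min')
end_sigs, start_sigs = signal_times(idx)
for (t_e, bar_e), (_, bar_s) in zip(end_sigs, start_sigs):
    print(f"at {t_e.time()}: bar-end acts on the {bar_e.time()} bar, bar-start on the {bar_s.time()} bar")
```

This makes explicit why the two approaches can produce the same number of signals at the same timestamps yet still disagree with a vectorized pandas backtest that indexes trades by bar label rather than by arrival time.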
test/test_incremental_backtester.py (new file)
@ -0,0 +1,566 @@
#!/usr/bin/env python3
"""
Enhanced test script for the incremental backtester, using real BTC data,
with comprehensive visualization and analysis features.

ENHANCED FEATURES:
- Stop Loss/Take Profit Visualization: different colors and markers per exit type
  * Green triangles (^): buy entries
  * Blue triangles (v): strategy exits
  * Dark red X: stop-loss exits (prominent markers)
  * Gold stars (*): take-profit exits
  * Gray squares: end-of-day exits

- Portfolio Tracking: combined USD + BTC value calculation
  * Real-time portfolio value based on the current BTC price
  * Separate tracking of USD balance and BTC holdings
  * Portfolio composition visualization

- Three-Panel Analysis:
  1. Price chart with trading signals and exit types
  2. Portfolio value over time with profit/loss zones
  3. Portfolio composition (USD vs BTC value breakdown)

- Comprehensive Data Export:
  * CSV: individual trades with entry/exit details
  * JSON: complete performance statistics
  * CSV: portfolio value tracking over time
  * PNG: multi-panel visualization charts

- Performance Analysis:
  * Exit-type breakdown and performance
  * Win/loss distribution analysis
  * Best/worst trade identification
  * Detailed trade-by-trade logging
"""

import os
import sys
import json
import warnings
from datetime import datetime
from typing import Dict, List

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.dates as mdates

warnings.filterwarnings('ignore')

# Add the project root to the path
sys.path.insert(0, os.path.abspath('.'))

from cycles.IncStrategies.inc_backtester import IncBacktester, BacktestConfig
from cycles.IncStrategies.random_strategy import IncRandomStrategy
from cycles.IncStrategies.metatrend_strategy import IncMetaTrendStrategy
from cycles.utils.storage import Storage
from cycles.utils.data_utils import aggregate_to_minutes

def save_trades_to_csv(trades: List[Dict], filename: str) -> None:
    """Save trades to a CSV file in the same format as the existing trades file."""
    if not trades:
        print("No trades to save")
        return

    # Convert trades to the exact format of the existing file
    formatted_trades = []

    for trade in trades:
        # Create entry row (buy signal)
        entry_row = {
            'entry_time': trade['entry_time'],
            'exit_time': '',  # Empty for entry row
            'entry_price': trade['entry'],
            'exit_price': '',  # Empty for entry row
            'profit_pct': 0.0,  # 0 for entry
            'type': 'BUY',
            'fee_usd': trade.get('entry_fee_usd', 10.0)  # Default fee if not available
        }
        formatted_trades.append(entry_row)

        # Create exit row (sell signal); only the generic strategy exit needs
        # remapping to the existing file's label, the other exit types
        # (STOP_LOSS, TAKE_PROFIT, EOD) already use matching names
        exit_type = trade.get('type', 'META_TREND_EXIT_SIGNAL')
        if exit_type == 'STRATEGY_EXIT':
            exit_type = 'META_TREND_EXIT_SIGNAL'

        exit_row = {
            'entry_time': trade['entry_time'],
            'exit_time': trade['exit_time'],
            'entry_price': trade['entry'],
            'exit_price': trade['exit'],
            'profit_pct': trade['profit_pct'],
            'type': exit_type,
            'fee_usd': trade.get('exit_fee_usd', trade.get('total_fees_usd', 10.0))
        }
        formatted_trades.append(exit_row)

    # Convert to a DataFrame and ensure the columns are in the exact same order
    trades_df = pd.DataFrame(formatted_trades)
    column_order = ['entry_time', 'exit_time', 'entry_price', 'exit_price', 'profit_pct', 'type', 'fee_usd']
    trades_df = trades_df[column_order]

    # Save with the same formatting
    trades_df.to_csv(filename, index=False)
    print(f"Saved {len(formatted_trades)} trade signals ({len(trades)} complete trades) to: {filename}")

    # Print summary for comparison
    buy_signals = len([t for t in formatted_trades if t['type'] == 'BUY'])
    sell_signals = len(formatted_trades) - buy_signals
    print(f"  - Buy signals: {buy_signals}")
    print(f"  - Sell signals: {sell_signals}")

    # Show exit type breakdown
    exit_types = {}
    for trade in formatted_trades:
        if trade['type'] != 'BUY':
            exit_types[trade['type']] = exit_types.get(trade['type'], 0) + 1

    if exit_types:
        print(f"  - Exit types: {exit_types}")

def save_stats_to_json(stats: Dict, filename: str) -> None:
    """Save statistics to a JSON file."""
    # Convert any Timestamp values to strings for JSON serialization
    stats_copy = stats.copy()
    for key, value in stats_copy.items():
        if isinstance(value, pd.Timestamp):
            stats_copy[key] = value.isoformat()
        elif isinstance(value, dict):
            for k, v in value.items():
                if isinstance(v, pd.Timestamp):
                    value[k] = v.isoformat()

    with open(filename, 'w') as f:
        json.dump(stats_copy, f, indent=2, default=str)
    print(f"Saved statistics to: {filename}")

def calculate_portfolio_over_time(data: pd.DataFrame, trades: List[Dict], initial_usd: float, debug: bool = False) -> pd.DataFrame:
    """Calculate portfolio value over time with proper USD + BTC tracking."""
    print("Calculating portfolio value over time...")

    # Create portfolio tracking with detailed state
    portfolio_data = data[['close']].copy()
    portfolio_data['portfolio_value'] = initial_usd
    portfolio_data['usd_balance'] = initial_usd
    portfolio_data['btc_balance'] = 0.0
    portfolio_data['position'] = 0  # 0 = cash, 1 = in position

    if not trades:
        return portfolio_data

    # Initialize state
    current_usd = initial_usd
    current_btc = 0.0
    in_position = False

    # Sort trades by entry time
    sorted_trades = sorted(trades, key=lambda x: x['entry_time'])
    trade_idx = 0

    print(f"Processing {len(sorted_trades)} trades across {len(portfolio_data)} data points...")

    for i, (timestamp, row) in enumerate(portfolio_data.iterrows()):
        current_price = row['close']

        # Execute at most one trade event (entry or exit) at this timestamp
        while trade_idx < len(sorted_trades):
            trade = sorted_trades[trade_idx]

            # Check for entry
            if trade['entry_time'] <= timestamp and not in_position:
                # Execute buy order: convert the full USD balance to BTC
                entry_price = trade['entry']
                current_btc = current_usd / entry_price
                current_usd = 0.0
                in_position = True
                if debug:
                    print(f"Entry {trade_idx + 1}: Buy at ${entry_price:.2f}, BTC: {current_btc:.6f}")
                break

            # Check for exit
            elif trade['exit_time'] <= timestamp and in_position:
                # Execute sell order: convert the BTC holdings back to USD
                exit_price = trade['exit']
                current_usd = current_btc * exit_price
                current_btc = 0.0
                in_position = False
                exit_type = trade.get('type', 'STRATEGY_EXIT')
                if debug:
                    print(f"Exit {trade_idx + 1}: {exit_type} at ${exit_price:.2f}, USD: ${current_usd:.2f}")
                trade_idx += 1
                break
            else:
                break

        # Calculate total portfolio value (USD + BTC value)
        btc_value = current_btc * current_price
        total_value = current_usd + btc_value

        # Update portfolio data
        portfolio_data.iloc[i, portfolio_data.columns.get_loc('portfolio_value')] = total_value
        portfolio_data.iloc[i, portfolio_data.columns.get_loc('usd_balance')] = current_usd
        portfolio_data.iloc[i, portfolio_data.columns.get_loc('btc_balance')] = current_btc
        portfolio_data.iloc[i, portfolio_data.columns.get_loc('position')] = 1 if in_position else 0

    return portfolio_data
||||
def create_comprehensive_plot(data: pd.DataFrame, trades: List[Dict], portfolio_data: pd.DataFrame,
|
||||
strategy_name: str, save_path: str) -> None:
|
||||
"""Create comprehensive plot with price, trades, and portfolio value."""
|
||||
|
||||
print(f"Creating comprehensive plot with {len(data)} data points and {len(trades)} trades...")
|
||||
|
||||
# Create figure with subplots
|
||||
fig, (ax1, ax2, ax3) = plt.subplots(3, 1, figsize=(16, 16),
|
||||
gridspec_kw={'height_ratios': [2, 1, 1]})
|
||||
|
||||
# Plot 1: Price action with trades
|
||||
ax1.plot(data.index, data['close'], label='BTC Price', color='black', linewidth=1.5)
|
||||
|
||||
# Plot trades with different markers for different exit types
|
||||
if trades:
|
||||
entry_times = [trade['entry_time'] for trade in trades]
|
||||
entry_prices = [trade['entry'] for trade in trades]
|
||||
|
||||
# Separate exits by type
|
||||
strategy_exits = []
|
||||
stop_loss_exits = []
|
||||
take_profit_exits = []
|
||||
eod_exits = []
|
||||
|
||||
for trade in trades:
|
||||
exit_type = trade.get('type', 'STRATEGY_EXIT')
|
||||
exit_data = (trade['exit_time'], trade['exit'])
|
||||
|
||||
if exit_type == 'STOP_LOSS':
|
||||
stop_loss_exits.append(exit_data)
|
||||
elif exit_type == 'TAKE_PROFIT':
|
||||
take_profit_exits.append(exit_data)
|
||||
elif exit_type == 'EOD':
|
||||
eod_exits.append(exit_data)
|
||||
else:
|
||||
strategy_exits.append(exit_data)
|
||||
|
||||
# Plot entry points (green triangles)
|
||||
ax1.scatter(entry_times, entry_prices, color='darkgreen', marker='^',
|
||||
s=100, label=f'Buy ({len(entry_times)})', zorder=6, alpha=0.9, edgecolors='white', linewidth=1)
|
||||
|
||||
# Plot different types of exits with distinct styling
|
||||
if strategy_exits:
|
||||
exit_times, exit_prices = zip(*strategy_exits)
|
||||
ax1.scatter(exit_times, exit_prices, color='blue', marker='v',
|
||||
s=100, label=f'Strategy Exit ({len(strategy_exits)})', zorder=5, alpha=0.8, edgecolors='white', linewidth=1)
|
||||
|
||||
if stop_loss_exits:
|
||||
exit_times, exit_prices = zip(*stop_loss_exits)
|
||||
ax1.scatter(exit_times, exit_prices, color='darkred', marker='X',
|
||||
s=150, label=f'Stop Loss ({len(stop_loss_exits)})', zorder=7, alpha=1.0, edgecolors='white', linewidth=2)
|
||||
|
||||
if take_profit_exits:
|
||||
exit_times, exit_prices = zip(*take_profit_exits)
|
||||
ax1.scatter(exit_times, exit_prices, color='gold', marker='*',
|
||||
s=150, label=f'Take Profit ({len(take_profit_exits)})', zorder=6, alpha=0.9, edgecolors='black', linewidth=1)
|
||||
|
||||
if eod_exits:
|
||||
exit_times, exit_prices = zip(*eod_exits)
|
||||
ax1.scatter(exit_times, exit_prices, color='gray', marker='s',
|
||||
s=80, label=f'End of Day ({len(eod_exits)})', zorder=5, alpha=0.8, edgecolors='white', linewidth=1)
|
||||
|
||||
# Print exit type summary
|
||||
print(f"Exit types: Strategy={len(strategy_exits)}, Stop Loss={len(stop_loss_exits)}, "
|
||||
f"Take Profit={len(take_profit_exits)}, EOD={len(eod_exits)}")
|
||||
|
||||
ax1.set_title(f'{strategy_name} - BTC Trading Signals (Q1 2023)', fontsize=16, fontweight='bold')
|
||||
ax1.set_ylabel('Price (USD)', fontsize=12)
|
||||
ax1.legend(loc='upper left', fontsize=10)
|
||||
ax1.grid(True, alpha=0.3)
|
||||
|
||||
# Plot 2: Portfolio value over time
|
||||
ax2.plot(portfolio_data.index, portfolio_data['portfolio_value'],
|
||||
label='Total Portfolio Value', color='blue', linewidth=2)
|
||||
ax2.axhline(y=portfolio_data['portfolio_value'].iloc[0], color='gray',
|
||||
linestyle='--', alpha=0.7, label='Initial Value')
|
||||
|
||||
# Add profit/loss shading
|
||||
initial_value = portfolio_data['portfolio_value'].iloc[0]
|
||||
profit_mask = portfolio_data['portfolio_value'] > initial_value
|
||||
loss_mask = portfolio_data['portfolio_value'] < initial_value
|
||||
|
||||
ax2.fill_between(portfolio_data.index, portfolio_data['portfolio_value'], initial_value,
|
||||
where=profit_mask, color='green', alpha=0.2, label='Profit Zone')
|
||||
ax2.fill_between(portfolio_data.index, portfolio_data['portfolio_value'], initial_value,
|
||||
where=loss_mask, color='red', alpha=0.2, label='Loss Zone')
|
||||
|
||||
ax2.set_title('Portfolio Value Over Time (USD + BTC)', fontsize=14, fontweight='bold')
|
||||
ax2.set_ylabel('Portfolio Value (USD)', fontsize=12)
|
||||
ax2.legend(loc='upper left', fontsize=10)
|
||||
ax2.grid(True, alpha=0.3)
|
||||
|
||||
# Plot 3: Portfolio composition (USD vs BTC value)
|
||||
usd_values = portfolio_data['usd_balance']
|
||||
btc_values = portfolio_data['btc_balance'] * portfolio_data['close']
|
||||
|
||||
ax3.fill_between(portfolio_data.index, 0, usd_values,
|
||||
color='green', alpha=0.6, label='USD Balance')
|
||||
ax3.fill_between(portfolio_data.index, usd_values, usd_values + btc_values,
|
||||
color='orange', alpha=0.6, label='BTC Value')
|
||||
|
||||
# Mark position periods
|
||||
position_mask = portfolio_data['position'] == 1
|
||||
if position_mask.any():
|
||||
ax3.fill_between(portfolio_data.index, 0, portfolio_data['portfolio_value'],
|
||||
where=position_mask, color='orange', alpha=0.2, label='In Position')
|
||||
|
||||
ax3.set_title('Portfolio Composition (USD vs BTC)', fontsize=14, fontweight='bold')
|
||||
ax3.set_ylabel('Value (USD)', fontsize=12)
|
||||
ax3.set_xlabel('Date', fontsize=12)
|
||||
ax3.legend(loc='upper left', fontsize=10)
|
||||
ax3.grid(True, alpha=0.3)
|
||||
|
||||
# Format x-axis for all plots
|
||||
for ax in [ax1, ax2, ax3]:
|
||||
ax.xaxis.set_major_locator(mdates.WeekdayLocator())
|
||||
ax.xaxis.set_major_formatter(mdates.DateFormatter('%m-%d'))
|
||||
plt.setp(ax.xaxis.get_majorticklabels(), rotation=45)
|
||||
|
||||
# Save plot
|
||||
plt.tight_layout()
|
||||
plt.savefig(save_path, dpi=300, bbox_inches='tight')
|
||||
plt.close()
|
||||
|
||||
print(f"Comprehensive plot saved to: {save_path}")
|
||||
|
||||
|

def compare_with_existing_trades(new_trades_file: str, existing_trades_file: str = "results/trades_15min(15min)_ST3pct.csv") -> None:
    """Compare the new incremental trades with the existing strategy's trades."""
    try:
        if not os.path.exists(existing_trades_file):
            print(f"Existing trades file not found: {existing_trades_file}")
            return

        print("\n📊 COMPARING WITH EXISTING STRATEGY:")

        # Load both files
        new_df = pd.read_csv(new_trades_file)
        existing_df = pd.read_csv(existing_trades_file)

        # Count signals
        new_buy_signals = len(new_df[new_df['type'] == 'BUY'])
        new_sell_signals = len(new_df[new_df['type'] != 'BUY'])

        existing_buy_signals = len(existing_df[existing_df['type'] == 'BUY'])
        existing_sell_signals = len(existing_df[existing_df['type'] != 'BUY'])

        print("📈 SIGNAL COMPARISON:")
        print("  Incremental Strategy:")
        print(f"    - Buy signals: {new_buy_signals}")
        print(f"    - Sell signals: {new_sell_signals}")
        print("  Existing Strategy:")
        print(f"    - Buy signals: {existing_buy_signals}")
        print(f"    - Sell signals: {existing_sell_signals}")

        # Compare exit types
        new_exit_types = new_df[new_df['type'] != 'BUY']['type'].value_counts().to_dict()
        existing_exit_types = existing_df[existing_df['type'] != 'BUY']['type'].value_counts().to_dict()

        print("\n🎯 EXIT TYPE COMPARISON:")
        print(f"  Incremental Strategy: {new_exit_types}")
        print(f"  Existing Strategy: {existing_exit_types}")

        # Calculate profit comparison
        new_profits = new_df[new_df['type'] != 'BUY']['profit_pct'].sum()
        existing_profits = existing_df[existing_df['type'] != 'BUY']['profit_pct'].sum()

        print("\n💰 PROFIT COMPARISON:")
        print(f"  Incremental Strategy: {new_profits*100:.2f}% total")
        print(f"  Existing Strategy: {existing_profits*100:.2f}% total")
        print(f"  Difference: {(new_profits - existing_profits)*100:.2f}%")

    except Exception as e:
        print(f"Error comparing trades: {e}")

def test_single_strategy():
    """Test a single strategy and create a comprehensive analysis."""
    print("\n" + "="*60)
    print("TESTING SINGLE STRATEGY")
    print("="*60)

    # Create storage instance
    storage = Storage()

    # Create backtester configuration (four months of data)
    config = BacktestConfig(
        data_file="btcusd_1-min_data.csv",
        start_date="2025-01-01",
        end_date="2025-05-01",
        initial_usd=10000,
        stop_loss_pct=0.03,  # 3% stop loss to match the existing strategy
        take_profit_pct=0.0
    )

    # Create strategy
    strategy = IncMetaTrendStrategy(
        name="metatrend",
        weight=1.0,
        params={
            "timeframe": "15min",
            "enable_logging": False
        }
    )

    print(f"Testing strategy: {strategy.name}")
    print(f"Strategy timeframe: {strategy.params.get('timeframe', '15min')}")
    print(f"Stop loss: {config.stop_loss_pct*100:.1f}%")
    print(f"Date range: {config.start_date} to {config.end_date}")

    # Run backtest
    print("\n🚀 Running backtest...")
    backtester = IncBacktester(config, storage)
    result = backtester.run_single_strategy(strategy)

    # Print results
    print("\n📊 RESULTS:")
    print(f"Strategy: {strategy.__class__.__name__}")
    profit = result['final_usd'] - result['initial_usd']
    print(f"Total Profit: ${profit:.2f} ({result['profit_ratio']*100:.2f}%)")
    print(f"Total Trades: {result['n_trades']}")
    print(f"Win Rate: {result['win_rate']*100:.2f}%")
    print(f"Max Drawdown: {result['max_drawdown']*100:.2f}%")
    print(f"Average Trade: {result['avg_trade']*100:.2f}%")
    print(f"Total Fees: ${result['total_fees_usd']:.2f}")

    # Create results directory
    os.makedirs("results", exist_ok=True)

    # Save trades in the same format as the existing file
    if result['trades']:
        # Create a filename matching the existing naming scheme
        timeframe = strategy.params.get('timeframe', '15min')
        stop_loss_pct = int(config.stop_loss_pct * 100)
        trades_filename = f"results/trades_incremental_{timeframe}({timeframe})_ST{stop_loss_pct}pct.csv"
        save_trades_to_csv(result['trades'], trades_filename)

        # Compare with existing trades
        compare_with_existing_trades(trades_filename)

    # Save statistics to JSON
    stats_filename = f"results/incremental_stats_{config.start_date}_{config.end_date}.json"
    save_stats_to_json(result, stats_filename)

    # Load and aggregate data for plotting
    print("\n📈 CREATING COMPREHENSIVE ANALYSIS...")
    data = storage.load_data("btcusd_1-min_data.csv", config.start_date, config.end_date)
    print(f"Loaded {len(data)} minute-level data points")

    # Aggregate to the strategy timeframe using the existing data_utils helper
    timeframe_minutes = 15  # Match strategy timeframe
    print(f"Aggregating to {timeframe_minutes}-minute bars using data_utils...")
    aggregated_data = aggregate_to_minutes(data, timeframe_minutes)
    print(f"Aggregated to {len(aggregated_data)} bars")

    # Calculate portfolio value over time
    portfolio_data = calculate_portfolio_over_time(aggregated_data, result['trades'], config.initial_usd, debug=False)

    # Save portfolio data to CSV
    portfolio_filename = f"results/incremental_portfolio_{config.start_date}_{config.end_date}.csv"
    portfolio_data.to_csv(portfolio_filename)
    print(f"Saved portfolio data to: {portfolio_filename}")

    # Create comprehensive plot
    plot_path = f"results/incremental_comprehensive_{config.start_date}_{config.end_date}.png"
    create_comprehensive_plot(aggregated_data, result['trades'], portfolio_data,
                              "Incremental MetaTrend Strategy", plot_path)

    return result

def main():
    """Main test function."""
    print("🚀 Starting Comprehensive Incremental Backtester Test")
    print("=" * 80)

    try:
        # Test single strategy
        result = test_single_strategy()

        print("\n" + "="*80)
        print("✅ TEST COMPLETED SUCCESSFULLY!")
        print("="*80)
        print("📁 Check the 'results/' directory for:")
        print("  - Trading plot: incremental_comprehensive_2025-01-01_2025-05-01.png")
        print("  - Trades data: trades_incremental_15min(15min)_ST3pct.csv")
        print("  - Statistics: incremental_stats_2025-01-01_2025-05-01.json")
        print("  - Portfolio data: incremental_portfolio_2025-01-01_2025-05-01.csv")
        print(f"📊 Strategy processed {result['data_points_processed']} data points")
        print(f"🎯 Strategy warmup: {'✅ Complete' if result['warmup_complete'] else '❌ Incomplete'}")

        # Show some trade details
        if result['n_trades'] > 0:
            print("\n📈 DETAILED TRADE ANALYSIS:")
            print(f"First trade: {result.get('first_trade', {}).get('entry_time', 'N/A')}")
            print(f"Last trade: {result.get('last_trade', {}).get('exit_time', 'N/A')}")

            # Analyze trades by exit type
            trades = result['trades']

            # Group trades by exit type
            exit_types = {}
            for trade in trades:
                exit_type = trade.get('type', 'STRATEGY_EXIT')
                if exit_type not in exit_types:
                    exit_types[exit_type] = []
                exit_types[exit_type].append(trade)

            print("\n📊 EXIT TYPE ANALYSIS:")
            for exit_type, type_trades in exit_types.items():
                profits = [trade['profit_pct'] for trade in type_trades]
                avg_profit = np.mean(profits) * 100
                win_rate = len([p for p in profits if p > 0]) / len(profits) * 100

                print(f"  {exit_type}:")
                print(f"    Count: {len(type_trades)}")
                print(f"    Avg Profit: {avg_profit:.2f}%")
                print(f"    Win Rate: {win_rate:.1f}%")

                if exit_type == 'STOP_LOSS':
                    avg_loss = np.mean([p for p in profits if p <= 0]) * 100
                    print(f"    Avg Loss: {avg_loss:.2f}%")

            # Overall profit distribution
            all_profits = [trade['profit_pct'] for trade in trades]
            winning_trades = [p for p in all_profits if p > 0]
            losing_trades = [p for p in all_profits if p <= 0]

            print("\n📈 OVERALL PROFIT DISTRIBUTION:")
            if winning_trades:
                print(f"Winning trades: {len(winning_trades)} (avg: {np.mean(winning_trades)*100:.2f}%)")
                print(f"Best trade: {max(winning_trades)*100:.2f}%")
            if losing_trades:
                print(f"Losing trades: {len(losing_trades)} (avg: {np.mean(losing_trades)*100:.2f}%)")
                print(f"Worst trade: {min(losing_trades)*100:.2f}%")

        return True

    except Exception as e:
        print(f"\n❌ Error during testing: {e}")
        import traceback
        traceback.print_exc()
        return False


if __name__ == "__main__":
    success = main()
    sys.exit(0 if success else 1)