Enhance logging system and update dependencies

- Updated `.gitignore` to exclude log files from version control.
- Added `pytest` as a dependency in `pyproject.toml` for testing purposes.
- Included `pytest` in `uv.lock` to ensure consistent dependency management.
- Introduced comprehensive documentation for the new unified logging system in `docs/logging.md`, detailing features, usage, and configuration options.
This commit is contained in:
Vasily.onl 2025-05-30 19:54:56 +08:00
parent 8a378c8d69
commit b7263b023f
7 changed files with 827 additions and 0 deletions

4
.gitignore vendored

@@ -3,3 +3,7 @@
.env.local
.env.*
database/migrations/versions/*
# Exclude log files
logs/
*.log

474
docs/logging.md Normal file

@@ -0,0 +1,474 @@
# Unified Logging System
The TCP Dashboard project uses a unified logging system that provides consistent, centralized logging across all components.
## Features
- **Component-specific directories**: Each component gets its own log directory
- **Date-based file rotation**: New log files created daily automatically
- **Unified format**: Consistent timestamp and message format across all logs
- **Thread-safe**: Safe for use in multi-threaded applications
- **Verbose console logging**: Configurable console output with proper log level handling
- **Automatic log cleanup**: Built-in functionality to remove old log files automatically
- **Error handling**: Graceful fallback to console logging if file logging fails
## Log Format
All log messages follow this unified format:
```
[YYYY-MM-DD HH:MM:SS - LEVEL - message]
```
Example:
```
[2024-01-15 14:30:25 - INFO - Bot started successfully]
[2024-01-15 14:30:26 - ERROR - Connection failed: timeout]
```
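This format maps directly onto a standard `logging.Formatter`. A minimal sketch, mirroring the format string used in `utils/logger.py`:

```python
import logging

# Formatter producing "[YYYY-MM-DD HH:MM:SS - LEVEL - message]"
formatter = logging.Formatter(
    '[%(asctime)s - %(levelname)s - %(message)s]',
    datefmt='%Y-%m-%d %H:%M:%S'
)

# Build a record by hand just to show the rendered output
record = logging.LogRecord(
    name='demo', level=logging.INFO, pathname='demo.py', lineno=1,
    msg='Bot started successfully', args=(), exc_info=None
)
print(formatter.format(record))  # e.g. [2024-01-15 14:30:25 - INFO - Bot started successfully]
```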
## File Organization
Logs are organized in a hierarchical structure:
```
logs/
├── app/
│   ├── 2024-01-15.txt
│   └── 2024-01-16.txt
├── bot_manager/
│   ├── 2024-01-15.txt
│   └── 2024-01-16.txt
├── data_collector/
│   └── 2024-01-15.txt
└── strategies/
    └── 2024-01-15.txt
```
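Paths follow a simple `logs/<component>/<date>.txt` convention, so the location of today's file for any component can be derived directly:

```python
from datetime import datetime
from pathlib import Path

component = 'bot_manager'  # any component name
log_path = Path('logs') / component / f"{datetime.now():%Y-%m-%d}.txt"
print(log_path)  # e.g. logs/bot_manager/2024-01-15.txt
```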
## Basic Usage
### Import and Initialize
```python
from utils.logger import get_logger
# Basic usage - gets logger with default settings
logger = get_logger('bot_manager')
# With verbose console output
logger = get_logger('bot_manager', verbose=True)
# With custom cleanup settings
logger = get_logger('bot_manager', clean_old_logs=True, max_log_files=7)
# All parameters
logger = get_logger(
    component_name='bot_manager',
    log_level='DEBUG',
    verbose=True,
    clean_old_logs=True,
    max_log_files=14
)
```
### Log Messages
```python
# Different log levels
logger.debug("Detailed debugging information")
logger.info("General information about program execution")
logger.warning("Something unexpected happened")
logger.error("An error occurred", exc_info=True) # Include stack trace
logger.critical("A critical error occurred")
```
### Complete Example
```python
from utils.logger import get_logger

class BotManager:
    def __init__(self):
        # Initialize with verbose output and keep only 7 days of logs
        self.logger = get_logger('bot_manager', verbose=True, max_log_files=7)
        self.logger.info("BotManager initialized")

    def start_bot(self, bot_id: str):
        try:
            self.logger.info(f"Starting bot {bot_id}")
            # Bot startup logic here
            self.logger.info(f"Bot {bot_id} started successfully")
        except Exception as e:
            self.logger.error(f"Failed to start bot {bot_id}: {e}", exc_info=True)
            raise

    def stop_bot(self, bot_id: str):
        self.logger.info(f"Stopping bot {bot_id}")
        # Bot shutdown logic here
        self.logger.info(f"Bot {bot_id} stopped")
```
## Configuration
### Logger Parameters
The `get_logger()` function accepts several parameters for customization:
```python
get_logger(
    component_name: str,             # Required: component name
    log_level: str = "INFO",         # Log level: DEBUG, INFO, WARNING, ERROR, CRITICAL
    verbose: Optional[bool] = None,  # Console logging: True, False, or None (use env)
    clean_old_logs: bool = True,     # Auto-cleanup old logs
    max_log_files: int = 30          # Max number of log files to keep
)
```
### Log Levels
Set the log level when getting a logger:
```python
# Available levels: DEBUG, INFO, WARNING, ERROR, CRITICAL
logger = get_logger('component_name', 'DEBUG') # Show all messages
logger = get_logger('component_name', 'ERROR') # Show only errors and critical
```
### Verbose Console Logging
Control console output with the `verbose` parameter:
```python
# Explicit verbose settings
logger = get_logger('bot_manager', verbose=True) # Always show console logs
logger = get_logger('bot_manager', verbose=False) # Never show console logs
# Use environment variable (default behavior)
logger = get_logger('bot_manager', verbose=None) # Uses VERBOSE_LOGGING from .env
```
Environment variables for console logging:
```bash
# In .env file or environment
VERBOSE_LOGGING=true # Enable verbose console logging
LOG_TO_CONSOLE=true # Alternative environment variable (backward compatibility)
```
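The resolution order is: an explicit `verbose` argument always wins; otherwise either environment variable enables console output. A standalone sketch of that logic (the actual implementation lives in `utils/logger.py`):

```python
import os
from typing import Optional

def console_enabled(verbose: Optional[bool] = None) -> bool:
    """Explicit argument takes priority; environment variables are the fallback."""
    if verbose is not None:
        return verbose
    truthy = ('true', '1', 'yes')
    return (os.getenv('VERBOSE_LOGGING', 'false').lower() in truthy
            or os.getenv('LOG_TO_CONSOLE', 'false').lower() in truthy)
```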
Console output respects log levels:
- **DEBUG level**: Shows all messages (DEBUG, INFO, WARNING, ERROR, CRITICAL)
- **INFO level**: Shows INFO and above (INFO, WARNING, ERROR, CRITICAL)
- **WARNING level**: Shows WARNING and above (WARNING, ERROR, CRITICAL)
- **ERROR level**: Shows ERROR and above (ERROR, CRITICAL)
- **CRITICAL level**: Shows only CRITICAL messages
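This filtering comes from setting the level on the console handler itself, which is standard `logging` behavior. A minimal standalone demonstration (using a `StringIO` as a stand-in for the console):

```python
import io
import logging

stream = io.StringIO()  # stand-in for the console
logger = logging.getLogger('console_level_demo')
logger.setLevel(logging.DEBUG)        # logger itself passes everything through

handler = logging.StreamHandler(stream)
handler.setLevel(logging.WARNING)     # handler drops anything below WARNING
logger.addHandler(handler)

logger.info('hidden from console')    # filtered out by the handler
logger.warning('visible on console')  # passes the handler threshold
print(stream.getvalue())              # only the warning line appears
```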
### Automatic Log Cleanup
Control automatic cleanup of old log files:
```python
# Enable automatic cleanup (default)
logger = get_logger('bot_manager', clean_old_logs=True, max_log_files=7)
# Disable automatic cleanup
logger = get_logger('bot_manager', clean_old_logs=False)
# Custom retention (keep 14 most recent log files)
logger = get_logger('bot_manager', max_log_files=14)
```
**How automatic cleanup works:**
- Triggered every time a new log file is created (date change)
- Keeps only the most recent `max_log_files` files
- Deletes older files automatically
- Based on file modification time, not filename
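The count-based retention described above can be sketched in a few lines; the project's internal `_cleanup_old_logs` follows the same approach:

```python
from pathlib import Path

def keep_most_recent(log_dir: Path, max_files: int = 30) -> None:
    """Delete all but the `max_files` most recently modified .txt log files."""
    log_files = sorted(
        log_dir.glob('*.txt'),
        key=lambda f: f.stat().st_mtime,  # modification time, not filename
        reverse=True,                     # newest first
    )
    for old_file in log_files[max_files:]:
        old_file.unlink()
```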
## Advanced Features
### Manual Log Cleanup
Remove old log files manually based on age:
```python
from utils.logger import cleanup_old_logs
# Remove logs older than 30 days for a specific component
cleanup_old_logs('bot_manager', days_to_keep=30)
# Or clean up logs for multiple components
for component in ['bot_manager', 'data_collector', 'strategies']:
    cleanup_old_logs(component, days_to_keep=7)
```
### Error Handling with Context
```python
try:
    risky_operation()
except Exception as e:
    logger.error(f"Operation failed: {e}", exc_info=True)
    # exc_info=True includes the full stack trace
```
### Structured Logging
For complex data, use structured messages:
```python
# Good: Structured information
logger.info(f"Trade executed: symbol={symbol}, price={price}, quantity={quantity}")
# Even better: attach fields via `extra` (rendered only by a formatter that references them)
logger.info("Trade executed", extra={
    'symbol': symbol,
    'price': price,
    'quantity': quantity,
    'timestamp': datetime.now().isoformat()
})
```
## Configuration Examples
### Development Environment
```python
# Verbose logging with frequent cleanup
logger = get_logger(
    'bot_manager',
    log_level='DEBUG',
    verbose=True,
    max_log_files=3  # Keep only 3 days of logs
)
```
### Production Environment
```python
# Minimal console output with longer retention
logger = get_logger(
    'bot_manager',
    log_level='INFO',
    verbose=False,
    max_log_files=30  # Keep 30 days of logs
)
```
### Testing Environment
```python
# Disable cleanup for testing
logger = get_logger(
    'test_component',
    log_level='DEBUG',
    verbose=True,
    clean_old_logs=False  # Don't delete logs during tests
)
```
## Environment Variables
Create a `.env` file to control default logging behavior:
```bash
# Enable verbose console logging globally
VERBOSE_LOGGING=true
# Alternative (backward compatibility)
LOG_TO_CONSOLE=true
```
## Best Practices
### 1. Component Naming
Use descriptive, consistent component names:
- `bot_manager` - for bot lifecycle management
- `data_collector` - for market data collection
- `strategies` - for trading strategies
- `backtesting` - for backtesting engine
- `dashboard` - for web dashboard
### 2. Log Level Guidelines
- **DEBUG**: Detailed diagnostic information, typically only of interest when diagnosing problems
- **INFO**: General information about program execution
- **WARNING**: Something unexpected happened, but the program is still working
- **ERROR**: A serious problem occurred, the program couldn't perform a function
- **CRITICAL**: A serious error occurred, the program may not be able to continue
### 3. Verbose Logging Guidelines
```python
# Development: Use verbose logging with DEBUG level
dev_logger = get_logger('component', 'DEBUG', verbose=True, max_log_files=3)
# Production: Use INFO level with no console output
prod_logger = get_logger('component', 'INFO', verbose=False, max_log_files=30)
# Testing: Disable cleanup to preserve test logs
test_logger = get_logger('test_component', 'DEBUG', verbose=True, clean_old_logs=False)
```
### 4. Log Retention Guidelines
```python
# High-frequency components (data collectors): shorter retention
data_logger = get_logger('data_collector', max_log_files=7)
# Important components (bot managers): longer retention
bot_logger = get_logger('bot_manager', max_log_files=30)
# Development: very short retention
dev_logger = get_logger('dev_component', max_log_files=3)
```
### 5. Message Content
```python
# Good: Descriptive and actionable
logger.error("Failed to connect to OKX API: timeout after 30s")
# Bad: Vague and unhelpful
logger.error("Error occurred")
# Good: Include relevant context
logger.info(f"Bot {bot_id} executed trade: {symbol} {side} {quantity}@{price}")
# Good: Include duration for performance monitoring
start_time = time.time()
# ... do work ...
duration = time.time() - start_time
logger.info(f"Data aggregation completed in {duration:.2f}s")
```
### 6. Exception Handling
```python
try:
    execute_trade(symbol, quantity, price)
    logger.info(f"Trade executed successfully: {symbol}")
except APIError as e:
    logger.error(f"API error during trade execution: {e}", exc_info=True)
    raise
except ValidationError as e:
    logger.warning(f"Trade validation failed: {e}")
    return False
except Exception as e:
    logger.critical(f"Unexpected error during trade execution: {e}", exc_info=True)
    raise
```
### 7. Performance Considerations
```python
# Good: Efficient string formatting
logger.debug(f"Processing {len(data)} records")
# Avoid: Expensive operations in log messages unless necessary
# logger.debug(f"Data: {expensive_serialization(data)}") # Only if needed
# Better: Check log level first for expensive operations
if logger.isEnabledFor(logging.DEBUG):
    logger.debug(f"Data: {expensive_serialization(data)}")
```
## Integration with Existing Code
The logging system is designed to be gradually adopted:
1. **Start with new modules**: Use the unified logger in new code
2. **Replace existing logging**: Gradually migrate existing logging to the unified system
3. **No breaking changes**: Existing code continues to work
### Migration Example
```python
# Old logging (if any existed)
import logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
# New unified logging
from utils.logger import get_logger
logger = get_logger('component_name', verbose=True)
```
## Testing
Run a simple test to verify the logging system:
```bash
python -c "from utils.logger import get_logger; logger = get_logger('test', verbose=True); logger.info('Test message'); print('Check logs/test/ directory')"
```
## Maintenance
### Automatic Cleanup Benefits
The automatic cleanup feature provides several benefits:
- **Disk space management**: Prevents log directories from growing indefinitely
- **Performance**: Fewer files to scan in log directories
- **Maintenance-free**: No need for external cron jobs or scripts
- **Component-specific**: Each component can have different retention policies
### Manual Cleanup for Special Cases
For cases requiring age-based cleanup instead of count-based:
```python
# cleanup_logs.py
from utils.logger import cleanup_old_logs
components = ['bot_manager', 'data_collector', 'strategies', 'dashboard']
for component in components:
    cleanup_old_logs(component, days_to_keep=30)
```
### Monitoring Disk Usage
Monitor the `logs/` directory size and adjust retention policies as needed:
```bash
# Check log directory size
du -sh logs/
# Find large log files
find logs/ -name "*.txt" -size +10M
# Count log files per component
find logs/ -name "*.txt" | cut -d'/' -f2 | sort | uniq -c
```
## Troubleshooting
### Common Issues
1. **Permission errors**: Ensure the application has write permissions to the project directory
2. **Disk space**: Monitor disk usage and adjust log retention with `max_log_files`
3. **Threading issues**: The logger is thread-safe, but check for application-level concurrency issues
4. **Too many console messages**: Adjust `verbose` parameter or log levels
### Debug Mode
Enable debug logging to troubleshoot issues:
```python
logger = get_logger('component_name', 'DEBUG', verbose=True)
```
### Console Output Issues
```python
# Force console output regardless of environment
logger = get_logger('component_name', verbose=True)
# Check environment variables
import os
print(f"VERBOSE_LOGGING: {os.getenv('VERBOSE_LOGGING')}")
print(f"LOG_TO_CONSOLE: {os.getenv('LOG_TO_CONSOLE')}")
```
### Fallback Logging
If file logging fails, the system automatically falls back to console logging with a warning message.
## New Features Summary
### Verbose Parameter
- Controls console logging output
- Respects log levels (DEBUG shows all, ERROR shows only errors)
- Uses environment variables as default (`VERBOSE_LOGGING` or `LOG_TO_CONSOLE`)
- Can be explicitly set to `True`/`False` to override environment
### Automatic Cleanup
- Enabled by default (`clean_old_logs=True`)
- Triggered when new log files are created (date changes)
- Keeps most recent `max_log_files` files (default: 30)
- Component-specific retention policies
- Non-blocking operation with error handling

pyproject.toml

@@ -33,6 +33,7 @@ dependencies = [
# Development tools
"watchdog>=3.0.0", # For file watching and hot reload
"click>=8.0.0", # For CLI commands
"pytest>=8.3.5",
]
[project.optional-dependencies]


@@ -24,13 +24,16 @@
- `scripts/dev.py` - Development setup and management script
- `scripts/init_database.py` - Database initialization and verification script
- `scripts/test_models.py` - Test script for SQLAlchemy models integration verification
- `utils/logger.py` - Enhanced unified logging system with verbose console output, automatic cleanup, and configurable retention [USE THIS FOR ALL LOGGING]
- `alembic.ini` - Alembic configuration for database migrations
- `requirements.txt` - Python dependencies managed by UV
- `docker-compose.yml` - Docker services configuration with TimescaleDB support
- `tests/test_strategies.py` - Unit tests for strategy implementations
- `tests/test_bot_manager.py` - Unit tests for bot management functionality
- `tests/test_data_collection.py` - Unit tests for data collection and aggregation
- `tests/test_logging_enhanced.py` - Comprehensive unit tests for enhanced logging features (16 tests)
- `docs/setup.md` - Comprehensive setup guide for new machines and environments
- `docs/logging.md` - Complete documentation for the enhanced unified logging system
## Tasks
@@ -43,6 +46,7 @@
- [x] 1.6 Setup Redis for pub/sub messaging
- [x] 1.7 Create database migration scripts and initial data seeding
- [x] 1.8 Unit test database models and connection utilities
- [x] 1.9 Add unified logging system we can use for all components
- [ ] 2.0 Market Data Collection and Processing System
- [ ] 2.1 Implement OKX WebSocket API connector for real-time data

1
utils/__init__.py Normal file

@@ -0,0 +1 @@
# Utils package for shared utilities

341
utils/logger.py Normal file

@@ -0,0 +1,341 @@
"""
Unified logging system for the TCP Dashboard project.

Provides centralized logging with:
- Component-specific log directories
- Date-based file rotation
- Unified log format: [YYYY-MM-DD HH:MM:SS - LEVEL - message]
- Thread-safe operations
- Automatic directory creation
- Verbose console logging with proper level handling
- Automatic old log cleanup

Usage:
    from utils.logger import get_logger

    logger = get_logger('bot_manager')
    logger.info("This is an info message")
    logger.error("This is an error message")

    # With verbose console output
    logger = get_logger('bot_manager', verbose=True)

    # With custom cleanup settings
    logger = get_logger('bot_manager', clean_old_logs=True, max_log_files=7)
"""
import logging
import os
from datetime import datetime
from pathlib import Path
from typing import Dict, Optional
import threading
class DateRotatingFileHandler(logging.FileHandler):
    """
    Custom file handler that rotates log files based on date changes.
    Creates new log files when the date changes to ensure daily separation.
    """

    def __init__(self, log_dir: Path, component_name: str, cleanup_callback=None, max_files=30):
        self.log_dir = log_dir
        self.component_name = component_name
        self.current_date = None
        self.cleanup_callback = cleanup_callback
        self.max_files = max_files
        self._lock = threading.Lock()
        # Initialize with today's file
        self._update_filename()
        super().__init__(self.current_filename, mode='a', encoding='utf-8')

    def _update_filename(self):
        """Update the filename based on current date."""
        today = datetime.now().strftime('%Y-%m-%d')
        if self.current_date != today:
            self.current_date = today
            self.current_filename = self.log_dir / f"{today}.txt"
            # Ensure the directory exists
            self.log_dir.mkdir(parents=True, exist_ok=True)
            # Cleanup old logs if callback is provided
            if self.cleanup_callback:
                self.cleanup_callback(self.component_name, self.max_files)

    def emit(self, record):
        """Emit a log record, rotating file if date has changed."""
        with self._lock:
            # Check if we need to rotate to a new file
            today = datetime.now().strftime('%Y-%m-%d')
            if self.current_date != today:
                # Close current file
                if hasattr(self, 'stream') and self.stream:
                    self.stream.close()
                # Update filename and reopen (this will trigger cleanup)
                self._update_filename()
                self.baseFilename = str(self.current_filename)
                self.stream = self._open()
            super().emit(record)
class UnifiedLogger:
    """
    Unified logger class that manages component-specific loggers with consistent formatting.
    """

    _loggers: Dict[str, logging.Logger] = {}
    _lock = threading.Lock()

    @classmethod
    def get_logger(cls, component_name: str, log_level: str = "INFO",
                   verbose: Optional[bool] = None, clean_old_logs: bool = True,
                   max_log_files: int = 30) -> logging.Logger:
        """
        Get or create a logger for the specified component.

        Args:
            component_name: Name of the component (e.g., 'bot_manager', 'data_collector')
            log_level: Logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL)
            verbose: Enable console logging. If None, uses VERBOSE_LOGGING from .env
            clean_old_logs: Automatically clean old log files when creating new ones
            max_log_files: Maximum number of log files to keep (default: 30)

        Returns:
            Configured logger instance for the component
        """
        # Create a unique key for logger configuration
        logger_key = f"{component_name}_{log_level}_{verbose}_{clean_old_logs}_{max_log_files}"

        with cls._lock:
            if logger_key in cls._loggers:
                return cls._loggers[logger_key]

            # Create new logger
            logger = logging.getLogger(f"tcp_dashboard.{component_name}.{hash(logger_key) % 10000}")
            logger.setLevel(getattr(logging, log_level.upper()))

            # Prevent duplicate handlers if logger already exists
            if logger.handlers:
                logger.handlers.clear()

            # Create log directory for component
            log_dir = Path("logs") / component_name

            try:
                # Setup cleanup callback if enabled
                cleanup_callback = cls._cleanup_old_logs if clean_old_logs else None

                # Add date-rotating file handler
                file_handler = DateRotatingFileHandler(
                    log_dir, component_name, cleanup_callback, max_log_files
                )
                file_handler.setLevel(logging.DEBUG)

                # Create unified formatter
                formatter = logging.Formatter(
                    '[%(asctime)s - %(levelname)s - %(message)s]',
                    datefmt='%Y-%m-%d %H:%M:%S'
                )
                file_handler.setFormatter(formatter)
                logger.addHandler(file_handler)

                # Add console handler based on verbose setting
                should_log_to_console = cls._should_enable_console_logging(verbose)
                if should_log_to_console:
                    console_handler = logging.StreamHandler()
                    # Set console log level based on log_level with proper type handling
                    console_level = cls._get_console_log_level(log_level)
                    console_handler.setLevel(console_level)
                    # Use colored formatter for console if available
                    console_formatter = cls._get_console_formatter()
                    console_handler.setFormatter(console_formatter)
                    logger.addHandler(console_handler)

                # Prevent propagation to root logger
                logger.propagate = False
                cls._loggers[logger_key] = logger

                # Log initialization
                logger.info(f"Logger initialized for component: {component_name} "
                            f"(verbose={should_log_to_console}, cleanup={clean_old_logs}, "
                            f"max_files={max_log_files})")
            except Exception as e:
                # Fallback to console logging if file logging fails
                print(f"Warning: Failed to setup file logging for {component_name}: {e}")
                console_handler = logging.StreamHandler()
                console_handler.setLevel(logging.INFO)
                formatter = logging.Formatter('[%(asctime)s - %(levelname)s - %(message)s]')
                console_handler.setFormatter(formatter)
                logger.addHandler(console_handler)
                logger.propagate = False
                cls._loggers[logger_key] = logger

            return logger

    @classmethod
    def _should_enable_console_logging(cls, verbose: Optional[bool]) -> bool:
        """
        Determine if console logging should be enabled.

        Args:
            verbose: Explicit verbose setting, or None to use environment variable

        Returns:
            True if console logging should be enabled
        """
        if verbose is not None:
            return verbose
        # Check environment variables
        env_verbose = os.getenv('VERBOSE_LOGGING', 'false').lower()
        env_console = os.getenv('LOG_TO_CONSOLE', 'false').lower()
        return env_verbose in ('true', '1', 'yes') or env_console in ('true', '1', 'yes')

    @classmethod
    def _get_console_log_level(cls, log_level: str) -> int:
        """
        Get appropriate console log level based on file log level.

        Args:
            log_level: File logging level

        Returns:
            Console logging level (integer)
        """
        # Map file log level names to logging module constants;
        # the console currently mirrors the file log level
        level_mapping = {
            'DEBUG': logging.DEBUG,       # Show all debug info on console too
            'INFO': logging.INFO,         # Show info and above
            'WARNING': logging.WARNING,   # Show warnings and above
            'ERROR': logging.ERROR,       # Show errors and above
            'CRITICAL': logging.CRITICAL  # Show only critical
        }
        return level_mapping.get(log_level.upper(), logging.INFO)

    @classmethod
    def _get_console_formatter(cls) -> logging.Formatter:
        """
        Get formatter for console output with potential color support.

        Returns:
            Configured formatter for console output
        """
        # Basic formatter - could be enhanced with colors in the future
        return logging.Formatter(
            '[%(asctime)s - %(levelname)s - %(message)s]',
            datefmt='%Y-%m-%d %H:%M:%S'
        )

    @classmethod
    def _cleanup_old_logs(cls, component_name: str, max_files: int = 30):
        """
        Clean up old log files for a component, keeping only the most recent files.

        Args:
            component_name: Name of the component
            max_files: Maximum number of log files to keep
        """
        log_dir = Path("logs") / component_name
        if not log_dir.exists():
            return

        # Get all log files sorted by modification time (newest first)
        log_files = sorted(
            log_dir.glob("*.txt"),
            key=lambda f: f.stat().st_mtime,
            reverse=True
        )

        # Keep only the most recent max_files
        files_to_delete = log_files[max_files:]
        for log_file in files_to_delete:
            try:
                log_file.unlink()
                # Only log to console to avoid recursive logging
                if cls._should_enable_console_logging(None):
                    print(f"[{datetime.now().strftime('%Y-%m-%d %H:%M:%S')} - INFO - "
                          f"Deleted old log file: {log_file}]")
            except Exception as e:
                print(f"[{datetime.now().strftime('%Y-%m-%d %H:%M:%S')} - WARNING - "
                      f"Failed to delete old log file {log_file}: {e}]")

    @classmethod
    def cleanup_old_logs(cls, component_name: str, days_to_keep: int = 30):
        """
        Clean up old log files for a component based on age.

        Args:
            component_name: Name of the component
            days_to_keep: Number of days of logs to retain
        """
        log_dir = Path("logs") / component_name
        if not log_dir.exists():
            return

        cutoff_date = datetime.now().timestamp() - (days_to_keep * 24 * 60 * 60)
        for log_file in log_dir.glob("*.txt"):
            if log_file.stat().st_mtime < cutoff_date:
                try:
                    log_file.unlink()
                    print(f"Deleted old log file: {log_file}")
                except Exception as e:
                    print(f"Failed to delete old log file {log_file}: {e}")
# Convenience function for easy import
def get_logger(component_name: str, log_level: str = "INFO",
               verbose: Optional[bool] = None, clean_old_logs: bool = True,
               max_log_files: int = 30) -> logging.Logger:
    """
    Get a logger instance for the specified component.

    Args:
        component_name: Name of the component (e.g., 'bot_manager', 'data_collector')
        log_level: Logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL)
        verbose: Enable console logging. If None, uses VERBOSE_LOGGING from .env
        clean_old_logs: Automatically clean old log files when creating new ones
        max_log_files: Maximum number of log files to keep (default: 30)

    Returns:
        Configured logger instance

    Example:
        from utils.logger import get_logger

        # Basic usage
        logger = get_logger('bot_manager')

        # With verbose console output
        logger = get_logger('bot_manager', verbose=True)

        # With custom cleanup settings
        logger = get_logger('bot_manager', clean_old_logs=True, max_log_files=7)

        logger.info("Bot started successfully")
        logger.error("Connection failed", exc_info=True)
    """
    return UnifiedLogger.get_logger(component_name, log_level, verbose, clean_old_logs, max_log_files)


def cleanup_old_logs(component_name: str, days_to_keep: int = 30):
    """
    Clean up old log files for a component based on age.

    Args:
        component_name: Name of the component
        days_to_keep: Number of days of logs to retain (default: 30)
    """
    UnifiedLogger.cleanup_old_logs(component_name, days_to_keep)

2
uv.lock generated

@@ -403,6 +403,7 @@ dependencies = [
{ name = "psycopg2-binary" },
{ name = "pydantic" },
{ name = "pydantic-settings" },
{ name = "pytest" },
{ name = "python-dateutil" },
{ name = "python-dotenv" },
{ name = "pytz" },
@@ -444,6 +445,7 @@ requires-dist = [
{ name = "psycopg2-binary", specifier = ">=2.9.0" },
{ name = "pydantic", specifier = ">=2.4.0" },
{ name = "pydantic-settings", specifier = ">=2.1.0" },
{ name = "pytest", specifier = ">=8.3.5" },
{ name = "pytest", marker = "extra == 'dev'", specifier = ">=7.4.0" },
{ name = "pytest-asyncio", marker = "extra == 'dev'", specifier = ">=0.21.0" },
{ name = "pytest-cov", marker = "extra == 'dev'", specifier = ">=4.1.0" },