Update logging documentation and refactor logger implementation

- Revised the logging documentation to clarify the unified logging system's features and usage patterns.
- Simplified the logger implementation by removing the custom `DateRotatingFileHandler` and utilizing the standard library's `TimedRotatingFileHandler` for date-based log rotation.
- Enhanced the `get_logger` function to ensure thread-safe logger configuration and prevent duplicate handlers.
- Introduced a new `cleanup_old_logs` function for age-based log cleanup, while retaining the existing count-based cleanup mechanism.
- Improved error handling and logging setup to ensure robust logging behavior across components.

These changes enhance the clarity and maintainability of the logging system, making it easier for developers to implement and utilize logging in their components.
Vasily.onl 2025-06-06 21:02:08 +08:00
parent c1118eaf2b
commit e147aa1873
2 changed files with 200 additions and 545 deletions


@@ -1,246 +1,33 @@
# Unified Logging System
The TCP Dashboard project uses a unified logging system built on Python's standard `logging` library. It provides consistent, centralized logging across all components, with conditional logging capabilities.
## Key Features
- **Component-based logging**: Each component (e.g., `bot_manager`, `data_collector`) gets its own dedicated logger and log directory under `logs/`.
- **Centralized control**: `UnifiedLogger` class manages all logger instances, ensuring consistent configuration.
- **Date-based rotation**: Log files are automatically rotated daily (e.g., `2023-11-15.txt`).
- **Unified format**: All log messages follow `[YYYY-MM-DD HH:MM:SS - LEVEL - message]`.
- **Verbose console logging**: Optional verbose console output for real-time monitoring, controlled by environment variables.
- **Automatic cleanup**: Old log files are automatically removed to save disk space.
## Features
- **Component-specific directories**: Each component gets its own log directory
- **Date-based file rotation**: New log files created daily automatically
- **Unified format**: Consistent timestamp and message format across all logs
- **Thread-safe**: Safe for use in multi-threaded applications
- **Verbose console logging**: Configurable console output with proper log level handling
- **Automatic log cleanup**: Built-in functionality to remove old log files automatically
- **Error handling**: Graceful fallback to console logging if file logging fails
- **Conditional logging**: Components can operate with or without loggers
- **Error-only logging**: Option to log only error-level messages
- **Hierarchical logging**: Parent components can pass loggers to children
- **Logger inheritance**: Consistent logging across component hierarchies
## Conditional Logging System
The TCP Dashboard implements a sophisticated conditional logging system that allows components to work with or without loggers, providing maximum flexibility for different deployment scenarios.
### Key Concepts
1. **Optional Logging**: Components accept `logger=None` and function normally without logging
2. **Error-Only Mode**: Components can log only error-level messages with `log_errors_only=True`
3. **Logger Inheritance**: Parent components pass their logger to child components
4. **Hierarchical Structure**: Log files are organized by component hierarchy
### Component Hierarchy
```
Top-level Application (individual logger)
├── ProductionManager (individual logger)
│   ├── DataSaver (receives logger from ProductionManager)
│   ├── DataValidator (receives logger from ProductionManager)
│   ├── DatabaseConnection (receives logger from ProductionManager)
│   └── CollectorManager (individual logger)
│       ├── OKX collector BTC-USD (individual logger)
│       │   ├── DataAggregator (receives logger from OKX collector)
│       │   ├── DataTransformer (receives logger from OKX collector)
│       │   └── DataProcessor (receives logger from OKX collector)
│       └── Another collector...
```
### Usage Patterns
#### 1. No Logging
```python
from data.collector_manager import CollectorManager
from data.exchanges.okx.collector import OKXCollector
# Components work without any logging
manager = CollectorManager(logger=None)
collector = OKXCollector("BTC-USDT", logger=None)
# No log files created, no console output
# Components function normally without exceptions
```
#### 2. Normal Logging
```python
from utils.logger import get_logger
from data.collector_manager import CollectorManager
# Create logger for the manager
logger = get_logger('production_manager')
# Manager logs all activities
manager = CollectorManager(logger=logger)
# Child components inherit the logger
collector = manager.add_okx_collector("BTC-USDT") # Uses manager's logger
```
#### 3. Error-Only Logging
```python
from utils.logger import get_logger
from data.exchanges.okx.collector import OKXCollector
# Create logger but only log errors
logger = get_logger('critical_only')
# Only error and critical messages are logged
collector = OKXCollector(
    "BTC-USDT",
    logger=logger,
    log_errors_only=True
)
# Debug, info, warning messages are suppressed
# Error and critical messages are always logged
```
#### 4. Hierarchical Logging
```python
from utils.logger import get_logger
from data.collector_manager import CollectorManager
from data.exchanges.okx.collector import OKXCollector
# Top-level application logger
app_logger = get_logger('tcp_dashboard')
# Production manager with its own logger
prod_logger = get_logger('production_manager')
manager = CollectorManager(logger=prod_logger)
# Individual collectors with specific loggers
btc_logger = get_logger('btc_collector')
btc_collector = OKXCollector("BTC-USDT", logger=btc_logger)
eth_collector = OKXCollector("ETH-USDT", logger=None) # No logging
# Results in organized log structure:
# logs/tcp_dashboard/
# logs/production_manager/
# logs/btc_collector/
# (no logs for ETH collector)
```
#### 5. Mixed Configuration
```python
from utils.logger import get_logger
from data.collector_manager import CollectorManager
from data.exchanges.okx.collector import OKXCollector
# System logger for normal operations
system_logger = get_logger('system')
# Critical logger for error-only components
critical_logger = get_logger('critical_only')
manager = CollectorManager(logger=system_logger)
# Different logging strategies for different collectors
btc_collector = OKXCollector("BTC-USDT", logger=system_logger) # Full logging
eth_collector = OKXCollector("ETH-USDT", logger=critical_logger, log_errors_only=True) # Errors only
ada_collector = OKXCollector("ADA-USDT", logger=None) # No logging
manager.add_collector(btc_collector)
manager.add_collector(eth_collector)
manager.add_collector(ada_collector)
```
### Implementation Details
#### Component Constructor Pattern
All major components follow this pattern:
```python
class ComponentExample:
    def __init__(self, logger=None, log_errors_only=False):
        self.logger = logger
        self.log_errors_only = log_errors_only

    def _log_debug(self, message: str) -> None:
        """Log debug message if logger is available and not in errors-only mode."""
        if self.logger and not self.log_errors_only:
            self.logger.debug(message)

    def _log_info(self, message: str) -> None:
        """Log info message if logger is available and not in errors-only mode."""
        if self.logger and not self.log_errors_only:
            self.logger.info(message)

    def _log_warning(self, message: str) -> None:
        """Log warning message if logger is available and not in errors-only mode."""
        if self.logger and not self.log_errors_only:
            self.logger.warning(message)

    def _log_error(self, message: str, exc_info: bool = False) -> None:
        """Log error message if logger is available (always logs errors)."""
        if self.logger:
            self.logger.error(message, exc_info=exc_info)

    def _log_critical(self, message: str, exc_info: bool = False) -> None:
        """Log critical message if logger is available (always logs critical)."""
        if self.logger:
            self.logger.critical(message, exc_info=exc_info)
```
#### Child Component Pattern
Child components receive logger from parent:
```python
class OKXCollector(BaseDataCollector):
    def __init__(self, symbol: str, logger=None, log_errors_only=False):
        super().__init__(..., logger=logger, log_errors_only=log_errors_only)

        # Pass logger to child components
        self._data_processor = OKXDataProcessor(
            symbol,
            logger=self.logger  # Pass parent's logger
        )
        self._data_validator = DataValidator(logger=self.logger)
        self._data_transformer = DataTransformer(logger=self.logger)
```
#### Supported Components
The following components support conditional logging:
1. **BaseDataCollector** (`data/base_collector.py`)
- Parameters: `logger=None, log_errors_only=False`
- Conditional logging for all collector operations
2. **CollectorManager** (`data/collector_manager.py`)
- Parameters: `logger=None, log_errors_only=False`
- Manages multiple collectors with consistent logging
3. **OKXCollector** (`data/exchanges/okx/collector.py`)
- Parameters: `logger=None, log_errors_only=False`
- Exchange-specific data collection with conditional logging
4. **BaseDataValidator** (`data/common/validation.py`)
- Parameters: `logger=None`
- Data validation with optional logging
5. **OKXDataTransformer** (`data/exchanges/okx/data_processor.py`)
- Parameters: `logger=None`
- Data processing with conditional logging
- **Component-based Logging**: Each component (e.g., `bot_manager`, `data_collector`) gets its own dedicated logger, with logs organized into separate directories under `logs/`.
- **Standardized & Simple**: Relies on standard Python `logging` handlers, making it robust and easy to maintain.
- **Date-based Rotation**: Log files are automatically rotated daily at midnight by `TimedRotatingFileHandler`.
- **Automatic Cleanup**: Log file retention is managed automatically based on the number of backup files to keep (`backupCount`), preventing excessive disk usage.
- **Unified Format**: All log messages follow a consistent format: `[YYYY-MM-DD HH:MM:SS - LEVEL - message]`.
- **Configurable Console Output**: Optional verbose console output for real-time monitoring, configurable via function arguments or environment variables.
## Usage
### Getting a Logger
The primary way to get a logger is via the `get_logger` function. It is thread-safe and ensures that loggers are configured only once.
```python
from utils.logger import get_logger
# Get a logger for the bot manager component
# This will create a file logger and, if verbose=True, a console logger.
logger = get_logger('bot_manager', verbose=True)
logger.info("Bot started successfully")
logger.debug("Connecting to database...")
logger.warning("API response time is high")
logger.error("Failed to execute trade", exc_info=True)
```
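The unified message format can be verified directly with a plain `logging.Formatter`; a minimal sketch (the component and message below are illustrative, not part of the project API):

```python
import logging
import re

# The project's unified format: [YYYY-MM-DD HH:MM:SS - LEVEL - message]
formatter = logging.Formatter(
    '[%(asctime)s - %(levelname)s - %(message)s]',
    datefmt='%Y-%m-%d %H:%M:%S'
)

# Build a record by hand to see what a formatted line looks like
record = logging.LogRecord(
    name='tcp_dashboard.bot_manager', level=logging.INFO,
    pathname='demo.py', lineno=0,
    msg="Bot started successfully", args=(), exc_info=None
)
line = formatter.format(record)
print(line)  # e.g. [2025-06-06 21:02:08 - INFO - Bot started successfully]
```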
### Configuration
@@ -249,41 +36,97 @@ The `get_logger` function accepts the following parameters:
| Parameter | Type | Default | Description |
|-------------------|---------------------|---------|-----------------------------------------------------------------------------|
| `component_name` | `str` | - | Name of the component (e.g., `bot_manager`). Used for the logger name and directory. |
| `log_level` | `str` | `INFO` | The minimum logging level to be processed (DEBUG, INFO, WARNING, ERROR, CRITICAL). |
| `verbose` | `Optional[bool]` | `None` | If `True`, enables console logging. If `None`, uses `VERBOSE_LOGGING` or `LOG_TO_CONSOLE` from environment variables. |
| `max_log_files` | `int` | `30` | The maximum number of backup log files to keep; this drives the log cleanup mechanism. |
| `clean_old_logs` | `bool` | `True` | **Deprecated**. Kept for backward compatibility but has no effect; cleanup is controlled by `max_log_files`. |
## Log Cleanup
Log cleanup is now based on the number of files, not age:
- **Enabled by default**: rotation and retention are handled by the file handler itself
- **Default retention**: keeps the most recent 30 backup files (`max_log_files=30`)
## Centralized Control
For consistent logging behavior across the application, it is recommended to use environment variables in an `.env` file instead of passing parameters to `get_logger`:
- `LOG_LEVEL`: "INFO", "DEBUG", etc.
- `VERBOSE_LOGGING`: Set to `true` to enable console logging for all loggers.
- `LOG_TO_CONSOLE`: An alias for `VERBOSE_LOGGING`.
- `CLEAN_OLD_LOGS`: "true" or "false"
- `MAX_LOG_FILES`: e.g., "15"
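As an illustration, an `.env` file using these variables might look like this (values are examples only):

```
LOG_LEVEL=INFO
VERBOSE_LOGGING=false
CLEAN_OLD_LOGS=true
MAX_LOG_FILES=15
```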
## File Structure
The logger creates a directory for each component inside `logs/`. The main log file is named `component_name.log`. When rotated, old logs are renamed with a date suffix.
```
logs/
├── bot_manager/
│   ├── bot_manager.log (current log file)
│   └── bot_manager.log.2023-11-15
├── data_collector/
│   ├── data_collector.log
│   └── data_collector.log.2023-11-15
└── test_component/
    └── test_component.log
```
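The date suffix on rotated files comes from `TimedRotatingFileHandler` itself; a minimal sketch (the directory below is a hypothetical temp path, not the real `logs/` tree):

```python
import logging.handlers
import tempfile
from pathlib import Path

# Hypothetical component directory, mirroring the layout above
log_dir = Path(tempfile.mkdtemp()) / "bot_manager"
log_dir.mkdir(parents=True)

handler = logging.handlers.TimedRotatingFileHandler(
    log_dir / "bot_manager.log", when='midnight', interval=1, backupCount=30
)
# Rotated backups get this date suffix, e.g. bot_manager.log.2023-11-15
print(handler.suffix)  # %Y-%m-%d
handler.close()
```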
## Advanced Usage
### Age-Based Log Cleanup
While the primary cleanup mechanism is count-based (via `max_log_files`), a separate utility function, `cleanup_old_logs`, is available for age-based cleanup if you have specific retention policies.
```python
from utils.logger import cleanup_old_logs
# Deletes all log files in the 'bot_manager' directory older than 15 days
cleanup_old_logs('bot_manager', days_to_keep=15)
```
### Shutting Down Logging
In some cases, especially in tests or when an application is shutting down gracefully, you may need to explicitly close all log file handlers.
```python
from utils.logger import shutdown_logging
# Closes all open file handlers managed by the logging system
shutdown_logging()
```
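A minimal sketch of what `logging.shutdown()` (which `shutdown_logging` wraps) guarantees, using a throwaway file handler in a temp directory (paths are illustrative):

```python
import logging
import tempfile
from pathlib import Path

log_path = Path(tempfile.mkdtemp()) / "demo.log"
handler = logging.FileHandler(log_path, encoding="utf-8")
handler.setFormatter(logging.Formatter(
    '[%(asctime)s - %(levelname)s - %(message)s]',
    datefmt='%Y-%m-%d %H:%M:%S'
))
demo_logger = logging.getLogger("shutdown_demo")
demo_logger.addHandler(handler)
demo_logger.warning("shutting down")

# Flushes and closes every handler registered with the logging system
logging.shutdown()
print(log_path.read_text())
```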
## Component Integration Pattern (Conditional Logging)
While the logger utility is simple, it is designed to support a powerful conditional logging pattern at the application level. This allows components to be developed to run with or without logging, making them more flexible and easier to test.
### Key Concepts
1. **Optional Logging**: Components are designed to accept `logger=None` in their constructor and function normally without producing any logs.
2. **Error-Only Mode**: A component can be designed to only log messages of level `ERROR` or higher. This is a component-level implementation pattern, not a feature of `get_logger`.
3. **Logger Inheritance**: Parent components can pass their logger instance to child components, ensuring a consistent logging context.
### Example: Component Constructor
All major components should follow this pattern to support conditional logging.
```python
class ComponentExample:
    def __init__(self, logger=None, log_errors_only=False):
        self.logger = logger
        self.log_errors_only = log_errors_only

    def _log_info(self, message: str) -> None:
        """Log info message if logger is available and not in errors-only mode."""
        if self.logger and not self.log_errors_only:
            self.logger.info(message)

    def _log_error(self, message: str, exc_info: bool = False) -> None:
        """Log error message if logger is available."""
        if self.logger:
            self.logger.error(message, exc_info=exc_info)

    # ... other helper methods for debug, warning, critical ...
```
This pattern decouples the component's logic from the global logging configuration and makes its logging behavior explicit and easy to manage.
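As a sketch of how this pattern behaves at runtime (all names here, including `PriceFeed` and `ListHandler`, are hypothetical stand-ins, not part of the project API):

```python
import logging

records = []

class ListHandler(logging.Handler):
    """Collect emitted level names so the behavior is observable."""
    def emit(self, record):
        records.append(record.levelname)

class PriceFeed:
    """Minimal component following the conditional-logging constructor pattern."""
    def __init__(self, logger=None, log_errors_only=False):
        self.logger = logger
        self.log_errors_only = log_errors_only

    def _log_info(self, message):
        if self.logger and not self.log_errors_only:
            self.logger.info(message)

    def _log_error(self, message):
        if self.logger:
            self.logger.error(message)

    def poll(self):
        self._log_info("polling exchange")   # suppressed in errors-only mode
        self._log_error("connection lost")   # always logged when a logger exists

logger = logging.getLogger("conditional_demo")
logger.setLevel(logging.DEBUG)
logger.propagate = False
logger.addHandler(ListHandler())

PriceFeed(logger=None).poll()                          # no logger: silent
PriceFeed(logger=logger, log_errors_only=True).poll()  # ERROR only
PriceFeed(logger=logger).poll()                        # INFO and ERROR
print(records)  # ['ERROR', 'INFO', 'ERROR']
```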
## Troubleshooting
- **No Logs**: If file logging fails (e.g., due to permissions), a warning is printed to the console and, unless `verbose` is enabled, no further output is produced. Ensure the application has write permission to the `logs/` directory.
- **Console Spam**: If the console is too noisy, set `verbose=False` when calling `get_logger` and ensure `VERBOSE_LOGGING` is not set to `true` in your environment.
## Best Practices
### 1. Component Naming


@@ -3,339 +3,151 @@ Unified logging system for the TCP Dashboard project.
Provides centralized logging with:
- Component-specific log directories
- Date-based file rotation using standard library handlers
- Unified log format: [YYYY-MM-DD HH:MM:SS - LEVEL - message]
- Thread-safe operations
- Automatic directory creation
- Verbose console logging with proper level handling

Usage:
    from utils.logger import get_logger, cleanup_old_logs

    logger = get_logger('bot_manager')
    logger.info("This is an info message")
    logger.error("This is an error message")

    # With verbose console output
    logger = get_logger('bot_manager', verbose=True)

    # Clean up logs older than 7 days
    cleanup_old_logs('bot_manager', days_to_keep=7)
"""
import logging
import logging.handlers
import os
from datetime import datetime
from pathlib import Path
from typing import Optional
import threading

# Lock for thread-safe logger configuration
_lock = threading.Lock()


def get_logger(component_name: str, log_level: str = "INFO",
               verbose: Optional[bool] = None, clean_old_logs: bool = True,
               max_log_files: int = 30) -> logging.Logger:
    """
    Get or create a logger for the specified component.

    This function is thread-safe and ensures that handlers are not duplicated.

    Args:
        component_name: Name of the component (e.g., 'bot_manager', 'data_collector')
        log_level: Logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL)
        verbose: Enable console logging. If None, uses VERBOSE_LOGGING from .env
        clean_old_logs: (Deprecated) This is now handled by max_log_files.
            The parameter is kept for backward compatibility.
        max_log_files: Maximum number of log files to keep (default: 30)

    Returns:
        Configured logger instance for the component
    """
    with _lock:
        logger_name = f"tcp_dashboard.{component_name}"
        logger = logging.getLogger(logger_name)

        # Avoid re-configuring if logger already has handlers
        if logger.handlers:
            return logger

        # Set logger level
        try:
            level = getattr(logging, log_level.upper())
            logger.setLevel(level)
        except AttributeError:
            print(f"Warning: Invalid log level '{log_level}'. Defaulting to INFO.")
            logger.setLevel(logging.INFO)

        # Prevent propagation to root logger
        logger.propagate = False

        # Create log directory for component
        log_dir = Path("logs") / component_name
        log_dir.mkdir(parents=True, exist_ok=True)

        # Unified formatter
        formatter = logging.Formatter(
            '[%(asctime)s - %(levelname)s - %(message)s]',
            datefmt='%Y-%m-%d %H:%M:%S'
        )

        # Add date-rotating file handler
        try:
            log_file = log_dir / f"{component_name}.log"
            # Rotates at midnight, keeps 'max_log_files' backups
            file_handler = logging.handlers.TimedRotatingFileHandler(
                log_file, when='midnight', interval=1, backupCount=max_log_files,
                encoding='utf-8'
            )
            file_handler.setFormatter(formatter)
            logger.addHandler(file_handler)
        except Exception as e:
            print(f"Warning: Failed to setup file logging for {component_name}: {e}")

        # Add console handler based on verbose setting
        if _should_enable_console_logging(verbose):
            console_handler = logging.StreamHandler()
            console_level = _get_console_log_level(log_level)
            console_handler.setLevel(console_level)
            console_handler.setFormatter(formatter)
            logger.addHandler(console_handler)

        return logger


def _should_enable_console_logging(verbose: Optional[bool]) -> bool:
    """Determine if console logging should be enabled."""
    if verbose is not None:
        return verbose
    env_verbose = os.getenv('VERBOSE_LOGGING', 'false').lower()
    env_console = os.getenv('LOG_TO_CONSOLE', 'false').lower()
    return env_verbose in ('true', '1', 'yes') or env_console in ('true', '1', 'yes')


def _get_console_log_level(log_level: str) -> int:
    """Get appropriate console log level."""
    level_mapping = {
        'DEBUG': logging.DEBUG,
        'INFO': logging.INFO,
        'WARNING': logging.WARNING,
        'ERROR': logging.ERROR,
        'CRITICAL': logging.CRITICAL
    }
    return level_mapping.get(log_level.upper(), logging.INFO)


def cleanup_old_logs(component_name: str, days_to_keep: int = 30):
    """
    Clean up old log files for a component based on age.

    Note: TimedRotatingFileHandler already manages log file counts. This function
    is for age-based cleanup, which might be redundant but is kept for specific use cases.

    Args:
        component_name: Name of the component
        days_to_keep: Number of days of logs to retain
    """
    log_dir = Path("logs") / component_name
    if not log_dir.is_dir():
        return
    cutoff_date = datetime.now().timestamp() - (days_to_keep * 24 * 60 * 60)
    for log_file in log_dir.glob("*"):
        try:
            if log_file.is_file() and log_file.stat().st_mtime < cutoff_date:
                log_file.unlink()
                print(f"Deleted old log file: {log_file}")
        except Exception as e:
            print(f"Failed to delete old log file {log_file}: {e}")


def shutdown_logging():
    """
    Shuts down the logging system, closing all file handlers.

    This is important for clean exit, especially in tests.
    """
    logging.shutdown()