
# Introduction
Most Python developers treat logging as an afterthought. They scatter print() statements during development, maybe switch to basic logging later, and assume that's enough. But when problems arise in production, they discover they're missing the context needed to diagnose them efficiently.
Proper logging techniques give you visibility into application behavior, performance patterns, and error conditions. With the right approach, you can trace user actions, identify bottlenecks, and debug problems without reproducing them locally. Good logging transforms debugging from guesswork to systematic problem-solving.
This article covers essential logging patterns for Python developers. You'll learn how to structure log messages for searchability, handle exceptions without losing context, and configure logging for different environments. We'll start with the basics and work up to more advanced strategies you can use in your projects right away, using only the standard library's logging module.
You can find the code on GitHub.
# Setting up your first logger
Instead of jumping straight to complex configuration, let’s understand what a logger actually does. We will create a basic logger that writes to both the console and file.
```python
import logging

logger = logging.getLogger('my_app')
logger.setLevel(logging.DEBUG)

console_handler = logging.StreamHandler()
console_handler.setLevel(logging.INFO)

file_handler = logging.FileHandler('app.log')
file_handler.setLevel(logging.DEBUG)

formatter = logging.Formatter(
    '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
console_handler.setFormatter(formatter)
file_handler.setFormatter(formatter)

logger.addHandler(console_handler)
logger.addHandler(file_handler)

logger.debug('This is a debug message')
logger.info('Application started')
logger.warning('Disk space running low')
logger.error('Failed to connect to database')
logger.critical('System shutting down')
```
Here’s what each piece of code does.
getLogger() creates a named logger instance. Think of it as creating a channel for your logs. The name 'my_app' helps you identify where logs come from in larger applications.

We set the logger level to DEBUG, which means it processes all messages. Then we create two handlers: one for console output and one for file output. Handlers control where the logs go.

The console handler shows only INFO level and above, while the file handler captures everything, including DEBUG messages. This is useful because you want detailed logs in files but clean output on the screen.

The formatter determines how your log messages appear. The format string uses placeholders such as %(asctime)s for the timestamp and %(levelname)s for the severity.
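If you're unsure what a format string will produce, you can point a handler at an in-memory buffer and inspect the result. This sketch (the logger name format_demo is just an illustration) shows a couple more LogRecord placeholders, such as %(module)s:

```python
import io
import logging

# Capture log output in a string buffer so we can inspect the formatted result
buffer = io.StringIO()
handler = logging.StreamHandler(buffer)
handler.setFormatter(logging.Formatter(
    '%(levelname)s | %(name)s | %(module)s | %(message)s'
))

logger = logging.getLogger('format_demo')
logger.setLevel(logging.INFO)
logger.addHandler(handler)
logger.propagate = False  # keep the demo output away from the root logger

logger.info('formatter placeholders in action')
print(buffer.getvalue().strip())
```

The same trick is handy in tests, where you want to assert on what was logged without writing files.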
# Understanding Log Levels and When to Use Each
Python's logging module has five standard levels, and writing useful logs means knowing when to use each.

Here is an example:
```python
import logging

logger = logging.getLogger('payment_processor')
logger.setLevel(logging.DEBUG)

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('%(levelname)s: %(message)s'))
logger.addHandler(handler)

def process_payment(user_id, amount):
    logger.debug(f'Starting payment processing for user {user_id}')

    if amount <= 0:
        logger.error(f'Invalid payment amount: {amount}')
        return False

    logger.info(f'Processing ${amount} payment for user {user_id}')

    if amount > 10000:
        logger.warning(f'Large transaction detected: ${amount}')

    try:
        # Simulate payment processing
        success = charge_card(user_id, amount)
        if success:
            logger.info(f'Payment successful for user {user_id}')
            return True
        else:
            logger.error(f'Payment failed for user {user_id}')
            return False
    except Exception as e:
        logger.critical(f'Payment system crashed: {e}', exc_info=True)
        return False

def charge_card(user_id, amount):
    # Simulated payment logic
    return True

process_payment(12345, 150.00)
process_payment(12345, 15000.00)
```
Let's look at when to use each level:

- DEBUG is for detailed information useful during development: variable values, loop iterations, step-by-step execution. These are generally disabled in production.
- INFO marks the normal operations you want a record of: starting a server, completing a task, a successful transaction. These confirm your application is working as expected.
- WARNING signals something unexpected but not breaking: low disk space, deprecated API usage, unusual but handled conditions. The application keeps running, but someone should take a look.
- ERROR means something failed but the application can continue: failed database queries, validation errors, network timeouts. The specific operation failed, but the app stays up.
- CRITICAL indicates serious problems that may crash the application or lose data. Use it sparingly, for catastrophic failures that need immediate attention.
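Under the hood, these levels are just integers, and filtering is a numeric comparison: a logger or handler processes a record only if the record's level number meets or exceeds its own. A quick check (the logger name level_demo is illustrative):

```python
import logging

# Each level is just an integer; filtering compares these numbers
assert logging.DEBUG == 10 and logging.INFO == 20 and logging.WARNING == 30
assert logging.ERROR == 40 and logging.CRITICAL == 50

logger = logging.getLogger('level_demo')
logger.setLevel(logging.WARNING)

# isEnabledFor() reports whether a message at that level would be processed
print(logger.isEnabledFor(logging.DEBUG))  # False: 10 < 30
print(logger.isEnabledFor(logging.ERROR))  # True: 40 >= 30
```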
When you run the above code, the console handler produces:

```
DEBUG: Starting payment processing for user 12345
INFO: Processing $150.0 payment for user 12345
INFO: Payment successful for user 12345
DEBUG: Starting payment processing for user 12345
INFO: Processing $15000.0 payment for user 12345
WARNING: Large transaction detected: $15000.0
INFO: Payment successful for user 12345
```
Next, let’s move on to understand more about logging exceptions.
# Logging exceptions properly
When exceptions occur, you need more than just the error message; you need the full stack trace. Here's how to capture exceptions effectively.
```python
import json
import logging

logger = logging.getLogger('api_handler')
logger.setLevel(logging.DEBUG)

handler = logging.FileHandler('errors.log')
formatter = logging.Formatter(
    '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
handler.setFormatter(formatter)
logger.addHandler(handler)

def fetch_user_data(user_id):
    logger.info(f'Fetching data for user {user_id}')
    try:
        # Simulate API call
        response = call_external_api(user_id)
        data = json.loads(response)
        logger.debug(f'Received data: {data}')
        return data
    except json.JSONDecodeError as e:
        logger.error(
            f'Failed to parse JSON for user {user_id}: {e}',
            exc_info=True
        )
        return None
    except ConnectionError:
        logger.error(
            f'Network error while fetching user {user_id}',
            exc_info=True
        )
        return None
    except Exception as e:
        logger.critical(
            f'Unexpected error in fetch_user_data: {e}',
            exc_info=True
        )
        raise

def call_external_api(user_id):
    # Simulated API response
    return '{"id": ' + str(user_id) + ', "name": "John"}'

fetch_user_data(123)
```
The key here is the exc_info=True parameter. It tells the logger to include the full exception traceback in your logs. Without it, you get just the error message, which is often not enough to fix the problem.

Notice how we catch specific exceptions first, then fall back to a general Exception handler. The specific handlers let us log context-appropriate error messages. The general handler catches anything unexpected and re-raises it, because we don't know how to handle it safely.

Also note that we log at ERROR for expected failures (such as network errors) but at CRITICAL for unexpected ones. This distinction helps you prioritize when reviewing logs.
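As a shorthand, the standard library also offers logger.exception(), which logs at ERROR level and attaches the traceback automatically; it is equivalent to logger.error(..., exc_info=True). A minimal sketch, using an in-memory buffer so the traceback is easy to inspect:

```python
import io
import logging

logger = logging.getLogger('exc_demo')
logger.setLevel(logging.DEBUG)
buffer = io.StringIO()
logger.addHandler(logging.StreamHandler(buffer))
logger.propagate = False  # keep demo output off the root logger

try:
    1 / 0
except ZeroDivisionError:
    # Equivalent to logger.error('Division failed', exc_info=True)
    logger.exception('Division failed')

# The captured output contains the message followed by the full traceback
output = buffer.getvalue()
print(output)
```

Note that logger.exception() is meant to be called only from inside an exception handler, where there is an active exception to format.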
# Creating a Reusable Logger Configuration
Copy-pasting logger setup code across files is tedious and error-prone. Let's create a configuration function you can import anywhere in your project.
```python
# logger_config.py
import logging
import os
from datetime import datetime

def setup_logger(name, log_dir="logs", level=logging.INFO):
    """
    Create a configured logger instance.

    Args:
        name: Logger name (usually __name__ from the calling module)
        log_dir: Directory to store log files
        level: Minimum logging level

    Returns:
        Configured logger instance
    """
    # Create the logs directory if it doesn't exist
    os.makedirs(log_dir, exist_ok=True)

    logger = logging.getLogger(name)

    # Avoid adding handlers multiple times
    if logger.handlers:
        return logger

    logger.setLevel(level)

    # Console handler - INFO and above
    console_handler = logging.StreamHandler()
    console_handler.setLevel(logging.INFO)
    console_format = logging.Formatter("%(levelname)s - %(name)s - %(message)s")
    console_handler.setFormatter(console_format)

    # File handler - everything
    log_filename = os.path.join(
        log_dir, f"{name.replace('.', '_')}_{datetime.now().strftime('%Y%m%d')}.log"
    )
    file_handler = logging.FileHandler(log_filename)
    file_handler.setLevel(logging.DEBUG)
    file_format = logging.Formatter(
        "%(asctime)s - %(name)s - %(levelname)s - %(funcName)s:%(lineno)d - %(message)s"
    )
    file_handler.setFormatter(file_format)

    logger.addHandler(console_handler)
    logger.addHandler(file_handler)
    return logger
```
Now that you have logger_config set up, you can use it in your Python scripts like this:
```python
from logger_config import setup_logger

logger = setup_logger(__name__)

def calculate_discount(price, discount_percent):
    logger.debug(f'Calculating discount: {price} * {discount_percent}%')

    if discount_percent < 0 or discount_percent > 100:
        logger.warning(f'Invalid discount percentage: {discount_percent}')
        discount_percent = max(0, min(100, discount_percent))

    discount = price * (discount_percent / 100)
    final_price = price - discount

    logger.info(f'Applied {discount_percent}% discount: ${price} -> ${final_price}')
    return final_price

calculate_discount(100, 20)
calculate_discount(100, 150)
```
This setup function handles several important things. First, it creates the log directory if necessary, preventing crashes from missing directories.

It also checks whether handlers already exist before adding new ones. Without this check, calling setup_logger multiple times would create duplicate log entries.

We automatically generate dated log file names. This keeps any single file from growing indefinitely and makes it easy to find logs from a specific date.

File handlers include more detail than console handlers, including function names and line numbers. This is invaluable when debugging but would clutter the console output.

Using __name__ as the logger name creates a hierarchy that matches your module structure. This lets you control logging for specific parts of your application independently.
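To see what that hierarchy buys you, here is a small sketch (the logger names shop and shop.db are made up for illustration). Dotted names create parent/child loggers, children inherit their effective level from the parent, and you can override the level per subsystem:

```python
import logging

# Dotted names create parent/child loggers: 'shop' is the parent of 'shop.db'
parent = logging.getLogger('shop')
child = logging.getLogger('shop.db')
assert child.parent is parent

# Levels are inherited: the child has no level of its own,
# so it falls back to the parent's
parent.setLevel(logging.WARNING)
assert child.getEffectiveLevel() == logging.WARNING

# Quiet a noisy subsystem without touching the rest of the app
child.setLevel(logging.ERROR)
assert child.getEffectiveLevel() == logging.ERROR
assert parent.getEffectiveLevel() == logging.WARNING
```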
# Structuring logs with context
Plain text logs are fine for simple applications, but structured logs with context make debugging much easier. Let's add relevant context to our logs.
```python
import json
import logging
from datetime import datetime, timezone

class ContextLogger:
    """Logger wrapper that adds contextual information to all log messages"""

    def __init__(self, name, context=None):
        self.logger = logging.getLogger(name)
        self.context = context or {}

        handler = logging.StreamHandler()
        formatter = logging.Formatter('%(message)s')
        handler.setFormatter(formatter)

        # Check whether an equivalent handler already exists to avoid duplicates
        if not any(
            isinstance(h, logging.StreamHandler)
            and h.formatter is not None
            and h.formatter._fmt == '%(message)s'
            for h in self.logger.handlers
        ):
            self.logger.addHandler(handler)
        self.logger.setLevel(logging.DEBUG)

    def _format_message(self, message, level, extra_context=None):
        """Format the message with context as JSON"""
        log_data = {
            'timestamp': datetime.now(timezone.utc).isoformat(),
            'level': level,
            'message': message,
            'context': {**self.context, **(extra_context or {})}
        }
        return json.dumps(log_data)

    def debug(self, message, **kwargs):
        self.logger.debug(self._format_message(message, 'DEBUG', kwargs))

    def info(self, message, **kwargs):
        self.logger.info(self._format_message(message, 'INFO', kwargs))

    def warning(self, message, **kwargs):
        self.logger.warning(self._format_message(message, 'WARNING', kwargs))

    def error(self, message, **kwargs):
        self.logger.error(self._format_message(message, 'ERROR', kwargs))
```
You can use ContextLogger like so:
```python
def process_order(order_id, user_id):
    logger = ContextLogger(__name__, context={
        'order_id': order_id,
        'user_id': user_id
    })

    logger.info('Order processing started')

    try:
        items = fetch_order_items(order_id)
        logger.info('Items fetched', item_count=len(items))

        total = calculate_total(items)
        logger.info('Total calculated', total=total)

        if total > 1000:
            logger.warning('High value order', total=total, flagged=True)

        return True
    except Exception as e:
        logger.error('Order processing failed', error=str(e))
        return False

def fetch_order_items(order_id):
    return [{'id': 1, 'price': 50}, {'id': 2, 'price': 75}]

def calculate_total(items):
    return sum(item['price'] for item in items)

process_order('ORD-12345', 'USER-789')
```
The ContextLogger wrapper does a few useful things. It automatically includes the context in every log message: order_id and user_id are added to all logs without repeating them in every logging call.

The JSON format makes these logs easy to parse and search.

The **kwargs on each logging method lets you attach extra context to individual messages, merging the global context (order_id, user_id) with the local context (item_count, total) automatically.
This pattern is particularly useful in web applications where you want the request ID, user ID, or session ID in every log message from a request.
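If you'd rather not wrap the logger yourself, the standard library's logging.LoggerAdapter does something similar: it merges a fixed context dict into every record, which your formatter can then reference by name. A sketch (the request_id field and the web logger name are illustrative):

```python
import io
import logging

# Send output to an in-memory buffer so the result is easy to inspect
buffer = io.StringIO()
handler = logging.StreamHandler(buffer)
handler.setFormatter(logging.Formatter('%(request_id)s %(levelname)s %(message)s'))

base = logging.getLogger('web')
base.setLevel(logging.INFO)
base.addHandler(handler)
base.propagate = False

# LoggerAdapter injects its dict into every record via the 'extra' mechanism
request_logger = logging.LoggerAdapter(base, {'request_id': 'req-42'})
request_logger.info('request received')
request_logger.warning('slow response')

lines = buffer.getvalue().splitlines()
print(lines)  # → ['req-42 INFO request received', 'req-42 WARNING slow response']
```

The trade-off: LoggerAdapter relies on the formatter knowing the field names, while the ContextLogger above emits self-describing JSON.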
# Rotating log files to prevent disk space issues
Log files grow rapidly in production. Without rotation, they will eventually fill up your disk. Here’s how to implement automatic log rotation.
```python
import logging
from logging.handlers import RotatingFileHandler, TimedRotatingFileHandler

def setup_rotating_logger(name):
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)

    # Size-based rotation: rotate when the file reaches 10 MB
    size_handler = RotatingFileHandler(
        'app_size_rotation.log',
        maxBytes=10 * 1024 * 1024,  # 10 MB
        backupCount=5  # Keep 5 old files
    )
    size_handler.setLevel(logging.DEBUG)

    # Time-based rotation: rotate daily at midnight
    time_handler = TimedRotatingFileHandler(
        'app_time_rotation.log',
        when='midnight',
        interval=1,
        backupCount=7  # Keep 7 days
    )
    time_handler.setLevel(logging.INFO)

    formatter = logging.Formatter(
        '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
    )
    size_handler.setFormatter(formatter)
    time_handler.setFormatter(formatter)

    logger.addHandler(size_handler)
    logger.addHandler(time_handler)
    return logger

logger = setup_rotating_logger('rotating_app')
```
Let's now exercise the rotating handlers:
```python
for i in range(1000):
    logger.info(f'Processing record {i}')
    logger.debug(f'Record {i} details: completed in {i * 0.1}ms')
```
RotatingFileHandler rotates logs based on file size. When the log file reaches 10 MB (specified in bytes), it is renamed to app_size_rotation.log.1 and a new app_size_rotation.log begins. A backupCount of 5 means five backup files are kept; beyond that, the oldest is deleted.

TimedRotatingFileHandler rotates based on a time interval. The 'midnight' value means it creates a new log file every day at midnight. You can also use 'H' for hourly, 'D' for daily, or 'W0' for weekly on Monday.

The interval parameter works together with when: with when='H' and interval=6, logs rotate every six hours.

These handlers are essential in production environments. Without them, your application host can run out of disk space as logs accumulate.
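You can watch size-based rotation happen by shrinking maxBytes to a few hundred bytes (a demo-only value, not a production setting) and logging into a throwaway directory:

```python
import logging
import os
import tempfile
from logging.handlers import RotatingFileHandler

# Tiny maxBytes so rotation happens within a few records
log_dir = tempfile.mkdtemp()
log_path = os.path.join(log_dir, 'demo.log')

demo_logger = logging.getLogger('rotation_demo')
demo_logger.setLevel(logging.INFO)
demo_logger.propagate = False
handler = RotatingFileHandler(log_path, maxBytes=200, backupCount=2)
demo_logger.addHandler(handler)

for i in range(30):
    demo_logger.info('record %d with some padding to fill the file', i)

handler.close()

# The live file plus up to backupCount numbered backups remain on disk
files = sorted(os.listdir(log_dir))
print(files)  # → ['demo.log', 'demo.log.1', 'demo.log.2']
```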
# Logging in different environments
Your logging needs vary between development, staging, and production. Here’s how to configure logging to suit each environment.
```python
import logging
import logging.handlers
import os

def configure_environment_logger(app_name):
    """Configure a logger based on the environment"""
    environment = os.getenv('APP_ENV', 'development')
    logger = logging.getLogger(app_name)

    # Clear existing handlers
    logger.handlers.clear()

    if environment == 'development':
        # Development: verbose console output
        logger.setLevel(logging.DEBUG)
        handler = logging.StreamHandler()
        handler.setLevel(logging.DEBUG)
        formatter = logging.Formatter(
            '%(levelname)s - %(name)s - %(funcName)s:%(lineno)d - %(message)s'
        )
        handler.setFormatter(formatter)
        logger.addHandler(handler)
    elif environment == 'staging':
        # Staging: detailed file logs + important console messages
        logger.setLevel(logging.DEBUG)

        file_handler = logging.FileHandler('staging.log')
        file_handler.setLevel(logging.DEBUG)
        file_formatter = logging.Formatter(
            '%(asctime)s - %(name)s - %(levelname)s - %(funcName)s - %(message)s'
        )
        file_handler.setFormatter(file_formatter)

        console_handler = logging.StreamHandler()
        console_handler.setLevel(logging.WARNING)
        console_formatter = logging.Formatter('%(levelname)s: %(message)s')
        console_handler.setFormatter(console_formatter)

        logger.addHandler(file_handler)
        logger.addHandler(console_handler)
    elif environment == 'production':
        # Production: structured logs, errors only to console
        logger.setLevel(logging.INFO)

        file_handler = logging.handlers.RotatingFileHandler(
            'production.log',
            maxBytes=50 * 1024 * 1024,  # 50 MB
            backupCount=10
        )
        file_handler.setLevel(logging.INFO)
        file_formatter = logging.Formatter(
            '{"timestamp": "%(asctime)s", "level": "%(levelname)s", '
            '"logger": "%(name)s", "message": "%(message)s"}'
        )
        file_handler.setFormatter(file_formatter)

        console_handler = logging.StreamHandler()
        console_handler.setLevel(logging.ERROR)
        console_formatter = logging.Formatter('%(levelname)s: %(message)s')
        console_handler.setFormatter(console_formatter)

        logger.addHandler(file_handler)
        logger.addHandler(console_handler)

    return logger
```
This environment-aware configuration treats each stage differently. Development shows everything on the console with verbose detail, including function names and line numbers, which makes debugging faster.

Staging balances development and production. It writes detailed logs to a file for investigation but shows only warnings and errors on the console to avoid noise.

Production focuses on performance and structure. It logs only INFO and above to files, uses JSON formatting for easy parsing, and rotates logs to manage disk space. Console output is limited to errors.
```python
# Set the environment variable (normally done by the deployment system)
os.environ['APP_ENV'] = 'production'

logger = configure_environment_logger('my_application')
logger.debug("This debug message won't appear in production")
logger.info('User logged in successfully')
logger.error('Failed to process payment')
```
The environment is determined by the APP_ENV environment variable. Your deployment system (Docker, Kubernetes, or another cloud platform) typically sets this variable automatically.

Note how we clear existing handlers before configuring. This prevents duplicate handlers if the function is called multiple times during the application's lifecycle.
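For larger projects, the same per-environment setup is often expressed declaratively with logging.config.dictConfig from the standard library. This sketch mirrors only the development branch above; the 'verbose' and 'console' names are arbitrary labels, not part of the logging API:

```python
import logging
import logging.config

# Declarative equivalent of the development branch of
# configure_environment_logger; handler/formatter names are just labels
LOGGING_CONFIG = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'verbose': {
            'format': '%(levelname)s - %(name)s - %(funcName)s:%(lineno)d - %(message)s'
        },
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'level': 'DEBUG',
            'formatter': 'verbose',
        },
    },
    'loggers': {
        'my_application': {
            'handlers': ['console'],
            'level': 'DEBUG',
        },
    },
}

logging.config.dictConfig(LOGGING_CONFIG)
logger = logging.getLogger('my_application')
logger.debug('configured via dictConfig')
```

The dict (or an equivalent JSON/YAML file) can be swapped per environment, which keeps configuration out of application code.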
# Wrapping up
Good logging makes the difference between diagnosing problems quickly and spending hours guessing what went wrong. Start with basic logging using appropriate severity levels, add structured context to make the logs searchable, and configure rotation to prevent disk space issues.
The patterns shown here work for applications of any size. Start simple with basic logging, then add structured logging when you need better discoverability, and apply environment-specific configurations when you deploy to production.
Happy logging!
Bala Priya C is a developer and technical writer from India. She likes to work at the intersection of mathematics, programming, data science, and content creation. Her areas of interest and expertise include DevOps, data science, and natural language processing. She loves reading, writing, coding, and coffee! Currently, she is learning and sharing her knowledge with the developer community by authoring tutorials, how-to guides, opinion pieces, and more. Bala also creates engaging resource overviews and coding tutorials.