Custom Python Logging to File: Lightweight Logging Without Complexity
Learn how to build a simple custom logging system in Python to write logs directly to files with timestamps, process IDs, and structured messages.
Drake Nguyen
Founder · System Architect
Python’s built-in logging module is powerful, but many real-world projects do not need a complex logging configuration. Sometimes you simply need a lightweight way to write custom logs into files for debugging, API tracking, background tasks, or internal system monitoring.
In this article, we will build a simple custom file logging function in Python. This approach is especially useful in Django projects where you want to store logs inside a specific directory such as media/log/.
Why Use Custom File Logging?
Standard logging is great for production-level system logs, but a custom logging function gives you more direct control over how logs are written, where they are stored, and how they are formatted.
A custom file logger is useful when you need:
- Simple log files organized by date
- Custom folder paths for different services
- Readable logs for debugging
- Quick tracking of API errors or background jobs
- Minimal setup without configuring multiple handlers
Example: A Lightweight Custom Log Function
Below is a simple custom logging function that writes messages directly to a log file. It supports both text messages and dictionary-based data.
import os
import time

from django.conf import settings


def log_to_file(msg, prefix_path='openai', log_type='exceptions'):
    # Strip any literal "log" from the type to keep names short, then
    # build a per-day filename such as exceptions_2026-04-25.log.
    log_type = str(log_type).replace('log', '')
    log_name = '{}_{}.log'.format(log_type, time.strftime("%Y-%m-%d"))

    base_dir = os.path.join(settings.BASE_DIR, 'media', 'log')
    file_log = os.path.join(base_dir, prefix_path, log_name)
    folder_log = os.path.dirname(file_log)
    os.makedirs(folder_log, exist_ok=True)

    # Prefix every entry with a timestamp and the current process ID.
    ts = time.strftime('%Y/%m/%d %H:%M:%S')
    msg_log = '{}(Pid {}):'.format(ts, os.getpid())

    if isinstance(msg, dict):
        for key in msg:
            msg_log += f"\n{key}: {msg[key]}"
    else:
        msg_log += f"\n{msg}"

    # Separator plus a trailing newline so consecutive entries do not
    # run together on one line.
    msg_log += "\n{}\n".format("-" * 100)

    with open(file_log, 'a') as log_file:
        log_file.write(msg_log)
How This Custom Logger Works
1. Create Daily Log Files
The function automatically creates a log file based on the current date:
exceptions_2026-04-25.log
This makes it easier to inspect logs by day and prevents all log data from being stored in one large file.
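The dated filename can be sketched in isolation. `daily_log_name` here is a hypothetical helper, not part of the function above; it only mirrors the same `time.strftime` pattern:

```python
import time

# Hypothetical helper mirroring the filename pattern used above:
# one file per log type per calendar day.
def daily_log_name(log_type="exceptions"):
    return "{}_{}.log".format(log_type, time.strftime("%Y-%m-%d"))

print(daily_log_name("errors"))  # e.g. errors_2026-04-25.log
```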
2. Organize Logs by Folder
The prefix_path parameter allows you to group logs by service or feature.
log_to_file("API request failed", prefix_path="openai")
log_to_file("Payment error", prefix_path="payment")
log_to_file("User action failed", prefix_path="user")
This will create a structure like:
media/log/openai/
media/log/payment/
media/log/user/
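The folder layout comes from `os.makedirs(..., exist_ok=True)`, which creates the whole path on first use and is safe to call again. A minimal sketch, using a temporary directory in place of Django's `settings.BASE_DIR`:

```python
import os
import tempfile

# Sketch: each prefix_path becomes its own subfolder under media/log.
# A temporary directory stands in for settings.BASE_DIR here.
base_dir = tempfile.mkdtemp()
for prefix in ("openai", "payment", "user"):
    folder = os.path.join(base_dir, "media", "log", prefix)
    os.makedirs(folder, exist_ok=True)  # safe to call repeatedly

print(sorted(os.listdir(os.path.join(base_dir, "media", "log"))))
# ['openai', 'payment', 'user']
```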
3. Support Text and Dictionary Logs
You can log a simple string:
log_to_file("Something went wrong")
Or log a dictionary:
log_to_file({
    "error": "Invalid API response",
    "status_code": 500,
    "user_id": 12,
})
The output will be human-readable and easy to inspect:
2026/04/25 14:21:33(Pid 1234):
error: Invalid API response
status_code: 500
user_id: 12
----------------------------------------------------------------------------------------------------
Why Include Process ID in Logs?
The function includes the current process ID using os.getpid(). This is useful when your application runs with multiple processes, such as:
- Gunicorn workers
- Background subprocesses
- Celery-like task systems
- Parallel API jobs
When something goes wrong, the process ID helps you identify which process generated the log.
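A quick way to see why this matters: a child interpreter started from the parent reports a different PID, just as each Gunicorn worker would. This sketch uses `subprocess` purely for illustration:

```python
import os
import subprocess
import sys

# Sketch: a child interpreter has its own PID, which is why stamping
# os.getpid() into each entry helps attribute interleaved log lines
# to the worker process that wrote them.
child = subprocess.run(
    [sys.executable, "-c", "import os; print(os.getpid())"],
    capture_output=True, text=True,
)
child_pid = int(child.stdout.strip())
print("parent pid:", os.getpid(), "child pid:", child_pid)
```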
When Should You Use This Approach?
This custom logging method is useful for:
- Debugging API integrations
- Tracking OpenAI or external API responses
- Logging background task errors
- Saving temporary debug information
- Recording custom business logic events
It is simple, readable, and easy to modify for your own project.
When Should You Avoid This Approach?
This method is not a full replacement for production logging systems. You should avoid relying only on this approach if your system needs:
- Centralized log search
- Real-time monitoring
- Structured JSON log pipelines
- High-volume logging
- Advanced log rotation and retention policies
For larger systems, you should consider tools such as Python’s logging module, JSON logging, ELK Stack, Datadog, or cloud logging services.
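For the rotation and retention needs listed above, the standard library already goes a long way. A minimal sketch with `logging.handlers.TimedRotatingFileHandler`, which rotates at midnight and keeps a fixed number of old files (the `"app"` logger name and temporary path are placeholders):

```python
import logging
import os
import tempfile
from logging.handlers import TimedRotatingFileHandler

# Sketch: the stdlib handler rotates at midnight and keeps the last
# 14 files, replacing the manual date-stamped filename scheme.
log_path = os.path.join(tempfile.mkdtemp(), "app.log")
handler = TimedRotatingFileHandler(log_path, when="midnight", backupCount=14)
handler.setFormatter(logging.Formatter("%(asctime)s (Pid %(process)d): %(message)s"))

logger = logging.getLogger("app")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("API request failed")
print(open(log_path).read())
```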
Custom File Logging vs JSON Logging
| Feature | Custom File Logging | JSON Logging |
|---|---|---|
| Setup | Very simple | More structured |
| Readability | Human-friendly | Machine-friendly |
| Best for | Debugging and quick tracking | Analytics and log pipelines |
| Flexibility | High | High |
| Production monitoring | Limited | Better |
If you already use JSON logging, this custom logging function can still be useful as a lightweight companion for quick debugging and feature-specific logs.
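A JSON-lines companion logger can be sketched in a few lines; `log_json` is a hypothetical name, and the temporary path stands in for your real log directory. Each call appends one JSON object per line, which machines parse easily while the plain-text logger stays human-friendly:

```python
import json
import os
import tempfile
import time

# Sketch: append one JSON object per line, stamped with timestamp and PID.
def log_json(record, path):
    record = dict(record, ts=time.strftime("%Y-%m-%dT%H:%M:%S"), pid=os.getpid())
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

path = os.path.join(tempfile.mkdtemp(), "events.jsonl")
log_json({"error": "Invalid API response", "status_code": 500}, path)
print(open(path).read())
```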
Production Improvements
The function above works well for simple use cases, but you can improve it further for production.
Add UTF-8 Encoding
with open(file_log, 'a', encoding='utf-8') as log_file:
    log_file.write(msg_log)
Add Exception Safety
try:
    with open(file_log, 'a', encoding='utf-8') as log_file:
        log_file.write(msg_log)
except Exception as e:
    print(f"Failed to write log: {e}")
Add Automatic Cleanup
You can also create a scheduled cleanup task to delete old logs after 7, 14, or 30 days.
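One way to sketch that cleanup task, assuming a `cleanup_old_logs` helper (a name chosen here, not part of the function above) that walks the log directory and deletes `.log` files whose modification time is past the cutoff:

```python
import os
import time

# Sketch: delete .log files older than max_age_days, based on mtime.
# Run it from cron or a scheduled Django management command.
def cleanup_old_logs(base_dir, max_age_days=30):
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for root, _dirs, files in os.walk(base_dir):
        for name in files:
            if not name.endswith(".log"):
                continue
            path = os.path.join(root, name)
            if os.path.getmtime(path) < cutoff:
                os.remove(path)
                removed.append(path)
    return removed
```

Pointing it at `media/log` (for example, `cleanup_old_logs(os.path.join(settings.BASE_DIR, 'media', 'log'))` in a daily job) keeps the directory from growing without bound.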
Final Thoughts
A custom Python file logger is a practical solution when you need fast, readable, and flexible logging without heavy configuration.
It is not designed to replace enterprise logging systems, but it is extremely useful for debugging APIs, tracking background tasks, and storing custom application events in a Django project.
For small to medium projects, this approach gives you full control with very little overhead.
Explore more Python, Django, and system design tutorials at Netalith.