Memory-Optimized Workers
Background workers — the processes that handle email delivery, document generation, large imports, workflow automation — have a memory problem that most platforms quietly live with: under sustained high load, memory consumption grows, performance degrades, and eventually something has to restart.
Memory-optimized workers address this at the architectural level.
The Problem with Long-Running Workers
A worker that processes many jobs over its lifetime accumulates state. Object caches grow. Compiled definitions accumulate. Application state that was intended to be temporary persists longer than it should.
In a lightly loaded system, this is invisible — the worker processes a job, sits idle for a while, and memory pressure stays low. Under sustained load, there's no idle time. Memory grows continuously until the worker is restarted, at which point performance dips during the restart and the queue backs up.
What We Changed
Explicit object lifecycle. Objects created to process a job are now explicitly freed after the job completes. The worker's memory footprint after processing 1,000 jobs is the same as after processing one.
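One way to sketch this pattern — the `JobContext` class below is a hypothetical stand-in, not the platform's API. Every allocation made for a job is tracked, and all of it is released the moment the job finishes:

```python
# Sketch of explicit per-job object lifecycle, assuming a hypothetical
# JobContext that owns every object allocated while handling one job.
import gc

class JobContext:
    def __init__(self):
        self._owned = []

    def track(self, obj):
        self._owned.append(obj)
        return obj

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        # Drop all references created for this job so the allocator can
        # reuse the memory immediately, then collect any cycles.
        self._owned.clear()
        gc.collect()
        return False

def process(job):
    with JobContext() as ctx:
        # Per-job allocation, freed automatically when the block exits.
        buffer = ctx.track(bytearray(1024 * 1024))
        buffer[: len(job)] = job.encode()
        return len(buffer)

print(process("job-1"))  # → 1048576; footprint returns to baseline after
```

The design point is that cleanup is tied to job completion rather than left to whenever the runtime's garbage collector gets around to it.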
Per-job isolation. Each job runs in an isolated processing context. State created by one job cannot leak into the next. This eliminates the entire class of "previous job's state affected this job's outcome" bugs in addition to fixing the memory behavior.
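The isolation idea can be sketched without committing to a specific mechanism (a fresh subprocess per job is another common choice): each job runs against a brand-new context object instead of shared worker state, so nothing carries over. The `run_job` and `handler` names below are illustrative, not the platform's API:

```python
# Sketch of per-job isolation: every job gets a fresh context, and the
# context is destroyed when the job ends, so state cannot leak forward.
def run_job(handler, payload):
    context = {"cache": {}, "temp_files": []}  # created fresh per job
    try:
        return handler(context, payload)
    finally:
        context.clear()  # everything the job created is dropped here

def handler(ctx, payload):
    # If state leaked between jobs, this counter would climb; because
    # each job sees a fresh context, it is always 1.
    ctx["cache"]["seen"] = ctx["cache"].get("seen", 0) + 1
    return ctx["cache"]["seen"]

print([run_job(handler, p) for p in ("a", "b", "c")])  # → [1, 1, 1]
```

The same structure is what makes "previous job's state affected this job's outcome" bugs impossible by construction: there is no shared mutable state for a previous job to write into.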
Worker health monitoring. Workers now report their memory and processing metrics continuously. Workers that approach memory thresholds recycle gracefully — finishing their current job, then restarting fresh — rather than being killed mid-job under resource pressure.
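The recycling behavior can be sketched as a loop that checks memory only between jobs — never mid-job. The `process` and `rss_kb` hooks below are injected placeholders (a real worker would read actual process metrics), which also keeps the loop easy to simulate:

```python
# Sketch of threshold-based graceful recycling. The process() and
# rss_kb() callables are hypothetical hooks standing in for real job
# handling and real memory measurement.
def worker_loop(queue, process, rss_kb, limit_kb):
    """Run jobs until memory crosses the threshold, then stop cleanly
    between jobs so a supervisor can start a fresh worker."""
    done = 0
    for job in queue:
        process(job)             # the job in hand always completes
        done += 1
        if rss_kb() > limit_kb:  # threshold is checked only between jobs
            return ("recycle", done)
    return ("drained", done)

# Simulated run: memory grows by 100 KB per job, limit is 250 KB.
usage = iter([100, 200, 300, 400])
result = worker_loop(["a", "b", "c", "d"], lambda j: None,
                     lambda: next(usage), 250)
print(result)  # → ('recycle', 3): the third job finishes, then the
               #   worker exits cleanly instead of being killed mid-job
```

Because the check happens at a job boundary, a recycle looks identical to a normal shutdown from the queue's point of view: no half-finished work, no job retries.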
The Result
Workers that process 10,000 jobs perform identically to workers that process 10. Heavy-load periods — end-of-month bulk invoice generation, large import operations, workflow automation that triggers on many records simultaneously — complete without degradation.
Background processing that's reliable under load, not just under test conditions.