# Worker
The worker processes jobs from the queue. Run it as a separate process in development and production.
## Quick Start
Start the worker with automatic job discovery:
```bash
npx pidgey worker dev
```

This discovers all jobs in your jobs directory, syncs any scheduled jobs, and starts processing.
## Configuration
Create a `pidgey.config.ts` in your project root:
```ts
import { defineConfig } from '@pidgeyjs/core';

export default defineConfig({
  adapter: 'sqlite',
  filename: './pidgey.db',
  worker: {
    jobsDir: 'jobs',    // Job discovery directory
    concurrency: 10,    // Max concurrent jobs
    pollInterval: 100,  // Polling interval (ms)
  },
});
```

This config is shared by both the CLI worker and your app.
## Commands
### `pidgey worker dev`
Development mode with verbose logging.
```bash
pidgey worker dev
pidgey worker dev --concurrency 20
pidgey worker dev --queue emails --queue notifications
```

Options:

- `--concurrency` — Max concurrent jobs (default: 10)
- `--queue` — Process specific queues (can specify multiple)
- `--poll` — Polling interval in milliseconds (default: 100)
### `pidgey worker start`
Production mode with optimized settings.
```bash
pidgey worker start
pidgey worker start --concurrency 50 --poll 1000
```

Differences from `dev`:
- Longer default poll interval (1000ms vs 100ms)
- Less verbose logging
## Concurrency
Control how many jobs run simultaneously.
```bash
pidgey worker dev --concurrency 50
```

Guidelines:
- I/O-bound jobs (API calls, emails): Higher concurrency (20-100)
- CPU-bound jobs (image processing): Lower concurrency (1-10)
- Mixed workload: Start with 10 and adjust
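The guidelines above follow from how a concurrency limit behaves: each in-flight job occupies one slot, so I/O-bound jobs spend their slot waiting and benefit from many slots, while CPU-bound jobs keep a core busy the whole time. A minimal sketch of the pattern (a generic concurrency limiter, not pidgey's internals):

```ts
// Generic concurrency limiter: at most `limit` tasks run at once.
// Sketch of the pattern only -- not pidgey's actual implementation.
async function runWithLimit<T>(
  tasks: Array<() => Promise<T>>,
  limit: number,
): Promise<T[]> {
  const results: T[] = new Array(tasks.length);
  let next = 0;

  // Each "slot" pulls the next task until none remain.
  async function slot(): Promise<void> {
    while (next < tasks.length) {
      const i = next++;
      results[i] = await tasks[i]();
    }
  }

  await Promise.all(
    Array.from({ length: Math.min(limit, tasks.length) }, slot),
  );
  return results;
}
```

With I/O-bound tasks, raising the limit increases throughput until the downstream service becomes the bottleneck; for CPU-bound tasks, extra slots mostly add contention.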
## Polling Interval
How often the worker checks for new jobs.
```bash
pidgey worker dev --poll 100     # Low latency
pidgey worker start --poll 5000  # Lower DB load
```

Recommendations:
- Development: 100ms (fast feedback)
- Production: 1000-5000ms (balanced)
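The trade-off exists because polling is, conceptually, a fetch-then-sleep loop: every idle poll costs one database query, and a job enqueued mid-sleep waits up to one interval before it is picked up. A simplified sketch of the idea (not pidgey's actual worker code):

```ts
// Simplified poll loop: run work when available, sleep `pollMs` when idle.
// Sketch only; a real worker also handles concurrency, locking, and shutdown.
const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

async function pollLoop(
  fetchJob: () => Promise<(() => Promise<void>) | null>,
  pollMs: number,
  shouldStop: () => boolean,
): Promise<void> {
  while (!shouldStop()) {
    const job = await fetchJob();
    if (job) {
      await job();        // work available: run immediately, don't sleep
    } else {
      await sleep(pollMs); // idle: each poll is one database query
    }
  }
}
```

A longer `pollMs` means fewer idle queries but higher worst-case pickup latency, which is why development favors 100ms and production something in the 1000-5000ms range.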
## Queue Selection
Run dedicated workers for specific queues:
```bash
# Only process email queue
pidgey worker dev --queue emails

# Process multiple queues
pidgey worker dev --queue emails --queue notifications
```

Use this to prioritize certain queues or separate workloads.
## Deployment
### Docker
```dockerfile
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
CMD ["npx", "pidgey", "worker", "start", "--concurrency", "50"]
```

### Kubernetes
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pidgey-worker
spec:
  replicas: 3
  selector:
    matchLabels:
      app: pidgey-worker
  template:
    metadata:
      labels:
        app: pidgey-worker
    spec:
      containers:
        - name: worker
          image: myapp:latest
          command: ['npx', 'pidgey', 'worker', 'start']
          args: ['--concurrency', '50']
```

### Systemd
```ini
[Unit]
Description=Pidgey Worker
After=network.target

[Service]
Type=simple
User=www-data
WorkingDirectory=/var/www/app
ExecStart=/usr/bin/npx pidgey worker start --concurrency 50
Restart=always

[Install]
WantedBy=multi-user.target
```

## Scaling
### Horizontal Scaling
Run multiple worker processes:
```bash
# Terminal 1
pidgey worker start --concurrency 25

# Terminal 2
pidgey worker start --concurrency 25

# Now processing 50 jobs across 2 workers
```

Jobs are distributed across workers via database locking, so no two workers claim the same job.
### Queue-Specific Workers
Dedicate workers to specific queues:
```bash
# High-priority emails
pidgey worker start --queue emails --concurrency 50

# Low-priority reports
pidgey worker start --queue reports --concurrency 10
```

## Graceful Shutdown
Workers handle `SIGTERM` and `SIGINT` gracefully:

1. Stop accepting new jobs
2. Wait for in-flight jobs to complete (up to 30s)
3. Exit
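The drain step above boils down to: stop claiming work, then wait for the in-flight count to reach zero, racing a deadline (the 30s cap). A generic sketch of that pattern, assuming nothing about pidgey's internals:

```ts
// Wait for `inFlight()` to reach zero, or give up after `timeoutMs`.
// Returns 'drained' on a clean drain, 'timeout' if jobs were still running.
async function drain(
  inFlight: () => number,
  timeoutMs: number,
  checkEveryMs = 50,
): Promise<'drained' | 'timeout'> {
  const deadline = Date.now() + timeoutMs;
  while (inFlight() > 0) {
    if (Date.now() >= deadline) return 'timeout';
    await new Promise<void>((r) => setTimeout(r, checkEveryMs));
  }
  return 'drained';
}

// Wiring it to signals (sketch; `stopClaimingJobs` and `inFlightCount`
// are hypothetical names for state your worker process would own):
//
// process.once('SIGTERM', async () => {
//   stopClaimingJobs();
//   const result = await drain(inFlightCount, 30_000);
//   process.exit(result === 'drained' ? 0 : 1);
// });
```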
## Monitoring
Query job stats programmatically:
```ts
const job = await pidgey.getJob('job_123');

if (job) {
  console.log(`Status: ${job.status}`);
  console.log(`Attempts: ${job.attempts}/${job.maxAttempts}`);
}
```

For CLI monitoring, see the CLI Reference.
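For a quick overview across many jobs, you can aggregate by status yourself. A hypothetical helper; the `JobLike` shape below just mirrors the fields used above, so adjust it to whatever your adapter actually returns:

```ts
// Minimal job shape matching the fields used in the snippet above.
// Hypothetical -- not an official pidgey type.
interface JobLike {
  status: string;
  attempts: number;
  maxAttempts: number;
}

// Count jobs per status and flag any that have exhausted their retries.
function summarize(
  jobs: JobLike[],
): { byStatus: Record<string, number>; exhausted: number } {
  const byStatus: Record<string, number> = {};
  let exhausted = 0;
  for (const job of jobs) {
    byStatus[job.status] = (byStatus[job.status] ?? 0) + 1;
    if (job.attempts >= job.maxAttempts) exhausted++;
  }
  return { byStatus, exhausted };
}
```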
## Next Steps
- CLI Reference — Job management commands
- Deployment — Production deployment guides
- Adapters — Choose your storage backend