Worker
The worker is responsible for processing jobs from the queue. Run it as a separate process in both development and production.
Quick Start
Start the worker with automatic job discovery:
npx pidgey worker dev

This will:

- Discover all jobs in your jobs directory (see the example after this list)
- Sync any scheduled jobs
- Begin processing jobs immediately
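The docs here don't show what a discovered job file contains, so the following is a purely hypothetical sketch: the defineJob helper, its import, and its options are assumptions, not confirmed pidgey API. Check your actual job-definition API for the real shape.

// jobs/send-welcome-email.ts — hypothetical shape of a discoverable job.
// defineJob and every option below are assumptions, not confirmed pidgey API.
import { defineJob } from '@pidgeyjs/core';

export default defineJob({
  name: 'send-welcome-email',   // assumed: unique job name used by the queue
  queue: 'emails',              // assumed: the queue targeted by --queue emails
  async handler(payload: { userId: string }) {
    // Job logic runs here when a worker picks the job up.
    console.log(`Sending welcome email to user ${payload.userId}`);
  },
});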
Configuration
Create a pidgey.config.ts in your project root:
import { defineConfig } from '@pidgeyjs/core';

export default defineConfig({
  adapter: 'sqlite',
  filename: './pidgey.db',
  worker: {
    jobsDir: 'jobs',       // Directory for job discovery
    concurrency: 10,       // Max concurrent jobs
    pollInterval: 100,     // Polling interval in ms
  },
});

This config is shared by both the CLI worker and your application code.
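Because the same file drives both the dev and production workers, one option is to vary the worker settings by environment. This is only a suggestion: defineConfig and its options are taken from the example above, while the NODE_ENV switch is not something pidgey requires.

// Same config file, with worker settings tuned per environment (optional pattern).
import { defineConfig } from '@pidgeyjs/core';

const isProd = process.env.NODE_ENV === 'production';

export default defineConfig({
  adapter: 'sqlite',
  filename: './pidgey.db',
  worker: {
    jobsDir: 'jobs',
    concurrency: isProd ? 50 : 10,      // higher throughput in production
    pollInterval: isProd ? 1000 : 100,  // lower DB load in production
  },
});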
Worker Commands
pidgey worker dev
Start in development mode with verbose logging:
pidgey worker dev
pidgey worker dev --concurrency 20
pidgey worker dev --queue emails --queue notifications

Options:

- --concurrency: Max concurrent jobs (default: 10)
- --queue: Process specific queues (can specify multiple)
- --poll: Polling interval in milliseconds (default: 100)
pidgey worker start
Production mode with optimized settings:
pidgey worker start
pidgey worker start --concurrency 50 --poll 1000

Differences from dev mode:
- Longer default poll interval (1000ms vs 100ms)
- Less verbose logging
Concurrency
Control how many jobs run simultaneously:
pidgey worker dev --concurrency 50

Guidelines:
- I/O-bound jobs (e.g., API calls, emails): Higher concurrency (20–100)
- CPU-bound jobs (e.g., image processing): Lower concurrency (1–10)
- Mixed workload: Start with 10 and adjust as needed
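To make the limit concrete, here is a minimal sketch of the pattern a concurrency cap implements: never more than N handlers in flight at once. This is illustrative only, not pidgey's internal worker loop; fetchNext and run are hypothetical stand-ins for the real queue operations.

// Illustrative concurrency limiter; not pidgey's actual implementation.
type Job = { id: string };

async function workerLoop(
  concurrency: number,
  fetchNext: () => Promise<Job | null>,   // hypothetical: claim one job
  run: (job: Job) => Promise<void>,       // hypothetical: execute the handler
) {
  const inFlight = new Set<Promise<void>>();
  for (;;) {
    if (inFlight.size >= concurrency) {
      // At the limit: wait until any in-flight job settles, then re-check.
      await Promise.race(inFlight);
      continue;
    }
    const job = await fetchNext();
    if (!job) {
      await new Promise((resolve) => setTimeout(resolve, 100)); // nothing to do; idle briefly
      continue;
    }
    const p: Promise<void> = run(job)
      .catch((err) => console.error(`job ${job.id} failed`, err))
      .finally(() => inFlight.delete(p));
    inFlight.add(p);
  }
}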
Polling Interval
How often the worker checks for new jobs:
pidgey worker dev --poll 100 # Low-latency development
pidgey worker start --poll 5000 # Reduced DB load in production

Recommendations:
- Development: 100ms (fast feedback)
- Production: 1000–5000ms (balanced performance vs DB load)
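A rough way to reason about the trade-off: when the queue is idle, each worker issues roughly 1000 / pollInterval polls per second, so total idle load scales with both the interval and the number of workers. The helper below is a back-of-the-envelope estimate only, and it assumes one database query per poll, which may not match the adapter exactly.

// Rough idle-load estimate: assumes one DB query per poll per worker.
function idleQueriesPerSecond(workers: number, pollIntervalMs: number): number {
  return workers * (1000 / pollIntervalMs);
}

console.log(idleQueriesPerSecond(3, 100));  // dev setting: 30 queries/s
console.log(idleQueriesPerSecond(3, 5000)); // prod setting: 0.6 queries/s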
Queue Selection
Run dedicated workers for specific queues:
# Only process email queue
pidgey worker dev --queue emails
# Process multiple queues
pidgey worker dev --queue emails --queue notifications

This is useful for prioritizing certain queues or separating workloads.
Deployment
Docker
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install --production
COPY . .
CMD ["npx", "pidgey", "worker", "start", "--concurrency", "50"]

Kubernetes
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pidgey-worker
spec:
  replicas: 3
  selector:
    matchLabels:
      app: pidgey-worker
  template:
    metadata:
      labels:
        app: pidgey-worker
    spec:
      containers:
        - name: worker
          image: myapp:latest
          command: ['npx', 'pidgey', 'worker', 'start']
          args: ['--concurrency', '50']

Systemd
[Unit]
Description=Pidgey Worker
After=network.target
[Service]
Type=simple
User=www-data
WorkingDirectory=/var/www/app
ExecStart=/usr/bin/npx pidgey worker start --concurrency 50
Restart=always
[Install]
WantedBy=multi-user.target

Scaling Workers
Horizontal Scaling
Run multiple worker processes for increased throughput:
# Terminal 1
pidgey worker start --concurrency 25
# Terminal 2
pidgey worker start --concurrency 25
# Now processing 50 jobs across 2 workers

Jobs are safely distributed via database locking.
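The locking itself is handled by the adapter. Purely as a conceptual illustration of why two workers cannot grab the same job, and not pidgey's actual schema or code, a single atomic UPDATE ... RETURNING claim against the SQLite file might look like this (table and column names are made up):

// Conceptual sketch of atomic job claiming; table/column names are invented
// and this is not pidgey's actual locking implementation.
import Database from 'better-sqlite3';

const db = new Database('./pidgey.db');

// Because the claim is a single UPDATE statement, two workers racing on the
// same pending job can never both succeed: one gets the row back, the other
// gets nothing and simply polls again.
function claimNextJob(workerId: string) {
  return db
    .prepare(
      `UPDATE jobs
          SET status = 'running', locked_by = ?
        WHERE id = (SELECT id FROM jobs WHERE status = 'pending' ORDER BY id LIMIT 1)
        RETURNING *`,
    )
    .get(workerId);
}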
Queue-Specific Workers
Dedicate workers to certain queues:
# High-priority emails
pidgey worker start --queue emails --concurrency 50
# Low-priority reports
pidgey worker start --queue reports --concurrency 10

Graceful Shutdown
Workers handle SIGTERM and SIGINT gracefully:
- Stop accepting new jobs
- Wait for in-flight jobs to complete (up to 30s)
- Exit cleanly
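The CLI worker handles all of this for you. The sketch below only illustrates the general pattern described above, in case you embed a worker-like loop in your own process; it is not pidgey's internal code.

// Illustrative shutdown pattern; not pidgey's internal code.
const inFlight = new Set<Promise<unknown>>();
let shuttingDown = false;

async function shutdown(signal: string) {
  if (shuttingDown) return;
  shuttingDown = true; // 1. stop accepting new jobs
  console.log(`${signal} received, waiting for ${inFlight.size} in-flight job(s)`);
  // 2. wait for in-flight jobs to complete, but never longer than 30 seconds
  await Promise.race([
    Promise.allSettled([...inFlight]),
    new Promise((resolve) => setTimeout(resolve, 30_000)),
  ]);
  process.exit(0); // 3. exit cleanly
}

process.on('SIGTERM', () => void shutdown('SIGTERM'));
process.on('SIGINT', () => void shutdown('SIGINT'));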
Monitoring
Query job stats programmatically:
const job = await pidgey.getJob('job_123');

if (job) {
  console.log(`Status: ${job.status}`);
  console.log(`Attempts: ${job.attempts}/${job.maxAttempts}`);
}

For CLI-based monitoring, see CLI Reference.
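Building on getJob, a small helper can poll until a job reaches a terminal state. This is only a sketch: the terminal status values ('completed' and 'failed') are assumptions and may differ in your version, and pidgey refers to the same client instance used in the snippet above.

// Polls getJob until the job reaches a terminal state or the timeout expires.
// The status strings 'completed' and 'failed' are assumptions.
async function waitForJob(id: string, timeoutMs = 60_000) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const job = await pidgey.getJob(id);
    if (job && (job.status === 'completed' || job.status === 'failed')) {
      return job;
    }
    await new Promise((resolve) => setTimeout(resolve, 500)); // check twice a second
  }
  throw new Error(`Timed out waiting for job ${id}`);
}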
Next Steps
- CLI Reference — Job management commands
- Deployment — Production deployment guides
- Adapters — Choose your storage backend