# Configuration

Configure the Pidgey client and worker to match your needs. Pidgey uses a single configuration file for both your app and workers.
## Unified Configuration

Create `pidgey.config.ts` in your project root:
```typescript
import { defineConfig } from '@pidgeyjs/core';

export default defineConfig({
  adapter: 'sqlite',       // Storage backend
  filename: './pidgey.db', // SQLite file (or ':memory:' for tests)
  worker: {
    jobsDir: 'jobs',
    concurrency: 10,
  },
  defaultJobOptions: {
    retries: 3,
    timeout: 60000,
  },
});
```

Initialize the client with the shared config:
```typescript
import { Pidgey } from '@pidgeyjs/next';
import config from '../pidgey.config';

export const pidgey = Pidgey(config);
```

## Adapter Configuration
Choose the storage backend that fits your needs. Job definitions never change when you switch adapters; only the config updates.

### SQLite (Development / Low-Traffic)
```typescript
export default defineConfig({
  adapter: 'sqlite',
  filename: './dev.db', // Persistent file, or ':memory:' for tests
});
```

- ✅ Zero setup, embedded, works locally
- ⚠️ Single worker, lower throughput (~200 jobs/sec)

Use `:memory:` only for unit tests or single-process apps. Data is lost on exit.
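For example, a throwaway test configuration might look like this (the filename is illustrative; it assumes the same `defineConfig` API shown above):

```typescript
// pidgey.config.test.ts (hypothetical test-only config)
import { defineConfig } from '@pidgeyjs/core';

export default defineConfig({
  adapter: 'sqlite',
  filename: ':memory:', // fresh in-memory database; discarded when the process exits
});
```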
### PostgreSQL (Production / Multi-Worker)

```typescript
export default defineConfig({
  adapter: 'postgres',
  connection: process.env.DATABASE_URL,
});
```

- ✅ Durable, ACID, works with your existing database
- ⚠️ Requires migrations (`npx pidgey migrate`) and moderate ops
- Throughput: ~180 jobs/sec per worker
**Serverless / Neon / Supabase tips:**

- Neon: use the HTTP-based serverless driver, or configure pool sizes explicitly.
- Supabase: use the pooler (port 6543) for app clients and direct connections (port 5432) for workers.

Set different environment variables for app and worker connections to avoid exhausting database connections.
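One way to split app and worker connections is to branch on a role flag. This sketch assumes hypothetical `APP_DATABASE_URL`, `WORKER_DATABASE_URL`, and `PIDGEY_ROLE` environment variables (none of these names come from Pidgey itself):

```typescript
import { defineConfig } from '@pidgeyjs/core';

// Hypothetical env vars: point APP_DATABASE_URL at the pooler (e.g. port 6543)
// and WORKER_DATABASE_URL at the direct connection (port 5432).
const isWorker = process.env.PIDGEY_ROLE === 'worker';

export default defineConfig({
  adapter: 'postgres',
  connection: isWorker
    ? process.env.WORKER_DATABASE_URL!
    : process.env.APP_DATABASE_URL!,
});
```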
### Redis (High-Throughput / Distributed)

```typescript
export default defineConfig({
  adapter: 'redis',
  options: {
    host: process.env.REDIS_HOST!,
    port: Number(process.env.REDIS_PORT || 6379),
    password: process.env.REDIS_PASSWORD,
  },
});
```

- ✅ Event-driven, sub-5ms latency, 10,000+ jobs/sec
- ⚠️ Requires Redis infrastructure
## Job Configuration
Define per-job options when creating jobs:
```typescript
export const sendEmail = pidgey.defineJob({
  name: 'send-email',
  handler: async (data) => {
    /* ... */
  },
  config: {
    retries: 3,
    timeout: 60000,
    queue: 'critical',
  },
});
```

| Option | Default | Description |
|---|---|---|
| `retries` | 3 | Max retry attempts |
| `timeout` | 300000 | Job timeout (ms) |
| `queue` | Job name | Queue name for grouping jobs |

Global defaults can be overridden by per-job config, and both can be overridden by options passed at runtime.
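That layering behaves like a plain merge in which later sources win. The helper below is illustrative only (it is not part of Pidgey's API), but it mirrors the documented defaults:

```typescript
interface JobOptions {
  retries?: number;
  timeout?: number;
  queue?: string;
}

// Later sources override earlier ones:
// built-in defaults < defaultJobOptions < job config < runtime options.
function resolveOptions(
  globalDefaults: JobOptions,
  jobConfig: JobOptions,
  runtimeOptions: JobOptions = {},
): JobOptions {
  return {
    retries: 3,
    timeout: 300000,
    ...globalDefaults,
    ...jobConfig,
    ...runtimeOptions,
  };
}

const resolved = resolveOptions(
  { retries: 3, timeout: 60000 }, // defaultJobOptions from pidgey.config.ts
  { queue: 'critical' },          // job-level config
  { retries: 5 },                 // options passed at enqueue time
);
// resolved: { retries: 5, timeout: 60000, queue: 'critical' }
```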
## Queues & Concurrency

Group jobs into queues for better control:
```typescript
export const criticalEmail = pidgey.defineJob({
  name: 'critical-email',
  handler: async () => {
    /* ... */
  },
  config: { queue: 'critical' },
});

export const newsletter = pidgey.defineJob({
  name: 'newsletter',
  handler: async () => {
    /* ... */
  },
  config: { queue: 'low-priority' },
});
```

Run dedicated workers per queue:

```shell
pidgey worker dev --queue critical --concurrency 50
pidgey worker dev --queue low-priority --concurrency 5
```

## Environment-Based Configuration
Switch adapters or configs via environment variables:
```typescript
const adapter = process.env.JOB_ADAPTER || 'sqlite';

export default defineConfig(
  adapter === 'redis'
    ? { adapter: 'redis', options: { host: process.env.REDIS_HOST!, port: 6379 } }
    : adapter === 'postgres'
      ? { adapter: 'postgres', connection: process.env.DATABASE_URL! }
      : { adapter: 'sqlite', filename: './dev.db' }
);
```

```shell
# Dev
JOB_ADAPTER=sqlite npm run dev

# Production
JOB_ADAPTER=postgres DATABASE_URL=postgres://... npm start

# High throughput
JOB_ADAPTER=redis REDIS_HOST=redis.internal npm start
```

## Worker CLI Options
Override worker behavior at runtime:

```shell
pidgey worker dev --concurrency 50 --poll 1000 --queue emails
```

Flags:

- `--concurrency`: max concurrent jobs
- `--poll`: poll interval in ms
- `--queue`: process specific queues (repeatable)
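Conceptually, `--concurrency` caps how many jobs are in flight at once. A self-contained sketch of that behavior (illustrative only, not Pidgey's worker internals):

```typescript
// Run tasks with at most `limit` in flight at once — the behavior the
// --concurrency flag controls. Results keep their original order.
async function runWithLimit<T>(
  tasks: Array<() => Promise<T>>,
  limit: number,
): Promise<T[]> {
  const results: T[] = new Array(tasks.length);
  let next = 0;
  // Each "lane" pulls the next pending task until none remain.
  async function lane(): Promise<void> {
    while (next < tasks.length) {
      const i = next++;
      results[i] = await tasks[i]();
    }
  }
  await Promise.all(
    Array.from({ length: Math.min(limit, tasks.length) }, lane),
  );
  return results;
}

// Track peak concurrency to see the cap in action.
let active = 0;
let peak = 0;
const tasks = Array.from({ length: 10 }, (_, i) => async () => {
  active++;
  peak = Math.max(peak, active);
  await new Promise((r) => setTimeout(r, 10));
  active--;
  return i;
});

runWithLimit(tasks, 3).then((out) => {
  console.log(out.length, 'jobs done, peak:', peak); // peak never exceeds 3
});
```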
## Best Practices

### Singleton Client

```typescript
export const pidgey = Pidgey({
  /* config */
});
```

Import the same instance everywhere.
### Separate Queues

Use different queues for fast and slow jobs:

```typescript
// Fast, high-priority
config: { queue: 'realtime', timeout: 5000 }

// Slow, low-priority
config: { queue: 'background', timeout: 300000 }
```

### Timeout & Retry Strategies
- Use short timeouts for quick API calls and long timeouts for heavy work like report generation.
- Tune retries to each job's idempotency: idempotent jobs can retry freely; non-idempotent jobs should retry sparingly, if at all.
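A common companion to a retry count is exponential backoff between attempts. The arithmetic is sketched below; backoff scheduling is not among the config options documented above, so treat this as general guidance rather than a Pidgey feature:

```typescript
// Delay before retry attempt n (1-based): baseMs * 2^(n-1), capped at maxMs.
function backoffDelay(attempt: number, baseMs = 1000, maxMs = 60000): number {
  return Math.min(baseMs * 2 ** (attempt - 1), maxMs);
}

// With retries: 3, the waits between attempts would be:
const delays = [1, 2, 3].map((n) => backoffDelay(n)); // [1000, 2000, 4000]
```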
## Next Steps
- Worker — Configure and deploy workers
- Adapters — Learn about storage backends
- API Reference — Full API documentation