Why Pidgey?
Background jobs that start simple and scale when you need them.
Pidgey follows a progressive scaling approach: start with SQLite for zero-setup local development, move to PostgreSQL for production, and scale to Redis when you need it. Same code, same API—just change one line of config.
```ts
// Week 1: Development with SQLite
const pidgey = Pidgey({ adapter: 'sqlite' });

// Month 1: Production with PostgreSQL
const pidgey = Pidgey({ adapter: 'postgres', connection: DATABASE_URL });

// Month 6: Scale with Redis
const pidgey = Pidgey({ adapter: 'redis', options: { host: 'localhost' } });
```

Your jobs, handlers, and application code stay exactly the same. Only the adapter changes.
The Problem
If you’ve tried adding background jobs to a Next.js app, you’ve probably hit these pain points:
Current Solutions Fall Short
Managed job services require a vendor account, webhooks or tunnels for local development, and introduce pay-per-use anxiety. Great for API-triggered jobs, painful for everything else.
Redis-based queues are powerful but require Redis setup before you write a single line of code. Docker Compose in development, managed Redis in production—complexity before you’ve proven the need.
Workflow platforms offer excellent developer experience but lock you into their platform. Generous free tiers become monthly bills, and your jobs can’t run without their infrastructure.
Postgres-native queues use your database but often have dated APIs without modern TypeScript support. No escape hatch when you outgrow them.
What Developers Actually Want
- Simple local development — No Docker, no Redis, no vendor accounts
- Production-ready out of the box — Use your existing database
- Scalable when needed — Not locked into initial choice
- Type-safe — Catch errors at compile time
- Self-hosted — Your data, your infrastructure
Pidgey delivers all of this through progressive enhancement.
The Solution
Pidgey is built on three core principles:
1. Progressive Enhancement
Start with the simplest possible setup and add complexity only when you need it.
```ts
// Day 1: Local development
Pidgey({ adapter: 'sqlite', filename: ':memory:' });

// Week 1: Persist jobs locally
Pidgey({ adapter: 'sqlite', filename: './pidgey.db' });

// Month 1: Production with your existing Postgres database
Pidgey({ adapter: 'postgres', connection: process.env.DATABASE_URL });

// Month 6: High throughput with Redis
Pidgey({ adapter: 'redis', options: { host: 'redis.internal' } });
```

2. Unified Adapter Pattern
Same API everywhere. Your job definitions, handlers, and application code never change.
```ts
// This job works with SQLite, Postgres, AND Redis
export const sendEmail = pidgey.defineJob({
  name: 'send-email',
  handler: async (data: { to: string; subject: string }) => {
    await emailService.send(data);
    return { sent: true };
  },
  config: {
    retries: 3,
    timeout: 30000,
  },
});
```

Switch adapters by changing one line in your Pidgey client—that's it.
3. Next.js-Native Integration
File-based job discovery that follows Next.js conventions:
```
app/
  jobs/                # Put jobs here
    send-email.ts
    process-payment.ts
  actions.ts           # Enqueue from Server Actions
lib/
  pidgey.ts            # Client singleton
```

Run `pidgey worker dev` and it automatically discovers all jobs. No manual registration, no config files.
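The client singleton referenced above can be a one-liner. A minimal sketch, assuming the `Pidgey` factory is exported from `@pidgeyjs/core` the same way `defineConfig` is in the examples below:

```ts
// lib/pidgey.ts: a minimal client singleton (sketch; the import path is an assumption)
import { Pidgey } from '@pidgeyjs/core';

export const pidgey = Pidgey({ adapter: 'sqlite', filename: './pidgey.db' });
```

Job files then import this `pidgey` instance to call `defineJob`.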
Key Benefits
Zero-Setup Local Development
No Redis installation, no Docker Compose, no vendor accounts.
```ts
// One line—you're ready to go
const pidgey = Pidgey({ adapter: 'sqlite' });
```

SQLite runs in-process with zero dependencies. Perfect for local development and testing.
Works with Your Existing Database
Already using PostgreSQL? Just add the Pidgey adapter:
```ts
const pidgey = Pidgey({
  adapter: 'postgres',
  connection: process.env.DATABASE_URL,
});
```

No separate infrastructure to manage. Jobs live alongside your application data.
Perfect Next.js Integration
Built for App Router and Server Actions from day one:
```ts
'use server';

import { processPayment } from '@/app/jobs/process-payment';

export async function handleCheckout(cartId: string) {
  await processPayment.enqueue({ cartId });
  return { success: true };
}
```

Type-safe enqueuing with full autocomplete and type checking.
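To make "type-safe" concrete: the payload type is inferred from the job's handler signature, so a malformed enqueue fails at compile time. An illustrative sketch:

```ts
// The handler declared `data: { cartId: string }`, so the compiler
// checks every enqueue call against that shape (illustrative sketch).
await processPayment.enqueue({ cartId: 'cart_123' }); // OK
// @ts-expect-error cartId must be a string, not a number
await processPayment.enqueue({ cartId: 42 });
```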
Escape Hatch to Redis
When you need Redis-level throughput, switch to Redis without changing your jobs:
```ts
// Change this
const pidgey = Pidgey({ adapter: 'postgres', connection: DATABASE_URL });

// To this
const pidgey = Pidgey({ adapter: 'redis', options: { host: 'redis' } });
```

Your jobs, handlers, and application code stay the same. Zero adoption risk.
Self-Hosted First
Your data never leaves your infrastructure:
- No external API calls for job enqueuing
- No vendor dashboards (unless you want them)
- No monthly bills based on job volume
- Full control over data retention and privacy
Adapter Comparison
| Feature | SQLite | PostgreSQL | Redis |
|---|---|---|---|
| Setup | Zero dependencies | Existing Postgres DB | Redis required |
| Throughput | ~200 jobs/sec | ~180 jobs/sec* | ~16,000+ jobs/sec* |
| Latency | ~5ms | ~5ms | <1ms |
| Best For | Dev, embedded apps | Production apps | High-throughput systems |
| Persistence | File or in-memory | Durable, replicated | Durable, fast |
| Concurrency | Single worker | Multi-worker | Distributed workers |
*Throughput per worker instance. PostgreSQL and Redis can scale horizontally by adding more workers.
All adapters support the same features: retries, delays, timeouts, and job lifecycle hooks.
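For example, a delayed job looks the same on every adapter. A hedged sketch, where the `delay` enqueue option is an assumed name used for illustration:

```ts
// Hedged sketch: a delayed enqueue. The `delay` option name is an
// assumption; retries and timeouts are configured on the job itself.
await sendEmail.enqueue(
  { to: 'ben.wyatt@pawnee.gov', subject: 'Welcome!' },
  { delay: 60_000 } // run roughly one minute from now, on any adapter
);
```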
When Pidgey Shines
- New Next.js projects — background jobs in 5 minutes, zero infrastructure
- SaaS applications — scales from prototype to millions of jobs/day
- Local development — SQLite in-memory, no Docker required
- Progressive scaling — start with Postgres, swap to Redis with one line
- Self-hosted — your database, your data, no vendor lock-in
- TypeScript-first — end-to-end type safety from job definition to execution
When to Look Elsewhere
| Need | Better Fit |
|---|---|
| Multi-step sagas, branching workflows | Temporal, Inngest |
| Fully managed, zero infrastructure | QStash |
| Deep existing BullMQ investment | Stay with BullMQ |
Already have BullMQ? Pidgey's Redis adapter still simplifies configuration and adds SQLite for local dev, so it's worth considering for new projects even alongside existing BullMQ infrastructure.
Progressive Scaling in Practice
Here’s how a typical Pidgey adoption looks:
Week 1: Development with SQLite
```ts
import { defineConfig } from '@pidgeyjs/core';

export default defineConfig({
  adapter: 'sqlite',
  filename: './dev.db',
});
```

Zero setup. Start building jobs immediately.
Month 1: Production with PostgreSQL
```ts
import { defineConfig } from '@pidgeyjs/core';

export default defineConfig({
  adapter: process.env.NODE_ENV === 'production' ? 'postgres' : 'sqlite',
  connection: process.env.DATABASE_URL,
  filename: './dev.db',
});
```

Run migrations: `npx pidgey migrate`
Jobs now use your production database. No code changes needed.
Month 6: Scale with Redis
Your app is growing. You’re processing thousands of jobs per minute and need better throughput.
```ts
import { defineConfig } from '@pidgeyjs/core';

const adapter = process.env.JOB_ADAPTER || 'sqlite';

export default defineConfig(
  adapter === 'redis'
    ? { adapter: 'redis', options: { host: process.env.REDIS_HOST } }
    : adapter === 'postgres'
      ? { adapter: 'postgres', connection: process.env.DATABASE_URL }
      : { adapter: 'sqlite', filename: './dev.db' }
);
```

Set `JOB_ADAPTER=redis` in production. Done.
What changed: One environment variable.
What stayed the same: All your jobs, all your application code.
Comparison with Alternatives
vs Managed Job Services
Pidgey advantages:
- Self-hosted option—no vendor dependencies
- Better local dev (no tunnels required)
- No per-job billing anxiety
Managed service advantages:
- HTTP-native (great for webhooks)
- Global edge network for low latency
- No worker process to manage
Choose Pidgey if: You want self-hosted, simple local dev, and flexible scaling.
Choose managed services if: You primarily trigger jobs via HTTP/webhooks and want zero ops.
vs Redis-Based Queues
Pidgey advantages:
- Simpler start (no Redis setup)
- Progressive enhancement (SQLite → Postgres → Redis)
- Better DX for Next.js developers
Redis-based queue advantages:
- Higher throughput from day one
- More battle-tested at extreme scale
- Rich ecosystem of plugins
Choose Pidgey if: You want to start simple, with the option to move to the Redis adapter later if needed.
Choose Redis-based queues directly if: You know you need Redis-level throughput from the start.
vs Workflow Platforms
Pidgey advantages:
- Self-hosted (your infrastructure)
- No vendor lock-in or monthly bills
- Simpler for straightforward jobs
- Better for air-gapped or compliance-heavy environments
Workflow platform advantages:
- Workflow orchestration out of the box
- Excellent observability dashboards
- Managed infrastructure (less ops work)
Choose Pidgey if: You want self-hosted, focused job processing without workflow complexity.
Choose workflow platforms if: You need complex workflows and don’t mind vendor dependency.
vs Postgres-Native Queues
Pidgey advantages:
- Modern TypeScript-first API
- Redis escape hatch (remove adoption risk)
- Active development and Next.js integration
- Better type safety and DX
Postgres-native queue advantages:
- More mature (battle-tested over years)
- Simpler implementation (easier to audit)
Choose Pidgey if: You value modern DX and want flexibility to scale beyond Postgres.
Choose Postgres-native queues if: You need maximum simplicity and never plan to leave Postgres.
Architecture Philosophy
Why a Separate Worker Process?
Following industry standards (Sidekiq, Celery, BullMQ), Pidgey uses a dedicated worker process:
- Isolation: Job failures don’t crash your web server
- Scalability: Scale workers independently of web servers
- Resource control: Different CPU/memory limits for job processing
- Deployment flexibility: Run workers on different infrastructure
For development, `pidgey worker dev` runs the worker locally. In production, deploy it as a separate service.
Why File-Based Discovery?
Next.js popularized file-based patterns. Pidgey extends this to jobs:
```
app/jobs/
  send-email.ts        # Auto-discovered
  process-payment.ts   # Auto-discovered
  generate-report.ts   # Auto-discovered
```

Run `pidgey worker dev` and all jobs are automatically registered. No config files, no manual registration.
Why the Adapter Pattern?
Different storage backends have different tradeoffs. The adapter pattern gives you:
- Flexibility: Start with SQLite, scale to Redis
- No lock-in: Same code works everywhere
- Risk reduction: Can always migrate to different adapter
Your application code never knows which adapter is running—it just enqueues jobs.
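Conceptually, every backend implements the same small contract. The interface below is an illustrative sketch of that contract, not Pidgey's actual adapter API:

```ts
// Illustrative sketch only: method names and shapes are assumptions,
// not Pidgey's real adapter interface.
interface QueueAdapter {
  enqueue(job: { name: string; payload: unknown; runAt?: Date }): Promise<string>;
  claim(): Promise<{ id: string; name: string; payload: unknown } | null>;
  complete(jobId: string): Promise<void>;
  fail(jobId: string, error: Error): Promise<void>;
}
```

Because SQLite, Postgres, and Redis each satisfy the same contract, swapping one for another never touches job code.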
Real-World Use Cases
Email Sending Workflows
```ts
export const sendWelcomeEmail = pidgey.defineJob({
  name: 'send-welcome-email',
  handler: async (data: { userId: string }) => {
    const user = await db.users.findById(data.userId);
    await emailService.send({
      to: user.email, // e.g., ben.wyatt@pawnee.gov
      template: 'welcome',
      data: { name: user.name }, // e.g., Ben Wyatt
    });
  },
  config: { retries: 3 }, // Retry if email provider is down
});
```

Payment Processing
```ts
export const processPayment = pidgey.defineJob({
  name: 'process-payment',
  handler: async (data: { orderId: string; amount: number }) => {
    const charge = await stripe.charges.create({
      amount: data.amount,
      currency: 'usd',
      metadata: { orderId: data.orderId },
    });
    await db.orders.update(data.orderId, { paymentStatus: 'paid' });
    return { chargeId: charge.id };
  },
  config: { timeout: 30000, retries: 2 },
});
```

Report Generation
```ts
export const generateReport = pidgey.defineJob({
  name: 'generate-report',
  handler: async (data: { userId: string; month: string }) => {
    const reportData = await analytics.getMonthlyReport(data.userId, data.month); // e.g., userId: 'chris-traeger'
    const pdf = await pdfGenerator.create(reportData);
    await storage.upload(`reports/${data.userId}-${data.month}.pdf`, pdf);
    return { url: storage.getUrl(`reports/${data.userId}-${data.month}.pdf`) };
  },
  config: { timeout: 120000 }, // 2 minute timeout for large reports
});
```

Image Processing
```ts
export const optimizeImage = pidgey.defineJob({
  name: 'optimize-image',
  handler: async (data: { imageUrl: string; sizes: number[] }) => {
    const image = await fetch(data.imageUrl).then((r) => r.arrayBuffer());
    const optimized = await Promise.all(
      data.sizes.map((size) => sharp(image).resize(size).jpeg({ quality: 80 }).toBuffer())
    );
    return { thumbnails: optimized.map((img, i) => ({ size: data.sizes[i], data: img })) };
  },
  config: { timeout: 60000 },
});
```

Data Imports
```ts
export const importCSV = pidgey.defineJob({
  name: 'import-csv',
  handler: async (data: { fileUrl: string; userId: string }) => {
    // e.g., userId: 'ann-perkins'
    const csv = await fetch(data.fileUrl).then((r) => r.text());
    const rows = parseCSV(csv);
    for (const row of rows) {
      await db.contacts.create({ ...row, userId: data.userId });
    }
    return { imported: rows.length };
  },
  config: { timeout: 300000, retries: 1 }, // 5 minute timeout, limited retries
});
```

Webhook Handling
```ts
export const processWebhook = pidgey.defineJob({
  name: 'process-webhook',
  handler: async (data: { event: string; payload: any }) => {
    switch (data.event) {
      case 'payment.succeeded':
        await handlePaymentSuccess(data.payload);
        break;
      case 'subscription.cancelled':
        await handleSubscriptionCancelled(data.payload);
        break;
    }
  },
  config: { retries: 5 }, // Retry webhook processing on failure
});
```

Next Steps
Ready to get started?
- Getting Started — Install and configure Pidgey
- API Reference — Explore the full API
- Adapters — Learn about storage backends
- Worker Configuration — Fine-tune job processing
Questions? Check our GitHub Discussions.