Why Pidgey?
Background jobs without the infrastructure tradeoffs.
Pidgey is designed around a simple idea: you shouldn’t have to commit to infrastructure before you know your workload.
Start with zero setup in development, run reliably in production, and scale to high-throughput workloads — all with the same code.
Under the hood, Pidgey can run on SQLite, PostgreSQL, or Redis. You choose the adapter that fits your workload.
```ts
// Choose the backend that matches your deployment needs (pick one):

// Local dev with SQLite
const pidgey = Pidgey({ adapter: 'sqlite' });

// Production workloads with PostgreSQL
const pidgey = Pidgey({ adapter: 'postgres', connection: DATABASE_URL });

// High-throughput scenarios with Redis
const pidgey = Pidgey({ adapter: 'redis', options: { host: 'localhost' } });
```

Your jobs, handlers, and application code stay exactly the same. Only the adapter changes.
The Problem
If you’ve tried adding background jobs to a Next.js app, you’ve probably hit these pain points:
Current Solutions Fall Short
Managed job services require a vendor account, webhooks or tunnels for local development, and introduce pay-per-use anxiety. Great for API-triggered jobs, painful for everything else.
Redis-based queues are powerful and battle-tested, but require Redis before you write your first job. For many teams, that’s infrastructure overhead before the workload is clear.
Workflow platforms offer excellent developer experience but lock you into their platform. Generous free tiers become monthly bills, and your jobs can’t run without their infrastructure.
Postgres-native queues integrate well with your database, but often lack modern TypeScript APIs or an easy path to evolve beyond Postgres if requirements change.
What Developers Actually Want
- Simple local development — No Docker, no Redis, no vendor accounts
- Production-ready out of the box — Use your existing database
- Scalable when needed — Not locked into initial choice
- Type-safe — Catch errors at compile time
- Self-hosted — Your data, your infrastructure
Pidgey delivers all of this through flexible backend options.
How Pidgey Is Different
Pidgey is built on three core principles:
1. Start Simple, Scale When Needed
Start with the simplest possible setup and add complexity only when your workload demands it.
```ts
// Local development - zero setup
Pidgey({ adapter: 'sqlite', filename: ':memory:' });

// Persist jobs locally
Pidgey({ adapter: 'sqlite', filename: './pidgey.db' });

// Production with your existing Postgres database
Pidgey({ adapter: 'postgres', connection: process.env.DATABASE_URL });

// High-throughput requirements
Pidgey({ adapter: 'redis', options: { host: 'redis.internal' } });
```

Each backend has tradeoffs. Redis excels in throughput and bursty workloads, but many production projects run Pidgey successfully on Postgres alone.
2. Unified Adapter Pattern
Same API everywhere. Your job definitions, handlers, and application code never change.
```ts
// This job works with SQLite, Postgres, AND Redis
export const sendEmail = pidgey.defineJob({
  name: 'send-email',
  handler: async (data: { to: string; subject: string }) => {
    await emailService.send(data);
    return { sent: true };
  },
  config: {
    retries: 3,
    timeout: 30000,
  },
});
```

Switch adapters by changing one line in your Pidgey client—that's it.
3. Next.js-Native Integration
File-based job discovery that follows Next.js conventions:
```
app/
  jobs/                 # Put jobs here
    send-email.ts
    process-payment.ts
  actions.ts            # Enqueue from Server Actions
lib/
  pidgey.ts             # Client singleton
```

Run `pidgey worker dev` and it automatically discovers all jobs. No manual registration, no config files.
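The `lib/pidgey.ts` singleton can be as small as this sketch (assuming the `Pidgey` factory is exported from `@pidgeyjs/core` alongside `defineConfig`; the env-based adapter switch mirrors the config example later on this page):

```ts
// lib/pidgey.ts - create the client once and share it everywhere.
// Assumes @pidgeyjs/core exports the Pidgey factory used on this page.
import { Pidgey } from '@pidgeyjs/core';

export const pidgey = Pidgey({
  adapter: process.env.NODE_ENV === 'production' ? 'postgres' : 'sqlite',
  connection: process.env.DATABASE_URL,
  filename: './dev.db',
});
```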
Key Benefits
Zero-Setup Local Development
No Redis installation, no Docker Compose, no vendor accounts.
```ts
// One line—you're ready to go
const pidgey = Pidgey({ adapter: 'sqlite' });
```

SQLite runs in-process with zero dependencies. Perfect for local development and testing.
Works with Your Existing Database
Already using PostgreSQL? Just add the Pidgey adapter:
```ts
const pidgey = Pidgey({
  adapter: 'postgres',
  connection: process.env.DATABASE_URL,
});
```

No separate infrastructure to manage. Jobs live alongside your application data.
Flexibility to Switch
If you ever need higher throughput, switching adapters is a one-line change; if you never do, there's nothing to migrate. Pidgey lets you choose the backend that matches your workload.

```ts
// Change this
const pidgey = Pidgey({ adapter: 'postgres', connection: DATABASE_URL });

// To this
const pidgey = Pidgey({ adapter: 'redis', options: { host: 'redis' } });
```

Your jobs, handlers, and application code stay the same, so switching carries minimal adoption risk.
Perfect Next.js Integration
Built for App Router and Server Actions from day one:
```ts
'use server';

import { processPayment } from '@/app/jobs/process-payment';

export async function handleCheckout(cartId: string) {
  await processPayment.enqueue({ cartId });
  return { success: true };
}
```

Type-safe enqueuing with full autocomplete and type checking.
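Since payload types flow from the handler signature (as the `sendEmail` example earlier implies), mistakes surface at compile time rather than at runtime; a small sketch:

```ts
// OK: matches sendEmail's { to: string; subject: string } payload
await sendEmail.enqueue({ to: 'ben.wyatt@pawnee.gov', subject: 'Welcome!' });

// Compile-time error: 'subject' is missing and 'body' is not a known key
// await sendEmail.enqueue({ to: 'ben.wyatt@pawnee.gov', body: 'Hi!' });
```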
Self-Hosted First
Your data never leaves your infrastructure:
- No external API calls for job enqueuing
- No vendor dashboards (unless you want them)
- No monthly bills based on job volume
- Full control over data retention and privacy
When Pidgey Shines
- New Next.js projects — background jobs in 5 minutes, zero infrastructure
- SaaS applications — scales from prototype to millions of jobs/day
- Local development — SQLite in-memory, no Docker required
- Flexible backends — choose SQLite, Postgres, or Redis based on your needs
- Self-hosted — your database, your data, no vendor lock-in
- TypeScript-first — end-to-end type safety from job definition to execution
When to Look Elsewhere
| Need | Better Fit |
|---|---|
| Multi-step sagas, branching workflows | Temporal, Inngest |
| Fully managed, zero infrastructure | QStash |
| Deep existing BullMQ investment | Stay with BullMQ |
Already have BullMQ? Pidgey's Redis adapter still simplifies configuration and adds SQLite for local dev, so it's worth considering for new projects that run alongside your existing infrastructure.
Backend Selection in Practice
Here’s how to choose the right backend for your needs:
Local Development: SQLite
```ts
import { defineConfig } from '@pidgeyjs/core';

export default defineConfig({
  adapter: 'sqlite',
  filename: './dev.db',
});
```

Zero setup. Start building jobs immediately.
Production Apps: PostgreSQL
```ts
import { defineConfig } from '@pidgeyjs/core';

export default defineConfig({
  adapter: process.env.NODE_ENV === 'production' ? 'postgres' : 'sqlite',
  connection: process.env.DATABASE_URL,
  filename: './dev.db',
});
```

Run migrations: `npx pidgey migrate`

Jobs now use your production database. No code changes needed. Many production apps run successfully on Postgres alone.
High-Throughput: Redis
Redis is not required for most applications. Consider it when throughput or latency requirements exceed what Postgres can comfortably handle.
If you're processing thousands of jobs per minute on a sustained basis and need more throughput headroom:
```ts
import { defineConfig } from '@pidgeyjs/core';

const adapter = process.env.JOB_ADAPTER || 'sqlite';

export default defineConfig(
  adapter === 'redis'
    ? { adapter: 'redis', options: { host: process.env.REDIS_HOST } }
    : adapter === 'postgres'
      ? { adapter: 'postgres', connection: process.env.DATABASE_URL }
      : { adapter: 'sqlite', filename: './dev.db' }
);
```

Set `JOB_ADAPTER=redis` in production.
What changed: One environment variable.
What stayed the same: All your jobs, all your application code.
Adapter Comparison
| Feature | SQLite | PostgreSQL | Redis |
|---|---|---|---|
| Setup | Zero dependencies | Existing Postgres DB | Redis required |
| Throughput | ~200 jobs/sec | ~180 jobs/sec* | 16,000+ jobs/sec* |
| Latency | ~5ms | ~5ms | <1ms |
| Best For | Dev, embedded apps | Production apps | High-throughput systems |
| Persistence | File or in-memory | Durable, replicated | Durable, fast |
| Concurrency | Single worker | Multi-worker | Distributed workers |
*Throughput per worker instance. PostgreSQL and Redis can scale horizontally by adding more workers.
All adapters support the same features: retries, delays, timeouts, and job lifecycle hooks.
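As a sketch of those shared features in one place (hedged: the `crmClient` helper and the second `enqueue` argument with a `delay` option are illustrative assumptions, not confirmed API; lifecycle hooks are omitted because their signatures aren't shown on this page):

```ts
// Sketch only: `crmClient` and the `{ delay }` enqueue option are
// illustrative assumptions, not confirmed Pidgey API.
export const syncCrm = pidgey.defineJob({
  name: 'sync-crm',
  handler: async (data: { accountId: string }) => {
    await crmClient.sync(data.accountId);
  },
  config: {
    retries: 3,     // retried the same way on every adapter
    timeout: 60000, // 60s timeout, enforced by every backend
  },
});

// Delayed enqueue (hypothetical options argument)
await syncCrm.enqueue({ accountId: 'acct_42' }, { delay: 5 * 60 * 1000 });
```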
Comparison with Alternatives
vs Managed Job Services
Pidgey advantages:
- Self-hosted option—no vendor dependencies
- Better local dev (no tunnels required)
- No per-job billing anxiety
Managed service advantages:
- HTTP-native (great for webhooks)
- Global edge network for low latency
- No worker process to manage
Choose Pidgey if: You want self-hosted, simple local dev, and flexible scaling.
Choose managed services if: You primarily trigger jobs via HTTP/webhooks and want zero ops.
vs Redis-Based Queues
Pidgey advantages:
- Simpler start (no Redis setup)
- Flexible backend options (SQLite, Postgres, or Redis)
- Better DX for Next.js developers
Redis-based queue advantages:
- Higher throughput from day one
- More battle-tested at extreme scale
- Rich ecosystem of plugins
Choose Pidgey if: You want to start simple and can migrate to Redis-backed queues later if needed.
Choose Redis-based queues directly if: You know you need Redis-level throughput from the start.
vs Workflow Platforms
Pidgey advantages:
- Self-hosted (your infrastructure)
- No vendor lock-in or monthly bills
- Simpler for straightforward jobs
- Better for air-gapped or compliance-heavy environments
Workflow platform advantages:
- Workflow orchestration out of the box
- Excellent observability dashboards
- Managed infrastructure (less ops work)
Choose Pidgey if: You want self-hosted, focused job processing without workflow complexity.
Choose workflow platforms if: You need complex workflows and don’t mind vendor dependency.
vs Postgres-Native Queues
Pidgey advantages:
- Modern TypeScript-first API
- Flexibility to switch backends (remove adoption risk)
- Active development and Next.js integration
- Better type safety and DX
Postgres-native queue advantages:
- More mature (battle-tested over years)
- Simpler implementation (easier to audit)
Choose Pidgey if: You value modern DX and want flexibility to scale beyond Postgres.
Choose Postgres-native queues if: You need maximum simplicity and never plan to leave Postgres.
Architecture Philosophy
Why a Separate Worker Process?
Following industry standards (Sidekiq, Celery, BullMQ), Pidgey uses a dedicated worker process:
- Isolation: Job failures don’t crash your web server
- Scalability: Scale workers independently of web servers
- Resource control: Different CPU/memory limits for job processing
- Deployment flexibility: Run workers on different infrastructure
For development, `pidgey worker dev` runs the worker locally. In production, deploy it as a separate service.
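A minimal sketch of that split (only `pidgey worker dev` is documented here; the production invocation below is an assumption for illustration):

```
# Development: run the worker alongside `next dev`
npx pidgey worker dev

# Production (illustrative): run the worker as its own long-lived service,
# e.g. a second container, scaled independently of the web app
npx pidgey worker
```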
Why File-Based Discovery?
Next.js popularized file-based patterns. Pidgey extends this to jobs:
```
app/jobs/
  send-email.ts        # Auto-discovered
  process-payment.ts   # Auto-discovered
  generate-report.ts   # Auto-discovered
```

Run `pidgey worker dev` and all jobs are automatically registered. No config files, no manual registration.
Why the Adapter Pattern?
Different storage backends have different tradeoffs. The adapter pattern gives you:
- Flexibility: Choose SQLite, Postgres, or Redis based on your needs
- No lock-in: Same code works everywhere
- Risk reduction: Can always migrate to different adapter
Your application code never knows which adapter is running—it just enqueues jobs.
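To make that concrete, each backend plugs in behind a small storage contract. The interface below is a hypothetical sketch of its shape, not Pidgey's actual internals:

```ts
// Hypothetical adapter contract - names and shapes are illustrative.
interface QueueAdapter {
  // Persist a job payload and return its id.
  enqueue(jobName: string, payload: unknown): Promise<string>;
  // Atomically claim the next runnable job, or null if none is due.
  claimNext(): Promise<{ id: string; jobName: string; payload: unknown } | null>;
  // Record outcomes so retries and lifecycle hooks can fire.
  complete(id: string): Promise<void>;
  fail(id: string, error: Error): Promise<void>;
}
```

Swapping SQLite for Postgres or Redis changes how jobs are stored and claimed, never how they are defined or enqueued.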
Real-World Use Cases
These are just examples — any async or long-running task fits naturally into Pidgey.
Email Sending Workflows
```ts
export const sendWelcomeEmail = pidgey.defineJob({
  name: 'send-welcome-email',
  handler: async (data: { userId: string }) => {
    const user = await db.users.findById(data.userId);
    await emailService.send({
      to: user.email, // e.g., ben.wyatt@pawnee.gov
      template: 'welcome',
      data: { name: user.name }, // e.g., Ben Wyatt
    });
  },
  config: { retries: 3 }, // Retry if email provider is down
});
```

Payment Processing
```ts
export const processPayment = pidgey.defineJob({
  name: 'process-payment',
  handler: async (data: { orderId: string; amount: number }) => {
    const charge = await stripe.charges.create({
      amount: data.amount,
      currency: 'usd',
      metadata: { orderId: data.orderId },
    });
    await db.orders.update(data.orderId, { paymentStatus: 'paid' });
    return { chargeId: charge.id };
  },
  config: { timeout: 30000, retries: 2 },
});
```

Report Generation
```ts
export const generateReport = pidgey.defineJob({
  name: 'generate-report',
  handler: async (data: { userId: string; month: string }) => {
    // e.g., userId: 'chris-traeger'
    const reportData = await analytics.getMonthlyReport(data.userId, data.month);
    const pdf = await pdfGenerator.create(reportData);
    await storage.upload(`reports/${data.userId}-${data.month}.pdf`, pdf);
    return { url: storage.getUrl(`reports/${data.userId}-${data.month}.pdf`) };
  },
  config: { timeout: 120000 }, // 2 minute timeout for large reports
});
```

Webhook Handling
```ts
export const processWebhook = pidgey.defineJob({
  name: 'process-webhook',
  handler: async (data: { event: string; payload: any }) => {
    switch (data.event) {
      case 'payment.succeeded':
        await handlePaymentSuccess(data.payload);
        break;
      case 'subscription.cancelled':
        await handleSubscriptionCancelled(data.payload);
        break;
    }
  },
  config: { retries: 5 }, // Retry webhook processing on failure
});
```

Next Steps
Ready to get started?
- Getting Started — Install and configure Pidgey
- API Reference — Explore the full API
- Adapters — Learn about storage backends
- Worker Configuration — Fine-tune job processing
Questions? Check our GitHub Discussions.