FAQ
Common questions about Pidgey, organized by topic.
Setup & Configuration
How do I run migrations?
Migrations run automatically when the client connects. To run manually:
```bash
npx pidgey migrate
```

Can I use Pidgey without Next.js?
Yes! You can use @pidgeyjs/core directly:
```ts
import { PidgeyClient } from '@pidgeyjs/core';

const client = new PidgeyClient({ adapter: yourAdapter });
```
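
For a slightly fuller sketch: assuming the core client exposes the same `defineJob`/`enqueue` API used in the examples below (not confirmed here), standalone usage could look like this; the `sendEmail` job and its payload are hypothetical.

```ts
import { PidgeyClient } from '@pidgeyjs/core';

const client = new PidgeyClient({ adapter: yourAdapter }); // yourAdapter as above

// Assumption: the core client exposes defineJob/enqueue like the examples below.
const sendEmail = client.defineJob({
  name: 'send-email',
  handler: async (data: { to: string }) => {
    // ... deliver the email
  },
});

await sendEmail.enqueue({ to: 'user@example.com' });
```
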
How do I switch adapters for production?

```ts
const pidgey = Pidgey({
  adapter: process.env.NODE_ENV === 'production' ? 'postgres' : 'sqlite',
  connection: process.env.DATABASE_URL,
  filename: './dev.db',
});
```

Your jobs work identically on any adapter. See the Adapters guide for more.
Workers
Do I need a separate worker process?
Yes. Pidgey uses a dedicated worker process. Benefits:
- Job failures don’t crash your web server
- Scale workers independently
- Use different resource limits per job type
Run locally:
```bash
npx pidgey worker dev
```

In production:
```bash
npx pidgey worker start
```

How do I scale workers?
Horizontal scaling: Run multiple worker instances pointing to the same database.
```bash
# Multiple servers or processes
npx pidgey worker start --concurrency 50
```

- PostgreSQL and Redis support multiple workers
- SQLite is single-worker only
Worker not processing jobs?
Check:
- Worker is running (`npx pidgey worker dev`)
- Worker is connected to the same database as your app
- Jobs are discovered (check logs for `Found X jobs`)
Jobs
How do I prevent duplicate jobs?
Use an idempotencyKey when enqueuing jobs. If the same key is used again,
Pidgey will not create a duplicate job.
```ts
await myJob.enqueue(data, {
  idempotencyKey: 'order:123',
});
```
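
For example, calling `enqueue` twice with the same key should produce a single job, per the behavior described above; a minimal sketch reusing the hypothetical `myJob` and `data` from the snippet:

```ts
// First call creates the job.
await myJob.enqueue(data, { idempotencyKey: 'order:123' });

// Second call with the same key is deduplicated; no new job is created.
await myJob.enqueue(data, { idempotencyKey: 'order:123' });
```
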
How do I retry failed jobs?
Jobs retry automatically based on their configuration:
```ts
export const myJob = pidgey.defineJob({
  name: 'my-job',
  handler: async (data) => {
    /* ... */
  },
  config: {
    retries: 5, // Retry up to 5 times
  },
});
```
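
Assuming, as with most job queues, that a thrown error marks the attempt as failed and schedules the next retry, a handler opts into retries simply by letting errors propagate. A sketch with a hypothetical job and URL:

```ts
export const syncOrder = pidgey.defineJob({
  name: 'sync-order',
  handler: async (data: { orderId: string }) => {
    const res = await fetch(`https://api.example.com/orders/${data.orderId}`);
    if (!res.ok) {
      // Throwing fails this attempt; with retries: 5 it will be attempted again.
      throw new Error(`Upstream returned ${res.status}`);
    }
  },
  config: {
    retries: 5,
  },
});
```
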
Manual retry:

```ts
await pidgey.retryJob(jobId);    // Single job
await pidgey.retryAllFailed();   // All failed jobs
```

Via CLI:
```bash
npx pidgey jobs retry --id job_123   # Single job
npx pidgey jobs retry --failed       # All failed jobs
```

How do I schedule delayed jobs?
```ts
await myJob.enqueue(data, { delay: 3600000 }); // Run in 1 hour
```
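
Because `delay` is a duration in milliseconds, running a job at a specific wall-clock time means computing the offset yourself; a minimal sketch reusing the hypothetical `myJob`:

```ts
// Run at 09:00 UTC tomorrow: convert the target time into a millisecond delay.
const runAt = new Date();
runAt.setUTCDate(runAt.getUTCDate() + 1);
runAt.setUTCHours(9, 0, 0, 0);

await myJob.enqueue(data, { delay: runAt.getTime() - Date.now() });
```
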
What happens when a job fails all retries?
- Job status is set to `failed`
- Manage failed jobs via CLI:
```bash
npx pidgey jobs list --status failed
npx pidgey jobs retry --id <job_id>
npx pidgey jobs delete --status failed
```

Can I cancel a job?
```ts
await pidgey.cancelJob(jobId);
```

The Redis adapter does not support canceling jobs. Use `deleteJob` instead.
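
A sketch of the Redis fallback, assuming `deleteJob` takes a job id the same way `cancelJob` and `retryJob` do (not confirmed above):

```ts
// Redis adapter: remove the job instead of canceling it.
await pidgey.deleteJob(jobId);
```
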
Adapters
Which adapter should I use?
| Adapter | Best For |
|---|---|
| SQLite | Development, embedded apps |
| PostgreSQL | Production apps, existing Postgres |
| Redis | High-throughput, Redis available |
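
For reference, the adapter is chosen in the same config object shown under Setup & Configuration. The SQLite and Postgres keys below come from that example; the `'redis'` adapter name and its use of `connection` for the URL are assumptions:

```ts
// SQLite: development and embedded apps.
const dev = Pidgey({ adapter: 'sqlite', filename: './dev.db' });

// PostgreSQL: production apps or an existing Postgres database.
const prod = Pidgey({ adapter: 'postgres', connection: process.env.DATABASE_URL });

// Redis: high throughput. Assumption: adapter name 'redis' and URL via `connection`.
const redis = Pidgey({ adapter: 'redis', connection: process.env.REDIS_URL });
```
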
Can I migrate between adapters?
Yes—your code stays the same. Important: drain queues first to avoid losing in-flight jobs.
Adapter performance (per worker)
| Adapter | Jobs/sec | Notes |
|---|---|---|
| SQLite | ~200 | Single worker only |
| PostgreSQL | ~180 | Scales horizontally |
| Redis | ~16,000+ | High throughput, low latency |
Troubleshooting
“no such table: _pidgey_jobs”
Migrations haven’t run. Either:
- Let auto-migration run on first connection
- Run manually: `npx pidgey migrate`
Jobs stuck in “pending”
- Worker not running
- Worker connected to different database
- Queue name mismatch
Memory issues / high load
Reduce worker concurrency:
```bash
npx pidgey worker start --concurrency 10
```

Common mistakes
- Worker not running or connected to wrong DB
- Jobs directory missing or misconfigured
- Missing environment variables
Check worker logs for detailed errors: `npx pidgey worker dev` or platform-specific logs.