FAQ
Common questions about Pidgey.
Setup & Configuration
How do I run migrations?
Migrations run automatically when the client connects. To run manually:
```bash
npx pidgey migrate
```

Can I use Pidgey without Next.js?
Yes! Use @pidgeyjs/core directly:
```ts
import { PidgeyClient } from '@pidgeyjs/core';

const client = new PidgeyClient({ adapter: yourAdapter });
```

How do I switch adapters for production?
```ts
const pidgey = Pidgey({
  adapter: process.env.NODE_ENV === 'production' ? 'postgres' : 'sqlite',
  connection: process.env.DATABASE_URL,
  filename: './dev.db',
});
```

Your jobs work identically on any adapter.
Workers
Do I need a separate worker process?
Yes. Unlike serverless solutions, Pidgey uses a dedicated worker process. This gives you:
- Isolation: job failures don’t crash your web server
- Independent scaling: add workers without touching your web tier
- Separate resource limits for jobs
Run with npx pidgey worker dev locally or npx pidgey worker start in production.
How do I scale workers?
Horizontal scaling: Run multiple worker instances pointing to the same database.
```bash
# On multiple servers
npx pidgey worker start --concurrency 50
```

The PostgreSQL and Redis adapters support multiple workers. SQLite is single-worker only.
Why aren’t my jobs being processed?
- Worker not running: start it with npx pidgey worker dev
- Wrong database: ensure the worker uses the same DATABASE_URL as your app
- Jobs not discovered: check the worker logs for “Found X jobs”
Jobs
How do I retry failed jobs?
Jobs retry automatically based on config:
```ts
export const myJob = pidgey.defineJob({
  name: 'my-job',
  handler: async (data) => {
    /* ... */
  },
  config: {
    retries: 5, // Retry up to 5 times
  },
});
```
Or, you can retry failed jobs manually:
```ts
await pidgey.retryJob(jobId);

// Retry all failed jobs
await pidgey.retryAllFailed();
```

Or via CLI:
```bash
npx pidgey jobs retry -i job_123

# Retry all failed jobs
npx pidgey jobs retry --failed
```

How do I schedule delayed jobs?
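Pass a delay in milliseconds when enqueuing. Raw millisecond counts are easy to misread, so a tiny helper can make the units explicit — a sketch only; hours() below is an illustrative helper, not part of Pidgey's API:

```typescript
// Convert hours to the millisecond delay values used when enqueuing.
// hours() is an illustrative helper, not part of Pidgey.
const hours = (n: number): number => n * 60 * 60 * 1000;
```

With it, a one-hour delay reads as delay: hours(1) instead of a bare 3600000.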
```ts
await myJob.enqueue(data, { delay: 3600000 }); // Run in 1 hour
```

What happens when a job fails all retries?
The job’s status is set to failed. You can view and manage failed jobs:
```bash
npx pidgey jobs list --status failed
npx pidgey jobs retry --id <job_id>
npx pidgey jobs delete --status failed
```

Can I cancel a job?
```ts
await pidgey.cancelJob(jobId);
```

The Redis adapter does not support canceling jobs. Use deleteJob instead.
Or via CLI:
```bash
npx pidgey jobs cancel --id job_123
```

How do I check job status?
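A job's status is one of four strings. If you are on TypeScript, modeling it as a union type keeps status checks honest — a sketch, assuming you want your own typings (Pidgey may ship its own):

```typescript
// The four statuses Pidgey reports, as a union type (sketch, not official typings).
type JobStatus = 'pending' | 'active' | 'completed' | 'failed';

// A job in a terminal state will not be picked up by a worker again.
const isTerminal = (status: JobStatus): boolean =>
  status === 'completed' || status === 'failed';
```

A check like isTerminal is handy when polling a job until it settles.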
```ts
const job = await pidgey.getJob(jobId);
console.log(job.status); // 'pending' | 'active' | 'completed' | 'failed'
```

Adapters
Which adapter should I use?
| Adapter | Best For |
|---|---|
| SQLite | Development, embedded apps |
| PostgreSQL | Production apps, existing Postgres |
| Redis | High-throughput, Redis available |
Can I migrate between adapters?
Yes, your code stays the same. Jobs still in the queue during a migration will be lost, so drain your queues first in production.
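Draining can be scripted as a simple poll loop. A minimal sketch, assuming you supply the pending-job count yourself — countPending below is an assumption, not a Pidgey API:

```typescript
// Poll a caller-supplied pending-job counter until the queue is empty.
// countPending is a hypothetical callback, e.g. a COUNT(*) on the jobs table.
async function drainQueue(
  countPending: () => Promise<number>,
  intervalMs = 1000,
): Promise<void> {
  while ((await countPending()) > 0) {
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```

Stop enqueuing new jobs first, run this until it returns, then point your workers at the new adapter.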
What’s the throughput difference?
- SQLite: ~200 jobs/sec (single worker)
- PostgreSQL: ~180 jobs/sec per worker (scales horizontally)
- Redis: ~16,000+ jobs/sec per worker
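These figures make rough capacity planning easy. A back-of-the-envelope sketch (an estimate only; real throughput depends on job duration and hardware):

```typescript
// Rough seconds needed to clear a backlog at a given per-worker throughput.
function drainSeconds(backlog: number, jobsPerSec: number, workers = 1): number {
  return backlog / (jobsPerSec * workers);
}
```

For example, 100,000 queued jobs on PostgreSQL with 4 workers at ~180 jobs/sec each clears in roughly 139 seconds.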
Troubleshooting
“no such table: _pidgey_jobs”
Migrations haven’t run. Either:
- Let auto-migration run on first connection
- Run manually: npx pidgey migrate
Jobs stuck in “pending”
- Worker not running
- Worker connected to different database
- Queue name mismatch
Memory issues
Reduce worker concurrency:
```bash
npx pidgey worker start --concurrency 10
```