Adapters

Scale to infinity

Pidgey supports multiple storage backends through adapters. Choose the right one for your needs—you can always change later.

SQLite

Zero-setup storage for development and small-scale production.

SQLite runs in-process with no external dependencies. Perfect for local development, testing, and applications that don’t need distributed workers.

When to use SQLite

  • Local development
  • Testing and CI/CD
  • Embedded applications
  • Low-traffic production apps (<200 jobs/sec)
  • Single-server deployments

Configuration

import { defineConfig } from '@pidgeyjs/core';
 
export default defineConfig({
  adapter: 'sqlite',
  filename: './pidgey.db', // Persistent storage
});

In-memory mode

For tests or single-process applications only:

export default defineConfig({
  adapter: 'sqlite',
  filename: ':memory:', // No persistence, blazing fast
});
💡 In-memory databases are destroyed when the process exits and cannot be shared between processes. Only use :memory: for unit tests or if your worker runs in the same process as your app.
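
For example, a throwaway in-memory instance per test keeps unit tests fast and isolated. This is a minimal sketch: the Pidgey factory matches the configuration examples later on this page, but the enqueue call is an assumed API, shown only for illustration.

import { describe, expect, it } from 'vitest';
import { Pidgey } from '@pidgeyjs/next';

describe('welcome email job', () => {
  it('enqueues against a fresh in-memory database', async () => {
    // Destroyed when the test process exits; never shared across processes.
    const pidgey = Pidgey({ adapter: 'sqlite', filename: ':memory:' });

    // Hypothetical enqueue API; adjust to your actual job definitions.
    const job = await pidgey.enqueue('send-welcome-email', { userId: 42 });
    expect(job).toBeDefined();
  });
});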

Limitations

  • Single worker process (no distribution)
  • Lower throughput (~200 jobs/sec)
  • Polling-based (not push-based like Redis)

PostgreSQL

Production-ready storage using your existing database.

No separate infrastructure needed—jobs live alongside your application data.

When to use PostgreSQL

  • Production applications
  • Multi-worker deployments
  • When you already use Postgres
  • Need durability and ACID guarantees
  • Want to manage fewer services

Configuration

import { defineConfig } from '@pidgeyjs/core';
 
export default defineConfig({
  adapter: 'postgres',
  connection: process.env.DATABASE_URL,
});

Migrations

Run migrations before starting the worker:

npx pidgey migrate

This creates the _pidgey_jobs table in your database.

Performance

  • ~180 jobs/sec per worker (scales horizontally; see benchmarks below)
  • Supports multiple concurrent workers
  • Polling interval: 100ms (configurable; see the sketch below)
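
The exact option name isn't shown on this page, so treat the following as a sketch: it assumes a hypothetical pollInterval option (in milliseconds) alongside the connection settings.

import { defineConfig } from '@pidgeyjs/core';
 
export default defineConfig({
  adapter: 'postgres',
  connection: process.env.DATABASE_URL,
  // Hypothetical option name; check the adapter docs for the real one.
  // A longer interval trades pickup latency for fewer queries on a quiet queue.
  pollInterval: 250,
});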

Limitations

  • Higher latency than Redis (~50ms vs ~5ms)
  • Polling-based (checks queue every 100ms)
  • Requires database migrations

Redis

High-throughput Redis-backed queues (powered by BullMQ).

The Redis adapter gives you access to BullMQ-level performance when you need it.

When to use Redis

  • High-throughput requirements (>1,000 jobs/sec)
  • Need millisecond-level latency
  • Already have Redis infrastructure
  • Want distributed workers across many servers
  • Complex queue topologies (priorities, rate limiting)

Configuration

import { defineConfig } from '@pidgeyjs/core';
 
export default defineConfig({
  adapter: 'redis',
  options: {
    host: 'localhost',
    port: 6379,
    password: process.env.REDIS_PASSWORD,
  },
});
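
Some platforms hand you a single connection string instead of separate host and port variables. You can parse it into the options shape shown above with Node's built-in URL; the REDIS_URL variable name is an assumption about your environment.

import { defineConfig } from '@pidgeyjs/core';
 
// Assumes REDIS_URL looks like redis://:password@host:6379
const url = new URL(process.env.REDIS_URL ?? 'redis://localhost:6379');
 
export default defineConfig({
  adapter: 'redis',
  options: {
    host: url.hostname,
    port: Number(url.port || 6379),
    password: url.password || undefined,
  },
});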

Performance

  • 10,000+ jobs/sec throughput
  • Sub-5ms latency
  • Event-driven (no polling)
  • Scales to many concurrent workers

Limitations

  • Requires Redis setup and management
  • Additional infrastructure to maintain
  • More complex than SQLite/Postgres for simple use cases

Adapter Comparison

Feature            | SQLite         | PostgreSQL      | Redis
-------------------|----------------|-----------------|---------------------
Setup complexity   | ⭐ None        | ⭐⭐ Migrations  | ⭐⭐⭐ Redis required
Throughput         | ~200/sec       | ~180/sec*       | ~16,000+/sec*
Latency            | <10ms          | ~50ms           | ~5ms
Concurrent workers | 1              | Many            | Many
Persistence        | File or memory | Durable         | Durable
Best for           | Dev, testing   | Production apps | High-scale

*Throughput per worker instance. PostgreSQL and Redis can scale horizontally by adding more workers.

Benchmark Results

We benchmark every adapter to help you make informed scaling decisions.

Methodology: For each adapter, we enqueue 5,000 simple jobs (minimal processing, immediate return) and measure the time to completion using a single worker with concurrency=10 and a 50ms poll interval (databases) or event-driven processing (Redis). These benchmarks reflect typical scenarios with lightweight job handlers. Your actual throughput may vary based on job complexity and infrastructure.
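
A rough sketch of that measurement loop, in case you want to reproduce the numbers on your own infrastructure. The enqueue call and the drained event below are hypothetical stand-ins, not documented API; the timing logic is the part that matters.

import { Pidgey } from '@pidgeyjs/next';
 
// Measure how long one worker takes to drain 5,000 no-op jobs.
const pidgey = Pidgey({ adapter: 'sqlite', filename: './bench.db' });
const JOB_COUNT = 5_000;
 
const start = performance.now();
 
// `enqueue` and `onDrained` are assumed APIs, used for illustration only.
await Promise.all(
  Array.from({ length: JOB_COUNT }, (_, i) => pidgey.enqueue('noop', { i })),
);
await pidgey.onDrained(); // resolves once the queue is empty
 
const seconds = (performance.now() - start) / 1000;
console.log(`${(JOB_COUNT / seconds).toFixed(0)} jobs/sec`);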

Measured Throughput

Adapter  | Jobs Processed | Time (sec) | Throughput (jobs/sec) | Avg Job Time (ms)
---------|----------------|------------|-----------------------|------------------
SQLite   | 5,000          | 26.0       | ~192                  | 5.20
Postgres | 5,000          | 28.4       | ~176                  | 5.69
Redis    | 5,000          | 0.3        | ~16,500               | 0.06

What This Means

  • SQLite and Postgres have similar performance (~180-190 jobs/sec) for most workloads. The database adapters use polling (checking for jobs every 50ms), which introduces inherent latency.

  • Redis is ~86x faster than the database adapters thanks to Redis’s in-memory speed and event-driven architecture (no polling delay).

When to Scale

  • Start with SQLite for development and low-traffic apps (<200 jobs/sec)
  • Move to Postgres when you need distributed workers or already use Postgres
  • Upgrade to Redis when you need >10,000 jobs/sec or sub-10ms latency

Progressive Scaling

Start simple, scale when needed. Your job code never changes.
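
The snippet below illustrates why: a job is defined once against the core package and never references an adapter. The defineJob name is an assumption (this page documents configuration, not the job API), but whatever the real shape is, it lives entirely outside the config files that follow.

// jobs/send-welcome-email.ts
// `defineJob` is a hypothetical API, used for illustration only.
import { defineJob } from '@pidgeyjs/core';
 
export const sendWelcomeEmail = defineJob(
  'send-welcome-email',
  async ({ userId }: { userId: number }) => {
    // ...look up the user and send the email.
    // Nothing here knows whether SQLite, Postgres, or Redis is underneath.
  },
);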

Development: SQLite

pidgey.config.ts
import { defineConfig } from '@pidgeyjs/core';
 
export default defineConfig({
  adapter: 'sqlite',
  filename: './dev.db',
});

Zero dependencies. Run pidgey worker dev and start building.

Production: PostgreSQL

pidgey.config.ts
import { defineConfig } from '@pidgeyjs/core';
 
export default defineConfig({
  adapter: 'postgres',
  connection: process.env.DATABASE_URL,
});

Use your existing database. Run pidgey migrate once, then deploy.

Scale: Redis

pidgey.config.ts
import { defineConfig } from '@pidgeyjs/core';
 
export default defineConfig({
  adapter: 'redis',
  options: {
    host: process.env.REDIS_HOST,
    port: 6379,
  },
});

Add Redis, change one line of config. Jobs stay the same.

Environment-based Configuration

Use environment variables to switch adapters:

lib/pidgey.ts
import { Pidgey } from '@pidgeyjs/next';
 
export const pidgey = Pidgey(
  process.env.NODE_ENV === 'production'
    ? {
        adapter: 'postgres',
        connection: process.env.DATABASE_URL!,
      }
    : {
        adapter: 'sqlite',
        filename: './dev.db',
      }
);

Or use an explicit adapter environment variable:

pidgey.config.ts
import { defineConfig } from '@pidgeyjs/core';
 
const adapter = process.env.JOB_ADAPTER || 'sqlite';
 
export default defineConfig(
  adapter === 'redis'
    ? {
        adapter: 'redis',
        options: {
          host: process.env.REDIS_HOST!,
          port: Number(process.env.REDIS_PORT || 6379),
        },
      }
    : adapter === 'postgres'
      ? {
          adapter: 'postgres',
          connection: process.env.DATABASE_URL!,
        }
      : {
          adapter: 'sqlite',
          filename: process.env.SQLITE_FILE || './pidgey.db',
        }
);

Then switch adapters via environment:

# Development
JOB_ADAPTER=sqlite npm run dev
 
# Production with Postgres
JOB_ADAPTER=postgres DATABASE_URL=postgres://... npm start
 
# High-scale with Redis
JOB_ADAPTER=redis REDIS_HOST=redis.internal npm start
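
One caveat with a free-form JOB_ADAPTER: a typo like JOB_ADAPTER=postgress silently falls through to SQLite in the config above. A small guard at the top of pidgey.config.ts makes that loud instead; this is plain TypeScript, nothing Pidgey-specific.

const VALID_ADAPTERS = ['sqlite', 'postgres', 'redis'] as const;
type Adapter = (typeof VALID_ADAPTERS)[number];
 
const adapter = process.env.JOB_ADAPTER ?? 'sqlite';
 
if (!VALID_ADAPTERS.includes(adapter as Adapter)) {
  // Fail fast instead of silently falling back to SQLite.
  throw new Error(
    `Unknown JOB_ADAPTER "${adapter}"; expected one of: ${VALID_ADAPTERS.join(', ')}`,
  );
}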

Migration Guide

SQLite → PostgreSQL

  1. Install the Postgres adapter:

     npm install @pidgeyjs/postgres

  2. Update your Pidgey config:

     const pidgey = Pidgey({
       adapter: 'postgres',
       connection: process.env.DATABASE_URL,
     });

  3. Run migrations:

     npx pidgey migrate

  4. Deploy. Your jobs work unchanged.

PostgreSQL → Redis

  1. Install the Redis adapter:

     npm install @pidgeyjs/redis

  2. Update your Pidgey config:

     const pidgey = Pidgey({
       adapter: 'redis',
       options: {
         host: process.env.REDIS_HOST,
         port: 6379,
       },
     });

  3. Deploy. Jobs migrate automatically.

No code changes to your job definitions or handlers.

Next Steps