Rate Limit Storage Selection: PostgreSQL vs Redis

Tadashi Shigeoka ·  Mon, January 5, 2026

When implementing rate limiting for the Giselle API, we considered whether to use PostgreSQL or Redis.

In the previous article, "Rate Limit Algorithm Comparison and Case Studies," we decided to adopt the Fixed Window Counter algorithm. This time, we’ll explain why we chose PostgreSQL as the storage layer.

Implementation Example at Giselle

In the PR "feat(api/apps/run): add team-scoped rate limiting to App Run API" (#2603), we implemented a Fixed Window Counter backed by PostgreSQL.
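To make the approach concrete, here is a minimal sketch of what a team-scoped Fixed Window Counter on PostgreSQL can look like. The table, column, and function names are illustrative assumptions for this post, not the actual identifiers from the PR.

// Minimal sketch of a Fixed Window Counter backed by PostgreSQL.
// Table and column names (app_run_rate_limits, team_id, window_start,
// request_count) are hypothetical, not taken from PR #2603.
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

export async function enforceRateLimit(
  teamId: string,
  limit: number,
  windowMs: number,
): Promise<{ allowed: boolean; remaining: number }> {
  // Align the window start to a fixed boundary (e.g. the current minute).
  const windowStart = new Date(Math.floor(Date.now() / windowMs) * windowMs);

  // A single atomic UPSERT creates the window row on first use and
  // increments it afterwards, so concurrent requests cannot lose updates.
  const { rows } = await pool.query(
    `INSERT INTO app_run_rate_limits (team_id, window_start, request_count)
     VALUES ($1, $2, 1)
     ON CONFLICT (team_id, window_start)
     DO UPDATE SET request_count = app_run_rate_limits.request_count + 1
     RETURNING request_count`,
    [teamId, windowStart],
  );

  const count: number = rows[0].request_count;
  return { allowed: count <= limit, remaining: Math.max(0, limit - count) };
}

Because the increment and the read happen in one statement, the common path needs no explicit transaction or locking.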

PostgreSQL vs Redis Rate Limit Comparison

Aspect           | PostgreSQL             | Redis
-----------------|------------------------|----------------------------------
Latency          | 1-10 ms                | 0.1-1 ms
Throughput       | Thousands of req/s     | Hundreds of thousands of req/s
Persistence      | Persistent by default  | Configurable (can be volatile)
Operational cost | Can use existing DB    | Additional infrastructure needed
Atomicity        | Transactions           | Single-threaded command execution
Scaling          | Primarily vertical     | Easy horizontal scaling

Benefits of Choosing PostgreSQL

Avoiding Infrastructure Complexity

  • Already using PostgreSQL
  • No additional Redis operation/monitoring needed
  • No additional failure points

Data Consistency

  • Foreign key constraints with teams and apps
  • Automatic cleanup with CASCADE DELETE (see the schema sketch after this list)
  • Transaction isolation
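As a rough illustration of the first two points, the counter table could be defined along these lines. The DDL is a hypothetical sketch; the real table in the codebase may be named and shaped differently.

// Hypothetical migration for the counter table used in the earlier sketch.
// The foreign key with ON DELETE CASCADE removes a team's counters
// automatically when the team itself is deleted.
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

await pool.query(`
  CREATE TABLE IF NOT EXISTS app_run_rate_limits (
    team_id       text        NOT NULL REFERENCES teams (id) ON DELETE CASCADE,
    window_start  timestamptz NOT NULL,
    request_count integer     NOT NULL DEFAULT 0,
    PRIMARY KEY (team_id, window_start)
  )
`);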

Drawbacks of PostgreSQL

Latency

  • +1-5ms DB round trip per request
  • However, the API applying rate limits is already DB-dependent (authentication, team retrieval, etc.)

Bottleneck Under High Load

  • Could become problematic at tens of thousands req/s
  • Risk of connection pool exhaustion

Table Bloat

  • Old windows accumulate
  • Periodic cleanup required (a sample cleanup query is sketched below)
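The cleanup job does not need to be elaborate. Assuming the hypothetical table from the earlier sketch, something like the following, run on a schedule (cron, a background worker, etc.), is enough:

// Hypothetical periodic cleanup: delete windows that ended a while ago.
// The table name matches the earlier sketch, not necessarily the real schema.
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

await pool.query(
  `DELETE FROM app_run_rate_limits
   WHERE window_start < now() - interval '1 hour'`,
);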

Why PostgreSQL is Appropriate for This Project

1. Current Rate Limit Settings

Plan       | Limit        | Equivalent
-----------|--------------|------------
Pro        | 300 req/min  | 5 req/s
Team       | 600 req/min  | 10 req/s
Enterprise | 3000 req/min | 50 req/s

Even the Enterprise limit works out to roughly 50 req/s, far below the thousands of requests per second PostgreSQL can sustain, so PostgreSQL comfortably handles this scale.
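For illustration, these limits could live in a simple per-plan config. The identifiers below are made up for this post; only the values come from the table above.

// Hypothetical per-plan limits; values taken from the table above.
const RATE_LIMITS = {
  pro:        { limit: 300,  windowMs: 60_000 }, // ≈ 5 req/s
  team:       { limit: 600,  windowMs: 60_000 }, // ≈ 10 req/s
  enterprise: { limit: 3000, windowMs: 60_000 }, // ≈ 50 req/s
} as const;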

2. Existing API Request Flow

Below is a conceptual code sample of the request flow.

// Multiple DB accesses already occur per request
await authenticate()        // DB: API key verification
await enforceRateLimit()    // DB: Rate limit enforcement ← Added this time
await loadApp()             // DB: Load app information
await executeTask()         // DB: Create and execute task

As shown, the impact of adding one query is relatively small.

3. Operational Consistency

  • Leverage existing monitoring and backup infrastructure
  • No learning cost for new dependencies

When to Consider Migrating to Redis

  • Traffic increases significantly (tens of thousands req/s)
  • Latency requirements become strict (<10ms required)
  • Need for multi-region distribution
  • Need for more advanced rate limiting (Sliding Window, etc.)

Summary

We determined that PostgreSQL is fine for now.

  • Aligns with the project’s “simplicity first” philosophy
  • Performance is sufficient at current scale
  • If it becomes a bottleneck in the future, we can migrate by simply swapping the rate limit check function for a Redis-backed implementation while keeping the same interface (sketched below)
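Roughly speaking, the swap point could look like this. The interface and the Redis-backed class are hypothetical sketches (using ioredis), not code from the repository:

// Hypothetical interface for the rate limit check; the real function in the
// codebase may have a different shape.
import Redis from "ioredis";

interface RateLimitStore {
  enforce(
    teamId: string,
    limit: number,
    windowMs: number,
  ): Promise<{ allowed: boolean; remaining: number }>;
}

// A Redis-backed implementation could satisfy the same interface using
// INCR + PEXPIRE on a per-team, per-window key.
class RedisRateLimitStore implements RateLimitStore {
  constructor(
    private redis = new Redis(process.env.REDIS_URL ?? "redis://localhost:6379"),
  ) {}

  async enforce(teamId: string, limit: number, windowMs: number) {
    const windowStart = Math.floor(Date.now() / windowMs) * windowMs;
    const key = `ratelimit:${teamId}:${windowStart}`;

    const count = await this.redis.incr(key);
    if (count === 1) {
      // First request in this window: expire the key when the window ends.
      await this.redis.pexpire(key, windowMs);
    }
    return { allowed: count <= limit, remaining: Math.max(0, limit - count) };
  }
}

Because callers would depend only on the interface, the PostgreSQL implementation in place today could be replaced without touching the API handlers.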

We believe optimizing only when needed (YAGNI principle) is the appropriate approach.

That’s all from the Gemba on comparing PostgreSQL and Redis for rate limit storage selection.