Redis

L1 — Multi-Modal Storage · Cache · Free / $5+/mo · RSALv2 / SSPL

In-memory key-value store with rich data structures (strings, hashes, lists, sets, sorted sets, streams). License: RSALv2 / SSPL since March 2024 (not OSI-approved). The OSI-approved OSS path is Valkey (Linux Foundation fork at Redis 7.2.4, BSD-3-Clause). Redis remains widely deployed: pick Redis when you need vendor support or are already on Redis Cloud / Redis Enterprise; otherwise pick Valkey for a predictable open-source posture.

AI Analysis

Redis is the in-memory key-value store that defined the modern cache category for over a decade, but Redis Inc.'s March 2024 relicense to RSALv2/SSPL removed it from the OSI-approved OSS column. The technical capabilities remain best-in-class (sub-millisecond ops, rich data structures, mature client ecosystem), but the OSS market default for Redis-compatible caching is now Valkey (Linux Foundation fork at Redis 7.2.4). Pick Redis when you need vendor support, are already on Redis Cloud or Redis Enterprise, or specifically want the modules in Redis Stack. Pick Valkey for license predictability without re-architecting.

Trust Before Intelligence

Redis delivers the speed and simplicity that agent stacks need at L1 cache, but the trust posture has bifurcated since March 2024. From a Trust Before Intelligence lens, the access control model is RBAC-only via ACL, which is fine for service-to-service caching but insufficient for fine-grained ABAC enforcement (that must live at L5). The bigger trust signal is the licensing change itself: a unilateral relicense to source-available terms is a category of vendor risk that affects every BSL/SSPL/RSAL choice. Teams that picked Redis years ago for OSS predictability got changed terms without consultation. The Valkey fork exists precisely because that risk materialized.

INPACT Score

23/36
I — Instant
6/6

Sub-millisecond reads and writes with in-memory storage. No cold start once the process is running. Tens of thousands of ops per second per node, millions across a cluster. Best-in-class on the I dimension. Cap rule N/A.

N — Natural
2/6

Redis commands (GET, SET, ZADD, HSET, XADD) are precise and well-documented but not natural language. Agents need a translation layer above Redis to express semantic intent. Cap rule N/A.
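
The translation-layer point can be made concrete. A minimal sketch, assuming a deliberately tiny intent grammar (the phrases and key names are hypothetical; the emitted command tuples mirror real Redis SET/GET/DEL syntax):

```python
import re

def intent_to_command(intent: str):
    """Translate a constrained natural-language intent into a Redis command tuple.

    The intent grammar here is illustrative only; a production layer would be
    far richer. The output mirrors real Redis commands (SET key value EX ttl).
    """
    m = re.match(r"remember (\S+) as (\S+) for (\d+) seconds", intent)
    if m:
        key, value, ttl = m.groups()
        return ("SET", key, value, "EX", int(ttl))
    m = re.match(r"recall (\S+)", intent)
    if m:
        return ("GET", m.group(1))
    m = re.match(r"forget (\S+)", intent)
    if m:
        return ("DEL", m.group(1))
    raise ValueError(f"unmapped intent: {intent!r}")
```

The point of the N=2 score: this mapping layer is something the stack builder writes, not something Redis provides.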

P — Permitted
3/6

Redis ACL (since 6.0) supports per-user command and key-pattern restrictions, which is RBAC. There is no native ABAC: no time-of-day, purpose-of-use, or attribute-based decisioning. Cap rule applied: RBAC-only without ABAC caps at 3.

A — Adaptive
4/6

Cluster mode for horizontal sharding, Sentinel for HA, every major cloud has a managed Redis-compatible offering (ElastiCache, MemoryStore, Azure Cache for Redis), self-hostable on any infrastructure. Truly multi-cloud and multi-deployment. Cap rule N/A.

C — Contextual
4/6

Eight built-in data structures (strings, hashes, lists, sets, sorted sets, streams, HyperLogLog, bitmaps), pub/sub, transactions via MULTI/EXEC, Lua scripting. Strong contextual richness for a cache. Lower than database peers because Redis lacks first-class lineage and FDW-style cross-system integration.

T — Transparent
4/6

MONITOR (real-time command stream), slowlog, INFO (operational stats), keyspace notifications give solid operational transparency. Cost-per-query attribution is N/A for in-memory cache (the cost model is provisioned capacity, not per-query). Cap rule N/A.
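
INFO output is plain text ('# Section' headers followed by key:value lines), so derived metrics such as cache hit rate take only a few lines of parsing. A sketch of that wire format (the format is real; the sample payload below is illustrative):

```python
def parse_info(raw: str) -> dict:
    """Parse Redis INFO output into {section: {key: value}}.

    INFO emits '# Section' headers followed by 'key:value' lines,
    separated by CRLF. Values are left as strings; callers cast as needed.
    """
    sections, current = {}, None
    for line in raw.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("#"):
            current = line[1:].strip()
            sections[current] = {}
        elif ":" in line and current is not None:
            key, _, value = line.partition(":")
            sections[current][key] = value
    return sections

# Illustrative payload; real INFO output has many more sections and fields.
sample = (
    "# Server\r\nredis_version:7.2.4\r\n\r\n"
    "# Stats\r\nkeyspace_hits:100\r\nkeyspace_misses:25\r\n"
)
info = parse_info(sample)
hits = int(info["Stats"]["keyspace_hits"])
misses = int(info["Stats"]["keyspace_misses"])
hit_rate = hits / (hits + misses)  # the A3 "cache hit rate IS the metric" signal
```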

GOALS Score

16/25
G — Governance
2/6

G1=N (no ABAC, ACL is RBAC-only with sub-10ms enforcement but cap rule applies), G2=N (slowlog isn't full access audit by default), G3=N (cache primitive, not workflow tool), G4=N (no model versioning concept), G5=N, G6=N (no built-in compliance mapping). Yes count 1/6 -> bucketed 2.

O — Observability
2/6

O1=Y (INFO/MONITOR integrate with Datadog and Prometheus), O2=N (no native distributed tracing), O3=N (LLM cost tracking N/A for cache), O4=Y (MTTD with proper monitoring), O5=N, O6=N. Yes count 2/6 -> bucketed 2.

A — Availability
5/6

A1=Y (sub-ms p95), A2=Y (in-memory is real-time), A3=Y (cache hit rate IS the metric), A4=Y (HA via cluster + Sentinel), A5=Y (Redis runs at hyperscaler scale), A6=Y (MGET batch operations). Yes count 6/6 -> bucketed 5. Same as Valkey, since the engine is the same.

L — Lexicon
2/6

L1=N, L2=N, L3=N, L4=N, L5=Y (key namespacing as terminology alignment, lenient interpretation), L6=N. Yes count 1/6 -> bucketed 2.

S — Solid
5/6

S1=Y (deterministic, cache returns what was stored), S2=Y (typed values, no missing fields by design), S3=Y (cluster replication), S4=Y (typed key-value, no schema mismatch), S5=Y (replication + AOF persistence as quality gates), S6=Y (slowlog as anomaly detection signal). Yes count 6/6 -> bucketed 5.

AI-Identified Strengths

  • + Mature ecosystem. Every major language has a battle-tested Redis client, every major framework has Redis-backed cache primitives, runbooks for most operational scenarios already exist
  • + Sub-millisecond performance with cluster mode for sharding and Sentinel for HA
  • + Rich data structures (sorted sets, streams, HyperLogLog) that go well beyond simple key-value, useful for rate-limiting, leaderboards, time-series buffers, and pub/sub
  • + Best-in-class operational visibility via MONITOR, slowlog, and INFO; strong support in Datadog, New Relic, Grafana exporters
  • + Mature managed offerings on every cloud (AWS ElastiCache, GCP MemoryStore, Azure Cache for Redis, Redis Cloud, Redis Enterprise Cloud) with HIPAA BAA and SOC 2 available
  • + Redis Stack adds vector search, JSON, graph, and time-series modules for teams that want a single in-memory platform across multiple data shapes
  • + Active commercial support from Redis Inc. for enterprise needs and SLAs

AI-Identified Limitations

  • - License: RSALv2 / SSPL since March 2024 (not OSI-approved). Source-available, with restrictions on competing managed services. The OSS-default path for the core engine is Valkey
  • - RBAC only via ACL, no native ABAC. ABAC enforcement must happen at L5
  • - Replication is asynchronous, so acknowledged writes can be lost on failover; WAIT narrows the window, but neither cluster mode nor Sentinel provides strict consistency
  • - No native distributed tracing. Observability beyond INFO/MONITOR comes from app instrumentation
  • - Compliance certifications are deployment-dependent. The Redis project itself doesn't sign BAAs; managed Redis Cloud and Redis Enterprise Cloud do
  • - Redis Stack's modules (RediSearch, RedisJSON, RedisGraph, RedisTimeSeries) carry the same RSALv2/SSPL constraints. They are not in Valkey
  • - Persistence (AOF/RDB) requires careful tuning for high-write workloads; RDB snapshots can pause production briefly

Industry Fit

Best suited for

  • Teams already on Redis Cloud or Redis Enterprise Cloud with vendor support contracts in place
  • RAG pipelines using cache for embedding lookups, session state, or rate-limit counters where module-level features (vector search) are needed
  • High-throughput stateless web services where sub-millisecond cache latency drives the architecture
  • Workloads that genuinely need Redis Stack modules (vector, JSON, graph, time-series) and the team has accepted the license trade-off
  • Stack Builder choices where ABAC is enforced at L5 and Redis is purely an L1 caching layer

Compliance certifications

The Redis project itself does not sign BAAs and holds no third-party audit reports. Compliance for regulated workloads comes from managed deployments: Redis Cloud (HIPAA BAA, SOC 2 Type II, ISO 27001, PCI DSS), Redis Enterprise Cloud (same plus FedRAMP-eligible variants), AWS ElastiCache for Redis (HIPAA BAA, SOC 2, FedRAMP), GCP MemoryStore for Redis (HIPAA BAA, SOC 2). Self-hosted Redis on a FedRAMP-authorized substrate (AWS GovCloud, Azure Gov) inherits substrate compliance for the infrastructure layer but the Redis project does not provide BAAs directly.

Use with caution for

  • Organizations with strict OSI-only licensing requirements. Default to Valkey instead
  • Teams expecting cache durability without configuring AOF/RDB
  • Strict-consistency requirements without Sentinel or careful client config
  • Healthcare or government workloads using the OSS Redis distribution directly without a BAA-signing managed deployment

AI-Suggested Alternatives

Valkey

Choose Valkey when you want the same engine without RSALv2/SSPL license risk. Drop-in replacement for Redis OSS core. Major hyperscalers (AWS ElastiCache, GCP MemoryStore, Oracle Cloud) all run Valkey now. Redis wins if you need Redis Stack modules or vendor support; Valkey wins on license predictability.

Memcached

Choose Memcached for the simplest possible distributed key-value cache: no clustering, no persistence, no rich types. Redis wins on data structures, HA, and module ecosystem. Memcached wins on operational simplicity for stateless cache use cases.

AWS MemoryDB for Redis

Choose AWS MemoryDB for AWS-native deployments wanting durable Redis-compatible storage with strong consistency. Redis wins on cross-cloud flexibility; MemoryDB wins on durability and managed operational burden inside AWS.

Redis Stack

Redis Stack is the same engine plus vector/JSON/graph/time-series modules at L4. Pick Redis (this row) for plain cache; pick Redis Stack when you need the modules and accept the RSALv2/SSPL trade-off there too.


Integration in 7-Layer Architecture

Role: L1 in-memory cache and message broker substrate. Provides key-value lookups, pub/sub, streams, and atomic data structures (sets, sorted sets, hashes) for agent stacks. Sits between L1 primary stores and downstream agent runtimes.

Upstream: Receives writes from L2 streaming (CDC results), L4 retrieval (cached embeddings), L7 orchestration (task queues), and direct application caches. Configuration via redis.conf or CONFIG SET commands.

Downstream: Serves cached reads to L4 retrieval (hot-data lookups), L7 inter-agent messaging (pub/sub fanout), L5 governance (ABAC decision cache, rate-limit counters), and direct application reads. Wire-protocol compatible with Valkey, AWS MemoryDB, Azure Cache, GCP MemoryStore.

⚡ Trust Risks

high Future unilateral license changes. RSALv2/SSPL is not the company's first relicense move

Mitigation: If your organization has a strict OSI-only requirement, default to Valkey at L1. If you stay on Redis, document the license posture, monitor Redis Inc.'s announcements, and have a Valkey migration plan you've actually tested (clients are compatible at the wire-protocol level).

medium Team picks Redis Stack expecting the full feature set without auditing the RSALv2 trade-off

Mitigation: If you use Redis Stack modules (FT.SEARCH, JSON.SET, GRAPH.QUERY, TS.ADD), you are accepting RSALv2/SSPL for those modules too. For vector search, evaluate pgvector at L1 or a dedicated vector DB at L4. For JSON, consider Postgres JSONB. For time-series, TimescaleDB.

high Production deployed on a single node. Restart loses all cache state, agent stack hits cold-start latency until warm

Mitigation: Use cluster mode (3+ nodes) for sharding OR Sentinel for HA. Test failover with a planned reboot and measure RTO. Don't run production agent stacks against a single Redis instance.

medium ACL not enabled. Using the default user with a shared password as the only access control

Mitigation: Enable Redis ACLs (available since Redis 6). Create per-service users with command and key-pattern restrictions. Use ACL LOG to audit denied commands and failed authentications (it records security events, not configuration changes).
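
A minimal redis.conf ACL fragment illustrating per-service users (user names, passwords, and key patterns are placeholders):

```
# Disable the open default user, then grant each service only the
# commands and key patterns it needs.
user default off
user cache-svc on >change-me ~cache:* +get +set +del +ttl
user metrics-ro on >change-me-too ~* +info +ping +slowlog
```

The same rules can be applied at runtime with ACL SETUSER and reviewed with ACL LIST.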

Use Case Scenarios

moderate Healthcare claims agent caching member eligibility lookups

The OSS Redis distribution alone is not BAA-eligible. Use a managed deployment (Redis Cloud HIPAA, AWS ElastiCache for Redis with BAA). The license posture is otherwise irrelevant once you're on a managed BAA-signing service. ABAC enforcement still happens at L5.

strong Financial-services trading agent rate-limit counters and hot quotes

Sub-millisecond latency and INCR/DECR atomic counters are tailor-made for rate limiting and quote fan-out. Use cluster mode for horizontal scale and Sentinel for failover.
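
The counter pattern behind this scenario is INCR plus EXPIRE in a fixed window. A pure-Python sketch so the logic runs without a server (a dict stands in for Redis; against a real deployment the two commands should run atomically, e.g. in a short Lua script or pipeline):

```python
import time

class FixedWindowLimiter:
    """Fixed-window rate limiter mirroring the Redis INCR + EXPIRE pattern.

    A dict stands in for Redis here. Against real Redis: INCR the key,
    and on the first increment of a window set its TTL to the window size.
    """
    def __init__(self, limit: int, window_s: int, clock=time.time):
        self.limit, self.window_s, self.clock = limit, window_s, clock
        self.store = {}  # key -> [count, window_expiry_timestamp]

    def allow(self, key: str) -> bool:
        now = self.clock()
        entry = self.store.get(key)
        if entry is None or entry[1] <= now:
            # Fresh window: first INCR creates the key, EXPIRE sets the TTL.
            self.store[key] = [1, now + self.window_s]
            return True
        entry[0] += 1  # INCR within the current window
        return entry[0] <= self.limit
```

A fake clock makes the window roll-over testable without sleeping; with redis-py the same shape uses r.incr and a TTL on the key.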

weak Multi-tenant SaaS agent stack with strict OSS-only procurement

If procurement requires OSI-approved licenses for all stack components, RSALv2/SSPL is a non-starter. Default to Valkey instead. Same engine, BSD-3-Clause.

Stack Impact

L1 Redis colocates with primary L1 stores (Postgres, Snowflake) as the hot operational cache. Reduces upstream load by 60-90% for read-heavy agent workloads through hot-data caching and rate-limit counters.
L4 If using Redis Stack at L4 instead of pure Redis at L1, the L1 cache and L4 vector store collapse into one engine. This simplifies operations but couples the license trade-off to both layers. Switching to Valkey at L1 means re-doing L4 vector storage.
L5 Redis ACL is RBAC-only. ABAC enforcement and audit-log durability must be provided by L5 components (OPA, AWS Verified Permissions, Cerbos). Don't treat Redis ACL as your governance layer.
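
The load-reduction figure above assumes a cache-aside read path: check Redis, fall back to the primary store on a miss, populate the cache. A runnable sketch with a dict in place of the Redis client (loader name and TTL handling are illustrative):

```python
class CacheAside:
    """Cache-aside read path: cache hit returns immediately; a miss falls
    through to the upstream loader, then populates the cache.

    A dict stands in for the Redis client; with redis-py the lookups map
    to GET and the populate step to SET with an expiry. TTL/eviction are
    omitted here to keep the sketch minimal.
    """
    def __init__(self, loader):
        self.loader = loader          # upstream fetch, e.g. a Postgres query
        self.cache = {}
        self.hits = self.misses = 0   # the signal behind the 60-90% claim

    def get(self, key: str):
        if key in self.cache:
            self.hits += 1
            return self.cache[key]
        self.misses += 1
        value = self.loader(key)      # hit the primary store once
        self.cache[key] = value       # subsequent reads stay at L1
        return value

calls = []
def slow_loader(key):
    calls.append(key)                 # records each trip to the primary store
    return f"row-for-{key}"

c = CacheAside(slow_loader)
first = c.get("member:7")             # miss: loader runs
second = c.get("member:7")            # hit: loader skipped
```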


This analysis is AI-generated using the INPACT and GOALS frameworks from "Trust Before Intelligence." Scores and assessments are algorithmic and may not reflect the vendor's complete capabilities. Always validate with your own evaluation.