In-memory key-value store with rich data structures (strings, hashes, lists, sets, sorted sets, streams). License: RSALv2 / SSPL since March 2024 (not OSI-approved). The OSI-approved OSS path is Valkey (Linux Foundation fork at Redis 7.2.4, BSD-3-Clause). Redis remains widely deployed; pick Redis when you need vendor support or are already on Redis Cloud / Redis Enterprise; otherwise pick Valkey for a predictable open-source posture.
Redis is the in-memory key-value store that defined the modern cache category for over a decade, but Redis Inc.'s March 2024 relicense to RSALv2/SSPL removed it from the OSI-approved OSS column. The technical capabilities remain best-in-class (sub-millisecond ops, rich data structures, mature client ecosystem), but the OSS market default for Redis-compatible caching is now Valkey (Linux Foundation fork at Redis 7.2.4). Pick Redis when you need vendor support, are already on Redis Cloud or Redis Enterprise, or specifically want the modules in Redis Stack. Pick Valkey for license predictability without re-architecting.
Redis delivers the speed and simplicity that agent stacks need at L1 cache, but the trust posture has bifurcated since March 2024. From a Trust Before Intelligence lens, the access control model is RBAC-only via ACL, which is fine for service-to-service caching but insufficient for fine-grained ABAC enforcement (that must live at L5). The bigger trust signal is the licensing change itself: a unilateral relicense to source-available terms is a category of vendor risk that affects every BSL/SSPL/RSAL choice. Teams that picked Redis years ago for OSS predictability got changed terms without consultation. The Valkey fork exists precisely because that risk materialized.
Sub-millisecond reads and writes with in-memory storage. No cold start once the process is running. Tens of thousands of ops per second per node, millions across a cluster. Best-in-class on the I dimension. Cap rule N/A.
Redis commands (GET, SET, ZADD, HSET, XADD) are precise and well-documented but not natural language. Agents need a translation layer above Redis to express semantic intent. Cap rule N/A.
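What such a translation layer looks like can be sketched in a few lines. This is a hypothetical illustration, not a real library: the intent vocabulary (`remember`, `recall`, `rank`) and the function name are invented here; only the Redis command names (SET, GET, ZADD) are real.

```python
# Hypothetical sketch of an agent-side translation layer that maps
# semantic intent onto concrete Redis commands. The intent names are
# invented for illustration; the command names are real Redis commands.

def to_redis_command(intent: str, **kw) -> tuple:
    """Translate a semantic intent into a (COMMAND, *args) tuple."""
    if intent == "remember":        # store a value under a key
        return ("SET", kw["key"], kw["value"])
    if intent == "recall":          # fetch a previously stored value
        return ("GET", kw["key"])
    if intent == "rank":            # track a score on a leaderboard
        return ("ZADD", kw["board"], kw["score"], kw["member"])
    raise ValueError(f"no Redis mapping for intent {intent!r}")
```

The point of the sketch is that the mapping lives above Redis: the engine only ever sees the precise command tuple, never the intent.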
Redis ACL (since 6.0) supports per-user command and key-pattern restrictions, which is RBAC. There is no native ABAC: no time-of-day, purpose-of-use, or attribute-based decisioning. Cap rule applied: RBAC-only without ABAC caps at 3.
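What that RBAC boundary looks like in practice is an ACL users file (usernames, password, and key pattern below are illustrative):

```conf
# users.acl -- RBAC only: who may run which commands on which key patterns.
# There is nowhere to express "only during business hours" or
# "only for purpose X"; that ABAC logic must live at L5.
user cacheworker on >s3cret ~cache:* +get +set +del
user readonly-agent on >s3cret ~cache:* +get
```

Each rule grants commands (`+get`) against key patterns (`~cache:*`) to a named user; attributes of the request itself never enter the decision.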
Cluster mode for horizontal sharding, Sentinel for HA, every major cloud has a managed Redis-compatible offering (ElastiCache, MemoryStore, Azure Cache for Redis), self-hostable on any infrastructure. Truly multi-cloud and multi-deployment. Cap rule N/A.
Eight built-in data structures (strings, hashes, lists, sets, sorted sets, streams, HyperLogLog, bitmaps), pub/sub, transactions via MULTI/EXEC, Lua scripting. Strong contextual richness for a cache. Lower than database peers because Redis lacks first-class lineage and FDW-style cross-system integration.
MONITOR (real-time command stream), slowlog, INFO (operational stats), keyspace notifications give solid operational transparency. Cost-per-query attribution is N/A for in-memory cache (the cost model is provisioned capacity, not per-query). Cap rule N/A.
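All of these hooks are reachable from redis-cli; a minimal sketch (thresholds and counts are illustrative):

```
CONFIG SET slowlog-log-slower-than 10000    # log commands slower than 10 ms (value is in microseconds)
SLOWLOG GET 10                              # fetch the ten most recent slow-command entries
INFO stats                                  # hit/miss counters, ops/sec, evictions
CONFIG SET notify-keyspace-events Ex        # emit keyevent notifications for expirations
```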
G1=N (no ABAC, ACL is RBAC-only with sub-10ms enforcement but cap rule applies), G2=N (slowlog isn't full access audit by default), G3=N (cache primitive, not workflow tool), G4=N (no model versioning concept), G5=N, G6=N (no built-in compliance mapping). Yes count 0/6 -> bucketed 2.
O1=Y (INFO/MONITOR integrate with Datadog and Prometheus), O2=N (no native distributed tracing), O3=N (LLM cost tracking N/A for cache), O4=Y (MTTD with proper monitoring), O5=N, O6=N. Yes count 2/6 -> bucketed 2.
A1=Y (sub-ms p95), A2=Y (in-memory is real-time), A3=Y (cache hit rate IS the metric), A4=Y (HA via cluster + Sentinel), A5=Y (Redis runs at hyperscaler scale), A6=Y (MGET batch operations). Yes count 6/6 -> bucketed 5. Same as Valkey, since the engine is the same.
L1=N, L2=N, L3=N, L4=N, L5=Y (key namespacing as terminology alignment, lenient interpretation), L6=N. Yes count 1/6 -> bucketed 2.
S1=Y (deterministic, cache returns what was stored), S2=Y (typed values, no missing fields by design), S3=Y (cluster replication), S4=Y (typed key-value, no schema mismatch), S5=Y (replication + AOF persistence as quality gates), S6=Y (slowlog as anomaly detection signal). Yes count 6/6 -> bucketed 5.
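The replication and AOF gates behind S3/S5 are a few redis.conf lines (the replica address is a placeholder):

```conf
# redis.conf -- illustrative durability settings
appendonly yes             # AOF persistence: replay the write log on restart
appendfsync everysec       # fsync once per second; at most ~1s of writes at risk
replicaof 10.0.0.5 6379    # follow the primary (placeholder address)
```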
Best suited for
Compliance certifications
The Redis project itself does not sign BAAs and holds no third-party audit reports. Compliance for regulated workloads comes from managed deployments: Redis Cloud (HIPAA BAA, SOC 2 Type II, ISO 27001, PCI DSS), Redis Enterprise Cloud (same plus FedRAMP-eligible variants), AWS ElastiCache for Redis (HIPAA BAA, SOC 2, FedRAMP), GCP MemoryStore for Redis (HIPAA BAA, SOC 2). Self-hosted Redis on a FedRAMP-authorized substrate (AWS GovCloud, Azure Gov) inherits substrate compliance for the infrastructure layer but the Redis project does not provide BAAs directly.
Use with caution for
Choose Valkey when you want the same engine without RSALv2/SSPL license risk. Drop-in replacement for Redis OSS core. Major hyperscalers (AWS ElastiCache, GCP MemoryStore, Oracle Cloud) now all offer managed Valkey. Redis wins if you need Redis Stack modules or vendor support; Valkey wins on license predictability.
Choose Memcached for the simplest possible distributed key-value cache: no clustering, no persistence, no rich types. Redis wins on data structures, HA, and module ecosystem. Memcached wins on operational simplicity for stateless cache use cases.
Choose AWS MemoryDB for AWS-native deployments wanting durable Redis-compatible storage with strong consistency. Redis wins on cross-cloud flexibility; MemoryDB wins on durability and managed operational burden inside AWS.
Redis Stack is the same engine plus vector/JSON/graph/time-series modules at L4. Pick Redis (this row) for plain cache; pick Redis Stack when you need the modules and accept the RSALv2/SSPL trade-off there too.
Role: L1 in-memory cache and message broker substrate. Provides key-value lookups, pub/sub, streams, and atomic data structures (sets, sorted sets, hashes) for agent stacks. Sits between L1 primary stores and downstream agent runtimes.
Upstream: Receives writes from L2 streaming (CDC results), L4 retrieval (cached embeddings), L7 orchestration (task queues), and direct application caches. Configuration via redis.conf or CONFIG SET commands.
Downstream: Serves cached reads to L4 retrieval (hot-data lookups), L7 inter-agent messaging (pub/sub fanout), L5 governance (ABAC decision cache, rate-limit counters), and direct application reads. Wire-protocol compatible with Valkey, AWS MemoryDB, Azure Cache, GCP MemoryStore.
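The pub/sub fanout used for inter-agent messaging is two commands at the wire level (channel name and payload are illustrative):

```
SUBSCRIBE agent:tasks                                     # each agent runtime subscribes once
PUBLISH agent:tasks '{"task":"summarize","doc_id":"d-17"}' # fans out to every current subscriber
```

Note that plain pub/sub is fire-and-forget; subscribers offline at publish time miss the message, which is why streams (XADD/XREAD) exist for durable delivery.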
Mitigation: If your organization has a strict OSI-only requirement, default to Valkey at L1. If you stay on Redis, document the license posture, monitor Redis Inc.'s announcements, and have a Valkey migration plan you've actually tested (clients are compatible at the wire-protocol level).
Mitigation: If you use Redis Stack modules (FT.SEARCH, JSON.SET, GRAPH.QUERY, TS.ADD), you are accepting RSALv2/SSPL for those modules too. For vector search, evaluate pgvector at L1 or a dedicated vector DB at L4. For JSON, consider Postgres JSONB. For time-series, TimescaleDB.
Mitigation: Use cluster mode (3+ nodes) for sharding OR Sentinel for HA. Test failover with a planned reboot and measure RTO. Don't run production agent stacks against a single Redis instance.
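A Sentinel HA setup is a short config on each of three sentinel nodes (addresses and timeouts below are placeholders to tune):

```conf
# sentinel.conf -- three sentinels watching one primary
sentinel monitor mymaster 10.0.0.5 6379 2         # quorum of 2 sentinels to declare the primary down
sentinel down-after-milliseconds mymaster 5000    # 5s unreachable before subjectively down
sentinel failover-timeout mymaster 60000          # cap on a single failover attempt
```

Failover RTO is roughly down-after plus election plus client reconnection, which is exactly what the planned-reboot test above should measure.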
Mitigation: Enable Redis ACLs (since Redis 6). Create per-service users with command and key-pattern restrictions. Use ACL LOG to audit denied commands and authentication failures.
The OSS Redis distribution alone is not BAA-eligible. Use a managed deployment (Redis Cloud HIPAA, AWS ElastiCache for Redis with BAA). The license posture is otherwise irrelevant once you're on a managed BAA-signing service. ABAC enforcement still happens at L5.
Sub-millisecond latency and INCR/DECR atomic counters are tailor-made for rate limiting and quota fan-out. Use cluster mode for horizontal scale and Sentinel for failover.
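The fixed-window pattern behind INCR-based rate limiting can be sketched as follows. So the sketch runs without a live server, it uses a minimal in-memory stand-in that mimics INCR-with-expiry semantics; against real Redis the same logic is `INCR key` plus `EXPIRE key window NX` (the NX option requires Redis 7+). The class and function names are invented for illustration.

```python
import time

class FakeRedis:
    """Minimal in-memory stand-in mimicking INCR + expiry semantics,
    so the pattern is demonstrable without a live Redis server."""
    def __init__(self):
        self.store = {}  # key -> (count, expiry_timestamp)

    def incr(self, key, ttl=None, now=None):
        now = time.time() if now is None else now
        count, exp = self.store.get(key, (0, None))
        if exp is not None and now >= exp:    # window elapsed: counter resets
            count, exp = 0, None
        count += 1
        if exp is None and ttl is not None:   # first hit in a window starts its clock
            exp = now + ttl
        self.store[key] = (count, exp)
        return count

def allow(r, client_id, limit=100, window=60, now=None):
    """Fixed-window limiter: one atomic counter per client per window."""
    key = f"ratelimit:{client_id}:{window}"
    return r.incr(key, ttl=window, now=now) <= limit
```

Because INCR is atomic, concurrent callers against real Redis cannot race the counter; the known weakness of fixed windows is a burst straddling the window boundary, which sliding-window or token-bucket variants address.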
If procurement requires OSI-approved licenses for all stack components, RSALv2/SSPL is a non-starter. Default to Valkey instead. Same engine, BSD-3-Clause.
This analysis is AI-generated using the INPACT and GOALS frameworks from "Trust Before Intelligence." Scores and assessments are algorithmic and may not reflect the vendor's complete capabilities. Always validate with your own evaluation.