What is Caching in Spring Boot?

Database Caching stores frequently accessed data in memory (RAM), so your app doesn’t hit the database every time.

🎯 Result: Faster performance + reduced DB load


✅ When to Use It?

  • Same data is read again and again (e.g., user info, product list)
  • Performance is critical (e.g., high-traffic APIs)

🛠️ How to Enable Caching in Spring Boot

✅ Step 1: Add Starter Dependency

In pom.xml:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-cache</artifactId>
</dependency>


✅ Step 2: Enable Caching in Main Class

@SpringBootApplication
@EnableCaching
public class MyApp {
    public static void main(String[] args) {
        SpringApplication.run(MyApp.class, args);
    }
}


✅ Step 3: Use @Cacheable in Your Service


@Service
public class StudentService {

    @Autowired
    private StudentRepository repo;

    @Cacheable("students")
    public Student getStudentById(Long id) {
        System.out.println("Fetching from DB...");
        return repo.findById(id).orElse(null);
    }
}

📌 Now, the first call fetches from the DB and stores the result in the cache.
Next time? It's served from memory – faster!
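
For context, here's a minimal controller sketch that exposes the cached service (this StudentController is illustrative and not part of the steps above):

@RestController
@RequestMapping("/students")
public class StudentController {

    @Autowired
    private StudentService studentService;

    // GET /students/101 → first call hits the DB; repeat calls are served from the cache
    @GetMapping("/{id}")
    public Student getStudent(@PathVariable Long id) {
        return studentService.getStudentById(id);
    }
}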


⚙️ How It Works

  • @Cacheable → stores the method result and skips the method on a cache hit
  • @CacheEvict → removes an entry from the cache
  • @CachePut → always runs the method and updates the cached entry (all three are shown on StudentService in the sketch below)
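
Building on the StudentService above, here's a sketch of how the three annotations typically pair up with read, update, and delete operations (the updateStudent/deleteStudent methods are illustrative):

@Service
public class StudentService {

    @Autowired
    private StudentRepository repo;

    // read: the result is cached under the id
    @Cacheable("students")
    public Student getStudentById(Long id) {
        return repo.findById(id).orElse(null);
    }

    // update: the method always runs and the cached entry is refreshed
    @CachePut(value = "students", key = "#student.id")
    public Student updateStudent(Student student) {
        return repo.save(student);
    }

    // delete: the cached entry is removed
    @CacheEvict(value = "students", key = "#id")
    public void deleteStudent(Long id) {
        repo.deleteById(id);
    }
}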

🧠 What Cache is Used?

By default: Simple in-memory (ConcurrentMap)

For production: Use EhCache, Redis, or Caffeine


🗂 application.properties (optional settings)

# options: simple (default), redis, caffeine, etc.
spring.cache.type=simple

🧪 Example Output:

GET /students/101
→ Fetching from DB...

GET /students/101 (again)
→ (No DB call – data returned from cache)

What is Redis Cache?

Redis is a fast, in-memory data store that works like a key–value database, cache, and message broker. It keeps data in RAM for microsecond latency, with optional persistence to disk.

What makes Redis special

  • Rich data types: strings, hashes (maps), lists, sets, sorted sets, bitmaps, geospatial, streams, HyperLogLog; plus modules (e.g., Bloom filters).
  • Speed & simplicity: many ops are O(1) because data is in memory.
  • Persistence (optional):
    • RDB snapshots (periodic)
    • AOF append-only log (durable, can combine with RDB)
  • High availability: replication, Sentinel (failover), Cluster (sharding + HA).
  • Messaging: Pub/Sub and Streams with consumer groups.
  • Scripting & transactions: Lua scripts, MULTI/EXEC.
  • Eviction policies: LRU/LFU/TTL-based when memory limits are reached.

Common use cases

  • Caching (DB results, HTML fragments, config)
  • Sessions and login state
  • Rate limiting and API throttling
  • Queues / event logs (Lists, Streams)
  • Leaderboards (sorted sets)
  • Distributed locks (SET NX PX pattern)
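
As a taste of the SET NX PX pattern from the last bullet, here's a minimal lock sketch using Spring's StringRedisTemplate (the key prefix and class name are illustrative; production-grade locking usually relies on a Lua script or a library like Redisson):

@Component
public class SimpleRedisLock {

    private final StringRedisTemplate redis;

    public SimpleRedisLock(StringRedisTemplate redis) {
        this.redis = redis;
    }

    // SET lock:<key> <token> NX PX <ttl> → true only for the first caller; the TTL avoids stuck locks
    public boolean tryLock(String key, String token, Duration ttl) {
        return Boolean.TRUE.equals(
                redis.opsForValue().setIfAbsent("lock:" + key, token, ttl));
    }

    // naive release: delete only if we still own the lock
    // (check-then-delete isn't atomic; a Lua script makes it safe)
    public void unlock(String key, String token) {
        if (token.equals(redis.opsForValue().get("lock:" + key))) {
            redis.delete("lock:" + key);
        }
    }
}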

Tiny taste (CLI)

SET user:42:name "Aftab"
GET user:42:name
INCR page:views
EXPIRE user:42:name 300

In Spring Boot (one-liner view)

  • Add spring-boot-starter-data-redis + spring-boot-starter-cache
  • @EnableCaching + @Cacheable("product") → results stored in Redis with TTL

Spring Boot + Redis Implementation

What & why

  • Spring Cache abstraction: simple annotations (@Cacheable, @CachePut, @CacheEvict) to cache method results.
  • Redis as the cache store: in-memory, fast, shared across app instances (good for clusters), supports TTLs and eviction.

How it works (flow)

  • You call a method annotated with @Cacheable("name").
  • Spring computes a cache key (by default from the method parameters) and checks Redis.
  • If it's a miss → the method runs, and the result is serialized and saved to Redis with an optional TTL (roughly sketched below).
  • On writes/updates you use @CachePut/@CacheEvict to keep the cache consistent.
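
Conceptually, a @Cacheable("product") read does something like the manual lookup below (a sketch of the flow, not the actual proxy code Spring generates; ProductRepository is a placeholder):

@Service
public class ManualCacheExample {

    private final CacheManager cacheManager;   // RedisCacheManager in this setup
    private final ProductRepository repo;

    public ManualCacheExample(CacheManager cacheManager, ProductRepository repo) {
        this.cacheManager = cacheManager;
        this.repo = repo;
    }

    public Product get(long id) {
        Cache cache = cacheManager.getCache("product");
        Product cached = cache.get(id, Product.class);    // 1. compute the key (the id) and check Redis
        if (cached != null) {
            return cached;                                // 2. hit → no DB call
        }
        Product fresh = repo.findById(id).orElseThrow();  // 3. miss → run the method body (DB call)
        cache.put(id, fresh);                             // 4. serialize and store (TTL comes from cache config)
        return fresh;
    }
}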

Minimal setup

Dependencies

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-cache</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>

Config

spring:
  cache:
    type: redis
  data:
    redis:
      host: localhost
      port: 6379

Enable & tune (JSON + TTLs)

@Configuration
@EnableCaching
public class CacheConfig {
  @Bean
  public CacheManager cacheManager(RedisConnectionFactory cf) {
    var serializer = new GenericJackson2JsonRedisSerializer();
    var base = RedisCacheConfiguration.defaultCacheConfig()
        .serializeValuesWith(RedisSerializationContext.SerializationPair.fromSerializer(serializer))
        .disableCachingNullValues()
        .entryTtl(Duration.ofMinutes(5)); // default TTL

    return RedisCacheManager.builder(cf)
        .cacheDefaults(base)
        .withInitialCacheConfigurations(Map.of(
            "product", base.entryTtl(Duration.ofMinutes(10)),
            "user",    base.entryTtl(Duration.ofMinutes(3))
        ))
        .build();
  }
}

Use in a service

@Service
public class ProductService {

  @Cacheable(cacheNames = "product", key = "#id", sync = true)
  public Product get(long id) { return loadFromDb(id); }

  @CachePut(cacheNames = "product", key = "#p.id")
  public Product update(Product p) { return saveToDb(p); }

  @CacheEvict(cacheNames = "product", key = "#id")
  public void delete(long id) { deleteFromDb(id); }
}

Design choices that matter

  • Keys: default is args; customize with SpEL (key = "#root.methodName + ':' + #id"). Namespace per entity to avoid collisions.
  • Serialization: prefer JSON (GenericJackson2JsonRedisSerializer) over JDK serialization for readability and language-agnostic use.
  • TTLs: choose per cache based on data volatility; add small jitter if you worry about stampedes.
  • Nulls: usually don’t cache nulls (disableCachingNullValues()), or set short TTL if you do.
  • Stampede control: @Cacheable(sync = true) prevents duplicate loads within one JVM; at cluster level consider request coalescing or small randomized TTLs.
  • Eviction strategy: use @CacheEvict on mutations, or @CachePut to update cache with the new value. For bulk changes, use allEntries = true.
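
A sketch tying a few of these choices together – namespaced SpEL keys, sync = true, and bulk eviction (CatalogService and its method names are illustrative):

@Service
public class CatalogService {

  // namespaced key: "list:<categoryId>" keeps list entries from colliding with single-product entries
  @Cacheable(cacheNames = "product", key = "'list:' + #categoryId", sync = true)
  public List<Product> listByCategory(long categoryId) {
    return loadCategoryFromDb(categoryId);
  }

  // bulk change → wipe the whole cache instead of tracking every affected key
  @CacheEvict(cacheNames = "product", allEntries = true)
  public void reimportCatalog() {
    runBulkImport();
  }
}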

Scaling & ops

  • HA Redis: use Sentinel or Cluster (spring.data.redis.sentinel.* / cluster.*) – see the config sketch below.
  • Security: set a Redis password/ACLs and network-isolate the instance; avoid caching secrets, and if you must, namespace the keys carefully and encrypt the values.
  • Observability: expose the caches actuator endpoint; enable Redis/Lettuce Micrometer metrics; watch hit/miss ratios, evictions, and latency.
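
For example, a Sentinel-plus-password setup with the caches actuator endpoint exposed might look like this (host names, the master name, and the REDIS_PASSWORD variable are placeholders):

spring:
  data:
    redis:
      password: ${REDIS_PASSWORD}
      sentinel:
        master: mymaster
        nodes: sentinel-1:26379,sentinel-2:26379,sentinel-3:26379

management:
  endpoints:
    web:
      exposure:
        include: health,caches,metrics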

Common pitfalls

  • Caching mutable objects → prefer immutable values (see the record sketch below) or defensive copies.
  • Forgetting to evict/update after writes → stale reads.
  • Very large values → Redis memory pressure; cache only what’s needed.
  • Class changes breaking deserialization → version your DTOs or keep JSON stable.
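
One way to sidestep the mutable-object and serialization pitfalls is to cache a small, immutable DTO instead of the JPA entity (ProductView and ProductQueryService are illustrative names; records need Jackson 2.12+, which recent Spring Boot versions ship):

// immutable, JSON-friendly cache value; add fields deliberately to keep the stored JSON stable
public record ProductView(long id, String name, BigDecimal price) { }

@Service
public class ProductQueryService {

  @Cacheable(cacheNames = "product", key = "#id")
  public ProductView get(long id) {
    Product p = loadFromDb(id);   // placeholder, as in the service example above
    return new ProductView(p.getId(), p.getName(), p.getPrice());
  }
}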

