REST API Fallback Mechanisms
Handling API calls when the client or server is down requires planning for resilience, retries, and fallback mechanisms. Here's how to approach it:
✅ 1. Clarify Who is Down
| Scenario | Strategy |
|---|---|
| 🔴 Client is down | Client retries later or uses a message queue |
| 🔴 Server is down | Client implements retries, circuit breakers, and fallbacks |
Let's focus on the case where the client makes a call and the server is down.
🛠️ Strategies to Handle an API Call When the Server is Down
✅ A. Retry with Exponential Backoff
- Automatically retry the failed request after waiting.
- Increase wait time between attempts (e.g., 1s, 2s, 4s).
Example with Spring RestTemplate + RetryTemplate (from the Spring Retry library): you can configure the number of attempts, the delay between them, and the backoff policy.
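A minimal sketch, assuming the spring-retry dependency is on the classpath; the class name, URL, and timing values are illustrative:

```java
import org.springframework.retry.support.RetryTemplate;
import org.springframework.web.client.ResourceAccessException;
import org.springframework.web.client.RestTemplate;

public class OrderApiClient {

    private final RestTemplate restTemplate = new RestTemplate();

    // 3 attempts in total, waiting 1s, then 2s (doubling, capped at 10s) between them.
    private final RetryTemplate retryTemplate = RetryTemplate.builder()
            .maxAttempts(3)
            .exponentialBackoff(1000, 2.0, 10000)
            .retryOn(ResourceAccessException.class) // connection refused, timeouts
            .build();

    public String fetchOrder(String orderId) {
        // Each failed attempt waits and retries; after the last attempt the exception propagates.
        return retryTemplate.execute(ctx ->
                restTemplate.getForObject("https://orders.example.com/orders/" + orderId, String.class));
    }
}
```

Retrying only on ResourceAccessException keeps business errors (e.g., 4xx responses) from being retried pointlessly.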
✅ B. Circuit Breaker Pattern
- Prevent system overload by stopping calls after repeated failures (fail fast).
- Automatically allow calls through again when the service becomes healthy.
🔧 Tools:
- Resilience4j (recommended for Spring Boot; see the sketch below)
- Hystrix (deprecated)
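A minimal sketch using the Resilience4j Spring Boot starter, assuming resilience4j-spring-boot3 (or -boot2) and spring-boot-starter-aop are on the classpath; the instance name "orderService", the URL, and the fallback text are illustrative:

```java
import io.github.resilience4j.circuitbreaker.annotation.CircuitBreaker;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Service
public class OrderService {

    private final RestTemplate restTemplate = new RestTemplate();

    // After enough failures the breaker opens and calls go straight to the fallback (fail fast).
    @CircuitBreaker(name = "orderService", fallbackMethod = "orderFallback")
    public String getOrder(String orderId) {
        return restTemplate.getForObject("https://orders.example.com/orders/" + orderId, String.class);
    }

    // Invoked when the call fails or the breaker is open; same parameters plus a trailing Throwable.
    private String orderFallback(String orderId, Throwable ex) {
        return "Order details temporarily unavailable";
    }
}
```

Thresholds (failure rate, wait duration in the open state, sliding window size) are typically configured per instance name under resilience4j.circuitbreaker in application.yml.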
✅ C. Queue the Request (Asynchronous Retry)
- Send the request to a message queue (e.g., Kafka, RabbitMQ).
- Retry processing when the server is back up.
Useful for non-blocking use cases like order processing or email sending; a sketch follows below.
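A minimal sketch, assuming Spring Kafka; the topic name, endpoint, and payload shape are illustrative, and a separate consumer (not shown) would replay the queued requests once the server recovers:

```java
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestClientException;
import org.springframework.web.client.RestTemplate;

@Service
public class EmailDispatcher {

    private final RestTemplate restTemplate = new RestTemplate();
    private final KafkaTemplate<String, String> kafkaTemplate;

    public EmailDispatcher(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void sendEmail(String requestJson) {
        try {
            restTemplate.postForEntity("https://email.example.com/send", requestJson, Void.class);
        } catch (RestClientException ex) {
            // Server unreachable: park the request on a topic so nothing is lost,
            // and let a consumer replay it when the email service is healthy again.
            kafkaTemplate.send("email-send-retry", requestJson);
        }
    }
}
```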
✅ D. Log and Alert
- Log the failure with request details.
- Optionally alert the DevOps team via Slack, email, etc. (a small sketch follows below).
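A small sketch with SLF4J; the AlertClient interface is hypothetical and stands in for whatever Slack webhook or email integration the team uses:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PaymentApiClient {

    private static final Logger log = LoggerFactory.getLogger(PaymentApiClient.class);

    private final AlertClient alertClient; // hypothetical notification wrapper

    public PaymentApiClient(AlertClient alertClient) {
        this.alertClient = alertClient;
    }

    public void onCallFailed(String url, String requestId, Exception ex) {
        // Log enough detail to trace the failed call without dumping sensitive payloads.
        log.error("Payment API call failed: url={}, requestId={}", url, requestId, ex);
        alertClient.notify("Payment API call failed, requestId=" + requestId);
    }

    /** Hypothetical abstraction; wire it to Slack, email, etc. */
    public interface AlertClient {
        void notify(String message);
    }
}
```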
✅ E. Client Timeout and Graceful UI Response
- Always set client-side timeouts.
- Show a proper fallback message to the user: "Service temporarily unavailable, please try again later." (see the timeout sketch below)
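A minimal sketch of client-side timeouts with RestTemplate plus a graceful fallback message; the 2s/5s values and the URL are illustrative:

```java
import org.springframework.http.client.SimpleClientHttpRequestFactory;
import org.springframework.web.client.ResourceAccessException;
import org.springframework.web.client.RestTemplate;

public class CatalogClient {

    private final RestTemplate restTemplate;

    public CatalogClient() {
        SimpleClientHttpRequestFactory factory = new SimpleClientHttpRequestFactory();
        factory.setConnectTimeout(2000); // fail fast if the server is not accepting connections
        factory.setReadTimeout(5000);    // don't let a hung response block the caller indefinitely
        this.restTemplate = new RestTemplate(factory);
    }

    public String fetchCatalog() {
        try {
            return restTemplate.getForObject("https://catalog.example.com/items", String.class);
        } catch (ResourceAccessException ex) {
            // Timed out or connection refused: return a user-friendly fallback message.
            return "Service temporarily unavailable, please try again later.";
        }
    }
}
```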
🔍 Bonus: Detecting If the Server is Down
- Ping a health check endpoint before making a heavy call (client-side sketch below).
- Use Spring Boot Actuator on the server side:
/actuator/health
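A minimal client-side sketch that checks /actuator/health before a heavy call; the base URL is illustrative, and it assumes the server actually exposes Actuator's health endpoint:

```java
import org.springframework.http.ResponseEntity;
import org.springframework.web.client.RestClientException;
import org.springframework.web.client.RestTemplate;

public class HealthGate {

    private final RestTemplate restTemplate = new RestTemplate();

    public boolean isServerUp(String baseUrl) {
        try {
            ResponseEntity<String> response =
                    restTemplate.getForEntity(baseUrl + "/actuator/health", String.class);
            // Actuator returns 200 with {"status":"UP"} when healthy.
            return response.getStatusCode().is2xxSuccessful();
        } catch (RestClientException ex) {
            return false; // connection refused, timeout, or an error status mapped to an exception
        }
    }
}
```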
📌 Best Practices Summary
| Practice | Benefit |
|---|---|
| Retry with backoff | Avoid hammering the server |
| Circuit breaker | Fail fast and let the system self-heal |
| Fallback response | Provide continuity to users |
| Message queue | Ensure no data loss |
| Health checks | Prevent unnecessary calls |