When one microservice talks to two databases
Legit use cases
- Polyglot persistence: one DB fits OLTP (e.g., Postgres) and another fits search/analytics (e.g., Elasticsearch).
- Strangler/migration: old DB + new DB during a phased rewrite.
- Read/write split across stores: e.g., write to SQL, project to a cache/search store for queries (a minimal sketch of this shape follows the list).
- Regulatory or tenancy separation: sensitive data isolated in a separate store.
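To make that shape concrete, here is a minimal sketch of the read/write split: Postgres as the system of record for writes, Elasticsearch for search-style reads. The table, schema, and connection details are illustrative assumptions, and the Elasticsearch calls assume the elasticsearch-py 8.x client; treat this as a sketch, not a reference implementation.

```python
import psycopg2                              # Postgres driver (DB-API 2.0)
from elasticsearch import Elasticsearch      # assumes elasticsearch-py 8.x

# Hypothetical connections; the DSN and host are placeholders.
pg = psycopg2.connect("dbname=orders user=orders_svc")
es = Elasticsearch("http://localhost:9200")

def create_order(order_id: str, customer: str, total_cents: int) -> None:
    """Write path: the relational store is the single system of record."""
    with pg, pg.cursor() as cur:             # 'with pg' commits, or rolls back on error
        cur.execute(
            "INSERT INTO orders (id, customer, total_cents) VALUES (%s, %s, %s)",
            (order_id, customer, total_cents),
        )
    # Nothing is written to Elasticsearch here; the search index is populated
    # asynchronously (see the outbox/CDC sketch under best practices).

def search_orders(text: str) -> list[dict]:
    """Read path: free-text queries go to the search store."""
    resp = es.search(index="orders", query={"match": {"customer": text}})
    return [hit["_source"] for hit in resp["hits"]["hits"]]
```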
Pros
- Right tool for the job: better performance/cost by matching workload to store type.
- Incremental migration: ship changes without a big-bang cutover.
- Composite ops handled locally: when one operation needs data from both stores, you avoid cross-service chattiness.
- Fewer network hops: can reduce tail latency versus orchestrating multiple services.
Cons
- Operational complexity: two drivers, two schemas, two sets of migrations, backups, HA/DR.
- Coupled deployments: schema changes in either DB can hold up the same service’s releases.
- Transaction boundaries: no native ACID across DBs; 2PC is painful and fragile. You’ll need Sagas/outbox patterns.
- Failure modes multiply: partial writes, split-brain semantics, tricky retries/idempotency (a minimal example of the partial-write hazard follows this list).
- Observability & debugging: traces/spans across two data paths; harder to reason about correctness.
- Security blast radius: one compromised service key may expose two data stores; more secrets to manage.
- Connection & resource pressure: two pools to tune; risk of exhausting threads/connections under load.
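The transaction-boundary and partial-write points are easiest to see in code. Continuing the illustrative Postgres and Elasticsearch clients from the earlier sketch, a naive handler that writes to both stores in sequence can strand them in different states:

```python
def create_order_naive(order_id: str, customer: str, total_cents: int) -> None:
    # Step 1 commits in Postgres...
    with pg, pg.cursor() as cur:
        cur.execute(
            "INSERT INTO orders (id, customer, total_cents) VALUES (%s, %s, %s)",
            (order_id, customer, total_cents),
        )
    # ...step 2 can still fail (timeout, node down, mapping error). No
    # transaction spans both stores, so the commit above cannot be rolled back
    # here: Postgres now holds an order that search will never return.
    es.index(
        index="orders",
        id=order_id,
        document={"customer": customer, "total_cents": total_cents},
    )
    # Retrying the whole function is not safe either: the INSERT is not
    # idempotent and raises a duplicate-key error on the second attempt.
    # Closing this gap is what the outbox, saga, and idempotency practices
    # below are for.
```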
Red flags (consider avoiding)
- Cross-DB strong consistency requirements.
- Frequent multi-entity transactions that span both DBs.
- Independent scaling profiles (one is read-heavy, the other write-heavy) that force conflicting resource tuning.
- Team/process not mature on Sagas, idempotent handlers, and schema versioning.
If you must do it — best practices
- Clear ownership: define which entities live in DB-A vs DB-B; avoid overlap.
- Asynchronous integration: use outbox + CDC (e.g., Debezium) to project changes from the system of record to the other store (sketched after this list).
- Sagas over 2PC: orchestrate/choreograph steps and design compensating actions (a small orchestration sketch follows this list).
- Idempotency everywhere: request IDs, upserts, at-least-once consumers.
- Connection pools per DB: size and isolate them (bulkheads) so one DB’s stalls don’t starve the other (see the pool-and-readiness sketch after this list).
- Circuit breakers & timeouts: independent for each DB client; degrade gracefully (serve cached/partial data).
- Schema versioning: backward-compatible changes; blue/green or expand–contract migrations for both stores.
- Health checks: readiness should fail when access to either critical DB is impaired; liveness should not flap on transient blips.
- Secrets & IAM: separate credentials/roles; least privilege; rotate independently.
- Backups/DR tested separately: different RPO/RTO per store; document recovery order.
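A minimal sketch of the outbox practice, again with the illustrative clients from above. It assumes an outbox table with id, aggregate_id, a text payload, and a created_at default; the polling projector is only a stand-in for a real CDC pipeline such as Debezium, and it upserts by document id so at-least-once delivery is safe.

```python
import json
import uuid

def create_order_with_outbox(order_id: str, customer: str, total_cents: int) -> None:
    """Business row and outbox event are committed in ONE Postgres transaction."""
    event = {"type": "OrderCreated", "order_id": order_id,
             "customer": customer, "total_cents": total_cents}
    with pg, pg.cursor() as cur:
        cur.execute(
            "INSERT INTO orders (id, customer, total_cents) VALUES (%s, %s, %s)",
            (order_id, customer, total_cents),
        )
        cur.execute(
            "INSERT INTO outbox (id, aggregate_id, payload) VALUES (%s, %s, %s)",
            (str(uuid.uuid4()), order_id, json.dumps(event)),
        )
    # Either both rows exist or neither does; there is no dual write here.

def project_outbox(batch_size: int = 100) -> None:
    """Stand-in for CDC: in production, Debezium would stream the outbox table.
    Here a poller drains it and upserts into the search store."""
    with pg, pg.cursor() as cur:
        cur.execute(
            "SELECT id, aggregate_id, payload FROM outbox "
            "ORDER BY created_at LIMIT %s FOR UPDATE SKIP LOCKED",
            (batch_size,),
        )
        for outbox_id, aggregate_id, payload in cur.fetchall():
            # es.index overwrites the document with the same id, so replaying
            # an event after a crash converges to the same state (idempotent).
            es.index(index="orders", id=aggregate_id, document=json.loads(payload))
            cur.execute("DELETE FROM outbox WHERE id = %s", (outbox_id,))
    # If the ES call fails, the transaction rolls back and the rows stay queued;
    # if the final commit fails after indexing, the replay is harmless.
```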
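For the saga point, a minimal orchestrated sketch with a compensating action, under the same illustrative clients. There is no cross-store rollback; the “undo” is whatever the business defines (here, marking the order cancelled), and a real system would park failed compensations for retry rather than dropping them.

```python
def place_order_saga(order_id: str, customer: str, total_cents: int) -> bool:
    compensations = []                       # undo steps, run in reverse on failure
    try:
        # Step 1: record the order in the system of record, in a pending state.
        with pg, pg.cursor() as cur:
            cur.execute(
                "INSERT INTO orders (id, customer, total_cents, status) "
                "VALUES (%s, %s, %s, 'PENDING')",
                (order_id, customer, total_cents),
            )
        compensations.append(lambda: cancel_order(order_id))
        # Step 2: make it searchable (upsert by id, so retries are idempotent).
        es.index(index="orders", id=order_id,
                 document={"customer": customer, "total_cents": total_cents})
        # Step 3: confirm in the system of record.
        with pg, pg.cursor() as cur:
            cur.execute("UPDATE orders SET status = 'CONFIRMED' WHERE id = %s",
                        (order_id,))
        return True
    except Exception:
        for undo in reversed(compensations):
            try:
                undo()
            except Exception:
                pass                         # a real system parks this for retry
        return False

def cancel_order(order_id: str) -> None:
    """Compensating action: mark cancelled rather than deleting history."""
    with pg, pg.cursor() as cur:
        cur.execute("UPDATE orders SET status = 'CANCELLED' WHERE id = %s",
                    (order_id,))
```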
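And one combined sketch for the bulkhead, timeout, and health-check bullets. Pool sizes, timeouts, and the choice that both stores gate readiness are illustrative assumptions; the Elasticsearch keyword arguments assume the 8.x client.

```python
from elasticsearch import Elasticsearch
from psycopg2 import pool

# Two pools, sized and tuned independently (bulkheads): a stall on one store
# cannot exhaust the connections or threads the other store depends on.
pg_pool = pool.ThreadedConnectionPool(
    minconn=2, maxconn=10,
    dsn="dbname=orders user=orders_svc connect_timeout=2 "
        "options='-c statement_timeout=500'",    # fail fast instead of queueing
)
es = Elasticsearch("http://localhost:9200", request_timeout=2)

def readiness() -> dict:
    """Readiness gates traffic: fail it when a critical store is unreachable.
    Keep liveness separate and dumb (process is up) so transient DB blips
    do not trigger restarts."""
    status = {"postgres": False, "elasticsearch": False}
    conn = None
    try:
        conn = pg_pool.getconn()
        with conn.cursor() as cur:
            cur.execute("SELECT 1")
        status["postgres"] = True
    except Exception:
        pass                                 # report not-ready, do not crash
    finally:
        if conn is not None:
            pg_pool.putconn(conn)
    try:
        status["elasticsearch"] = bool(es.ping())
    except Exception:
        pass
    # Here both stores gate readiness; if search only serves non-critical
    # queries, consider degrading (serve Postgres-backed reads) instead.
    status["ready"] = status["postgres"] and status["elasticsearch"]
    return status
```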
Good alternatives
- Database-per-service + API composition: keep one DB per service; compose responses upstream.
- Event sourcing + projections: one source of truth, many read models (search, cache).
- Data virtualization/federation (careful with latency) for reporting-like reads.
- Use a single DB with extensions: e.g., Postgres with full-text search indexes if they’re “good enough” (example after this list).
- Read replicas of the same engine if the second DB is only for scale-out reads.
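If the single-DB alternative is on the table, Postgres full-text search is often “good enough” and removes every cross-store concern above. A sketch with illustrative table and column names, using a generated tsvector column (Postgres 12+) so the search data can never drift from the row it derives from:

```python
def setup_search(conn) -> None:
    """One-off DDL: a generated tsvector column plus a GIN index."""
    with conn, conn.cursor() as cur:
        cur.execute("""
            ALTER TABLE orders
              ADD COLUMN IF NOT EXISTS search tsvector
              GENERATED ALWAYS AS (to_tsvector('english', customer)) STORED
        """)
        cur.execute(
            "CREATE INDEX IF NOT EXISTS orders_search_idx "
            "ON orders USING gin (search)"
        )

def search_orders_in_pg(conn, text: str) -> list[tuple]:
    """Ranked free-text search without a second store."""
    with conn, conn.cursor() as cur:
        cur.execute(
            "SELECT id, customer FROM orders "
            "WHERE search @@ plainto_tsquery('english', %s) "
            "ORDER BY ts_rank(search, plainto_tsquery('english', %s)) DESC",
            (text, text),
        )
        return cur.fetchall()
```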
Quick decision checklist
- Can your business tolerate eventual consistency between the two data sets?
- Do you have the ops maturity for dual migrations, backups, and DR?
- Is there a clean boundary of responsibility per DB (system of record vs projection)?
- Do you have SLA clarity on failure of either DB and a graceful degradation story?
- Will this be temporary (migration) with an exit plan and timeline?
Bottom line
It’s fine for polyglot reads/projections or temporary migrations. For anything requiring strong cross-store consistency or frequent distributed writes, prefer database-per-service with events and keep each microservice owning exactly one system-of-record database.