Architecture

Systems I've designed, drawn.

I think in boxes and arrows before I think in code. These are the patterns I've shipped in production — and the trade-offs baked into each.

Distributed rate limiter

Sliding-window limiter in Redis, protecting a 10K-RPS API. Composite keys (tenant, user, route). Fail-soft when Redis degrades.

  • Sliding window via Redis sorted sets (ZADD + ZREMRANGEBYSCORE)
  • Atomic check-and-update via a single Lua script — no read-then-write race, no WATCH/MULTI/EXEC retry loops
  • Per-tenant, per-user, per-route composite keys
  • Local shadow cache as fail-soft when Redis is unreachable
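The sliding-window check above can be sketched in-process — this is essentially what the local shadow cache runs when Redis is unreachable, and the same trim-then-count logic the Lua script applies to a sorted set. A minimal sketch; class and key names are illustrative, not the production code.

```python
import time
from collections import defaultdict, deque


class SlidingWindowLimiter:
    """In-process sliding-window limiter: drop timestamps older than
    the window (the ZREMRANGEBYSCORE step), count what's left, then
    record the new hit (the ZADD step)."""

    def __init__(self, limit, window_s):
        self.limit = limit
        self.window_s = window_s
        self.hits = defaultdict(deque)  # composite key -> timestamps

    def allow(self, tenant, user, route, now=None):
        key = f"{tenant}:{user}:{route}"  # per-tenant, per-user, per-route
        now = time.monotonic() if now is None else now
        q = self.hits[key]
        while q and q[0] <= now - self.window_s:  # evict aged-out hits
            q.popleft()
        if len(q) >= self.limit:
            return False  # caller responds 429 + Retry-After
        q.append(now)
        return True
```

In Redis the trim, count, and add run inside one Lua script so concurrent requests can't interleave between the read and the write.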
architecture / distributed-rate-limiter.svg
Distributed Rate Limiter

Idempotent payment pipeline

Retry-safe charges using an idempotency key contract, atomic ledger writes, and an outbox-driven retry worker for PSP calls.

  • Required Idempotency-Key header (uuidv4, 24h TTL)
  • Redis hot-path dedupe + Postgres unique constraint for truth
  • Ledger write + idempotency write in the same transaction
  • Outbox pattern → retry worker with exponential backoff & DLQ
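The dedupe contract above can be sketched with in-memory stands-ins for the stores — a dict for the idempotency key store and a list for the ledger. All names are illustrative; the real version relies on a Postgres unique constraint and a single transaction, which the comments mark.

```python
import uuid


class PaymentsAPI:
    """Sketch of the idempotent charge path: check-and-set on the
    Idempotency-Key, then ledger write + key record written together
    (one dict/list update here standing in for one Postgres tx)."""

    def __init__(self):
        self.idempotency = {}  # key -> cached response (Redis + unique(idempotency_key))
        self.ledger = []       # double-entry rows

    def charge(self, idempotency_key, amount):
        if idempotency_key in self.idempotency:
            return self.idempotency[idempotency_key]  # replay: same response, no new charge
        # -- begin "transaction": ledger rows + idempotency record commit together --
        self.ledger.append(("DEBIT", "customer", amount))
        self.ledger.append(("CREDIT", "merchant", amount))
        response = {"status": "accepted", "charge_id": str(uuid.uuid4())}
        self.idempotency[idempotency_key] = response
        # -- commit; the outbox row for the PSP call is written in the same tx --
        return response
```

Because the key record commits atomically with the ledger rows, a retry after a crash either finds both (and replays the response) or neither (and safely re-executes).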
architecture / idempotent-payment-system.svg
Idempotent Payment System

Event-driven platform

Services publish aggregate events to Kafka; downstream consumers (search, analytics, notifications) each consume at their own pace.

  • Topics versioned (`orders.v1`) and partitioned by aggregate id
  • Consumer groups isolate reader pace + failure domains
  • Dead-letter topic with scheduled replay worker
  • Schema registry keeps producers and consumers honest
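Partitioning by aggregate id is what preserves per-aggregate ordering: every event for one order hashes to the same partition, so any single consumer sees that order's events in sequence. A dependency-free sketch (Kafka's default partitioner hashes the key with murmur2; md5 here just keeps the example self-contained):

```python
import hashlib


def partition_for(aggregate_id, num_partitions):
    """Stable key -> partition mapping. Same aggregate id always lands
    on the same partition, so its events stay ordered relative to each
    other, while different aggregates spread across the cluster."""
    digest = hashlib.md5(aggregate_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions
```

The trade-off: ordering holds only within a partition, so anything that must be strictly sequenced has to share a key.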
architecture / event-driven-architecture.svg
Event-Driven Architecture

Circuit breakers per dependency

One breaker per remote dependency — not per service. Jittered recovery windows. Fallbacks only for safe reads.

  • Per-dependency breakers surface the real culprit
  • Breaker timeout < caller timeout (or it hides failures)
  • Jittered half-open windows to avoid recovery stampedes
  • Fallbacks only for non-authoritative reads; writes fail loud
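The state machine above fits in a small class. A sketch with an injectable clock (parameter names are mine, not a library's): closed until N consecutive failures, then open for a jittered cooldown, then half-open for a single probe.

```python
import random
import time


class CircuitBreaker:
    """Per-dependency breaker. The jitter on the cooldown keeps a
    fleet of instances from probing a recovering dependency in
    lockstep and knocking it straight back over."""

    def __init__(self, threshold=5, cooldown_s=30.0, jitter_s=10.0,
                 clock=time.monotonic):
        self.threshold = threshold
        self.cooldown_s = cooldown_s
        self.jitter_s = jitter_s
        self.clock = clock
        self.failures = 0
        self.opened_at = None
        self.wait = 0.0

    def state(self):
        if self.opened_at is None:
            return "closed"
        if self.clock() - self.opened_at >= self.wait:
            return "half-open"  # let one probe request through
        return "open"           # fail fast; safe reads may fall back

    def record_success(self):
        self.failures = 0
        self.opened_at = None

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.threshold:
            self.opened_at = self.clock()
            self.wait = self.cooldown_s + random.uniform(0, self.jitter_s)
```

One instance of this per remote dependency, never per service — otherwise a single slow vendor opens the breaker for healthy calls too.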
architecture / microservices-with-circuit-breakers.svg
Microservices with Circuit Breakers

NFT marketplace on Terra

CosmWasm contracts (Rust) on-chain; an off-chain indexer projects chain events into Postgres for fast queries and a responsive UX.

  • On-chain: mint, escrow, royalties in CosmWasm
  • Off-chain indexer subscribes to chain events
  • Postgres projections enable joined queries and fast reads
  • Cache hot listings; IPFS for token metadata
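The indexer's core job is a fold: replay chain events into a read model the API can query with joins. A sketch of that projection with a dict standing in for the Postgres `listings` table; the event shapes are illustrative, not the contract's actual schema.

```python
def project(events):
    """Fold on-chain events (oldest first) into the listings read
    model. Replayable: re-running from event 0 rebuilds the same
    state, which is how the indexer recovers after a reindex."""
    listings = {}
    for ev in events:
        if ev["type"] == "mint":
            listings[ev["token_id"]] = {"owner": ev["owner"], "listed": False}
        elif ev["type"] == "list":
            listings[ev["token_id"]]["listed"] = True
            listings[ev["token_id"]]["price"] = ev["price"]
        elif ev["type"] == "sale":
            listings[ev["token_id"]].update(owner=ev["buyer"], listed=False)
    return listings
```

The chain stays the source of truth; the projection is disposable, which is what makes schema changes on the read side cheap.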
architecture / nft-marketplace.svg
NFT Marketplace
Let's build

Have a system that needs to scale — or stop breaking?

I work with a small number of teams each month on architecture reviews, scaling, and hands-on backend engineering. If that sounds like you, let's talk.