Architecture Overview

The Big Picture

FloodLine runs as five Docker containers (nginx, frontend, backend, worker, and postgres), orchestrated by Docker Compose:

Browser ──▶ nginx:80
              ├── /api/*  ──▶ backend:8000  (FastAPI)
              └── /*      ──▶ frontend:3000 (Next.js)

worker (APScheduler) ──▶ postgres:5432
backend              ──▶ postgres:5432

There is no Redis, no Celery, no Kubernetes. The entire system runs on a single machine.

Why These Choices

Why no Redis?

JWT authentication is stateless — the token lives in an httpOnly cookie and the backend just verifies its signature. There's no session store to share between processes. The only scheduled work (weather checks every 30 min) runs inside APScheduler in the worker container, which doesn't need a message broker.

Why no Celery/task queue?

The only background work is checking the weather for each ZIP code and sending an SMS if it's raining. APScheduler triggers this every 30 minutes. The worker processes all ZIP codes in one batch, using asyncio.gather() for parallelism. If the system ever needs heavy async work (video transcoding, bulk email), that's when you'd add Redis + Celery.
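That batch can be sketched with plain asyncio; the weather check and SMS send below are hypothetical stand-ins for the real Open-Meteo and Twilio calls:

```python
import asyncio

async def check_precipitation(zip_code: str) -> float:
    await asyncio.sleep(0)  # placeholder for the Open-Meteo HTTP call
    return 0.8 if zip_code == "10001" else 0.0

async def alert_if_raining(zip_code: str) -> bool:
    mm = await check_precipitation(zip_code)
    if mm > 0:
        # placeholder for the Twilio SMS send
        return True
    return False

async def run_batch(zip_codes: list[str]) -> list[bool]:
    # One batch per 30-minute tick: all ZIP codes checked concurrently.
    return await asyncio.gather(*(alert_if_raining(z) for z in zip_codes))

print(asyncio.run(run_batch(["10001", "07030", "11201"])))  # [True, False, False]
```

In the real worker, APScheduler would call run_batch() on its 30-minute schedule; no broker or queue is involved.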

Why a separate worker container?

The worker runs the same backend Python code but with a different entrypoint (python -m app.worker instead of uvicorn app.main:app). This keeps the web server and cron jobs isolated — the worker can't crash the API, and vice versa. It's the same Docker image, just a different command in docker-compose.yml.
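In docker-compose.yml terms, that looks roughly like this (the build paths are assumptions; the two commands are the ones mentioned above):

```yaml
services:
  backend:
    build: ./backend
    command: uvicorn app.main:app --host 0.0.0.0 --port 8000
  worker:
    build: ./backend          # same image as the backend
    command: python -m app.worker
```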

Why nginx in front?

Nginx handles routing: API calls go to the backend, everything else goes to Next.js. In production, it also handles SSL termination. Without nginx, you'd need to expose both ports (3000 and 8000) to the internet and deal with CORS headaches. With nginx, the browser sees a single origin on port 80.
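A minimal sketch of that routing, omitting SSL and proxy headers (the service names match the diagram above; everything else is illustrative):

```nginx
server {
    listen 80;

    location /api/ {
        proxy_pass http://backend:8000;
    }

    location / {
        proxy_pass http://frontend:3000;
    }
}
```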

Why Next.js instead of a plain React SPA?

Next.js gives us the App Router for file-based routing, server-side rendering for SEO (landing page), and output: 'standalone' for small Docker images. We use it mostly as a client-side app (most pages are "use client"), but the framework handles routing, code-splitting, and build optimization.

Multi-Tenancy

Multi-tenancy is column-based. Every user has an optional bid_id foreign key pointing to the bids table. When a BID manager logs in, their queries are filtered by WHERE bid_id = :their_bid_id. This is the simplest approach — no separate schemas, no separate databases, just a column filter. It scales to hundreds of BIDs.
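A self-contained sketch of the column filter, using an illustrative sqlite3 schema (the real app uses PostgreSQL, and every table and column name here except bid_id is hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE bids (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE reports (
        id INTEGER PRIMARY KEY,
        bid_id INTEGER REFERENCES bids(id),
        note TEXT
    );
    INSERT INTO bids VALUES (1, 'Downtown'), (2, 'Riverside');
    INSERT INTO reports VALUES
        (1, 1, 'flooded curb'), (2, 2, 'clear'), (3, 1, 'ponding');
""")

def reports_for_bid(bid_id: int) -> list[str]:
    # Every tenant-scoped query carries the same column filter.
    rows = conn.execute(
        "SELECT note FROM reports WHERE bid_id = ?", (bid_id,)
    ).fetchall()
    return [note for (note,) in rows]

print(reports_for_bid(1))  # ['flooded curb', 'ponding']
```

The trade-off versus schema-per-tenant isolation: one forgotten WHERE clause leaks data across tenants, so the filter belongs in a shared query helper rather than scattered across endpoints.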

Container Communication

Containers talk to each other by service name (Docker's internal DNS):

From      To          How
nginx     frontend    http://frontend:3000
nginx     backend     http://backend:8000
backend   postgres    postgresql://...@postgres:5432/floodline
worker    postgres    same connection string as the backend
worker    Twilio      HTTPS (external)
worker    Open-Meteo  HTTPS (external)
backend   AWS S3      HTTPS (external, presigned URLs)
The browser never talks to the backend or frontend directly — everything goes through nginx on port 80.

External Services

Service      Purpose                                    Why this one
Twilio       SMS verification codes + weather alerts    Industry standard, reliable delivery
AWS S3       Photo/video storage via presigned URLs     Client uploads directly to S3; backend never touches the file bytes
Open-Meteo   Weather data (precipitation)               Free, no API key needed, reliable
Sentry       Error monitoring                           Catches backend exceptions in production
