What Multi-Tenancy Actually Means
Multi-tenancy is an architectural pattern where a single application instance serves multiple customers (tenants), with each tenant’s data isolated from the others. It’s the foundation of how virtually every B2B SaaS product works — one deployed application, thousands of separate business accounts, each experiencing the product as if they have their own dedicated instance.
Getting multi-tenancy right from the start determines how well your SaaS scales, how securely customer data is isolated, and how complex your database operations become as the business grows. Getting it wrong means either rebuilding your architecture at scale — expensive and risky — or having data leaks between tenants, which is catastrophic for trust and potentially a regulatory violation.
This guide covers the three main multi-tenancy patterns, when to use each, and how we implement it at Zargham Labs using FastAPI and PostgreSQL — the same stack powering Messenjo, our multi-tenant WhatsApp Business automation platform.
The Three Multi-Tenancy Patterns
Pattern 1 — Separate database per tenant. Each customer gets their own database. Maximum isolation: a bug in one tenant’s data cannot physically reach another tenant’s records. Compliance is straightforward — point to a specific database for a GDPR deletion request. The downside is operational complexity that scales linearly with customer count. Managing 500 databases, running migrations across all of them, and monitoring each separately is a serious DevOps burden. This pattern is appropriate for enterprise customers paying significant monthly fees who contractually require complete isolation, but it’s not viable for high-volume, lower-ticket SaaS.
Pattern 2 — Separate schema per tenant (PostgreSQL). All tenants share one database, but each gets their own schema — a namespace within the database. Tables like tenant_abc.messages and tenant_xyz.messages coexist in the same database without interfering. Migrations become more manageable (one migration file, run once per schema). This is a reasonable middle ground for B2B SaaS with moderate tenant counts — typically up to a few hundred before operational complexity becomes a significant burden.
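To make the namespacing concrete, here is a minimal sketch of the schema-per-tenant flow in PostgreSQL (the `tenant_abc` name and `messages` table come from the example above; the column list is illustrative):

```sql
-- Each tenant gets its own namespace inside the shared database.
CREATE SCHEMA tenant_abc;
CREATE TABLE tenant_abc.messages (
    id   uuid PRIMARY KEY,
    body text NOT NULL
);

-- Per-connection routing: point the session at one tenant's schema so
-- unqualified table names resolve to that tenant's tables.
SET search_path TO tenant_abc;
SELECT count(*) FROM messages;  -- reads tenant_abc.messages
```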
Pattern 3 — Shared schema with tenant ID column. All tenants share the same tables. Every row has a tenant_id (or workspace_id) column. Isolation is enforced either at the application layer (every query includes WHERE tenant_id = :current_tenant) or at the database layer using PostgreSQL’s Row Level Security (RLS). This is the most operationally efficient pattern — one set of tables, one migration, trivial to query across tenants for platform analytics. It’s also the most dangerous if your application has a bug that forgets to filter by tenant. This is how most high-scale SaaS products work at their core.
What We Use: Shared Schema with PostgreSQL Row Level Security
For Messenjo and the SaaS products we build at Zargham Labs, we use the shared schema with RLS pattern. Here is why and how it works in practice.
PostgreSQL’s Row Level Security lets you define policies at the database level that automatically filter rows based on the current session’s tenant context. This means even if your application code forgets a WHERE clause — due to a bug, a new developer, or an edge case — the database itself will not return data from other tenants. The isolation guarantee moves from “hopefully our application code is always correct” to “the database enforces this regardless of application logic.”
The setup looks like this: every table has a workspace_id UUID column (we use “workspace” rather than “tenant” in our domain model). You enable RLS on each table with ALTER TABLE messages ENABLE ROW LEVEL SECURITY, then define a policy that compares workspace_id to a PostgreSQL session variable: CREATE POLICY workspace_isolation ON messages USING (workspace_id = current_setting('app.current_workspace_id')::uuid).
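Putting those pieces together, the DDL would look roughly like this (a sketch assuming the `messages` table and the `app.current_workspace_id` setting described above):

```sql
ALTER TABLE messages ENABLE ROW LEVEL SECURITY;
-- Table owners bypass RLS by default; FORCE subjects even the owner to the policy.
ALTER TABLE messages FORCE ROW LEVEL SECURITY;

-- The second argument to current_setting (missing_ok = true) makes an unset
-- tenant context return NULL, so a request with no context sees zero rows
-- instead of raising an error -- fail closed either way.
CREATE POLICY workspace_isolation ON messages
    USING (workspace_id = current_setting('app.current_workspace_id', true)::uuid);
```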
In FastAPI, we set this session variable in a database session dependency that reads the authenticated user’s workspace from their JWT, then executes SET LOCAL app.current_workspace_id = '{workspace_id}' at the start of each database transaction. Every query in that request automatically inherits the correct tenant context without any individual developer needing to remember to add a WHERE clause.
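A minimal sketch of that dependency follows. Only the SQL-building helper is concrete; `get_current_workspace_id` and `engine` are placeholders standing in for your auth dependency and SQLAlchemy engine:

```python
# Per-request tenant context injection, sketched.
import uuid


def workspace_context_sql(workspace_id: str) -> str:
    """Build the SET LOCAL statement, validating the ID so it cannot inject SQL."""
    wid = uuid.UUID(workspace_id)  # raises ValueError for anything but a UUID
    return f"SET LOCAL app.current_workspace_id = '{wid}'"


# In a FastAPI dependency this would be used roughly like:
#
# async def db_session(workspace_id: str = Depends(get_current_workspace_id)):
#     async with engine.begin() as conn:
#         await conn.execute(text(workspace_context_sql(workspace_id)))
#         yield conn  # every query in this transaction carries the tenant context
```

Validating the workspace ID as a UUID before interpolating it is what makes the `SET LOCAL` safe to build as a string; `SET LOCAL` itself scopes the setting to the current transaction, so nothing leaks between pooled connections.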
Tenant Identification: How to Route Requests to the Right Workspace
Before applying tenant context, you need to know which tenant is making the request. There are three common approaches:
Subdomain routing (acme.yourapp.com) — The tenant is extracted from the subdomain. Clean UX, easy to white-label. The downside is DNS complexity and wildcard SSL certificate management at scale.
Path prefix routing (yourapp.com/workspace/acme/) — Simpler infrastructure, no DNS changes needed. Less professional appearance and harder to white-label for enterprise customers.
JWT claims — The tenant ID is embedded in the authentication token at login time. No routing logic needed — every authenticated request carries its own tenant context. This is our preferred approach. When a user authenticates, their JWT includes workspace_id. The FastAPI dependency chain reads the token, extracts the workspace, and sets the database context. Simple, stateless, and works identically for web, mobile, and API clients.
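Extracting the claim is a few lines. This sketch only decodes the payload to show where the tenant context lives; in production the token's signature must be verified first with a real JWT library (e.g. PyJWT):

```python
import base64
import json


def workspace_from_jwt(token: str) -> str:
    """Pull the workspace_id claim out of a JWT payload.

    Sketch only: signature verification is deliberately omitted here and
    must be done (e.g. via PyJWT) before trusting any claim.
    """
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims["workspace_id"]
```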
The Mistakes That Force a Rebuild
Forgetting to scope background jobs. Your Celery workers, cron jobs, and async tasks need tenant context just as much as your API endpoints do. A background job that sends WhatsApp messages, generates reports, or syncs data must explicitly know which tenant it’s operating for. We pass workspace IDs explicitly in task payloads — never rely on a global context variable in async workers, where request context doesn’t exist.
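The rule can be sketched with a hypothetical payload builder: tenant context rides in the task payload itself, and enqueueing without it fails loudly:

```python
# Hypothetical helper: every task payload must carry an explicit workspace_id,
# never rely on a global context variable that async workers can't see.
def build_task_payload(workspace_id: str, action: str, **kwargs) -> dict:
    if not workspace_id:
        raise ValueError("refusing to enqueue a task without an explicit workspace_id")
    return {"workspace_id": workspace_id, "action": action, "args": kwargs}


# A Celery worker then re-establishes tenant context from the payload, roughly:
#
# @app.task
# def send_whatsapp_message(payload):
#     with tenant_session(payload["workspace_id"]) as db:  # sets the RLS context
#         ...
```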
Shared file storage without tenant scoping. If tenants upload files, file paths must include the workspace ID: uploads/workspace_123/document.pdf, not uploads/document.pdf. Use pre-signed URLs with path validation to prevent tenants from constructing URLs to access other tenants’ files.
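A hypothetical key builder shows both halves of that rule, using the `uploads/workspace_<id>/` layout from the example above:

```python
from pathlib import PurePosixPath


def tenant_object_key(workspace_id: str, filename: str) -> str:
    """Build a storage key scoped to one workspace, rejecting path traversal."""
    name = PurePosixPath(filename).name  # drops any directory components, incl. '..'
    if not name:
        raise ValueError("invalid filename")
    return f"uploads/workspace_{workspace_id}/{name}"
```

Reducing the user-supplied filename to its final component means a crafted name like `../workspace_999/secret.pdf` still lands inside the caller's own prefix.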
Platform-level analytics bypassing RLS incorrectly. Internal dashboards (total messages sent today across all tenants) need to bypass RLS. Create a dedicated database role with elevated permissions specifically for analytics queries and internal tooling. Never expose that role through the public-facing API.
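In PostgreSQL that dedicated role looks roughly like this (role name and grants are illustrative):

```sql
-- Internal-only role; BYPASSRLS lets it read across all tenants.
-- Its credentials must never reach the public-facing API.
CREATE ROLE analytics_reader LOGIN BYPASSRLS PASSWORD '...';
GRANT SELECT ON ALL TABLES IN SCHEMA public TO analytics_reader;
```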
Enforcing plan limits only at the API controller layer. If a tenant’s plan allows 1,000 contacts and you only check this in your REST API controllers, a webhook handler, background job, or internal admin tool can bypass the limit. Enforce usage limits in your service layer so they apply across all code paths.
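A service-layer check can be as small as this sketch; the 1,000-contact cap mirrors the example above, while `PLAN_LIMITS` and the function names are hypothetical:

```python
# Limits live in the service layer so the API, webhook handlers, background
# jobs, and admin tools all pass through the same check.
PLAN_LIMITS = {"starter": 1_000, "pro": 10_000}


class PlanLimitExceeded(Exception):
    pass


def assert_within_contact_limit(plan: str, current_count: int) -> None:
    """Raise if adding one more contact would exceed the workspace's plan."""
    limit = PLAN_LIMITS[plan]
    if current_count >= limit:
        raise PlanLimitExceeded(f"plan '{plan}' allows {limit} contacts")
```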
Build It Right From Day One
Multi-tenancy architecture decisions made on day one are genuinely expensive to change at 10,000 tenants. The shared schema plus RLS pattern gives you operational simplicity at scale, database-enforced isolation guarantees, and the flexibility to move high-value enterprise customers to dedicated schemas or databases if contractually required later.
The total effort to implement RLS properly from the start — table policies, session variable injection, worker context passing — is roughly one week of focused engineering work. The cost of retrofitting isolation into a schema that was built without it is measured in months and carries significant risk of regressions.
At Zargham Labs, we’ve implemented this architecture across multiple SaaS products. If you’re building a new SaaS and want the database architecture, FastAPI patterns, and Celery worker setup done correctly from the start, our SaaS development team builds production-grade multi-tenant applications — typically delivered in 8 weeks. We also offer dedicated FastAPI developers for teams that need to move fast without compromising on architectural quality.
