Migrating a Monolith to Microservices: A Case Study
The Starting Point
The monolith was a 250,000-line Django application handling catalog management, order processing, payment, shipping, notifications, and customer accounts. Deployments required 45 minutes and full regression testing. The team had grown to 12 engineers, and merge conflicts were frequent. The database schema had evolved over five years into a tangle of cross-domain foreign keys that made independent changes risky.
- 250K lines of Python in a single Django project
- 45-minute deployment pipeline with manual QA gates
- Single PostgreSQL database with 180 tables and complex cross-domain joins
- 12 engineers experiencing frequent merge conflicts and deployment queues
Domain Discovery and Bounded Contexts
Before writing any new code, the team spent two weeks mapping the domain using Event Storming workshops. They identified six bounded contexts: Catalog, Orders, Payments, Shipping, Notifications, and Customer Identity. Each context had clear aggregate roots, domain events, and well-defined interactions with other contexts. The key insight was that several database joins that seemed essential were actually cross-context queries that could be replaced with eventual consistency.
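The shift from cross-context joins to eventual consistency can be sketched in Python. This is an illustrative example, not code from the case study: the `OrderPlaced` event and `CustomerOrderHistory` read model are hypothetical names showing how a consuming context keeps its own copy of the data it needs, updated as domain events arrive, instead of joining into another context's tables.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical domain event published by the Orders context.
@dataclass(frozen=True)
class OrderPlaced:
    order_id: str
    customer_id: str
    line_items: tuple  # (sku, quantity) pairs
    placed_at: datetime

# A consuming context (e.g. Notifications) maintains its own read model
# rather than joining into the Orders tables. The model becomes
# consistent eventually, as events arrive, not transactionally.
class CustomerOrderHistory:
    def __init__(self):
        self._orders_by_customer = {}

    def apply(self, event: OrderPlaced) -> None:
        # Each event updates the local projection owned by this context.
        self._orders_by_customer.setdefault(event.customer_id, []).append(
            event.order_id
        )

    def orders_for(self, customer_id: str) -> list:
        return self._orders_by_customer.get(customer_id, [])
```

The trade-off is staleness: between an order being placed and the event being processed, the read model lags the source of truth, which is exactly the eventual consistency the team accepted in place of cross-context joins.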
First Service: Notifications
The team chose Notifications as the first extraction target because it:
- had the fewest inbound dependencies
- was primarily event-driven (reacting to order events, payment events, etc.)
- owned no database tables that other domains read
- carried low-impact failure modes (a delayed email is acceptable; a lost order is not)

They built a new Node.js service that consumed events from the monolith via a shared message queue, applied the Strangler Fig pattern with a feature flag to switch between the monolith's notification code and the new service, and ran both implementations in parallel for three weeks, comparing outputs before decommissioning the monolith code.
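A minimal sketch of the flag-gated cutover with a shadow comparison, in Python for consistency with the rest of this article (the real service was Node.js). All names here are hypothetical: `FLAGS`, `legacy_notifier`, and `new_service_client` stand in for whatever flag store and clients the team actually used.

```python
import logging

logger = logging.getLogger("notification-cutover")

# Hypothetical in-process flag store; in practice this might be a
# config service or a database-backed feature-flag table.
FLAGS = {"use_notification_service": False}

def send_order_confirmation(order, legacy_notifier, new_service_client):
    """Strangler Fig cutover point for one notification path.

    While the flag is off, the monolith's code stays authoritative, but
    the new service runs in shadow mode so outputs can be compared.
    """
    legacy_result = legacy_notifier.render_confirmation(order)
    try:
        new_result = new_service_client.render_confirmation(order)
        if new_result != legacy_result:
            # Mismatches are logged, not raised: the parallel run is
            # for gathering evidence, not for failing requests.
            logger.warning("notification mismatch for order %s", order["id"])
    except Exception:
        logger.exception("shadow call failed for order %s", order["id"])
        new_result = None

    if FLAGS["use_notification_service"] and new_result is not None:
        return new_result
    return legacy_result
```

Once weeks of parallel running show no mismatches, the flag is flipped, and after a safe interval the legacy branch is deleted, completing the strangulation.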
Always extract the least risky service first. Your team needs to build operational muscle — service discovery, observability, deployment pipelines — before tackling critical business domains.
Solving the Shared Database Problem
The hardest part of the migration was untangling the shared database. The team established a rule: each service owns its data exclusively. No other service reads from or writes to another service's tables. Cross-service data access happens through APIs or events. They implemented this gradually using database views to provide read access during the transition period, eventually replacing views with API calls. The process took six months for the core domains.
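The view-then-API transition can be hidden behind a single read interface, so callers never notice which phase is active. This is a sketch under assumed names (`customer_identity_v`, `CustomerReader`, and both implementations are illustrative, not from the case study): phase 1 reads a database view exposed by the owning context, phase 2 swaps in an API client behind the same contract.

```python
from abc import ABC, abstractmethod

class CustomerReader(ABC):
    """Read contract other services depend on; the backing
    implementation changes across migration phases."""

    @abstractmethod
    def email_for(self, customer_id: str) -> str: ...

class ViewBackedCustomerReader(CustomerReader):
    """Phase 1: reads a transitional database view (here called
    customer_identity_v) rather than the owning service's raw tables."""

    def __init__(self, connection):
        self._conn = connection  # DB-API style connection

    def email_for(self, customer_id: str) -> str:
        row = self._conn.execute(
            "SELECT email FROM customer_identity_v WHERE id = %s",
            (customer_id,),
        ).fetchone()
        return row[0]

class ApiBackedCustomerReader(CustomerReader):
    """Phase 2: same contract, now backed by a call to the
    Customer Identity service's API."""

    def __init__(self, client):
        self._client = client

    def email_for(self, customer_id: str) -> str:
        return self._client.get(f"/customers/{customer_id}")["email"]
```

Because every consumer depends only on `CustomerReader`, retiring the view is a one-line swap of the injected implementation rather than a hunt through the codebase for raw SQL.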
Resist the temptation to share a database between services 'temporarily.' Shared databases create tight coupling that undermines every benefit of microservices. Invest in proper API contracts from the start.