Refactoring a monolith into microservices  |  Cloud Architecture Center  |  Google Cloud (2022)

This reference guide is the second in a four-part series about designing, building, and deploying microservices. This series describes the various elements of a microservices architecture. The series includes information about the benefits and drawbacks of the microservices architecture pattern, and how to apply it.

  1. Introduction to microservices
  2. Refactoring a monolith into microservices (this document)
  3. Interservice communication in a microservices setup
  4. Distributed tracing in a microservices application

This series is intended for application developers and architects who design and implement the migration to refactor a monolith application to a microservices application.

The process of transforming a monolithic application into microservices is a form of application modernization. To accomplish application modernization, we recommend that you don't refactor all of your code at the same time. Instead, we recommend that you incrementally refactor your monolithic application. When you incrementally refactor an application, you gradually build a new application that consists of microservices, and run the application along with your monolithic application. This approach is also known as the Strangler Fig pattern. Over time, the amount of functionality that is implemented by the monolithic application shrinks until either it disappears entirely or it becomes another microservice.

To decouple capabilities from a monolith, you have to carefully extract the capability's data, logic, and user-facing components, and redirect them to the new service. It's important that you have a good understanding of the problem space before you move into the solution space.

When you understand the problem space, you understand the natural boundaries in the domain that provide the right level of isolation. We recommend that you create larger services instead of smaller services until you thoroughly understand the domain.

Defining service boundaries is an iterative process. Because this process is a non-trivial amount of work, you need to continuously evaluate the cost of decoupling against the benefits that you get. The following factors can help you evaluate how you approach decoupling a monolith:

  • Avoid refactoring everything all at once. To prioritize service decoupling, evaluate cost versus benefits.
  • Services in a microservices architecture are organized around business concerns, not technical concerns.
  • When you incrementally migrate services, configure communication between services and the monolith to go through well-defined API contracts.
  • Microservices require much more automation: think in advance about continuous integration (CI), continuous deployment (CD), centralized logging, and monitoring.

The following sections discuss various strategies to decouple services and incrementally migrate your monolithic application.

Decouple by domain-driven design

Microservices should be designed around business capabilities, not horizontal layers such as data access or messaging. Microservices should also have loose coupling and high functional cohesion. Microservices are loosely coupled if you can change one service without requiring other services to be updated at the same time. A microservice is cohesive if it has a single, well-defined purpose, such as managing user accounts or processing payments.

Domain-driven design (DDD) requires a good understanding of the domain for which the application is written. The necessary domain knowledge to create the application resides within the people who understand it: the domain experts.

You can apply the DDD approach retroactively to an existing application as follows:

  1. Identify a ubiquitous language: a common vocabulary that is shared between all stakeholders. As a developer, it's important to use terms in your code that a non-technical person can understand. What your code is trying to achieve should be a reflection of your company processes.
  2. Identify the relevant modules in the monolithic application, and then apply the common vocabulary to those modules.
  3. Define bounded contexts where you apply explicit boundaries to the identified modules with clearly defined responsibilities. The bounded contexts that you identify are candidates to be refactored into smaller microservices.
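For example, step 3 can be sketched in code: the same ubiquitous-language term, such as product, gets a separate model in each bounded context. The class and field names below are hypothetical, not taken from any particular application:

```python
from dataclasses import dataclass

# In the Ordering context, a "product" is something orderable with a price.
@dataclass
class OrderingProduct:
    sku: str
    unit_price_cents: int

# In the Inventory context, the same term means something countable on a shelf.
@dataclass
class InventoryProduct:
    sku: str
    quantity_on_hand: int

# The sku is the identifier that crosses the context boundary;
# everything else stays private to its own context.
ordering_view = OrderingProduct(sku="ABC-1", unit_price_cents=1299)
inventory_view = InventoryProduct(sku="ABC-1", quantity_on_hand=42)
```

Each bounded context owns only the attributes it needs, which is what later lets each microservice own its own schema.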

The following diagram shows how you can apply bounded contexts to an existing ecommerce application:


Figure 1. Application capabilities are separated into bounded contexts that migrate to services.

In figure 1, the ecommerce application's capabilities are separated into bounded contexts and migrated to services as follows:

  • Order management and fulfillment capabilities are bound into the following categories:
    • The order management capability migrates to the order service.
    • The logistics delivery management capability migrates to the delivery service.
    • The inventory capability migrates to the inventory service.
  • Accounting capabilities are bound into a single category:
    • The consumer, sellers, and third-party capabilities are bound together and migrate to the account service.

Prioritize services for migration

An ideal starting point to decouple services is to identify the loosely coupled modules in your monolithic application. You can choose a loosely coupled module as one of the first candidates to convert to a microservice. To complete a dependency analysis of each module, look at the following:

  • The type of the dependency: dependencies on data or on other modules.
  • The scale of the dependency: how a change in the identified module might impact other modules.

Migrating a module with heavy data dependencies is usually a nontrivial task. If you migrate features first and migrate the related data later, you might temporarily read from and write data to multiple databases. Therefore, you must account for data integrity and synchronization challenges.

We recommend that you extract modules that have different resource requirements compared to the rest of the monolith. For example, if a module has an in-memory database, you can convert it into a service, which can then be deployed on hosts with higher memory. When you turn modules with particular resource requirements into services, you can make your application much easier to scale.

From an operations standpoint, refactoring a module into its own service also means adjusting your existing team structures. The best path to clear accountability is to empower small teams that own an entire service.

Additional factors that can affect how you prioritize services for migration include business criticality, comprehensive test coverage, security posture of the application, and organizational buy-in. Based on your evaluations, you can rank services, as described in the first document in this series, by the benefit that you receive from refactoring.

Extract a service from a monolith

After you identify the ideal service candidate, you must identify a way for both microservice and monolithic modules to coexist. One way to manage this coexistence is to introduce an inter-process communication (IPC) adapter, which can help the modules work together. Over time, the microservice takes on the load and eliminates the monolithic component. This incremental process reduces the risk of moving from the monolithic application to the new microservice because you can detect bugs or performance issues in a gradual fashion.

The following diagram shows how to implement the IPC approach:


Figure 2. An IPC adapter coordinates communication between the monolithic application and a microservices module.

In figure 2, module Z is the service candidate that you want to extract from the monolithic application. Modules X and Y are dependent upon module Z. Modules X and Y use an IPC adapter in the monolithic application to communicate with the new module Z microservice through a REST API.
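A minimal sketch of such an IPC adapter, assuming a hypothetical get_order_total operation on module Z: modules X and Y depend only on a port interface, so they don't care whether module Z still lives in the monolith or behind a REST API. The transport is injected, so a real HTTP client is out of scope here:

```python
from abc import ABC, abstractmethod

class ModuleZPort(ABC):
    """Interface that modules X and Y depend on instead of calling Z directly."""
    @abstractmethod
    def get_order_total(self, order_id: str) -> int: ...

class InProcessAdapter(ModuleZPort):
    """Delegates to module Z while it still lives inside the monolith."""
    def __init__(self, legacy_module):
        self._legacy = legacy_module
    def get_order_total(self, order_id):
        return self._legacy.total(order_id)

class RestAdapter(ModuleZPort):
    """Delegates to the extracted module Z microservice over a REST API.
    The transport function is injected so the adapter stays testable."""
    def __init__(self, http_get):
        self._http_get = http_get
    def get_order_total(self, order_id):
        return int(self._http_get(f"/orders/{order_id}/total"))

# Callers only see the port, so swapping implementations needs no code change.
def module_x_report(z: ModuleZPort, order_id: str) -> str:
    return f"order {order_id} total: {z.get_order_total(order_id)}"

# Hypothetical transport stub standing in for a real HTTP client:
report = module_x_report(RestAdapter(lambda path: "100"), "42")
```

Switching from InProcessAdapter to RestAdapter is the moment the monolithic component is eliminated, without touching modules X and Y.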

The next document in this series, Interservice communication in a microservices setup, describes the Strangler Fig pattern and how to deconstruct a service from the monolith.

Manage a monolithic database

Typically, monolithic applications have their own monolithic databases. One of the principles of a microservices architecture is to have one database for each microservice. Therefore, when you modernize your monolithic application into microservices, you must split the monolithic database based on the service boundaries that you identify.

To determine where to split a monolithic database, first analyze the database mappings. As part of the service extraction analysis, you gathered some insights on the microservices that you need to create. You can use the same approach to analyze database usage and to map tables or other database objects to the new microservices. Tools like SchemaCrawler, SchemaSpy, and ERBuilder can help you to perform such an analysis. Mapping tables and other objects helps you to understand the coupling between database objects that spans across your potential microservices boundaries.

However, splitting a monolithic database is complex because there might not be clear separation between database objects. You also need to consider other issues, such as data synchronization, transactional integrity, joins, and latency. The next section describes patterns that can help you respond to these issues when you split your monolithic database.

Reference tables

In monolithic applications, it's common for modules to access required data from a different module through an SQL join to the other module's table. The following diagram uses the previous ecommerce application example to show this SQL join access process:


Figure 3. A module joins data to a different module's table.

In figure 3, to get product information, an order module uses a product_id foreign key to join an order to the products table.
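Assuming hypothetical orders and products table definitions, the join in figure 3 can be sketched with an in-memory SQLite database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE products (product_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders   (order_id INTEGER PRIMARY KEY, product_id INTEGER,
                           FOREIGN KEY (product_id) REFERENCES products (product_id));
    INSERT INTO products VALUES (1, 'widget');
    INSERT INTO orders   VALUES (10, 1);
""")

# The order module reaches directly into the product module's table --
# exactly the coupling that must be broken when the modules become services.
row = conn.execute("""
    SELECT o.order_id, p.name
    FROM orders o JOIN products p ON o.product_id = p.product_id
""").fetchone()
```

Once the products table moves into the product service's own database, this query can no longer run, which is why the alternatives below exist.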

However, if you deconstruct modules as individual services, we recommend that you don't have the order service directly call the product service's database to run a join operation. The following sections describe options that you can consider to segregate the database objects.

Share data through an API

When you separate the core functionalities or modules into microservices, you typically use APIs to share and expose data. The referenced service exposes data as an API that the calling service needs, as shown in the following diagram:

Refactoring a monolith into microservices | Cloud Architecture Center | Google Cloud (4)

Figure 4. A service uses an API call to get data from another service.

In figure 4, an order module uses an API call to get data from a product module. This implementation has obvious performance issues due to additional network and database calls. However, sharing data through an API works well when data size is limited. Also, if the called service is returning data that has a well-known rate of change, you can implement a local TTL cache on the caller to reduce network requests to the called service.
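A minimal sketch of such a TTL cache, with the clock and the fetch function injected (both are hypothetical stand-ins for a real clock and a real API call):

```python
import time

class TtlCache:
    """Caches responses from a called service for ttl_seconds, so the caller
    avoids a network round trip for slowly changing data."""
    def __init__(self, fetch, ttl_seconds=60.0, clock=time.monotonic):
        self._fetch = fetch      # function that performs the real API call
        self._ttl = ttl_seconds
        self._clock = clock
        self._entries = {}       # key -> (expires_at, value)

    def get(self, key):
        now = self._clock()
        entry = self._entries.get(key)
        if entry is not None and entry[0] > now:
            return entry[1]      # cache hit: no call to the product service
        value = self._fetch(key)  # miss or expired: call the service
        self._entries[key] = (now + self._ttl, value)
        return value

calls = []
def fetch_product(product_id):
    calls.append(product_id)     # stands in for the real API call
    return {"id": product_id, "price": 10}

clock_now = [0.0]
cache = TtlCache(fetch_product, ttl_seconds=60, clock=lambda: clock_now[0])
first = cache.get("p1")
second = cache.get("p1")         # served from the cache, no new call
clock_now[0] = 61.0              # the entry has expired
third = cache.get("p1")          # refetched from the product service
```

The TTL bounds how stale the cached data can be, which is the trade-off this option makes against the extra network calls.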

Replicate data

Another way to share data between two separate microservices is to replicate the data in the dependent service's database. The data replication is read-only and can be rebuilt at any time. This pattern enables the service to be more cohesive. The following diagram shows how data replication works between two microservices:


Figure 5. Data from a service is replicated in a dependent service database.

In figure 5, the product service database is replicated to the order service database. This implementation lets the order service get product data without repeated calls to the product service.

To build data replication, you can use techniques like materialized views, change data capture (CDC), and event notifications. The replicated data is eventually consistent, but there can be lag in replicating data, so there is a risk of serving stale data.
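The event-notification variant can be sketched as follows. The event shape used here (op, product_id, data) is an assumption for illustration, not a standard CDC wire format:

```python
class ProductReplica:
    """Order-service-side read-only copy of product data, built by applying
    change events emitted by the product service."""
    def __init__(self):
        self._products = {}

    def apply(self, event):
        # Events are assumed to carry the operation and the full row,
        # so the replica can always be rebuilt by replaying the event log.
        if event["op"] in ("insert", "update"):
            self._products[event["product_id"]] = event["data"]
        elif event["op"] == "delete":
            self._products.pop(event["product_id"], None)

    def get(self, product_id):
        return self._products.get(product_id)

replica = ProductReplica()
for e in [
    {"op": "insert", "product_id": 1, "data": {"name": "widget", "price": 5}},
    {"op": "update", "product_id": 1, "data": {"name": "widget", "price": 6}},
]:
    replica.apply(e)
```

Between the moment the product service emits an event and the moment the replica applies it, a reader of the replica sees the previous value, which is the staleness risk described above.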

Static data as configuration

Static data, such as country codes and supported currencies, is slow to change. You can inject such static data as configuration in a microservice. Modern microservices and cloud frameworks provide features to manage such configuration data by using configuration servers, key-value stores, and vaults. You can include these features declaratively.
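A minimal sketch of injecting static data as configuration; the JSON document and the keys in it are hypothetical:

```python
import json

# Hypothetical static configuration injected at deploy time, for example
# from a mounted file or a configuration server.
CONFIG_JSON = """
{
  "supported_currencies": ["USD", "EUR", "JPY"],
  "country_codes": {"US": "United States", "DE": "Germany"}
}
"""

config = json.loads(CONFIG_JSON)

def is_supported_currency(code: str) -> bool:
    # The service reads static data from its own configuration instead of
    # querying a shared reference table in the monolithic database.
    return code in config["supported_currencies"]
```

Because the data changes slowly, updating it becomes a configuration rollout rather than a database migration.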

Shared mutable data

Monolithic applications have a common pattern known as shared mutable state. In a shared mutable state configuration, multiple modules use a single table, as shown in the following diagram:


Figure 6. Multiple modules use a single table.

In figure 6, the order, payment, and shipping functionalities of the ecommerce application use the same ShoppingStatus table to maintain the customer's order status throughout the shopping journey.

To migrate a shared mutable state monolith, you can develop a separate ShoppingStatus microservice to manage the ShoppingStatus database table. This microservice exposes APIs to manage a customer's shopping status, as shown in the following diagram:


Figure 7. A microservice exposes APIs to multiple other services.

In figure 7, the payment, order, and shipping microservices use the ShoppingStatus microservice APIs. If the database table is closely related to one of the services, we recommend that you move the data to that service. You can then expose the data through an API for other services to consume. This implementation helps you ensure that you don't have too many fine-grained services that call each other frequently. If you split services incorrectly, you need to revisit the definition of the service boundaries.
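A sketch of the ShoppingStatus service's API surface, with hypothetical status values and an in-memory dictionary standing in for the real database table:

```python
class ShoppingStatusService:
    """Sole owner of the shopping-status data; other services call its API
    instead of sharing the underlying table."""
    _ALLOWED = {"CART", "PAYMENT_PENDING", "PAID", "SHIPPED"}

    def __init__(self):
        self._status = {}   # customer_id -> status (stands in for the table)

    def set_status(self, customer_id: str, status: str) -> None:
        if status not in self._ALLOWED:
            raise ValueError(f"unknown status: {status}")
        self._status[customer_id] = status

    def get_status(self, customer_id: str) -> str:
        return self._status.get(customer_id, "CART")

# The order, payment, and shipping services all go through the same API,
# so the shared state is mutated in exactly one place.
svc = ShoppingStatusService()
svc.set_status("cust-1", "PAYMENT_PENDING")   # e.g. called by the payment service
svc.set_status("cust-1", "PAID")              # e.g. called by the payment service
```

Centralizing the writes also gives the service one place to validate status transitions, which the shared table never enforced.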

Distributed transactions

After you isolate a service from the monolith, a local transaction in the original monolithic system might become distributed between multiple services. A transaction that spans multiple services is considered a distributed transaction. In the monolithic application, the database system ensures that the transactions are atomic. To handle transactions between various services in a microservice-based system, you need to create a global transaction coordinator. The transaction coordinator handles rollback, compensating actions, and other transaction types that are described in the next document in this series, Interservice communication in a microservices setup.
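A minimal sketch of such a coordinator, implemented here in the saga style: each step pairs an action with a compensating action, and a failure triggers the compensations of the steps that already succeeded, in reverse order. The inventory and payment step names are hypothetical:

```python
class TransactionCoordinator:
    """Runs steps in order; if one fails, runs the compensating actions of the
    steps that already succeeded, in reverse order (a saga)."""
    def __init__(self):
        self._steps = []   # list of (action, compensation) pairs

    def add_step(self, action, compensation):
        self._steps.append((action, compensation))

    def execute(self):
        done = []
        try:
            for action, compensation in self._steps:
                action()
                done.append(compensation)
        except Exception:
            for compensation in reversed(done):
                compensation()   # roll back what already succeeded
            return False
        return True

log = []
saga = TransactionCoordinator()
saga.add_step(lambda: log.append("inventory reserved"),
              lambda: log.append("inventory released"))
def charge():
    raise RuntimeError("payment declined")   # simulated downstream failure
saga.add_step(charge, lambda: log.append("charge refunded"))
ok = saga.execute()
```

The compensations are new business operations, not database rollbacks, which is why they must be designed per step.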

Data consistency

Distributed transactions introduce the challenge of maintaining data consistency across services. All updates must be done atomically. In a monolithic application, the properties of transactions guarantee that a query returns a consistent view of the database based on its isolation level.

In contrast, consider a multistep transaction in a microservices-based architecture. If any one service transaction fails, data must be reconciled by rolling back the steps that have succeeded across the other services. Otherwise, the global view of the application's data is inconsistent between services.

It can be challenging to determine when a step that implements eventual consistency has failed. For example, a step might not fail immediately, but instead could block or time out. Therefore, you might need to implement some kind of time-out mechanism. Also, caching or replicating data between services to reduce network latency can result in inconsistent data if the replicated data is stale when the calling service accesses it.

The next document in the series, Interservice communication in a microservices setup, provides an example of a pattern to handle distributed transactions across microservices.

Design interservice communication

In a monolithic application, components (or application modules) invoke each other directly through function calls. In contrast, a microservices-based application consists of multiple services that interact with each other over the network.

When you design interservice communication, first think about how services are expected to interact with each other. Service interactions can be one of the following:

  • One-to-one interaction: each client request is processed by exactly one service.
  • One-to-many interaction: each request is processed by multiple services.

Also consider whether the interaction is synchronous or asynchronous:

  • Synchronous: the client expects a timely response from the service, and it might block while it waits.
  • Asynchronous: the client doesn't block while waiting for a response. The response, if any, isn't necessarily sent immediately.

The interaction styles combine as follows:

  • Synchronous, one-to-one (request and response): the client sends a request to a service and waits for a response.
  • Asynchronous, one-to-one (notification): the client sends a request to a service, but no reply is expected or sent.
  • Asynchronous, one-to-one (request and asynchronous response): the client sends a request to a service, which replies asynchronously. The client doesn't block.
  • Asynchronous, one-to-many (publish and subscribe): the client publishes a notification message, and zero or more interested services consume the message.
  • Asynchronous, one-to-many (publish and asynchronous responses): the client publishes a request, and waits for responses from interested services.

Each service typically uses a combination of these interaction styles.

Implement interservice communication

To implement interservice communication, you can choose from different IPC technologies. For example, services can use synchronous request-response-based communication mechanisms such as HTTP-based REST, gRPC, or Thrift. Alternatively, services can use asynchronous, message-based communication mechanisms such as AMQP or STOMP. You can also choose from various message formats. For example, services can use human-readable, text-based formats such as JSON or XML. Alternatively, services can use a binary format such as Avro or Protocol Buffers.

Configuring services to directly call other services leads to high coupling between services. Instead, we recommend using messaging or event-based communication:

  • Messaging: When you implement messaging, you remove the need for services to call each other directly. Instead, all services know of a message broker, and they push messages to that broker. The message broker saves these messages in a message queue. Other services can subscribe to the messages that they care about.
  • Event-based communication: When you implement event-driven processing, communication between services takes place through events that individual services produce. Individual services write their events to a message broker. Services can listen to the events of interest. This pattern keeps services loosely coupled because the events don't include payloads.
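The publish-subscribe style can be sketched with an in-process broker; a real broker would add queues, persistence, and delivery guarantees, which are omitted here. The topic name and handlers are hypothetical:

```python
from collections import defaultdict

class MessageBroker:
    """In-process sketch of a broker: producers publish to a topic, and every
    subscriber to that topic receives the message."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self._subscribers[topic]:
            handler(message)

broker = MessageBroker()
received = []
# The subscribers only know the broker and the topic, not which
# service produced the message.
broker.subscribe("order.placed", lambda msg: received.append(("shipping", msg)))
broker.subscribe("order.placed", lambda msg: received.append(("billing", msg)))
broker.publish("order.placed", {"order_id": 42})
```

Adding a new consumer of order.placed requires no change to the publisher, which is the loose coupling the pattern is after.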

In a microservices application, we recommend using asynchronous interservice communication instead of synchronous communication. Request-response is a well-understood architectural pattern, so designing a synchronous API might feel more natural than designing an asynchronous system. Asynchronous communication between services can be implemented by using messaging or event-driven communication. Using asynchronous communication provides the following advantages:

  • Loose coupling: An asynchronous model splits the request-response interaction into two separate messages, one for the request and another one for the response. The consumer of a service initiates the request message and waits for the response, and the service provider waits for request messages to which it replies with response messages. This setup means that the caller doesn't have to block while waiting for the response message.
  • Failure isolation: The sender can continue to send messages even if the downstream consumer fails. The consumer picks up the backlog whenever it recovers. This ability is especially useful in a microservices architecture, because each service has its own lifecycle. Synchronous APIs, however, require the downstream service to be available, or the operation fails.
  • Responsiveness: An upstream service can reply faster if it doesn't wait on downstream services. If there is a chain of service dependencies (service A calls B, which calls C, and so on), waiting on synchronous calls can add unacceptable amounts of latency.
  • Flow control: A message queue acts as a buffer, so that receivers can process messages at their own rate.

However, the following are some challenges to using asynchronous messaging effectively:

  • Latency: If the message broker becomes a bottleneck, end-to-end latency might become high.
  • Overhead in development and testing: Depending on the choice of messaging or event infrastructure, messages might be duplicated, which makes it difficult to make operations idempotent. It also can be hard to implement and test request-response semantics by using asynchronous messaging, because you need a way to correlate request and response messages.
  • Throughput: Asynchronous message handling, whether it uses a central queue or some other mechanism, can become a bottleneck in the system. The backend systems, such as queues and downstream consumers, should scale to match the system's throughput requirements.
  • Complicated error handling: In an asynchronous system, the caller doesn't know if a request was successful or failed, so error handling needs to happen out of band. This type of system can make it difficult to implement logic like retries or exponential backoffs. Error handling is further complicated if there are multiple chained asynchronous calls that must all succeed or fail together.
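One common way to correlate request and response messages is a correlation ID attached to every request. A sketch, with a hypothetical in-memory outbox standing in for the real transport:

```python
import uuid

class AsyncRequester:
    """Tags each outgoing request with a correlation ID so that the matching
    asynchronous response can be paired with its request later."""
    def __init__(self, send):
        self._send = send     # delivers the message, e.g. to a broker
        self._pending = {}    # correlation_id -> original request payload

    def request(self, payload):
        correlation_id = str(uuid.uuid4())
        self._pending[correlation_id] = payload
        self._send({"correlation_id": correlation_id, "payload": payload})
        return correlation_id

    def on_response(self, message):
        # Unknown or already-handled IDs (for example, duplicates) are
        # dropped, which keeps response handling idempotent.
        request = self._pending.pop(message["correlation_id"], None)
        if request is None:
            return None
        return (request, message["payload"])

outbox = []
requester = AsyncRequester(outbox.append)
cid = requester.request({"order_id": 7})
result = requester.on_response({"correlation_id": cid, "payload": "accepted"})
duplicate = requester.on_response({"correlation_id": cid, "payload": "accepted"})
```

The pending map is also a natural place to attach a time-out, addressing the out-of-band error handling noted above.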

The next document in the series, Interservice communication in a microservices setup, provides a reference implementation to address some of the challenges mentioned in the preceding list.

What's next

  • Read the first document in this series to learn about microservices, their benefits, challenges, and use cases.
  • Read the next document in this series, Interservice communication in a microservices setup.
  • Read the fourth, final document in this series to learn more about distributed tracing of requests between microservices.
