
The Ten Commandments of Microservices

These are the aspects architects should consider when moving to microservices, which mark the beginning of a new era in distributed computing.

Microservices mark the beginning of the new era in distributed computing.

With the emergence of containers, the unit of deployment gradually started to shift away from the VM model. Linux container technologies, such as LXC, Docker, runC and rkt, make it possible to run multiple containers within the same VM, allowing DevOps to package each app component or app module in a dedicated container. Each container has everything—from the OS to the runtime, framework and code—the component needs to run as a standalone unit.

The composition of these containers can logically form an application. The focus of an application becomes orchestrating multiple containers to achieve the desired output.

Yet, there is a misconception that moving a traditional, monolithic application to containers automatically turns the monolithic system into a microservice. Containerization doesn’t instantly transform an app into a microservices-based application.

The best way to understand this concept is to think of virtualization and cloud. During the initial days of IaaS, many CIOs claimed they were running a private cloud; but, in reality, they were only implementing virtualization. Likewise, while microservices may use containerization, not every containerized application is a microservice.

Here are some additional aspects the architect should consider when moving to microservices:

1. Clean Separation of Stateless and Stateful Services

Applications composed of microservices contain both stateless and stateful services. It is important to understand the constraints and limitations of implementing stateful services. If a service relies on state, that state should be separated out into a dedicated container that’s easily accessible.

One of the key advantages of microservices is the ability to scale rapidly. Like other distributed computing architectures, microservices scale better when they are stateless. Within seconds, multiple containers can be launched across multiple hosts. Each container running the service is autonomous and has no awareness of the other services. This makes it possible to precisely scale the required service instead of scaling the VMs. For this pattern to work seamlessly, services should be stateless. Containers are ephemeral and thus become an ideal choice for microservices.

A microservices-based application may contain stateful services in the form of a relational database management system (RDBMS), NoSQL databases, and file systems. They are packaged as containers with unique attributes.

Typically, stateful services offload persistence to the host or use highly available cloud data stores to provide a persistence layer. Both approaches introduce complications: offloading to the host makes it difficult to port containers from one host to another, and highly available data stores trade consistency for availability, meaning that we have to design for eventual consistency in our data model.

Technologies such as Flocker and Docker volume plugins help address the host-portability problem by creating a separate persistence layer that’s not host-dependent. Newer, highly available data stores, such as Redis, Cassandra and IBM’s Cloudant, maximize availability while keeping the delay on consistency minimal.
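
To make the separation concrete, here is a minimal sketch of a stateless service that keeps its only state in an external Redis container. It assumes the redis-py client; the REDIS_HOST value, the views: key prefix and the record_page_view function are illustrative, not part of any particular stack.

```python
import os

import redis  # assumes the redis-py client is installed in the image

# The service keeps no state of its own; every instance reads and writes the
# same external Redis container, so any copy can serve any request.
REDIS_HOST = os.environ.get("REDIS_HOST", "redis")  # hypothetical container name

store = redis.Redis(host=REDIS_HOST, port=6379, decode_responses=True)


def record_page_view(page_id: str) -> int:
    """Increment and return the view counter held in the stateful store."""
    return store.incr(f"views:{page_id}")
```

Because the counter lives in the dedicated Redis container, any number of copies of the stateless service can be launched or killed without losing data.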

As container technologies evolve, it will become easier to tackle the stateful services problem.

2. Do Not Share Libraries or SDKs

The premise of microservices is based on autonomous and fine-grained units of code that do one thing and one thing only. This is closely aligned with the principle of “don’t repeat yourself” (DRY), which states that every piece of knowledge must have a single, unambiguous, authoritative representation within a system.

Every service is a self-contained unit of OS, runtime, framework, third-party libraries and code. When one or more containers rely on the same library, it may be tempting to share the dependencies by centrally configuring them on the host. This model introduces complexities in the long run. Not only does it bring host affinity, but it also breaks the CI/CD pipeline. Upgrading the library or SDK might end up breaking a service. Each service should be treated as entirely independent of the others.

In some scenarios, the commonly used libraries and SDKs can be moved to a dedicated service that can be managed independently, making the service immutable.
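
As a rough illustration of moving shared logic behind a service boundary instead of a shared SDK, the sketch below calls a hypothetical pricing service over HTTP rather than importing a common pricing library. The PRICING_URL endpoint, the /price route and its parameters are invented for the example, and the requests library is assumed to be available.

```python
import os

import requests  # assumes the requests library is available

# Hypothetical endpoint; in the shared-library model this logic would be
# baked into every service, with all the upgrade coupling that implies.
PRICING_URL = os.environ.get("PRICING_URL", "http://pricing-service:8080")


def quote(sku: str, quantity: int) -> float:
    """Ask the dedicated pricing service instead of linking a shared SDK."""
    resp = requests.get(
        f"{PRICING_URL}/price", params={"sku": sku, "qty": quantity}, timeout=2
    )
    resp.raise_for_status()
    return resp.json()["total"]
```

The pricing logic can now be upgraded and redeployed on its own schedule without rebuilding every container that depends on it.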

3. Avoid Host Affinity

This point was briefly discussed in the context of shared libraries. No assumptions can be made about the host on which the service will run. Those assumptions include the presence of a specific directory, an IP address, a port number, a particular distribution and version of the OS, and the availability of specific files, libraries and SDKs.

Each service can be launched on any available host in the cluster that meets the predefined requirements. These requirements are more aligned with specifications like CPU type, storage type, region and availability zone than with the software configuration of the host. Services should function independently of the host on which they are deployed.

In the case of stateful services, a dedicated persistent (data volume) container should be considered.
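
A small sketch of what host independence can look like in code: every host-specific detail arrives through environment variables injected at deploy time, so the scheduler is free to place the container on any suitable host. The variable names below are illustrative, not a standard.

```python
import os
from pathlib import Path

# Nothing here assumes a particular host: paths, endpoints and ports are
# injected at deploy time rather than hard-coded.
DATA_DIR = Path(os.environ.get("DATA_DIR", "/data"))  # backed by a data volume container
DB_HOST = os.environ["DB_HOST"]                       # fail fast if it was not injected
DB_PORT = int(os.environ.get("DB_PORT", "5432"))
LISTEN_PORT = int(os.environ.get("PORT", "8080"))
```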

4. Focus on Services with One Task in Mind

Each service must be designed with one task in mind. It may map to one function or a module with a well-defined boundary. This means that there may also be one process per container, but that’s not always the case.

Docker encourages the pattern of running one background process/daemon per container. This makes containers fundamentally different from VMs. While a virtual machine may run the whole stack, a container owns a subset of the stack. For example, when refactoring a LAMP web application for microservices, the web tier with Apache runs in a dedicated container while MySQL moves to another container.

Microservices are modular, composable and fine-grained units that do one thing and one thing only.

5. Use a Lightweight Messaging Protocol for Communication

There is no hard-and-fast rule on how microservices talk to each other. They can use synchronous or asynchronous channels with any protocol that’s platform agnostic. Each service implements a simple request and response mechanism. It’s common for microservices to expose well-known HTTP endpoints that can be invoked through REST API calls.

While HTTP and REST are preferred for synchronous communication, it’s becoming increasingly popular to use asynchronous communication between microservices. Many consider the Advanced Message Queuing Protocol (AMQP) standard as the preferred protocol, in this regard. Developing microservices with an asynchronous communication model, while sometimes a little more complex, can have great advantages in terms of minimizing latency and enabling event-driven interactions with applications.

In the market today, RabbitMQ and Apache Kafka are both commonly used message bus technologies for asynchronous communication between microservices. Also, if the message passing happens on the same host, the containers can communicate with each other through host-local IPC mechanisms, such as Unix sockets or shared memory, as they all share the same kernel.
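
For the asynchronous path, a publisher along the following lines is typical. It assumes the pika AMQP client and a RabbitMQ broker reachable under a hypothetical BROKER_HOST name; the order.events queue and the event payload are invented for the example.

```python
import json
import os

import pika  # assumes the pika AMQP client and a reachable RabbitMQ broker

BROKER_HOST = os.environ.get("BROKER_HOST", "rabbitmq")  # hypothetical host name


def publish_order_created(order_id: str) -> None:
    """Fire-and-forget event: the caller never waits on downstream services."""
    connection = pika.BlockingConnection(pika.ConnectionParameters(host=BROKER_HOST))
    channel = connection.channel()
    channel.queue_declare(queue="order.events", durable=True)
    channel.basic_publish(
        exchange="",
        routing_key="order.events",
        body=json.dumps({"event": "order_created", "order_id": order_id}),
    )
    connection.close()
```

Any number of consumers can subscribe to the queue later without the publisher knowing or caring, which is what makes the event-driven model attractive.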

6. Design a Well-Defined Entry Point and Exit Point

In most cases, microservices are treated like black boxes, with little visibility into the actual implementation. With inconsistent entry points and exit points, it will be a nightmare to develop a composed application.

Similar to the interface definition in COM and CORBA, microservices should expose a well-defined, well-documented contract. This will enable services to seamlessly talk to each other.

Even if a microservice is not expected to return an explicit value, it may be important to send the success/failure flag. Implementing a single exit point makes it easy to debug and maintain the code.
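
One lightweight way to enforce a single, predictable exit point is to wrap every response in the same envelope carrying an explicit success/failure flag. The field names below are illustrative, not a prescribed contract.

```python
import json


def handle(request: dict) -> str:
    """Single exit point: every code path returns the same envelope."""
    result, error = None, None
    try:
        result = {"echo": request["payload"]}  # the service's one real task
    except KeyError as exc:
        error = f"missing field: {exc}"
    # One well-documented exit: callers always receive the same shape back.
    return json.dumps({"success": error is None, "result": result, "error": error})
```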

7. Implement a Self-Registration and Discovery Mechanism

One of the key aspects of microservices is the discovery of a service by the consumer. A central registry is maintained for looking up all available services.

Each microservice handles registration with the central service registry. Services typically register during startup and periodically update the registry with current information. When a microservice is terminated, it needs to be unregistered from the registry. The registry plays a critical role in orchestrating microservices.

Consul, etcd and Apache ZooKeeper are examples of commonly used registries for microservices. Netflix Eureka is another popular registry; it exposes APIs that services call to register and unregister themselves.
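
A sketch of self-registration against Consul's agent HTTP API might look like the following; the service name, instance ID and port are hypothetical, and the requests library plus a locally reachable Consul agent are assumed.

```python
import atexit
import os
import socket

import requests  # assumes the requests library and a reachable Consul agent

CONSUL = os.environ.get("CONSUL_ADDR", "http://localhost:8500")
SERVICE = {
    "ID": "orders-1",  # hypothetical instance id
    "Name": "orders",
    "Address": socket.gethostbyname(socket.gethostname()),
    "Port": int(os.environ.get("PORT", "8080")),
}


def register() -> None:
    """Announce this instance to the registry at startup."""
    requests.put(f"{CONSUL}/v1/agent/service/register", json=SERVICE, timeout=2)


def deregister() -> None:
    """Remove the instance when the container is terminated."""
    requests.put(f"{CONSUL}/v1/agent/service/deregister/{SERVICE['ID']}", timeout=2)


register()
atexit.register(deregister)
```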

8. Explicitly Check for Rules and Constraints

During deployment, microservices may need to consider special requirements and constraints that impact performance. For example, the in-memory cache service needs to be on the same host as the web API service. The database microservice may have to be deployed on a host with solid-state drive (SSD) storage. The master and slave containers of the database cannot exist on the same host. These constraints are typically identified during the design of the services.

If these rules and constraints cannot be honored by the SysOps team, services may need to raise alerts or log appropriate messages warning about possible implications and side effects. Under extreme conditions, a microservice may have to shut itself down if a mandatory rule is not respected at deployment.
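
As an illustration, a startup check along these lines can log a warning for a soft constraint and refuse to start when a mandatory rule is violated; the environment variables, paths and thresholds are invented for the example.

```python
import logging
import os
import shutil
import sys

logging.basicConfig(level=logging.WARNING)

# Hypothetical deployment constraints surfaced to the service as env vars.
REQUIRE_SSD = os.environ.get("REQUIRE_SSD", "true") == "true"
MIN_FREE_GB = int(os.environ.get("MIN_FREE_GB", "10"))
DATA_PATH = os.environ.get("DATA_DIR", "/")

free_gb = shutil.disk_usage(DATA_PATH).free / 1e9
if free_gb < MIN_FREE_GB:
    # Soft constraint: warn about possible side effects but keep running.
    logging.warning("only %.1f GB free on %s; performance may degrade", free_gb, DATA_PATH)

if REQUIRE_SSD and os.environ.get("STORAGE_CLASS") != "ssd":
    # Mandatory rule not respected at deployment: refuse to start.
    sys.exit("database service requires SSD-backed storage")
```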

9. Prefer Polyglot Over Single Stack

One advantage of using microservices is the ability to choose the best-of-breed OS, languages, runtimes and libraries for each service. For example, the chat microservice can be implemented in Node.js, exposing WebSockets; the web API service can be written in Python with Django; the image manipulation service may be in Java; and the web frontend could be implemented with Ruby on Rails.

As long as each service exposes a well-defined interface that’s consistent with other services, it can be implemented using the most optimal technology stack.

With Microsoft adding native container support to Windows, it is also possible to mix and match Linux containers with Win32 and .NET containers within the same environment.

10. Maintain Independent Revisions and Build Environments

Another benefit of microservices is the ability to code and maintain each service independently.

Though each microservice is part of a large, composite application, from a developer standpoint, it is important to treat each service as an independent unit of code. Each service needs to be versioned and maintained separately in the source code control system. This makes it possible to deploy newer versions of services without disrupting the application. CI/CD pipelines should be designed to take advantage of the independent versioning.

This mechanism makes it possible to implement blue/green testing of each service before rolling out the production version.

Go Forth

While system and application requirements continue to evolve, the methodology behind how we solve these problems is often based on older models and patterns. As mentioned before, microservices architecture has its roots in models like COM, CORBA, EJB and SOA, but there are still some rules to live by when creating microservices with current technologies. While the ten best practices we’ve laid out here are not entirely comprehensive, they are core strategies for creating, migrating and managing microservices.

TNS owner Insight Partners is an investor in: Docker.