Technologies for microservices

Microservices lead to new challenges and therefore require new technological approaches. Microservice frameworks are of course part of the solution, but they are certainly not the most important part. So which technologies play a crucial role here?

Microservices are not defined consistently. The Independent Systems Architecture (ISA) principles [1] offer one definition: nine principles to which a good microservices architecture must adhere.

Let’s start with some principles

The first principle states that microservices are modules. Microservices are just one way to modularize a system; alternatives include packages, JAR files or Maven projects. The same rules apply to microservices as to other types of modules: for example, they should be loosely coupled.

The second principle states that the modules are implemented as containers. This offers several advantages: each microservice can be redeployed independently of the others, and if a microservice crashes, the other microservices continue to run. In a deployment monolith, on the other hand, a memory leak would crash the entire application. Microservices therefore increase decoupling: classic modules only decouple development, while microservices also decouple other aspects such as deployment or downtime.

Microservices should likewise offer resilience. If a microservice fails, the others must continue to run; otherwise, decoupling with regard to failures is not achieved. In addition, the system would not be particularly stable, since the failure of any single microservice could trigger an error cascade and bring down the entire system. Given the high number of microservices, this is an unacceptable risk.

Micro/Macro architecture

The ISA principles make a distinction between micro- and macro-architecture. Micro-architecture refers to decisions that can be made differently for each individual microservice. The macro-architecture, on the other hand, affects all microservices. It must be stable in the long term, since changes are difficult to implement: after all, they affect all of the microservices. In addition, the macro-architecture should be reduced to a minimum so that the independent development of the microservices is restricted as little as possible.

From the ISA principles, we can deduce that microservices should be loosely coupled and must support resilience. These are key factors to consider when selecting suitable technologies. The division into micro- and macro-architecture also affects the choice of technology. The framework and programming language used to implement a microservice can be part of the micro-architecture; after all, one of the key advantages of microservices is that each microservice can be implemented with a different technology.

If the macro-architecture is to remain stable over the long term, it makes little sense to codify a programming language or a framework there. Anyone developing with Java 9 and Spring 2.0 today can be certain that the stack will be obsolete in a few years. For long-running projects this means: if all microservices have to use the same technologies, you either let them become obsolete or migrate all microservices to the current technology at the same time, which is both risky and time-consuming. Only allowing different technologies makes it possible to modernize each microservice individually. In addition, freedom of technology gives teams the option of choosing the best tool for the challenge at hand. Nevertheless, certain technologies do need to be codified at the macro-architecture level, communication technologies in particular: they affect all microservices and can therefore not easily be changed across all of them at once.

UI Integration

A microservice can also contain a UI. For example, a microservice can display an HTML page, and the integration of another microservice can simply be a link. The Crimson Assurance demo provides a concrete example; it is available for download with detailed instructions [2] and can also be tried out online [3]. The system is a prototype of an application to support insurance employees: you can look up policyholders and record damage claims for their cars. The damage claim is recorded in a different microservice than the one that selects and displays the customer. Integration takes place via a link.

Such integration has several advantages. The coupling is very loose: only the URL format needs to be agreed upon, and the other service can build its web pages in any way it likes. Resilience is also provided: if the linked microservice fails, the link can still be displayed.

Once the claim has been entered, the user returns to the main application. The form used to register the claim is located in the Damage Claim microservice; after the form is submitted, the user is sent back to the main application with an HTTP redirect. Again, the coupling is very loose and resilience is handled quite well.
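A minimal sketch of what such a redirect could look like in a Spring MVC controller is shown below. The controller, the form fields and the URL of the main application are assumptions chosen for illustration and are not taken from the Crimson Assurance code.

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.ModelAttribute;
import org.springframework.web.bind.annotation.PostMapping;

@Controller
public class ClaimController {

    // Hypothetical form backing object; the fields are assumptions.
    public static class ClaimForm {
        private String customerId;
        public String getCustomerId() { return customerId; }
        public void setCustomerId(String customerId) { this.customerId = customerId; }
    }

    @PostMapping("/claims")
    public String submitClaim(@ModelAttribute ClaimForm form) {
        // ... persist the claim in the Damage Claim microservice ...

        // Send the browser back to the main application with an HTTP redirect.
        // The target URL is an assumption; in the demo it would point at the
        // microservice that displays the customer.
        return "redirect:https://main-app.example.com/customers/" + form.getCustomerId();
    }
}

Because the redirect is plain HTTP, the main application does not need to know anything about the internals of the Damage Claim microservice.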

Client-side transclusion

Finally, the user can overlay the web page of the main application with the postbox from the Postbox microservice. We refer to this as transclusion. The HTML page contains a link with some additional attributes; JavaScript code reads these attributes and, when the user clicks the link, replaces it with an overview from the Postbox microservice. If the Postbox microservice is not available, the code displays an error message. Even if the JavaScript code does not work at all, because it could not be loaded or is incompatible with the user's browser, the link is still displayed. Thus, resilience is guaranteed. Decoupling is a more difficult issue: since the postbox is inserted into the page, its layout must match the main application. Much like with programming interfaces in a classic system, you need to make sure that the two sides fit together.

An advantage of these integrations is that they are technically simple: they use fundamental concepts of HTML and HTTP as well as a bit of JavaScript, and they can be combined with any back-end technology. Accordingly, Crimson Assurance contains both Spring Boot and Node.js microservices.

Server-side transclusion

Transclusion can also take place on the server. In that case, the client loads the web page with all its fragments in a single request. This can be useful for a navigation bar, for example. Standards such as SSI [4] or ESI [5] are available for server-side transclusion: a microservice delivers HTML that includes SSI or ESI tags, web servers interpret SSI tags, and web caches implement ESI. The web server or web cache then loads HTML snippets from other microservices according to the tags.

The example [4] uses the Varnish web cache (Fig. 1) and should be very easy to get started with thanks to its detailed documentation. A website can use the cache to avoid accessing the back-end services and serve pages from the web cache instead. With ESI, the cache can even cache pages with dynamic components: the dynamic components come from the back end and are integrated via ESI into the cached static pages. The example uses this approach to integrate a navigation bar. All components are held in the cache for 30 seconds.

 

Fig. 1: Transclusion with ESI and Varnish

As far as coupling is concerned, the same applies as with client-side transclusion: the layout of the web content must be adapted so that it can be combined on one web page. The cache also helps with resilience: if the back ends are not available, the data is kept in the cache for 15 minutes, so read accesses can still be served.

UI Integration: Conclusion

UI integration supports loose coupling and resilience. Technically, it is very simple: links, redirects and roughly 60 lines of JavaScript can be sufficient. We often encounter the preconception that a UI composed of multiple microservices cannot have a consistent look and feel. But even with a deployment monolith, one web page can look completely different from all the other web pages in the monolith. The only way to create a consistent look and feel is a style guide. In addition, shared assets are helpful: the Crimson Assurance example implements this in an asset project, while the ESI example has the assets delivered by a microservice.

Asynchronous Microservices

Yet another way to couple microservices is asynchronous communication. This means that while a microservice is processing a request, it must not call another microservice and wait for the response; it may only call another microservice without waiting for the reply. For example, a microservice can be busy processing a request for an order. As part of this logic, it can call another microservice that should issue an invoice, but it must not wait for a response and has to continue its work. If the recipient is currently unavailable, the message is simply delivered later: the invoice is then issued later, and resilience is guaranteed.

A microservice can also call other microservices and wait for a response, but only if it is not itself handling a request at that moment. The service can, for example, query new customer data on a regular basis, wait for the data and replicate it. If the called microservice fails, replication simply does not take place: the data becomes stale, but the system keeps working and resilience is guaranteed.

Asynchronous communication with message-oriented middleware

Message-Oriented Middleware (MOM) can provide the infrastructure for sending asynchronous messages. A MOM can guarantee the delivery of messages with a high degree of reliability. To do so, it has to store the messages persistently; this is the only way to make sure that a message is delivered even if the receiver has just failed.

MOM and Kafka

In the world of Java, JMS (Java Message Service) [6] is often the method of choice for asynchronous communication. Yet especially in the microservices environment, Kafka [7] is becoming increasingly important. While other MOMs usually store messages only for a certain time, Kafka can keep them for as long as you need, so a receiver can have all the messages delivered again. There is a simple sample application for Kafka in a microservices system in which an invoice and a delivery are generated from an order [8].
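As an illustration, here is a minimal sketch of a Kafka consumer in Java that could generate invoices from order events. The topic name, the consumer group and the use of plain String messages are assumptions and do not reflect the details of the sample application [8].

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class InvoiceEventConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption: local broker
        props.put("group.id", "invoice");                 // one consumer group per microservice
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("order"));         // assumption: topic name "order"
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // Create the invoice for the order. The order microservice keeps working
                    // even if this consumer is down; the message is simply picked up later.
                    System.out.println("Creating invoice for order " + record.value());
                }
            }
        }
    }
}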

Asynchronous REST

Of course, it would be conceivable to use another MOM instead of Kafka. Since all communication between the microservices passes through the MOM, it has to cope with heavy loads and has to be highly available. That is not an issue in itself; after all, business-critical MOM installations have been around for years. Still, it can be challenging to tune the MOM accordingly.

It would be nice to have a solution for asynchronous communication that works without a MOM. REST makes exactly this possible: a microservice collects events from another microservice via HTTP GET. At first glance this does not seem very efficient, since the services would poll each other fairly frequently even though in most cases there are no new events. The issue can be solved with HTTP caching: in its request, the client sends the timestamp of the last known change, and if there are no new events, the server responds with HTTP status 304 (Not Modified). Actual data is only sent if there are new messages. To avoid transferring redundant events, the interface can additionally offer options to request only certain events. That way, communication can be made quite efficient.
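A minimal sketch of such polling with the Java HTTP client might look as follows. The URL of the event feed and the use of the Last-Modified/If-Modified-Since headers to carry the timestamp are assumptions for illustration.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class EventPoller {

    private final HttpClient client = HttpClient.newHttpClient();
    private String lastModified; // Last-Modified value from the previous poll

    public void poll() throws Exception {
        HttpRequest.Builder builder = HttpRequest.newBuilder(
            URI.create("http://order-service/events")); // assumption: URL of the event feed
        if (lastModified != null) {
            builder.header("If-Modified-Since", lastModified);
        }
        HttpResponse<String> response =
            client.send(builder.GET().build(), HttpResponse.BodyHandlers.ofString());

        if (response.statusCode() == 304) {
            return; // nothing new since the last poll
        }
        response.headers().firstValue("Last-Modified").ifPresent(v -> lastModified = v);
        processEvents(response.body()); // parse and handle the new events
    }

    private void processEvents(String body) {
        // ... application-specific processing, e.g. parsing an Atom feed ...
    }
}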

If the server stores the old events anyway, it can offer them through this interface without needing additional storage the way Kafka does. Example [9] uses Atom to provide the events to clients; this data format is also used to deliver blogs and podcasts to subscribers.

Unlike Kafka and most other MOMs, asynchronous REST cannot send an event to just one receiver: each receiver gets all new events and can process them. As a result, a single order might be processed by several receivers, triggering multiple invoices or deliveries. The example solves this problem by first checking in the database whether the order has already been processed; a client only processes the order if it has not. The clients are therefore synchronized through the database. This approach has its disadvantages, especially for scaling: only one client processes an event, but all the others still check whether the event has already been processed and perform a database operation in doing so.

The implementation has to deal with events being transferred twice, not only for REST but for Kafka as well. If an event is not acknowledged by the receiver, the MOM assumes that the event has not been processed successfully and retransmits it. However, it may be the case that the receiver did process the event successfully and only failed to acknowledge receipt. For this case, too, the client needs to check whether the event has already been processed.
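The following sketch shows the idea of such an idempotent receiver: before an event is processed, the database is consulted to see whether it has already been handled. The repository interface and the event ID are assumptions chosen for illustration.

// Minimal sketch of an idempotent event handler.
public class OrderEventHandler {

    // Assumed repository abstraction over the database used for deduplication.
    interface ProcessedEventRepository {
        boolean alreadyProcessed(String eventId);
        void markProcessed(String eventId);
    }

    private final ProcessedEventRepository repository;

    public OrderEventHandler(ProcessedEventRepository repository) {
        this.repository = repository;
    }

    public void handle(String eventId, String payload) {
        // A redelivered event (e.g. after a missing acknowledgement) is ignored.
        if (repository.alreadyProcessed(eventId)) {
            return;
        }
        // ... create the invoice or delivery for the order ...
        repository.markProcessed(eventId);
    }
}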

Synchronous communication

Many microservice projects use synchronous communication with REST, although this approach has many disadvantages. With synchronous communication, a microservice that is currently working on a request calls another microservice, for example to read customer data, and waits for the response. If the customer data service has just failed, the caller needs an alternative strategy. This can become a business question: should I accept the order even though I cannot check the customer's ability to pay at this very moment? It is also much more difficult to guarantee resilience.

Libraries such as Hystrix [10] can only solve part of the resilience challenges: a timeout, for instance, can prevent microservices from waiting too long for other microservices and failing as a result. Hystrix is written in Java and therefore limits the technologies that can be used. An alternative is Istio [11]: this proxy safeguards network traffic and does not depend on a programming language. Protecting a microservice against network problems with a component that is itself reached over the network may sound absurd, but the proxy can run on the same hardware and be addressed via the loopback device.
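A minimal sketch of a Hystrix command that protects a synchronous call to a customer microservice could look like this. The group name, the fallback value and the HTTP call itself are assumptions for illustration.

import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;

public class CustomerNameCommand extends HystrixCommand<String> {

    private final String customerId;

    public CustomerNameCommand(String customerId) {
        // The command group also determines the thread pool; Hystrix applies a
        // timeout (1,000 ms by default) to run().
        super(HystrixCommandGroupKey.Factory.asKey("customer"));
        this.customerId = customerId;
    }

    @Override
    protected String run() throws Exception {
        // ... synchronous HTTP call to the customer microservice ...
        return callCustomerService(customerId);
    }

    @Override
    protected String getFallback() {
        // Called on timeout, error or open circuit breaker; the business decision
        // about what to do without the customer data is made here.
        return "unknown customer";
    }

    private String callCustomerService(String customerId) throws Exception {
        throw new UnsupportedOperationException("replace with a real HTTP call");
    }
}

Calling new CustomerNameCommand("42").execute() then runs the request under a timeout and falls back to getFallback() if the customer microservice does not answer in time.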

Regarding coupling, the same applies as for asynchronous communication: for independence, what matters is who defines the APIs and data structures and how many microservices are affected by changes, regardless of whether the communication is synchronous or asynchronous.

Synchronous microservices need to solve these challenges:

 

  • Service instances can be reached via an IP address and a port. To find a service, this information must be determined using a service name (service discovery).
  • There can be multiple instances of each microservice. The load has to be split between the instances (load balancing).
  • Finally, from the outside the microservices should give the impression of a single system. Routing must therefore forward a request sent to the system to the responsible microservice.

 

Service discovery plays a crucial role as it can be the basis for solving all other challenges.

Consul

Consul [12] offers a solution for service discovery. In example [13], registering a microservice only requires the Spring Cloud annotation @EnableDiscoveryClient, a few settings in the application.properties configuration file, and a dependency on the spring-cloud-starter-consul-discovery library. For load balancing, the example uses the Ribbon library from Netflix. It reads all instances of a microservice from Consul, and each call goes to a different instance, so load balancing takes place entirely on the client. A central load balancer, which would be a bottleneck and a single point of failure, is thereby avoided.

The example uses an Apache web server to route calls from the outside to the correct microservice. However, it needs to be configured so that it knows all microservice instances. Consul Template [14] offers a solution: from a template, it creates a configuration file with entries from the Consul service discovery. The Apache web server is thus configured as a reverse proxy and load balancer. When something changes in Consul, Consul Template creates a new version of the configuration file and restarts the Apache web server. The web server knows nothing about Consul or service discovery; it simply reads its configuration file.
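A minimal sketch of such a Spring Boot application is shown below. In addition, application.properties would contain the Consul host and port, and typically spring.application.name determines the name under which the service registers; the class and service names used here are assumptions.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.web.client.RestTemplate;

@SpringBootApplication
@EnableDiscoveryClient // registers this service in Consul on startup
public class OrderApplication {

    // @LoadBalanced lets Ribbon resolve logical service names in URLs against
    // the instances registered in Consul and balance the load on the client.
    @Bean
    @LoadBalanced
    RestTemplate restTemplate() {
        return new RestTemplate();
    }

    public static void main(String[] args) {
        SpringApplication.run(OrderApplication.class, args);
    }
}

A call such as restTemplate.getForObject("http://customer/customers/{id}", Customer.class, id) would then resolve the logical name of a hypothetical "customer" service against the instances registered in Consul.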

However, this design introduces dependencies on Consul in the Spring Boot projects in order to implement registration and load balancing. While this does not require a great deal of effort, the dependencies make it harder to add microservices built with a different technology to the system: instead of Ribbon and the Spring Cloud functionality, such a microservice would have to use a different library for registration in Consul and for load balancing.

Yet for registration, you can also choose a solution that works without code dependencies. Registrator [15] can register a Docker container in a service discovery system such as Consul when the container is started; this way, the code is independent of Consul. Finally, access to Consul can take place via DNS (Domain Name System), which is also used on the Internet to resolve host names to IP addresses. DNS queries also support load balancing, so the Ribbon dependency disappears as well. Example [16] is therefore completely independent of Consul at the code level, and integrating a microservice written in a different programming language or with other frameworks is not a problem either.
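A sketch of such a DNS lookup against Consul's DNS interface is shown below. The service name follows Consul's <service>.service.consul naming scheme; the fixed port is an assumption, since in practice the port would come from an SRV record or be fixed by convention.

import java.net.InetAddress;
import java.util.concurrent.ThreadLocalRandom;

public class DnsLookup {

    // Returns the base URL of one instance of a hypothetical "customer" service.
    public static String anyCustomerInstance() throws Exception {
        InetAddress[] instances = InetAddress.getAllByName("customer.service.consul");
        // Picking a random A record gives simple client-side load balancing
        // without any Consul-specific code or library in the microservice.
        InetAddress chosen = instances[ThreadLocalRandom.current().nextInt(instances.length)];
        return "http://" + chosen.getHostAddress() + ":8080";
    }
}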

Kubernetes

Kubernetes [17] is a platform that runs Docker containers in a cluster. Kubernetes also solves the typical challenges of synchronous microservices. For service discovery, a Kubernetes installation offers DNS: when a microservice is started in Kubernetes, it is automatically registered in DNS. Load balancing takes place at the IP level; the microservice can be reached at an IP address behind which all instances are hidden. Unlike DNS-based load balancing, this approach guarantees that DNS caches are not a problem. Caching DNS results can lead to the load not being evenly distributed among all instances, because some systems have obsolete data in their caches. For routing accesses from the outside, Kubernetes provides node ports: the Kubernetes cluster consists of several servers, or nodes, and the microservice is reachable on each of them under its node port, which external systems can use. Kubernetes can also configure a load balancer to allow outside access to the microservices. Example [18] implements exactly such a setup with Kubernetes.

Conclusion

Microservices provide freedom of technology in the implementation of the individual microservices. The ISA principles therefore divide architectural decisions into a global macro level and a micro level that affects only the individual microservice. Choosing a programming language or microservice framework can be part of the micro-architecture and is easy to revise, since the next microservice can be implemented with a different programming language and framework. Communication technologies, on the other hand, are set at the macro level. A technology choice here is more difficult to revise because it affects all microservices.

There are a number of options available for communication (Fig. 2):

 

  • UI integration offers a technologically inexpensive option which leads to good resilience and decoupling.
  • The same applies to asynchronous communication. If a service fails, messages are transferred later, which can lead to data inconsistencies, but the other microservices can still be used.
  • With synchronous communication, on the other hand, the system itself must be able to handle the failure of a service.

Fig. 2: Integration options at a glance

The alternatives shown here are options for the macro-architecture; you need to make these decisions for each project yourself. The approaches presented also allow for a number of variations. It is precisely this weighing of options that is at the core of architectural work.

In addition to the examples explained in more detail here, there are further demos [19] for typical microservices technologies. The ideas shown here are also the focus of the free brochure “Microservices Recipes” [20] and the book “Microservices – A Practical Guide” [21]. The free brochure “Microservices Primer” [22] and the Microservices Book [23] also cover technologies, but focus on architecture.

 

Links & literature
