Microservices & Service Mesh


Daniel Bryant


In this interview, Daniel Bryant talks about the great power of containers, but also about the operational responsibility they demand. He also looks at the high-level steps that are essential for building an effective pipeline for creating and deploying containerized applications.

JAX London: What impact do containers have on CD?

Daniel Bryant: A great question, and the top three takeaways from my talk relate exactly to this! From my experience, when we introduce containers into a continuous delivery build pipeline:

  • The container image must become the ‘single binary’, meaning that this is the “unit of deployment” that gets built within the first stage of the pipeline and is the artifact that all tests are executed against.
  • Adding metadata to container images is vital, but often challenging.
  • We must validate constraints (NFRs) introduced by the underlying container infrastructure, such as the effects of running within a container with limited CPU or using a virtual network overlay.
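
To make the first point concrete, here is a minimal sketch of running a smoke test against the image built in the first pipeline stage, using the Testcontainers JUnit library. The image name, port and /health endpoint are illustrative assumptions, not details from the talk:

    import org.junit.Assert;
    import org.junit.Rule;
    import org.junit.Test;
    import org.testcontainers.containers.GenericContainer;

    public class ContainerSmokeTest {

        // Start the exact image the pipeline built earlier; never rebuild from source here.
        @Rule
        public GenericContainer app =
                new GenericContainer("registry.example.com/myapp:1.0.0-abc123")
                        .withExposedPorts(8080);

        @Test
        public void applicationRespondsOnMappedPort() throws Exception {
            String host = app.getContainerIpAddress();  // address the container is reachable on
            Integer port = app.getMappedPort(8080);     // random host port mapped to 8080
            java.net.HttpURLConnection conn = (java.net.HttpURLConnection)
                    new java.net.URL("http://" + host + ":" + port + "/health").openConnection();
            Assert.assertEquals(200, conn.getResponseCode());
        }
    }

Because the test targets the published image rather than the source tree, every later pipeline stage exercises the same ‘single binary’ that will eventually be deployed.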


JAX London: What is the difference between containers and virtualization technology?

Daniel Bryant: As the answer to this question could be quite long, I will instead recommend reading this article.

The key advantage of containers is the ability to easily package application artifacts in a way that is agnostic to the underlying platform.


JAX London: Container technology is hardly new. Still, Docker seems to be the most popular player. Are there other players that caught your attention? If yes, why?

Daniel Bryant: No, you’re right – container technology has been available for quite some time, for example Solaris Zones, FreeBSD Jails and LXC. However, Docker was the first to provide a great developer experience with containers (for example, via good APIs and a centralised container image registry), and they were also the leaders in marketing within this space.

There is plenty of other container technology: CoreOS’s rkt and Canonical’s LXD are interesting in the Linux container space, and Intel’s Clear Containers and Hyper.sh’s runV offer interesting hybrids between containers and VMs. The Docker story is also still evolving, with the creation of the Moby project and the contribution of containerd to the CNCF.

In reality, this is probably only interesting if you are working in the infrastructure engineering space, and increasingly we are seeing container implementation details being pushed away from the typical developer’s workflow, which is a good thing in my opinion. In addition, standardisation is taking place throughout the technology stack, which will increase interoperability; examples include the Open Container Initiative (OCI) container image and runtime specifications, runc, and CRI-O.

JAX London: What trade-offs should we be aware of when using containers?

Daniel Bryant: The core advice is that while containers offer great power, they also demand operational responsibility. This advice especially relates to developers creating images. In my (anecdotal) experience, although Docker enables rapid experimentation and deployment, developers often don’t have much exposure to operational concerns such as provisioning infrastructure, hardening operating systems, and ensuring configuration is valid and performant. By packaging application artifacts within a typical container image you will be exposed to these issues. For example (and admittedly a worst case), I have heard of production containers deployed with an old and unpatched “full-fat” operating system, running an application server in debug mode with a wide range of ports exposed…


JAX London: Do virtual machines offer better security than containers? If yes, why?

Daniel Bryant: Yes and no! Yes, in that application artifacts can now be isolated at a more granular level, which supports the core security principles of “defense in depth” (via the appropriate use of network ACLs around each service) and the “principle of least privilege” (by tailoring kernel security configuration using SELinux or AppArmor for each container).

The answer is also no, as hypervisors provide stronger isolation guarantees closer to the hardware; for example, traditional VMs don’t share the Linux kernel like containers do (and the shared kernel is the most common attack vector). However, in fairness, hypervisors have been around a lot longer than container technology, and vulnerabilities are still occasionally found there too.

If people are interested in this space, then I recommend reading an article I wrote that summarises Aaron Grattafiori’s excellent DockerCon 2016 talk on “High Security Microservices”. For operators looking towards the future, researching unikernels could also be interesting.


JAX London: Are containers revolutionizing the IT infrastructure? How?

Daniel Bryant: Containers are part of the current revolutionary cycle, which includes the co-evolution of architecture (microservices), infrastructure (cloud and containers) and practice (DevOps, continuous delivery and Infrastructure as Code).


JAX London: What are your favorite container tools right now? Why?

Daniel Bryant: I like a lot of the Cloud Native Computing Foundation (CNCF) technologies, and the Governing Board and Technical Oversight Committee are doing great work. For example, Kubernetes for orchestration and Prometheus for monitoring (combined with gRPC and Linkerd) are firm favourites within the industry. Others I would like to shout out include Weaveworks, who are producing a lot of great tooling around the cloud native continuous delivery experience; CoreOS, who are innovating with rkt and Kubernetes Operators; and Sysdig, who produce some great container debugging tools.
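
As a small illustration of the Prometheus side of this stack, the sketch below exposes a single counter from a Java service using the official Prometheus simpleclient libraries; the metric name and port are illustrative assumptions:

    import io.prometheus.client.Counter;
    import io.prometheus.client.exporter.HTTPServer;

    public class MetricsExample {

        // A counter incremented for every request the service handles.
        static final Counter requests = Counter.build()
                .name("myapp_requests_total")
                .help("Total requests handled.")
                .register();

        public static void main(String[] args) throws Exception {
            // Serve the /metrics endpoint that a Prometheus server scrapes.
            HTTPServer metricsServer = new HTTPServer(9090);
            while (true) {
                requests.inc();      // stand-in for real request handling
                Thread.sleep(1000);
            }
        }
    }

A Prometheus server would then be configured to scrape this service’s /metrics endpoint, making the counter available for dashboards and alerting.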


JAX London: What are some of the common mistakes when introducing containers to the development stack?

Daniel Bryant: These are the ones I have seen (and made!) the most:

  • Not deploying to a platform that properly supports containers
  • Investing too much time and resources in creating a platform
  • Treating containers like VMs
  • Packaging full operating systems into containers, rather than using something minimal like Alpine, Container OS or RancherOS
  • Packaging a deployment artifact late in the continuous delivery pipeline, i.e. not running tests against the containerised artifact
  • Not understanding that some technologies don’t play well with container technology, e.g. several Linux tools (and the JVM) aren’t fully cgroup-aware, and generating entropy for security operations can be challenging on a host running many containers (a sketch of the JVM issue follows this list)
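
The JVM point deserves a concrete illustration. The sketch below (my own example, not from the talk) shows the calls that mislead older JVMs inside a resource-limited container:

    public class ResourceReport {
        public static void main(String[] args) {
            // On a pre-container-aware JVM, both values reflect the *host*,
            // not the cgroup limits of the container the process runs in.
            System.out.println("CPUs:     " + Runtime.getRuntime().availableProcessors());
            System.out.println("Max heap: " + Runtime.getRuntime().maxMemory() / (1024 * 1024) + " MB");
        }
    }

Run inside a container capped at, say, one CPU and 256 MB of memory, an older JVM will still report the host’s CPU count and size its default heap from the host’s RAM. Era-appropriate workarounds include setting -Xmx explicitly or, on JDK 8u131 and later, using the experimental -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap flags.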


JAX London: If you were to choose one area where containers can really make a difference, what would that be?

Daniel Bryant: In my opinion, the key advantage of containers (one that wasn’t quite realised with VMs) is the ability to easily package application artifacts in a way that is agnostic to the underlying platform. This allows developers to run a more production-like configuration locally, and can facilitate the transition of artifacts between Dev and Ops.
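
One way to keep the image platform-agnostic is to externalise all environment-specific configuration, so the identical artifact runs locally and in production. A minimal sketch, with variable names and defaults that are purely illustrative:

    public class AppConfig {
        public static void main(String[] args) {
            // The same container image reads its environment at startup;
            // "dev" and "prod" differ only in what the platform injects.
            String dbUrl    = getenvOrDefault("DATABASE_URL", "jdbc:h2:mem:local");
            String logLevel = getenvOrDefault("LOG_LEVEL", "DEBUG");
            System.out.println("Starting with db=" + dbUrl + ", logLevel=" + logLevel);
        }

        static String getenvOrDefault(String key, String fallback) {
            String value = System.getenv(key);
            return value != null ? value : fallback;
        }
    }

Because nothing environment-specific is baked into the image, the artifact that a developer runs locally is byte-for-byte the one that Ops deploys.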


JAX London: What can attendees get out of your talk?

Daniel Bryant: Hopefully the answers above have provided some hints, but other than that all I’ll say is that if you are looking to implement continuous delivery with containers, then you should definitely join me at JAX London!


Talk by Daniel Bryant at JAX London 2017
