Java is a first-class citizen in the Docker ecosystem now

August 24, 2017 Aleksey Vorobyov

Dark Past

Hosting a Java application in Docker is relatively easy and is described in many how-tos and tutorials: copy a Java distribution you like (e.g. Oracle HotSpot, OpenJDK, Azul Zulu) into a Docker image, copy your application, and run ‘java $JAVA_ARGS -jar app.jar’. That’s all. But what they don’t tell us is how to run Java inside Docker in production.
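As a rough sketch of that recipe, using a ready-made JDK base image instead of copying a distribution by hand (the base image, paths, and jar name here are only placeholders):

    # hypothetical image and file names, only to illustrate the pattern
    FROM openjdk:8-jre
    COPY target/app.jar /opt/app/app.jar
    WORKDIR /opt/app
    CMD ["java", "-jar", "app.jar"]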

As we know, everything always works well on a developer’s machine. An application might even survive in lightly loaded environments, but production is another story. In the worst case, the JVM crashes after receiving a kill signal from the OS. In other cases, applications handle requests with surprisingly high latency.

Usually, when we start the JVM, we only specify memory parameters using the ‘-Xmx’ flag, but we don’t do this for the GC and the JIT. For 99% of applications the JVM is smart enough to figure out GC and JIT parameters on its own, using the machine’s hardware specifications taken from the ‘/proc’ directory. One of the most important points about Docker in this context is that a Docker container is not a lightweight virtual machine. It’s a process inside the host OS, and inside a Docker container we still see the host machine’s ‘/proc’ directory.
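You can see this for yourself: even in a container started with a CPU limit, ‘/proc’ still reports the host’s processors (the limit value and image below are just an example):

    # run a container pinned to 1 CPU, then count processors from /proc
    docker run --rm --cpus=1 ubuntu:16.04 \
        sh -c "grep -c ^processor /proc/cpuinfo"
    # prints the host's core count (e.g. 32), not 1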

In other words, skipping a lot of technical details: from the JVM’s point of view, it is just a child of the container’s process, running directly on the host. The JVM thinks it has access to all of the host machine’s resources (CPU, memory, swap, IO). But it does not.

When we start a Docker container anywhere other than a developer machine, we usually specify limits for memory, CPU, and IO. Docker uses the Linux cgroups kernel feature for this. For example, if a process or one of its child processes tries to allocate more memory than the limit set in its cgroup, the Linux kernel kills the process.
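For example, a container might be started with limits like these (the values and image name are only illustrative):

    # cap the container at 4 CPUs and 8 GB of RAM; exceeding the memory
    # limit gets the process killed by the kernel's OOM killer
    docker run -d --cpus=4 -m 8g my-java-app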

The problem here is that old JVMs are not aware of Docker’s existence and use information taken directly from the host.

CPU

Imagine a situation where each of ten JVMs deployed in Docker containers on the same host thinks it has all 32 cores to itself. Each JVM will start a lot of GC and JIT compiler threads, sized for 32 cores, while the Linux kernel actually gives it only, let’s say, 4. As a result, we have a very well hidden performance issue.
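On older JVMs, one possible workaround is to size those thread pools by hand to match the container’s CPU limit; a sketch (the exact values depend on your limits and workload):

    # assume the container is limited to 4 CPUs
    java -XX:ParallelGCThreads=4 \
         -XX:ConcGCThreads=1 \
         -XX:CICompilerCount=2 \
         -jar app.jar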

Memory

The situation with memory is the same. The JVM assumes it can use all 128 GB of host memory and by default caps its heap at ¼ of the available RAM, 32 GB in our example. What will happen if we start a container with ‘-m 8g’? Right: the Linux kernel will kill the container’s process as soon as the JVM inside it tries to allocate more memory than the limit. If an application is not memory hungry, you might not see any issues for a while, but eventually in production the JVM will get enough load and be killed by the Linux kernel. Fortunately, this behavior does not affect us much, because we always pass the ‘-Xmx’ parameter to the JVM. This best practice solves the issue.
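In practice that means the ‘-Xmx’ value must fit inside the container limit and leave room for non-heap memory (metaspace, thread stacks, off-heap buffers); the numbers and image name here are just an illustration:

    # container limit is 8 GB, so keep the heap comfortably below it
    docker run -d -m 8g my-java-app \
        java -Xmx6g -jar app.jar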

It is worth mentioning that the described problem is not exclusive to Java: all programs that are not cgroups-aware are affected. Try running the ‘top’ or ‘free’ Linux commands inside any Docker container with memory limits, and you’ll see what I’m talking about.
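A quick way to see it (the image and limit are arbitrary):

    # the container is limited to 256 MB, yet 'free' reports the host's RAM
    docker run --rm -m 256m ubuntu:16.04 free -m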

Bright present and future

But things have moved on: starting with Java 9 and Java 8u131 (April 2017), the JVM is aware of Docker. This means that in these and newer Java versions, the JVM can pick up limits from cgroups. For GC and JIT threads it works out of the box, but for heap sizing we must pass the -XX:+UnlockExperimentalVMOptions and -XX:+UseCGroupMemoryLimitForHeap command-line arguments.
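Putting it together, a startup command for these versions might look like this (the jar name is a placeholder):

    java -XX:+UnlockExperimentalVMOptions \
         -XX:+UseCGroupMemoryLimitForHeap \
         -jar app.jar

Note that even with these flags the heap is still capped, by default, at roughly ¼ of the (now container-aware) memory limit, so you may still want to tune it with ‘-Xmx’ or ‘-XX:MaxRAMFraction’.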

See more details here. And don’t forget to update your Java version!
