Docker logging best practice states that you should direct your logs to stdout and stderr. But is there a way to comply with that advice without modifying your existing application? Yes there is, and it’s a simple symlink hack …
Docker Logging Best Practice | Existing App
One of the first things we do when setting up a project is logging. We tend to set up things like directory paths, filenames, log rotation, max file size, etc. Mundane stuff but essential for sure. After all that work we’re rewarded with nice log files.
Now, Docker comes along and you decide it’s time to containerize or “dockerize” your existing application. During your research, you learn that the best practice is to send the logs generated within a Docker container to stdout and stderr. Sure, this feels unnatural at first, but eventually you’re sold on the idea and try to figure out how you’re going to configure this.
If you find yourself modifying your existing application’s source code or configuration for Docker’s sake, please stop!
Docker Logging | Symlink Hack
Instead of modifying your existing application in any way, just modify your Dockerfile! You can redirect file-based logging to stdout and stderr using symbolic links that are set up in your Dockerfile. No application code or configuration change is needed.
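To see why this works, here’s a minimal sketch you can run in any shell, no Docker required (the file path is just a scratch name for illustration). A write to a file that is really a symlink to /dev/stdout lands on the process’s standard output instead of on disk:

```shell
# Point a pretend "log file" at the standard output device
ln -sf /dev/stdout /tmp/symlink-demo.log

# The application thinks it is writing to a file, but the kernel
# resolves the symlink and the write lands on stdout instead
echo "hello from the log file" > /tmp/symlink-demo.log
```

Any process that opens that path for writing gets the same redirection for free, which is exactly what the trick exploits inside the container.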
In the following demo we’ll take a simple containerized Java application which uses the built-in java.util.logging.Logger to write all log messages to ‘/tmp/java-app.log’ and only error log messages to ‘/tmp/java-app-errors.log’.
You can view the Java source code on GitHub here. The Docker image which containerizes the Java application is available on Docker Hub as ‘mvpjava/java-docker-logging’.
The following Dockerfile will “dockerize” the sample Java application while setting up two symbolic links in its RUN instruction. This has the effect of redirecting the file-based logging to the stdout and stderr streams, respectively.
```dockerfile
FROM openjdk:14

COPY logging-demo-0.0.1-SNAPSHOT.jar /tmp

RUN ln -sf /dev/stdout /tmp/java-app.log \
    && ln -sf /dev/stderr /tmp/java-app-errors.log

CMD java -jar /tmp/logging-demo-0.0.1-SNAPSHOT.jar
```
Since the Docker image is already available on Docker Hub, all we have to do is run it like so …
```shell
docker container run -d --rm --name java-docker-log-demo-sym-links mvpjava/java-docker-logging
```
Let’s take a look and see if the container’s write layer is being written to on disk when the application is writing to both log files …
```shell
$ watch docker container ps --size
CONTAINER ID   IMAGE                         COMMAND                  CREATED          STATUS          PORTS   NAMES                            SIZE
1d6cbb87944d   mvpjava/java-docker-logging   "/bin/sh -c 'java -j…"   11 minutes ago   Up 11 minutes           java-docker-log-demo-sym-links   32.8kB (virtual 503MB)
```
Keep an eye on the SIZE column (far right, 32.8 kB), which will NOT increase. It reports the amount of data used by the writable layer of each container. The container’s writable layer will just keep growing if you are writing to its filesystem. This would have been the case had we not redirected the output to stdout and stderr.
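The same effect is easy to reproduce locally without Docker. In this sketch (the directory and filenames are illustrative), a regular log file accumulates bytes on disk with every write, while the symlinked one never does, which is exactly why the container’s SIZE stays flat:

```shell
mkdir -p /tmp/size-demo
ln -sf /dev/stdout /tmp/size-demo/redirected.log

# Writes through the symlink go to stdout; nothing lands on disk
for i in 1 2 3; do echo "message $i" > /tmp/size-demo/redirected.log; done > /dev/null

# Writes to a regular file accumulate on disk with every message
for i in 1 2 3; do echo "message $i" >> /tmp/size-demo/plain.log; done

# plain.log has grown; redirected.log is still just a tiny symlink
ls -l /tmp/size-demo
```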
Logging to a container’s filesystem also causes a performance hit, stemming from the management overhead incurred by the storage driver. We avoided that hit by bypassing writes to the container’s filesystem altogether.
The command output proves that the log files are not being written to the container’s writable layer. All we need now is to know how to capture the logs.
View logs for a container or service
We can retrieve all log messages being redirected to stdout and stderr by using the docker logs command. Both streams will be multiplexed and aggregated by the Docker daemon’s logging driver (json-file by default) and made available to you on the console.
```shell
$ docker logs --follow java-docker-log-demo-sym-links
Jun 20, 2020 2:18:49 PM com.mvpjava.demo.LoggingDemoApplication main
SEVERE: Endless loop detected!
2020-06-20 14:18:49.092 ERROR 1 --- [           main] com.mvpjava.demo.LoggingDemoApplication  : Endless loop detected!
Jun 20, 2020 2:18:49 PM com.mvpjava.demo.LoggingDemoApplication lambda$main$0
FINE: number:1
Jun 20, 2020 2:18:49 PM com.mvpjava.demo.LoggingDemoApplication lambda$main$0
FINE: number:2
Jun 20, 2020 2:18:49 PM com.mvpjava.demo.LoggingDemoApplication lambda$main$0
FINE: number:3
Jun 20, 2020 2:18:49 PM com.mvpjava.demo.LoggingDemoApplication lambda$main$0
FINE: number:4
...
```
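As an aside, the default json-file driver stores each captured line as one JSON object per line on the Docker host, under /var/lib/docker/containers/. The entry below is illustrative rather than copied from the demo; its stream field records whether a line arrived via stdout or stderr:

```shell
# An illustrative json-file log entry (real files live under
# /var/lib/docker/containers/<container-id>/ on the host)
entry='{"log":"SEVERE: Endless loop detected!\n","stream":"stderr","time":"2020-06-20T14:18:49.092Z"}'

# Pull out which stream the message arrived on
printf '%s' "$entry" | sed -n 's/.*"stream":"\([^"]*\)".*/\1/p'
```

Because the stream is preserved per line, `docker logs` can later split the output again (e.g. `docker logs java-docker-log-demo-sym-links 2>/dev/null` shows only the stdout side).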
If we were using an orchestration tool such as Swarm, we could then view our service logs via the docker service logs command.
The Java application keeps working like it always did – without ANY modification. It keeps thinking it’s writing to both log files in the /tmp directory, but it really isn’t. The symbolic links swap the destination from files on the container’s filesystem to the stdout and stderr streams instead – as per the Dockerfile symlink commands.
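The stderr half behaves the same way, which you can again verify locally (the path below is illustrative). A write through the symlink surfaces on the standard error stream, so it stays separate from regular output:

```shell
# Point a pretend error-log file at the standard error device
ln -sf /dev/stderr /tmp/symlink-err-demo.log

# The message comes out on stderr, not stdout, so error messages
# remain distinguishable from ordinary log output
echo "Endless loop detected!" > /tmp/symlink-err-demo.log
```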
If you go to the Docker Hub image repository here, you will see two tags …
- TAG “:to-file“
- TAG “:symbolic-links” and “:latest”, which are identical (we ran the default :latest in our demo)

These tags showcase the difference between writing logs to files (“:to-file”) and redirecting them to stdout and stderr with our symlink hack (“:symbolic-links” / “:latest”). If you run them side by side, you can see the container’s writable layer grow for the “:to-file” tag, unlike what we ran in the example above. Interesting stuff.
You can take look at the YouTube tutorial to see this in action.
Docker Logging Symlink | Summary
We were able to containerize an existing Java application without making any modifications to it, while still following Docker’s best-practice advice for logging. This was all thanks to including a couple of symbolic links in the Dockerfile.
Easy, peasy, lemon-squeezy, sweet hack!
Take a look at another Docker Tip from MVP Java here.