Microservice development on local part 2: Getting container logs
(Part 1 detailing how to run microservices on Docker Compose)
All right, so you’ve got your microservices running on Docker Compose. Now how can you tell what each service is up to?
Enter logs. Whether they record requests and responses, the current processing step, or errors, logs tell us what happens after a request reaches a service.
Native Docker Compose logging
Docker Compose does have native logging. With Docker Compose running in one terminal, these commands can be run in a second terminal:
- Shows all logs
docker compose logs
- Shows logs for a specific service
docker compose logs service-name
- Opens a bash terminal in the running container for service-name
docker compose exec service-name bash
- Shows the last 10 log lines created by service-name
docker compose logs --tail 10 service-name
- Shows last 10 logs created by service-name that contain the text “2019”
docker compose logs --tail 10 service-name | grep 2019
- Restarts service-name
docker compose restart service-name
The commands above work only if the Compose yml is named docker-compose.yml. If your Docker Compose file has a custom name, it needs to be specified with the -f flag. Taking the one for this article’s repo for example:
docker compose -f docker-compose.local-microservices.yml logs operations-portal
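To stream logs live while clicking through the app, the --follow flag can be combined with --tail. For example, against the same custom-named Compose file:
docker compose -f docker-compose.local-microservices.yml logs --follow --tail 20 operations-portal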
Collecting logs then searching through them
As anyone who has worked with AWS CloudWatch or Google Cloud’s Logs Explorer can attest, being able to sort through thousands of logs quickly is essential for identifying production issues. We’re going to do just that by using Fluentd to collect logs from Docker Compose and direct them to OpenSearch, an open-source search and analytics suite.
Continuing off the Docker Compose setup in part 1, I have created a new /logging-setup folder to house all logging-related configs. To avoid mixing different configs in the same Compose yml, the logging services have their own Compose yml. It will later be run alongside docker-compose.local-microservices.yml when starting Docker Compose.
Prerequisites
The setup differs by operating system. I’ll be going over a setup that works for Linux distros.
Setup for fluentd
Increase the maximum number of file descriptors to at least 65536. Check the limit on your system by running ulimit -n. If it is less than 65536, add the following lines to /etc/security/limits.conf under the header line starting with #<domain>:
#<domain> <type> <item> <value>
root soft nofile 65536
root hard nofile 65536
* soft nofile 65536
* hard nofile 65536
I prefer Tilde as my text editor, hence I open this file with sudo tilde /etc/security/limits.conf
Implement these limits by rebooting your system.
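After the reboot, a quick check that the new limit took effect:
ulimit -n
# expected to print 65536 if the lines above were picked up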
For high-load environments with many Fluentd instances, add the following configuration to your /etc/sysctl.conf file:
net.core.somaxconn = 1024
net.core.netdev_max_backlog = 5000
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_wmem = 4096 12582912 16777216
net.ipv4.tcp_rmem = 4096 12582912 16777216
net.ipv4.tcp_max_syn_backlog = 8096
net.ipv4.tcp_slow_start_after_idle = 0
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 10240 65535
Apply these sysctl settings by running sudo sysctl -p or by rebooting.
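To spot-check that a couple of the values are active:
sysctl net.core.somaxconn net.ipv4.tcp_tw_reuse
# net.core.somaxconn = 1024
# net.ipv4.tcp_tw_reuse = 1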
Setup for OpenSearch
Make sure vm.max_map_count is set to at least 262144. Check by running cat /proc/sys/vm/max_map_count. To increase the count, edit /etc/sysctl.conf and add the line vm.max_map_count=262144. As with the Fluentd changes to /etc/sysctl.conf, apply by running sudo sysctl -p or by rebooting.
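If you want to try the setting before touching /etc/sysctl.conf, it can also be applied temporarily for the current boot (it resets on reboot):
sudo sysctl -w vm.max_map_count=262144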
Compose yml for the logging service
Let’s look at the Compose yml. From the top level, it has the following:
version: '3'
services:
  ...
volumes:
  opensearch-data1:
  opensearch-data2:
networks:
  opensearch-net:
version: '3' specifies that the Compose file format is version 3. Different versions are compatible with different Docker Engine releases.
volumes specifies the locations used to persist data generated and used by the Docker containers. The two volumes shown store the data gathered by the opensearch-node1 and opensearch-node2 services. Both opensearch nodes communicate inside the Compose environment over the opensearch-net network.
Now for the config of each service, starting with fluentd:
fluentd:
  container_name: fluentd
  user: root
  build:
    context: ./logging-setup/fluentd
  image: fluentd
  volumes:
    # Container location where dockerhost stores all container logs
    - /var/lib/docker/containers:/fluentd/log/containers
    # Location of all .conf files
    # Declared this way because Compose will run from the root folder context instead of inside /logging-setup
    - ./logging-setup/fluentd/config:/fluentd/etc/
    # Save location of logs gathered by the file-fluent.conf tail input plugin
    - ./logging-setup/fluentd/logs:/output/
  logging:
    driver: 'local'
  network_mode: host
Note that the context refers to what’s inside the logging-setup folder. This is because the logging-setup docker-compose.yml will later be run from the root of the repo, which means the context path is relative to the repo’s root.
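Since the build context points at ./logging-setup/fluentd, that folder also holds a Dockerfile for the Fluentd image. As a rough sketch (not the repo’s actual file, and the base image tag is an assumption), it mainly needs to install the OpenSearch output plugin used later by opensearch-fluent.conf:
# Hypothetical sketch of logging-setup/fluentd/Dockerfile; the repo's file may differ
FROM fluent/fluentd:v1.16-debian-1
USER root
# Output plugin referenced by opensearch-fluent.conf
RUN gem install fluent-plugin-opensearch --no-document
USER fluent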
Now for the opensearch node configs:
opensearch-node1:
  image: opensearchproject/opensearch:latest
  container_name: opensearch-node1
  environment:
    - cluster.name=opensearch-cluster
    - node.name=opensearch-node1
    - discovery.seed_hosts=opensearch-node1,opensearch-node2
    - cluster.initial_master_nodes=opensearch-node1,opensearch-node2
    # along with the memlock settings below, disables swapping
    - bootstrap.memory_lock=true
    # minimum and maximum Java heap size, recommend setting both to 50% of system RAM
    - 'OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m'
    # Prevents run errors due to securityadmin.sh
    - 'DISABLE_SECURITY_PLUGIN=true'
  ulimits:
    memlock:
      soft: -1
      hard: -1
    nofile:
      # maximum number of open files for the OpenSearch user, set to at least 65536 on modern systems
      soft: 65536
      hard: 65536
  volumes:
    - opensearch-data1:/usr/share/opensearch/data
  ports:
    - 9200:9200
    - 9600:9600 # required for Performance Analyzer
  expose:
    - '9200'
    - '9600'
  networks:
    - opensearch-net
opensearch-node2:
  image: opensearchproject/opensearch:latest
  container_name: opensearch-node2
  environment:
    - cluster.name=opensearch-cluster
    - node.name=opensearch-node2
    - discovery.seed_hosts=opensearch-node1,opensearch-node2
    - cluster.initial_master_nodes=opensearch-node1,opensearch-node2
    - bootstrap.memory_lock=true
    - 'OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m'
    - 'DISABLE_SECURITY_PLUGIN=true'
  ulimits:
    memlock:
      soft: -1
      hard: -1
    nofile:
      soft: 65536
      hard: 65536
  volumes:
    - opensearch-data2:/usr/share/opensearch/data
  networks:
    - opensearch-net
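Once these two nodes are up (and since the security plugin is disabled, no credentials are needed), a quick way to confirm the cluster formed is to query the published port 9200:
curl "http://localhost:9200/_cluster/health?pretty"
# look for "status": "green" (or "yellow") and "number_of_nodes": 2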
Logs sent from fluentd to opensearch are, by default, single lines of string. To have them displayed as objects with fields, I added logstash. Here’s the config:
logstash:
  # ref: https://opensearch.org/docs/2.0/clients/logstash/index/
  image: opensearchproject/logstash-oss-with-opensearch-output-plugin:7.16.2
  container_name: logstash
  # Mounts the repo's .conf file into the container's pipeline folder
  volumes:
    - ./logging-setup/logstash/config/logstash.conf:/usr/share/logstash/pipeline/logstash.conf
  networks:
    - opensearch-net
Lastly, here’s the config for opensearch-dashboards:
opensearch-dashboards:
  image: opensearchproject/opensearch-dashboards:2.0.0
  container_name: opensearch-dashboards
  network_mode: host
  environment:
    # Use http for local runs, otherwise requests fail with '"message":"[ConnectionError]: connect ECONNREFUSED 127.0.0.1:9200"'
    - 'OPENSEARCH_HOSTS:["http://opensearch-node1:9200","http://opensearch-node2:9200"]'
    - 'DISABLE_SECURITY_DASHBOARDS_PLUGIN=true'
OPENSEARCH_HOSTS refers to the opensearch-node1 and opensearch-node2 services defined earlier.
network_mode is set to host to allow reaching the dashboard from our browser.
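With network_mode: host, the dashboard listens on port 5601 of the machine itself, so a quick reachability check looks like:
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:5601
# 200 (or a 3xx redirect) means the dashboard is up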
Now on to configuring log handling with .conf files.
.conf for Fluentd and OpenSearch
Files ending in .conf are the config files, and Fluentd looks for fluent.conf within the volume specified in the Compose config. To help organise them, it is possible to break one large fluent.conf file down into multiple .conf files, then import the smaller .conf files into fluent.conf.
The following line imports opensearch-fluent.conf located in the same folder:
@include opensearch-fluent.conf
Any .conf file has at least 2 tags:
<source>, which specifies where the logs are coming from. Ex:
<source>
  @type tail
  # All docker logs are in JSON
  format json
  read_from_head true
  tag docker.log
  # Path where dockerhost logs are mounted
  # /*/* are wildcards for container-id/container-id-json.log
  path /fluentd/log/containers/*/*-json.log
  # Position file so that fluentd knows where it is reading
  pos_file /tmp/container-logs.pos
</source>
<match>, which specifies where to send logs:
# The tag in <match> is the same as specified in <source>
<match docker.log>
  @type opensearch
  hosts http://127.0.0.1:9200
  logstash_format false
</match>
@type specifies the output plugin, which is opensearch in this case.
hosts targets localhost on port 9200 inside the Compose environment; this port is bound to the opensearch-node1 service.
logstash_format only applies if logs are sent to OpenSearch after being processed by logstash, hence its inclusion under the Compose yml’s services.
Logstash also requires its own .conf file. It is defined here.
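The repo’s logstash.conf isn’t reproduced here, but as a rough sketch (the input type, port, and index name below are assumptions, not the actual file), a pipeline for this image tends to look like:
# Hypothetical minimal pipeline for the logstash-oss-with-opensearch-output-plugin image;
# the repo's actual logstash.conf may differ
input {
  tcp {
    port => 5000
    codec => json
  }
}
output {
  opensearch {
    hosts => ["http://opensearch-node1:9200"]
    index => "logstash-logs-%{+YYYY.MM.dd}"
  }
}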
To apply any new changes to a Fluentd .conf file while it is running, restart fluentd with docker compose restart fluentd.
Now where to send the logs? That depends on the Output plugin. I’m going to show 2 options:
Export logs to local
(Local export .conf file) Using the example repo, do the following:
- fluent.conf: Uncomment @include file-fluent.conf and comment out @include opensearch-fluent.conf. Run one at a time to keep the logs readable.
- docker-compose.yml: Comment out the services specification for opensearch-node1, opensearch-node2, and opensearch-dashboards. These will throw errors with @include opensearch-fluent.conf commented out.
Next, start running the repo on Compose with:
docker compose -f docker-compose.local-microservices.yml -f ./logging-setup/docker-compose.yml --env-file .env up
Look in the repo under logging-setup/fluentd/logs, as specified under the fluentd service volumes in docker-compose.yml. Each log file has a filename of the format file-myapp.log.<ms from epoch>.log and can be opened with a text editor.
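Since these are just files on disk, the usual command-line tools work on them (the file name below is illustrative):
ls logging-setup/fluentd/logs/
# e.g. file-myapp.log.1650000000000.log
grep -i error logging-setup/fluentd/logs/*.log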
Export logs to OpenSearch
(OpenSearch .conf file) Make sure this line is not commented-out in fluent.conf:
@include opensearch-fluent.conf
Next, start running the repo on Compose with:
docker compose -f docker-compose.local-microservices.yml -f ./logging-setup/docker-compose.yml --env-file .env up
Once the logs have settled down, navigate in your browser to the OpenSearch dashboard at http://localhost:5601
There’s a bit of config to do here before searching the logs. Click on Add data. Then open the side menu and click Discover:
Since this is the first time opening the Dashboard, OpenSearch will ask to create an index pattern:
Note that if you land on a screen that prompts you to add data instead, it is because the Compose logs have not reached OpenSearch yet. Wait for a bit, then refresh.
Now to create an index pattern. Index patterns are matched against log sources; simply use the name of the source listed as the index pattern name. A * at the end of the pattern matches multiple indices.
Confirm this pattern:
Now reopen the Dashboards > Discover page. You should see the logs generated and a searchbar:
OpenSearch queries use DQL, which has some similarities with Grafana Loki’s LogQL. Think of the entire log as a set of fields with strings as values; DQL searches the specified field for a given string. Here are some queries to get started:
log:"'type':'response'" : Returns all logs of type === response
log is a field within the log object. You can see this by clicking on the top left arrow:
migration : Gets all logs containing the word “migration”
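A few more DQL patterns that tend to be useful (the log field name follows the example above; the search terms are just illustrations):
log:"error" and log:"migration" : Returns logs where both terms appear in the log field
log:time* : Trailing wildcard, matches words starting with “time”
not log:"healthcheck" : Excludes logs containing “healthcheck”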
You’ve reached the end! Congratulations and happy developing!
Tips
- Using an older version of Docker Compose and getting unexpected errors running the commands above that start with docker compose? Use docker-compose instead of docker compose. Ex: docker-compose -f docker-compose.local-microservices.yml logs operations-portal
References
Docker logs: https://www.youtube.com/watch?v=MvucgV8qzzk&t=236s
Setting up fluentd: https://www.youtube.com/watch?v=Gp0-7oVOtPw
Setup for Logstash on OpenSearch: https://opensearch.org/docs/2.0/clients/logstash/index/
Defining a Logstash config file and importing into docker-compose as a volume: https://www.instaclustr.com/support/documentation/opensearch/using-logstash/connecting-logstash-to-opensearch/