Tag: docker

Docker Container Status as Prometheus Exporter Metrics

Reading time: 2 – 2 minutes

Tracking Docker container status in real time is a common challenge in DevOps. Popular tools like cAdvisor and the default Docker exporter for Prometheus often lack direct metrics for container states, meaning key insights—such as the number of containers that are running, stopped, or inactive—require complex workarounds. This limitation can complicate monitoring and lead to unreliable data.

Before creating docker_container_exporter, I relied on complex Prometheus queries to retrieve container statuses. This often involved calculations based on the last time a container was seen as active, but this approach had a major flaw: if the query time range didn’t match the last activity timestamp precisely, the data could be inaccurate or incomplete. Monitoring container states shouldn’t be this difficult.

With docker_container_exporter, this problem is solved. My tool captures real-time Docker container statuses, providing data on the number of running, stopped, and other container states, all in a Prometheus-compatible format. You can collect these metrics through a standard Prometheus polling process, or use agents like Grafana Alloy to push the data to Prometheus storage or compatible DB servers like Grafana Mimir or Thanos.
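
For a standard polling setup, the Prometheus side is an ordinary scrape job. Here is a minimal sketch; the job name and port are placeholders, so check the project README for the exporter's actual listen address:

scrape_configs:
  - job_name: "docker_container_exporter"
    static_configs:
      - targets: ["localhost:9100"]  # replace with the exporter's real host:port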

You can find my project in this GitHub repository: docker_container_exporter

In the README file, you’ll find details on how to use it, as well as instructions for integrating it with Grafana Alloy.

The Power of the .env File in Docker Compose

Reading time: 30 – 50 minutes

In this post, I’ll demonstrate how powerful and flexible the .env file can be when setting up a compose.yaml for Docker Compose. This approach allows for easy management of environment variables, making your Docker configurations more dynamic and manageable.

Let’s start with a simple .env file:

UBUNTU_VERSION=24.04

And a corresponding compose.yaml file:

services:
  test:
    image: ubuntu:${UBUNTU_VERSION}
    command: ["sh", "-c", "env"]

When you run the Docker Compose stack with the command docker compose up, you’ll see output like this:

$ docker compose up
[+] Running 2/1
  Network tmp_default   Created                                                                          0.1s
  Container tmp-test-1  Created                                                                          0.1s
Attaching to test-1
test-1  | HOSTNAME=f9002b77bc79
test-1  | HOME=/root
test-1  | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
test-1  | PWD=/
test-1 exited with code 0

However, to make the variables defined in the .env file available within the Docker container, you need to add a couple of lines to your compose.yaml file:

services:
  test:
    image: ubuntu:${UBUNTU_VERSION}
    command: ["sh", "-c", "env"]
    env_file:
      - .env

After updating the compose.yaml file, run the docker compose up command again. This time, you’ll notice that the UBUNTU_VERSION environment variable is now included in the container’s environment:

$ docker compose up
[+] Running 1/0
  Container tmp-test-1  Recreated                                                                        0.1s
Attaching to test-1
test-1  | HOSTNAME=069e3c4a4413
test-1  | HOME=/root
test-1  | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
test-1  | UBUNTU_VERSION=24.04
test-1  | PWD=/
test-1 exited with code 0

This is incredibly convenient because maintaining the .env file allows you to easily manage environment variables across different services without modifying the compose.yaml file each time. This example clearly illustrates how powerful and useful it is to use .env files in Docker Compose configurations.
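
As a side note, Compose's variable interpolation also supports fallback values, which pairs nicely with this pattern; a small sketch (the fallback version here is just an example):

services:
  test:
    # falls back to 22.04 when UBUNTU_VERSION is unset in .env and the shell
    image: ubuntu:${UBUNTU_VERSION:-22.04}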

Sniffing Network Traffic in Docker Containers: Leveraging Host’s tcpdump, tcpflow, and more

Reading time: 9 – 15 minutes

In a Dockerized environment, you often need to monitor network traffic, but you might not want to install sniffing tools inside the container itself. By entering the network namespace of the container, you can use the host’s networking tools, such as tcpdump, tcpflow, and others, without adding anything to the container’s environment.

Step 1: Dive into the Container’s Network Namespace

Fetch the SandboxKey, which denotes the container’s network namespace:

SANDBOX_KEY=$(docker inspect <CONTAINER_ID> --format '{{ .NetworkSettings.SandboxKey }}')

Enter the container’s network namespace:

sudo nsenter --net=$SANDBOX_KEY
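
If you only need a single capture rather than an interactive shell, nsenter can also run one command directly inside the namespace and return; a sketch, with the interface and output file as placeholders:

sudo nsenter --net=$SANDBOX_KEY tcpdump -i eth0 -w capture.pcap

When working interactively instead, type exit to return to the host’s namespace.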

Step 2: Sniff Network Traffic Using Host’s Tools

Having entered the namespace, you can now utilize the host’s packages.

Using tcpdump:

tcpdump -i <INTERFACE_NAME> -w <OUTPUT_FILE.pcap>

Replace <INTERFACE_NAME> as per your requirements (typically eth0 for Docker containers), and <OUTPUT_FILE.pcap> with the desired capture file.
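
Using tcpflow:

A minimal invocation might look like this, assuming tcpflow is installed on the host:

tcpflow -i <INTERFACE_NAME> -o <OUTPUT_DIRECTORY>

Here <OUTPUT_DIRECTORY> is where the captured streams will be saved.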

Conclusion

By navigating into a Docker container’s network namespace, you can readily use the network tools installed on the host system. This strategy circumvents the need to pollute the container with additional packages, upholding the principle of container immutability.

Avoid Filling Your System with Docker Logs: A Quick Guide

Reading time: 7 – 12 minutes

If you’re using Docker, you might have noticed that over time, logs can accumulate and take up a significant amount of space on your system. This can be a concern, especially if you’re running containers that generate a lot of log data.

To help you avoid this issue, I’m sharing a quick configuration tweak for Docker. By adjusting the daemon.json file, you can limit the size and number of log files Docker retains.

Here’s the configuration:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "1"
  }
}

What does this configuration do?

  • "log-driver": "json-file": This ensures Docker uses the default json-file logging driver, which writes log messages in JSON format.
  • "log-opts": {…}: This section contains the logging options.
    • "max-size": "10m": Limits the maximum size of each log file to 10MB.
    • "max-file": "1": Restricts Docker to retaining only one log file.

By implementing this configuration, you ensure that Docker only keeps a single log file with a maximum size of 10MB. Once the log reaches this size, Docker rotates it, and since only one file is retained, the oldest entries are discarded instead of eating up your storage.

To apply this configuration, add the above JSON to your daemon.json file, typically located at /etc/docker/daemon.json on Linux systems, and restart the Docker service. Note that these defaults only affect containers created after the restart; existing containers keep the logging options they were created with.
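
On a systemd-based Linux host, the restart and a quick sanity check might look like this:

sudo systemctl restart docker
docker info --format '{{.LoggingDriver}}'  # should print: json-file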

I hope this tip helps you manage your Docker logs more efficiently. Happy containerizing!

Introducing Netshoot: A Powerful Network Troubleshooting Tool for Docker

Reading time: 20 – 34 minutes

Networking issues can be a real headache, especially when dealing with containerized applications. Whether it’s latency, routing problems, DNS resolution, firewall issues, or incomplete ARPs, network problems can significantly degrade application performance. Fortunately, there’s a powerful tool that can help you troubleshoot and resolve these issues: netshoot.

What is Netshoot?

Netshoot is a Docker container equipped with a comprehensive set of networking troubleshooting tools. It’s designed to help you diagnose and fix Docker and Kubernetes networking issues. With a proper understanding of how Docker and Kubernetes networking works and the right tools, you can troubleshoot and resolve these networking issues more effectively.

Understanding Network Namespaces

Before diving into the usage of netshoot, it’s essential to understand a key concept: Network Namespaces. Network namespaces provide isolation of the system resources associated with networking. Docker uses network and other types of namespaces (pid, mount, user, etc.) to create an isolated environment for each container. Interfaces, routes, and IP addresses are completely isolated within the network namespace of the container.

The cool thing about namespaces is that you can switch between them. You can enter a different container’s network namespace and troubleshoot its network stack with tools that aren’t even installed in that container. Additionally, netshoot can be used to troubleshoot the host itself by using the host’s network namespace. This allows you to perform any troubleshooting without installing new packages directly on the host or in your application’s image.

Using Netshoot with Docker

Container’s Network Namespace

If you’re having networking issues with your application’s container, you can launch netshoot with that container’s network namespace like this:

$ sudo docker run -it --net container:<container_name> nicolaka/netshoot

Host’s Network Namespace

If you think the networking issue is on the host itself, you can launch netshoot with the host’s network namespace:

$ sudo docker run -it --net host nicolaka/netshoot
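
Once inside either namespace, all the bundled tools operate on that network stack; for example, a quick look at open sockets and interfaces:

ss -tulpn
ip addr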

Network’s Network Namespace

If you want to troubleshoot a Docker network, you can enter the network’s namespace using nsenter; the netshoot README covers this in its nsenter section.
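
A rough sketch of the idea, assuming the network’s namespace is one of those Docker exposes under /var/run/docker/netns (the namespace ID below is a placeholder):

# list the network namespaces Docker manages on this host
sudo ls /var/run/docker/netns
# enter one of them and inspect it with the host's tools
sudo nsenter --net=/var/run/docker/netns/<NAMESPACE_ID> ip addr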

Using Netshoot with Docker Compose

You can easily deploy netshoot with Docker Compose using something like this:

version: "3.6"
services:
  tcpdump:
    image: nicolaka/netshoot
    depends_on:
      - nginx
    command: tcpdump -i eth0 -w /data/nginx.pcap
    network_mode: service:nginx
    volumes:
      - $PWD/data:/data

  nginx:
    image: nginx:alpine
    ports:
      - 80:80
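
To use this stack, bring it up, generate some traffic against nginx, and then read the capture back; stopping the stack first makes sure tcpdump has flushed its buffer to the file (paths as configured above):

docker compose up -d
curl -s http://localhost/ > /dev/null
docker compose down
tcpdump -r data/nginx.pcap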

Included Packages

Netshoot includes a wide range of powerful tools for network troubleshooting. Here’s a list of the included packages along with a brief description of each:

  • apache2-utils: Utilities for web server benchmarking and server status monitoring.
  • bash: A popular Unix shell.
  • bind-tools: Tools for querying DNS servers.
  • bird: Internet routing daemon.
  • bridge-utils: Utilities for configuring the Linux Ethernet bridge.
  • busybox-extras: Provides several stripped-down Unix tools in a single executable.
  • conntrack-tools: Tools for managing connection tracking records.
  • curl: Tool for transferring data with URL syntax.
  • dhcping: Tool to send DHCP requests to DHCP servers.
  • drill: Tool similar to dig.
  • ethtool: Tool for displaying and changing NIC settings.
  • file: Tool to determine the type of a file.
  • fping: Tool to ping multiple hosts.
  • grpcurl: Command-line tool for interacting with gRPC servers.
  • iftop: Displays bandwidth usage on an interface.
  • iperf: Tool for measuring TCP and UDP bandwidth performance.
  • iperf3: A newer version of iperf.
  • iproute2: Collection of utilities for controlling TCP/IP networking.
  • ipset: Tool to manage IP sets.
  • iptables: User-space utility program for configuring the IP packet filter rules.
  • iptraf-ng: Network monitoring tool.
  • iputils: Set of small useful utilities for Linux networking.
  • ipvsadm: Utility to administer the IP Virtual Server services.
  • jq: Lightweight and flexible command-line JSON processor.
  • libc6-compat: Compatibility libraries for glibc.
  • liboping: C library to generate ICMP echo requests.
  • ltrace: A library call tracer.
  • mtr: Network diagnostic tool.
  • net-snmp-tools: Set of SNMP management tools.
  • netcat-openbsd: Networking tool known as the “Swiss army knife” of networking.
  • nftables: Successor to iptables.
  • ngrep: Network packet analyzer.
  • nmap: Network exploration tool and security scanner.
  • nmap-nping: Packet generation and response analysis tool.
  • nmap-scripts: Scripts for nmap.
  • openssl: Toolkit for the Transport Layer Security (TLS) and Secure Sockets Layer (SSL) protocols.
  • py3-pip: Package installer for Python.
  • py3-setuptools: Python Distutils Enhancements.
  • scapy: Packet manipulation tool.
  • socat: Relay for bidirectional data transfer.
  • speedtest-cli: Command-line interface for testing internet bandwidth.
  • openssh: OpenSSH client and server.
  • strace: System call tracer.
  • tcpdump: Packet analyzer.
  • tcptraceroute: Traceroute implementation using TCP packets.
  • tshark: Network protocol analyzer.
  • util-linux: Miscellaneous system utilities.
  • vim: Highly configurable text editor.
  • git: Distributed version control system.
  • zsh: Unix shell.
  • websocat: Simple WebSocket client.
  • swaks: Swiss Army Knife for SMTP.
  • perl-crypt-ssleay: Perl module for OpenSSL.
  • perl-net-ssleay: Perl module for using OpenSSL.

With this extensive set of tools, netshoot is a powerful ally in diagnosing and resolving network issues in your Docker and Kubernetes environments. Whether you’re dealing with latency, routing problems, DNS resolution, firewall issues, or incomplete ARPs, netshoot has the tools you need to troubleshoot and fix these issues.

If you’re interested in trying out netshoot for yourself, you can find the project on GitHub at https://github.com/nicolaka/netshoot.

Get the IP addresses of local Docker containers

Reading time: 13 – 21 minutes

We have Docker running with containers that are connected to their own private network. To efficiently manage and monitor these containers, it’s often useful to retrieve their private IP addresses.

With the following command, you can easily obtain the private IP addresses of all running Docker containers:

sudo docker inspect $(docker ps -q) --format='{{ printf "%-50s" .Name}} {{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' | sort -t. -k2,2n -k3,3n -k4,4n

Output example:

$ sudo docker inspect $(docker ps -q ) --format='{{ printf "%-50s" .Name}} {{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' | sort -t. -k2,2n -k3,3n -k4,4n
/rproxy                                            10.3.10.2
/n8n                                               10.3.10.4
/semaphore                                         10.3.10.6
/code                                              10.3.10.7
/ssh                                               10.3.10.9
/nodered                                           10.3.10.11
/pihole_opendns                                    10.3.10.23
/pihole_googledns                                  10.3.10.24
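
If you only care about a single network, docker network inspect can produce a similar listing without the sort gymnastics; a sketch, with the network name as a placeholder:

sudo docker network inspect <NETWORK_NAME> --format '{{range .Containers}}{{printf "%-50s" .Name}} {{.IPv4Address}}{{"\n"}}{{end}}'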

socat: publish a port only available in localhost

Reading time: 11 – 18 minutes

Assume that we have a service only listening on localhost (127.0.0.0/8) and we want to expose this port temporarily. Of course, you can use iptables to redirect the port, but take care: this is not a simple DNAT, because the packets will not be evaluated by the PREROUTING (-t nat) rules.

Another option is to use an old but powerful Swiss-army-knife tool: socat (github) (my fork).

# binds public port to any local interface
socat TCP-LISTEN:<public_port>,fork TCP:127.0.0.1:<internal_port>
# binds only to an IP address
SOCAT_SOCKADDR=<interface_IP> socat TCP-LISTEN:<public_port>,fork TCP:127.0.0.1:<internal_port>

# examples:

# binds to all interfaces:
socat TCP-LISTEN:1880,fork TCP:127.0.0.1:1880
# just for an IP address of one interface:
SOCAT_SOCKADDR=10.2.0.110 socat TCP-LISTEN:1880,fork TCP:127.0.0.1:1880
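
A note on the SOCAT_SOCKADDR environment variable: if your socat build does not honor it, stock socat expresses the same binding with the bind option:

# equivalent using the standard bind option
socat TCP-LISTEN:1880,bind=10.2.0.110,fork TCP:127.0.0.1:1880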

Truncate docker logs

Reading time: 2 – 3 minutes

Sometimes, when a container has been running for a long time, calling docker logs produces an extra-long dump, and a recurrent Google search to remember how to truncate a file becomes mandatory. This is the trick that saves me from that uncomfortably long log dump:

truncate -s 0 $(docker inspect --format='{{.LogPath}}' CONTAINER_ID)
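
Since the log file lives under /var/lib/docker/containers/, root privileges are usually required; a slightly hardened variant with quoting:

sudo truncate -s 0 "$(sudo docker inspect --format='{{.LogPath}}' CONTAINER_ID)"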

Windows 10: enable/disable Hyper-V from CLI

Reading time: 7 – 11 minutes

Assuming we’re running a Windows shell with administrator privileges, the following commands make it possible to enable or disable Hyper-V. In my case this is needed because, while Hyper-V is running, VirtualBox can only run 32-bit virtual machines; at the same time, I need Hyper-V because Docker for Windows requires it. Note that a reboot is required for the change to take effect.

#enable Hyper-V
dism.exe /Online /Enable-Feature:Microsoft-Hyper-V /All

#disable Hyper-V
dism.exe /Online /Disable-Feature:Microsoft-Hyper-V
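
To check the feature’s current state before toggling it, dism can query it as well:

dism.exe /Online /Get-FeatureInfo /FeatureName:Microsoft-Hyper-V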

Having docker in mind

Reading time: < 1 minute

Starting the new year with innovation ideas in mind :)

[Photo gallery]
