Grafana Alloy: Parsing Syslog RFC 5424
The following stages parse RFC 5424 syslog fields in a Grafana Alloy configuration file (config.alloy):
```alloy
stage.regex {
  expression = "^<(?P<priority>\\d{1,2}|1[0-8]\\d|19[01])>(?P<version>\\d{1,2})\\s(?P<timestamp>-|(?P<fullyear>[12]\\d{3})-(?P<month>0\\d|1[0-2])-(?P<mday>[0-2]\\d|3[01])T(?P<hour>[01]\\d|2[0-3]):(?P<minute>[0-5]\\d):(?P<second>[0-5]\\d|60)(?:\\.(?P<secfrac>\\d{1,6}))?(?P<numoffset>Z|[+-]\\d{2}:\\d{2}))\\s(?P<hostname>\\S{1,255})\\s(?P<appname>\\S{1,48})\\s(?P<procid>\\S{1,128})\\s(?P<msgid>\\S{1,32})\\s(?P<structureddata>-|\\[(?:[^\\[\\]]|\\\\.)*\\])(?:\\s(?P<msg>.+))?$"
}

stage.labels {
  values = {
    application    = "appname",
    pid            = "procid",
    msgid          = "msgid",
    structureddata = "structureddata",
  }
}
```
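Alloy's stage.regex uses Go's RE2 engine, but the `(?P<name>...)` named-group syntax is shared with Python's re module, so the expression can be sanity-checked locally before it goes into the pipeline. Here is a small sketch with a made-up sample message (note the single backslashes, since a Python raw string does not need the extra escaping used inside the Alloy string):

```python
import re

# Same pattern as in stage.regex above, written as a Python raw string.
RFC5424 = re.compile(
    r"^<(?P<priority>\d{1,2}|1[0-8]\d|19[01])>(?P<version>\d{1,2})\s"
    r"(?P<timestamp>-|(?P<fullyear>[12]\d{3})-(?P<month>0\d|1[0-2])-(?P<mday>[0-2]\d|3[01])"
    r"T(?P<hour>[01]\d|2[0-3]):(?P<minute>[0-5]\d):(?P<second>[0-5]\d|60)"
    r"(?:\.(?P<secfrac>\d{1,6}))?(?P<numoffset>Z|[+-]\d{2}:\d{2}))\s"
    r"(?P<hostname>\S{1,255})\s(?P<appname>\S{1,48})\s(?P<procid>\S{1,128})\s"
    r"(?P<msgid>\S{1,32})\s(?P<structureddata>-|\[(?:[^\[\]]|\\.)*\])(?:\s(?P<msg>.+))?$"
)

# Made-up sample line, not captured from a real system.
sample = "<165>1 2024-01-15T10:30:00.123Z myhost myapp 1234 ID47 - Application started"

m = RFC5424.match(sample)
if m:
    print(m.group("appname"), m.group("procid"), m.group("msg"))
    # -> myapp 1234 Application started
```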
If Grafana Alloy's own logs are also sent to this endpoint, it is useful to add the following stages, which extract the message body and strip the ts=... level=... prefix that Alloy writes at the start of its own log lines:
```alloy
stage.regex {
  expression = "^<(?P<priority>\\d{1,2}|1[0-8]\\d|19[01])>(?P<version>\\d{1,2})\\s(?P<timestamp>-|(?P<fullyear>[12]\\d{3})-(?P<month>0\\d|1[0-2])-(?P<mday>[0-2]\\d|3[01])T(?P<hour>[01]\\d|2[0-3]):(?P<minute>[0-5]\\d):(?P<second>[0-5]\\d|60)(?:\\.(?P<secfrac>\\d{1,6}))?(?P<numoffset>Z|[+-]\\d{2}:\\d{2}))\\s(?P<hostname>\\S{1,255})\\s(?P<appname>\\S{1,48})\\s(?P<procid>\\S{1,128})\\s(?P<msg>.*)$"
}

stage.output {
  source = "msg"
}

stage.replace {
  expression = "(ts=\\S+\\s+level=\\S+\\s+)"
  source     = "msg"
  replace    = ""
}

stage.output {
  source = "msg"
}
```
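The effect of the stage.replace expression is easy to preview outside Alloy. A quick Python sketch, using a made-up log line:

```python
import re

# Equivalent of the stage.replace expression above: drop the leading
# "ts=... level=..." pair and keep the rest of the line.
msg = 'ts=2024-01-15T10:30:00Z level=info msg="component started" component=loki.source.syslog'
cleaned = re.sub(r"ts=\S+\s+level=\S+\s+", "", msg)
print(cleaned)  # msg="component started" component=loki.source.syslog
```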
Accessing Zerotier’s REST API
Zerotier offers a REST API for managing and integrating with your network. By default, the API listens on TCP port 9993. To interact with it, an authentication token is required.
The authentication token is stored in the following file:
/var/lib/zerotier-one/authtoken.secret
To check if the Zerotier service is running correctly, you can use the curl command with the necessary authentication header. Here's how to do it:
```sh
curl -Lv -H "X-ZT1-Auth: $(cat /var/lib/zerotier-one/authtoken.secret)" http://localhost:9993/status 2>&1 | less
```
Breaking Down the Command:
- `curl -Lv`: `-L` follows any redirects the server might issue; `-v` enables verbose mode, providing detailed information about the request and response.
- `-H "X-ZT1-Auth: $(cat /var/lib/zerotier-one/authtoken.secret)"`: adds the custom `X-ZT1-Auth` header with the value of your authentication token. This is essential for authorized access to the API.
- `http://localhost:9993/status`: the endpoint to check the current status of the Zerotier service.
- `2>&1 | less`: redirects both standard output and standard error to `less` for easy reading and navigation of the output.
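If you prefer to script this check instead of calling curl by hand, the same request can be made from Python. This is a minimal sketch, assuming the requests package is installed and the process can read the token file (typically root-only):

```python
import requests

# Read the local Zerotier auth token (requires permission to read the file).
with open("/var/lib/zerotier-one/authtoken.secret") as f:
    token = f.read().strip()

resp = requests.get(
    "http://localhost:9993/status",
    headers={"X-ZT1-Auth": token},
    timeout=5,
)
resp.raise_for_status()
print(resp.json())  # node status as JSON
```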
I hope you found this guide helpful in navigating Zerotier’s REST API.
Docker Container Status as Prometheus Exporter Metrics
Tracking Docker container status in real time is a common challenge in DevOps. Popular tools like cAdvisor and the default Docker exporter for Prometheus often lack direct metrics for container states, meaning key insights—such as the number of containers that are running, stopped, or inactive—require complex workarounds. This limitation can complicate monitoring and lead to unreliable data.
Before creating docker_container_exporter, I relied on complex Prometheus queries to retrieve container statuses. This often involved calculations based on the last time a container was seen as active, but this approach had a major flaw: if the query time range didn’t match the last activity timestamp precisely, the data could be inaccurate or incomplete. Monitoring container states shouldn’t be this difficult.
With docker_container_exporter, this problem is solved. My tool captures real-time Docker container statuses, providing data on the number of running, stopped, and other container states, all in a Prometheus-compatible format. You can collect these metrics through a standard Prometheus polling process, or use agents like Grafana Alloy to push the data to Prometheus storage or compatible DB servers like Grafana Mimir or Thanos.
You can find my project in this GitHub repository: docker_container_exporter
In the README file, you’ll find details on how to use it, as well as instructions for integrating it with Grafana Alloy.
How to Keep Your Notion Docs Synced with Your Repositories Using readme2notion
Tired of updating documentation across multiple platforms? If your projects rely on Notion for planning and GitHub for collaboration, you might find yourself constantly re-copying information from your repository to your Notion pages.
Here’s a simpler solution: readme2notion.
This handy command-line tool allows you to sync your Markdown files directly to Notion, saving time and keeping everything in one place.
Why Syncing Docs with Notion Matters
Many businesses, freelancers, and creators rely on Notion for organizing their workflows and GitHub for managing code and documentation. But each time your repository changes, your Notion page can easily fall out of date, creating a disconnect between your development work and your project documentation.

readme2notion bridges this gap by automatically updating your Notion pages with the latest content from your repositories. You get an always-up-to-date Notion page that syncs with your codebase, so everyone on your team has the latest info in real time.
What is readme2notion?
At its core, readme2notion is a command-line tool that takes a Markdown file, converts it into Notion blocks, and uploads the content to a specified Notion page. This tool is versatile, finding the appropriate Notion page by name or creating one if it doesn’t exist. Once set up, it can transform your Markdown files into well-organized Notion pages with just one command.
How readme2notion Works
- Markdown Conversion: The tool reads your Markdown file and converts it into Notion-friendly blocks.
- Upload and Sync: It then uploads the converted content to a Notion page, either updating an existing page or creating a new one based on the settings you choose.
- Automated Updates: With a pre-push hook, readme2notion can automate the process, so every time you push a new change to your repository, your Notion page stays updated without any extra effort.
Key Features of readme2notion
- Simple Conversion: Easily converts Markdown text into Notion’s block-based format.
- Automatic Page Updates: Finds or creates the appropriate Notion page in the database you specify, meaning your docs are always up-to-date.
- Pre-Push Hook: This feature allows for completely automated updates to Notion. With every push, your Notion page gets a fresh update, making it perfect for remote teams or anyone who needs a reliable source of truth for documentation.
Why You Should Try readme2notion
Updating Notion pages by hand can be tedious, especially if you’re a developer or creator juggling multiple projects. This tool eliminates the hassle by letting you write documentation once in your repository’s README file and automatically reflecting those changes in Notion. Plus, readme2notion works seamlessly within your existing Git workflows, allowing your team to focus on what matters—building and creating—while staying informed and organized.
If your documentation process could use an upgrade, give readme2notion a try. It’s the easiest way to ensure your Notion workspace always reflects the latest state of your codebase.
Comparing Python Executable Packaging Tools: PEX, PyOxidizer, and PyInstaller
Packaging Python applications into standalone executables can simplify deployment and distribution, especially when dealing with users who may not have Python installed or when aiming for a seamless installation experience. Three prominent tools in this space are PEX, PyOxidizer, and PyInstaller. In this post, we’ll explore each of these tools, highlighting their features, how they work, and their pros and cons to help you decide which one suits your needs.
PEX (Python EXecutable)
- Website: docs.pex-tool.org
- GitHub: github.com/pex-tool/pex
Overview
PEX stands for Python EXecutable. It creates self-contained executable Python environments that can run on other machines without installing additional dependencies, as long as a compatible Python interpreter is available there.
Features
- Self-contained Executables: Packages all dependencies into a single file.
- Virtual Environment Management: Manages dependencies in an isolated environment.
- Support for Multiple Python Versions: Can target different Python versions.
- Reproducible Builds: Ensures consistent builds across different environments.
How It Works
PEX files are ZIP files with a special header that makes them executable. When you run a PEX file, it sets up an isolated environment and executes your application within it. Dependencies are resolved and bundled at build time, ensuring that the executable has everything it needs to run.
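Because a PEX is just a ZIP archive with an executable header, you can peek inside one with Python's standard library. A small sketch, assuming you have already built a file named app.pex (a placeholder name) with the pex tool:

```python
import zipfile

# "app.pex" is a placeholder for a PEX you have already built with the pex tool.
pex_path = "app.pex"

# The leading shebang line is what makes the archive directly executable.
with open(pex_path, "rb") as f:
    print(f.readline())  # e.g. b'#!/usr/bin/env python3.11\n'

# The rest of the file is a regular ZIP archive containing your code and deps.
with zipfile.ZipFile(pex_path) as zf:
    for name in zf.namelist()[:10]:
        print(name)
```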
Pros and Cons
Pros:
- Ease of Use: Straightforward command-line interface.
- Isolation: Avoids conflicts with system-installed packages.
- Flexible Configuration: Supports complex dependency management.
Cons:
- Platform Dependency: The generated PEX file is platform-specific (OS and Python version).
- Size Overhead: Can result in large executable files due to included dependencies.
PyOxidizer
- Website: gregoryszorc.com
- GitHub: github.com/indygreg/PyOxidizer
Overview
PyOxidizer is a tool that produces distributable binaries from Python applications. It embeds the Python interpreter and your application into a single executable file.
Features
- Single Executable Output: Creates a single binary without external dependencies.
- Embedded Python Interpreter: Embeds a full CPython interpreter inside a Rust-built executable.
- Cross-Compilation: Supports building executables for different platforms.
- Performance Optimization: Optimizes startup time and reduces runtime overhead.
How It Works
PyOxidizer uses Rust to compile your Python application into a binary. It embeds the Python interpreter and compiles your Python code into bytecode, which is then included in the binary. This approach results in a single executable that can be distributed without requiring a separate Python installation.
Pros and Cons
Pros:
- No Runtime Dependencies: Users don’t need Python installed.
- Cross-Platform Support: Can build executables for Windows, macOS, and Linux.
- Optimized Performance: Faster startup times compared to other tools.
Cons:
- Complex Configuration: Requires understanding of Rust and PyOxidizer’s configuration.
- Relatively New Tool: May have less community support and fewer resources.
PyInstaller
- Website: pyinstaller.org
- GitHub: github.com/pyinstaller/pyinstaller
Overview
PyInstaller bundles a Python application and all its dependencies into a single package, which can be a directory or a standalone executable.
Features
- Multi-Platform Support: Works on Windows, macOS, and Linux.
- Customizable Builds: Allows inclusion or exclusion of files and dependencies.
- Support for Various Libraries: Handles complex dependencies like NumPy, PyQt, etc.
- One-Folder and One-File Modes: Choose between a directory of files or a single executable.
How It Works
PyInstaller analyzes your Python script to discover every other module and library your script needs to run. It then collects copies of all those files—including the active Python interpreter—and packs them into a single executable or a folder.
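PyInstaller can also be driven from Python code instead of the command line. A minimal sketch, where myscript.py and myapp are placeholder names:

```python
import PyInstaller.__main__

# Equivalent to running: pyinstaller --onefile --name myapp myscript.py
PyInstaller.__main__.run([
    "myscript.py",        # placeholder for your entry-point script
    "--onefile",          # produce a single self-contained executable
    "--name", "myapp",    # placeholder for the output name
])
```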
Pros and Cons
Pros:
- Ease of Use: Simple command-line usage.
- Wide Compatibility: Supports many third-party packages.
- Flexible Output Options: Choose between single-file or directory output.
Cons:
- Executable Size: Can produce large files.
- Hidden Imports: May miss some dependencies, requiring manual specification.
Comparison
| Feature | PEX | PyOxidizer | PyInstaller |
|---|---|---|---|
| Single Executable | Yes (but requires Python) | Yes | Yes |
| No Python Required | No | Yes | Yes |
| Cross-Platform | Yes (build on target OS) | Yes (cross-compilation) | Yes (build on target OS) |
| Ease of Use | Moderate | Complex | Easy |
| Executable Size | Large | Smaller | Large |
| Configuration | Flexible | Requires Rust knowledge | Simple |
| Community Support | Active | Growing | Extensive |
| GitHub Activity | Actively maintained | Unmaintained | Actively maintained |
Conclusion
Choosing the right tool depends on your specific needs and the assurance of ongoing support:
- Use PEX if you need a self-contained environment for systems where Python is available. Its active maintenance ensures that you can rely on timely updates and community support.
- Use PyOxidizer if you prefer a single executable without runtime dependencies and are comfortable with Rust-style configuration. Note, however, that the project is no longer actively maintained, so weigh that risk before adopting it for long-term projects.
- Use PyInstaller if you value simplicity and extensive community support. Its active maintenance status means you can expect regular updates and a wealth of community resources.
How to Manage Environment Variables for Production and Development with Docker Compose
Managing environment variables for different environments, such as production and development, is crucial for deploying applications effectively. In this post, I’ll demonstrate how to use Docker Compose with .env files to easily switch between these environments, using the example of setting a DEBUG_LEVEL variable to control application logging.
To start, you’ll need different .env files for each environment:
1. .env (Common configuration)
```
ENVIRONMENT=prod
UBUNTU_VERSION=24.04
```
This common .env file sets the default ENVIRONMENT to prod (production) and specifies the Ubuntu version. These variables are used across all environments.
2. .env.prod (Production-specific configuration)
```
DEBUG_LEVEL=ERROR
```
In the production environment, DEBUG_LEVEL is set to ERROR to minimize logging output and avoid exposing sensitive information.
3. .env.dev (Development-specific configuration)
```
DEBUG_LEVEL=DEBUG
```
In the development environment, DEBUG_LEVEL is set to DEBUG to provide detailed logs for troubleshooting and development purposes.
The compose.yaml file is set up to dynamically load the appropriate environment file based on the ENVIRONMENT variable, which can be set either in the shell or in the .env file:
```yaml
services:
  test:
    image: ubuntu:${UBUNTU_VERSION}
    command: ["sh", "-c", "env"]
    env_file:
      - .env.${ENVIRONMENT}
```
This configuration uses the env_file directive to load the environment-specific file (.env.prod or .env.dev) based on the value of the ENVIRONMENT variable.
If the ENVIRONMENT variable is set in both the .env file and the shell, the value set in the shell will take precedence. This is useful for temporarily overriding the environment setting without modifying the .env file. For example:
Setting the ENVIRONMENT variable in the shell:
```sh
export ENVIRONMENT=dev
```
If you also have ENVIRONMENT=prod set in the .env file, the shell setting will overwrite it, and the development environment settings will be used:
```console
$ docker compose up
[+] Running 2/1
 ✔ Network tmp_default   Created  0.1s
 ✔ Container tmp-test-1  Created  0.1s
Attaching to test-1
test-1  | DEBUG_LEVEL=DEBUG
test-1  | HOSTNAME=f9002b77bc79
test-1  | HOME=/root
test-1  | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
test-1 exited with code 0
```
If you want to use the production settings instead, you can unset the shell variable and rely on the value in the .env file:
```sh
unset ENVIRONMENT
```
Then, when you run docker compose up again, the output will reflect the production environment:
```console
$ docker compose up
[+] Running 2/1
 ✔ Network tmp_default   Created  0.1s
 ✔ Container tmp-test-1  Created  0.1s
Attaching to test-1
test-1  | DEBUG_LEVEL=ERROR
test-1  | HOSTNAME=f9002b77bc79
test-1  | HOME=/root
test-1  | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
test-1 exited with code 0
```
By using .env files and setting the ENVIRONMENT variable in the shell, you have the flexibility to manage environment variables dynamically. This approach simplifies switching between environments and ensures consistent deployment settings, minimizing the risk of configuration errors and improving application reliability.