
Centralized log management (Graylog, Logstash, Fluentd)

This guide explains how you can send your logs to a centralized log management system like Graylog, Logstash (inside the Elastic Stack or ELK - Elasticsearch, Logstash, Kibana) or Fluentd (inside EFK - Elasticsearch, Fluentd, Kibana).

There are many ways to centralize your logs (if you are using Kubernetes, the simplest way is to log to the console and ask your cluster administrator to integrate a central log manager inside your cluster). In this guide, we show how to send them to an external tool using the quarkus-logging-gelf extension, which can use TCP or UDP to send logs in the Graylog Extended Log Format (GELF).

The quarkus-logging-gelf extension adds a GELF log handler to the logging backend that Quarkus uses (jboss-logmanager). It is disabled by default; if you enable it but still use another handler (the console handler is enabled by default), your logs will be sent to both handlers.
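
For example, if you want logs to go to GELF only, you can disable the console handler. A minimal application.properties sketch (quarkus.log.console.enable is the standard Quarkus console handler switch):

quarkus.log.handler.gelf.enabled=true
# Optional: stop mirroring logs to the console once GELF is enabled
quarkus.log.console.enable=false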

Prerequisites

To complete this guide, you need:

  • Roughly 15 minutes

  • An IDE

  • JDK 17+ installed with JAVA_HOME configured appropriately

  • Apache Maven 3.9.6

  • Docker or Podman, and Docker Compose

  • Optionally the Quarkus CLI if you want to use it

  • Optionally Mandrel or GraalVM installed and configured appropriately if you want to build a native executable (or Docker if you use a native container build)

Example application

The following examples will all be based on the same example application that you can create with the following steps.

Create an application with the quarkus-logging-gelf extension. You can use the following command to create it:

CLI
quarkus create app org.acme:gelf-logging \
    --extension='rest,logging-gelf' \
    --no-code
cd gelf-logging

To create a Gradle project, add the --gradle or --gradle-kotlin-dsl option.

For more information about how to install and use the Quarkus CLI, see the Quarkus CLI guide.

Maven
mvn io.quarkus.platform:quarkus-maven-plugin:3.9.3:create \
    -DprojectGroupId=org.acme \
    -DprojectArtifactId=gelf-logging \
    -Dextensions='rest,logging-gelf' \
    -DnoCode
cd gelf-logging

To create a Gradle project, add the -DbuildTool=gradle or -DbuildTool=gradle-kotlin-dsl option.

For Windows users:

  • If using cmd, don’t use backward slash \ and put everything on the same line

  • If using Powershell, wrap -D parameters in double quotes e.g. "-DprojectArtifactId=gelf-logging"

If you already have your Quarkus project configured, you can add the logging-gelf extension to your project by running the following command in your project base directory:

CLI
quarkus extension add logging-gelf
Maven
./mvnw quarkus:add-extension -Dextensions='logging-gelf'
Gradle
./gradlew addExtension --extensions='logging-gelf'

This will add the following dependency to your build file:

pom.xml
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-logging-gelf</artifactId>
</dependency>
build.gradle
implementation("io.quarkus:quarkus-logging-gelf")

For demonstration purposes, we create an endpoint that does nothing but log a sentence. You don’t need to do this inside your application.

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;

import org.jboss.logging.Logger;

@Path("/gelf-logging")
@ApplicationScoped
public class GelfLoggingResource {
    private static final Logger LOG = Logger.getLogger(GelfLoggingResource.class);

    @GET
    public void log() {
        LOG.info("Some useful log message");
    }

}

Configure the GELF log handler to send logs to an external UDP endpoint on port 12201:

quarkus.log.handler.gelf.enabled=true
quarkus.log.handler.gelf.host=localhost
quarkus.log.handler.gelf.port=12201
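
If you prefer TCP over UDP, the same configuration works with a tcp: prefix on the hostname, as described in the configuration reference below:

quarkus.log.handler.gelf.enabled=true
quarkus.log.handler.gelf.host=tcp:localhost
quarkus.log.handler.gelf.port=12201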

Send logs to Graylog

To send logs to Graylog, you first need to launch the components that compose the Graylog stack:

  • MongoDB

  • Elasticsearch

  • Graylog

You can do this with the following docker-compose.yml file, which you can launch via docker-compose up -d:

version: '3.2'

services:
  elasticsearch:
    image: docker.io/elastic/elasticsearch:8.12.1
    ports:
      - "9200:9200"
    environment:
      ES_JAVA_OPTS: "-Xms512m -Xmx512m"
      discovery.type: "single-node"
      cluster.routing.allocation.disk.threshold_enabled: false
    networks:
      - graylog

  mongo:
    image: mongo:4.0
    networks:
      - graylog

  graylog:
    image: graylog/graylog:4.3.0
    ports:
      - "9000:9000"
      - "12201:12201/udp"
      - "1514:1514"
    environment:
      GRAYLOG_HTTP_EXTERNAL_URI: "http://127.0.0.1:9000/"
      # CHANGE ME (must be at least 16 characters)!
      GRAYLOG_PASSWORD_SECRET: "forpasswordencryption"
      # Password: admin
      GRAYLOG_ROOT_PASSWORD_SHA2: "8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918"
    networks:
      - graylog
    depends_on:
      - elasticsearch
      - mongo

networks:
  graylog:
    driver: bridge

Then, you need to create a UDP input in Graylog. You can do it from the Graylog web console (System → Inputs → Select GELF UDP), available at http://localhost:9000, or via the API.

This curl example creates a new Input of type GELF UDP; it uses the default Graylog login (admin/admin).

curl -H "Content-Type: application/json" -H "Authorization: Basic YWRtaW46YWRtaW4=" -H "X-Requested-By: curl" -X POST -v -d \
'{"title":"udp input","configuration":{"recv_buffer_size":262144,"bind_address":"0.0.0.0","port":12201,"decompress_size_limit":8388608},"type":"org.graylog2.inputs.gelf.udp.GELFUDPInput","global":true}' \
http://localhost:9000/api/system/inputs
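
To check that the input was created, you can list the configured inputs with a GET request on the same endpoint (again using the default admin/admin credentials):

curl -H "Authorization: Basic YWRtaW46YWRtaW4=" http://localhost:9000/api/system/inputs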

Launch your application; you should see your logs arriving in Graylog.
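
To generate a log message, hit the example endpoint, then search for the message in the Graylog web console (8080 is the default Quarkus HTTP port):

curl http://localhost:8080/gelf-logging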

Send logs to Logstash / the Elastic Stack (ELK)

Logstash ships with an input plugin that understands the GELF format. First, create a pipeline that enables this plugin.

Create the following file in $HOME/pipelines/gelf.conf:

input {
  gelf {
    port => 12201
  }
}
output {
  stdout {}
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
  }
}

Finally, launch the components that compose the Elastic Stack:

  • Elasticsearch

  • Logstash

  • Kibana

You can do this with the following docker-compose.yml file, which you can launch via docker-compose up -d:

# Launch the Elastic Stack (Elasticsearch, Logstash, Kibana)
version: '3.2'

services:
  elasticsearch:
    image: docker.io/elastic/elasticsearch:8.12.1
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xms512m -Xmx512m"
      discovery.type: "single-node"
      cluster.routing.allocation.disk.threshold_enabled: false
    networks:
      - elk

  logstash:
    image: docker.io/elastic/logstash:8.12.1
    volumes:
      - source: $HOME/pipelines
        target: /usr/share/logstash/pipeline
        type: bind
    ports:
      - "12201:12201/udp"
      - "5000:5000"
      - "9600:9600"
    networks:
      - elk
    depends_on:
      - elasticsearch

  kibana:
    image: docker.io/elastic/kibana:8.12.1
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

networks:
  elk:
    driver: bridge

Launch your application; you should see your logs arriving in the Elastic Stack. You can use Kibana, available at http://localhost:5601/, to access them.

Send logs to Fluentd (EFK)

First, you need to create a Fluentd image with the required plugins: elasticsearch and input-gelf. You can use the following Dockerfile, which should be created inside a fluentd directory.

FROM fluent/fluentd:v1.3-debian
RUN ["gem", "install", "fluent-plugin-elasticsearch", "--version", "3.7.0"]
RUN ["gem", "install", "fluent-plugin-input-gelf", "--version", "0.3.1"]

You can build the image or let docker-compose build it for you.
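
If you prefer to build it manually, a plain docker build works; the fluentd-gelf tag below is just an illustrative name:

docker build -t fluentd-gelf ./fluentd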

Then, you need to create a Fluentd configuration file at $HOME/fluentd/fluent.conf:

<source>
  @type gelf
  tag example.gelf
  bind 0.0.0.0
  port 12201
</source>

<match example.gelf>
  @type elasticsearch
  host elasticsearch
  port 9200
  logstash_format true
</match>

Finally, launch the components that compose the EFK Stack:

  • Elasticsearch

  • Fluentd

  • Kibana

You can do this with the following docker-compose.yml file, which you can launch via docker-compose up -d:

version: '3.2'

services:
  elasticsearch:
    image: docker.io/elastic/elasticsearch:8.12.1
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xms512m -Xmx512m"
      discovery.type: "single-node"
      cluster.routing.allocation.disk.threshold_enabled: false
    networks:
      - efk

  fluentd:
    build: fluentd
    ports:
      - "12201:12201/udp"
    volumes:
      - source: $HOME/fluentd
        target: /fluentd/etc
        type: bind
    networks:
      - efk
    depends_on:
      - elasticsearch

  kibana:
    image: docker.io/elastic/kibana:8.12.1
    ports:
      - "5601:5601"
    networks:
      - efk
    depends_on:
      - elasticsearch

networks:
  efk:
    driver: bridge

Launch your application; you should see your logs arriving in EFK. You can use Kibana, available at http://localhost:5601/, to access them.

GELF alternative: use Syslog

You can also send your logs to Fluentd using a Syslog input. Unlike the GELF input, the Syslog input will not render multiline logs as a single event; this is why we advise using the GELF input implemented in Quarkus.

First, you need to create a Fluentd image with the elasticsearch plugin. You can use the following Dockerfile, which should be created inside a fluentd directory.

FROM fluent/fluentd:v1.3-debian
RUN ["gem", "install", "fluent-plugin-elasticsearch", "--version", "3.7.0"]

Then, you need to create a Fluentd configuration file at $HOME/fluentd/fluent.conf:

<source>
  @type syslog
  port 5140
  bind 0.0.0.0
  message_format rfc5424
  tag system
</source>

<match **>
  @type elasticsearch
  host elasticsearch
  port 9200
  logstash_format true
</match>

Then, launch the components that compose the EFK Stack:

  • Elasticsearch

  • Fluentd

  • Kibana

You can do this with the following docker-compose.yml file, which you can launch via docker-compose up -d:

version: '3.2'

services:
  elasticsearch:
    image: docker.io/elastic/elasticsearch:8.12.1
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xms512m -Xmx512m"
      discovery.type: "single-node"
      cluster.routing.allocation.disk.threshold_enabled: false
    networks:
      - efk

  fluentd:
    build: fluentd
    ports:
      - "5140:5140/udp"
    volumes:
      - source: $HOME/fluentd
        target: /fluentd/etc
        type: bind
    networks:
      - efk
    depends_on:
      - elasticsearch

  kibana:
    image: docker.io/elastic/kibana:8.12.1
    ports:
      - "5601:5601"
    networks:
      - efk
    depends_on:
      - elasticsearch

networks:
  efk:
    driver: bridge

Finally, configure your application to send logs to EFK using Syslog:

quarkus.log.syslog.enable=true
quarkus.log.syslog.endpoint=localhost:5140
quarkus.log.syslog.protocol=udp
quarkus.log.syslog.app-name=quarkus
quarkus.log.syslog.hostname=quarkus-test

Launch your application; you should see your logs arriving in EFK. You can use Kibana, available at http://localhost:5601/, to access them.

Elasticsearch indexing consideration

Be careful: by default, Elasticsearch automatically maps unknown fields (if not disabled in the index settings) by detecting their type. This can become tricky if you use log parameters (included by default) or if you enable MDC inclusion (disabled by default), because the first log defines the type of the message parameter (or MDC parameter) field inside the index.

Imagine the following case:

LOG.info("some {} message {} with {} param", 1, 2, 3);
LOG.info("other {} message {} with {} param", true, true, true);

With log message parameters enabled, the first log message sent to Elasticsearch will have a MessageParam0 parameter with an int type; this configures the index with a field of type integer. When the second message arrives in Elasticsearch, it will have a MessageParam0 parameter with the boolean value true, which will generate an indexing error.

To work around this limitation, you can disable sending log message parameters via logging-gelf by configuring quarkus.log.handler.gelf.include-log-message-parameters=false, or you can configure your Elasticsearch index to store those fields as text or keyword; Elasticsearch will then automatically translate int/boolean values to String.
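
As an illustration of the second option, you could map the parameter field explicitly before indexing any logs. The following sketch uses the Elasticsearch index template API; the gelf_* index pattern is an assumption, so adapt it to the index your stack actually writes to:

curl -X PUT "http://localhost:9200/_index_template/gelf-params" \
  -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["gelf_*"],
  "template": {
    "mappings": {
      "properties": {
        "MessageParam0": { "type": "keyword" }
      }
    }
  }
}'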

See the following documentation for Graylog (but the same issue exists for the other central logging stacks): Custom Index Mappings.

Configuration reference

Configuration is done through the usual application.properties file.

Configuration property fixed at build time - All other configuration properties are overridable at runtime

quarkus.log.handler.gelf.enabled
Determine whether to enable the GELF logging handler.
Environment variable: QUARKUS_LOG_HANDLER_GELF_ENABLED
Type: boolean. Default: false.

quarkus.log.handler.gelf.host
Hostname/IP address of the Logstash/Graylog host. By default it uses UDP; prepend tcp: to the hostname to switch to TCP, for example "tcp:localhost".
Environment variable: QUARKUS_LOG_HANDLER_GELF_HOST
Type: string. Default: localhost.

quarkus.log.handler.gelf.port
The port.
Environment variable: QUARKUS_LOG_HANDLER_GELF_PORT
Type: int. Default: 12201.

quarkus.log.handler.gelf.version
GELF version: 1.0 or 1.1.
Environment variable: QUARKUS_LOG_HANDLER_GELF_VERSION
Type: string. Default: 1.1.

quarkus.log.handler.gelf.extract-stack-trace
Whether to post stack traces to the StackTrace field.
Environment variable: QUARKUS_LOG_HANDLER_GELF_EXTRACT_STACK_TRACE
Type: boolean. Default: true.

quarkus.log.handler.gelf.stack-trace-throwable-reference
Only used when extract-stack-trace is true. A value of 0 will extract the whole stack trace. Any positive value will walk the cause chain: 1 corresponds to exception.getCause(), 2 to exception.getCause().getCause(), and so on. Negative values walk the exception chain from the root cause side: -1 will extract the root cause, -2 the exception wrapping the root cause, and so on.
Environment variable: QUARKUS_LOG_HANDLER_GELF_STACK_TRACE_THROWABLE_REFERENCE
Type: int. Default: 0.

quarkus.log.handler.gelf.filter-stack-trace
Whether to perform stack trace filtering.
Environment variable: QUARKUS_LOG_HANDLER_GELF_FILTER_STACK_TRACE
Type: boolean. Default: false.

quarkus.log.handler.gelf.timestamp-pattern
Java date pattern, see java.text.SimpleDateFormat.
Environment variable: QUARKUS_LOG_HANDLER_GELF_TIMESTAMP_PATTERN
Type: string. Default: yyyy-MM-dd HH:mm:ss,SSS.

quarkus.log.handler.gelf.level
The logging-gelf log level.
Environment variable: QUARKUS_LOG_HANDLER_GELF_LEVEL
Type: Level. Default: ALL.

quarkus.log.handler.gelf.facility
Name of the facility.
Environment variable: QUARKUS_LOG_HANDLER_GELF_FACILITY
Type: string. Default: jboss-logmanager.

quarkus.log.handler.gelf.include-full-mdc
Whether to include all fields from the MDC.
Environment variable: QUARKUS_LOG_HANDLER_GELF_INCLUDE_FULL_MDC
Type: boolean. Default: false.

quarkus.log.handler.gelf.mdc-fields
Send additional fields whose values are obtained from the MDC. Field names are comma-separated. Example: mdcFields=Application,Version,SomeOtherFieldName
Environment variable: QUARKUS_LOG_HANDLER_GELF_MDC_FIELDS
Type: string. No default.

quarkus.log.handler.gelf.dynamic-mdc-fields
Dynamic MDC fields allow you to extract MDC values based on one or more regular expressions. Multiple regexes are comma-separated. The name of the MDC entry is used as the GELF field name.
Environment variable: QUARKUS_LOG_HANDLER_GELF_DYNAMIC_MDC_FIELDS
Type: string. No default.

quarkus.log.handler.gelf.dynamic-mdc-field-types
Pattern-based type specification for additional and MDC fields. Key-value pairs are comma-separated. Example: my_field.*=String,business\..*\.field=double
Environment variable: QUARKUS_LOG_HANDLER_GELF_DYNAMIC_MDC_FIELD_TYPES
Type: string. No default.

quarkus.log.handler.gelf.maximum-message-size
Maximum message size (in bytes). If the message size is exceeded, the appender will submit the message in multiple chunks.
Environment variable: QUARKUS_LOG_HANDLER_GELF_MAXIMUM_MESSAGE_SIZE
Type: int. Default: 8192.

quarkus.log.handler.gelf.include-log-message-parameters
Include message parameters from the log event.
Environment variable: QUARKUS_LOG_HANDLER_GELF_INCLUDE_LOG_MESSAGE_PARAMETERS
Type: boolean. Default: true.

quarkus.log.handler.gelf.include-location
Include source code location.
Environment variable: QUARKUS_LOG_HANDLER_GELF_INCLUDE_LOCATION
Type: boolean. Default: true.

quarkus.log.handler.gelf.origin-host
Origin hostname.
Environment variable: QUARKUS_LOG_HANDLER_GELF_ORIGIN_HOST
Type: string. No default.

quarkus.log.handler.gelf.skip-hostname-resolution
Bypass hostname resolution. If you didn’t set the origin-host property and resolution is disabled, the value “unknown” will be used as the hostname.
Environment variable: QUARKUS_LOG_HANDLER_GELF_SKIP_HOSTNAME_RESOLUTION
Type: boolean. Default: false.

Post additional fields

quarkus.log.handler.gelf.additional-field."field-name".value
Additional field value.
Environment variable: QUARKUS_LOG_HANDLER_GELF_ADDITIONAL_FIELD__FIELD_NAME__VALUE
Type: string. Required.

quarkus.log.handler.gelf.additional-field."field-name".type
Additional field type specification. Supported types: String, long, Long, double, Double and discover. discover is the default if not specified; it discovers the field type based on parseability.
Environment variable: QUARKUS_LOG_HANDLER_GELF_ADDITIONAL_FIELD__FIELD_NAME__TYPE
Type: string. Default: discover.

This extension uses the logstash-gelf library, which allows more configuration options via system properties; its documentation is available at https://logging.paluch.biz/.
