Package Fluent Bit config for Elasticsearch


Article ID: 345071






Fluent Bit should be configured to send logs in JSON format to Elasticsearch, and it should dynamically create separate indices in Elasticsearch based on the namespace where the pod is located.


VMware Tanzu Kubernetes Grid 1.x


The provided config effectively fulfills this requirement by allowing you to customize the Elasticsearch index name based on Kubernetes metadata, including the namespace. This customization ensures that logs from different namespaces are stored in separate Elasticsearch indices, facilitating efficient log management and analysis.

    filters: |-
        [FILTER]
          Name                kubernetes
          Match               kube.*
          Kube_URL            https://kubernetes.default.svc.cluster.local:443
          Merge_Log           On
          Merge_Log_Key       data
          K8S-Logging.Parser  On
          K8S-Logging.Exclude On

        [FILTER]
          Name    lua
          Match   kube.*
          script  /fluent-bit/etc/fluentbit.lua
          call    set_index
          code    function set_index(tag, timestamp, record) cluster = "myindexname-" if record["kubernetes"] ~= nil then if record["kubernetes"]["namespace_name"] ~= nil then if record["kubernetes"]["labels"] ~= nil and record["kubernetes"]["labels"][""] ~= nil then record["es_index"] = cluster .. record["kubernetes"]["namespace_name"] .. "-" .. record["kubernetes"]["labels"][""] return 1, timestamp, record end record["es_index"] = cluster .. record["kubernetes"]["namespace_name"] return 1, timestamp, record end end return 1, timestamp, record end
    inputs: |-
        [INPUT]
          Name                tail
          Path                /var/log/containers/*.log
          Parser              cri
          Tag                 kube.*
          Mem_Buf_Limit       50MB
          Skip_Long_Lines     On
    outputs: |-
        [OUTPUT]
          Name                es
          Match               *
          Host                ELKServerName
          Port                9200
          Replace_Dots        On
          Retry_Limit         False
          Logstash_Format     On
          Logstash_Prefix_Key es_index
          Type                doc
          HTTP_User           myUser
          HTTP_Passwd         myPassWord
          tls                 On
          tls.verify          Off
    parsers: |-
        [PARSER]
          Name        cri
          Format      regex
          Regex       ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>[^ ]*) (?<log>.*)$
          Time_Key    time
          Time_Format %Y-%m-%dT%H:%M:%S.%L%z

        [PARSER]
          Name        cri
          Format      json
          Time_Key    time
          Time_Format %Y-%m-%dT%H:%M:%S.%L
          Time_Keep   On
  • Filters Configuration:

    • Two filters are defined: kubernetes and lua.
    • The kubernetes filter is used to parse Kubernetes logs and configure some options related to log merging and parsing.
    • The lua filter references an external Lua script (fluentbit.lua) and defines a custom function (set_index) to manipulate the Elasticsearch index based on Kubernetes metadata.
    • In summary, the Lua script sets the es_index field in the log record from Kubernetes metadata. If the pod carries the label referenced in the script (the label key is left blank in this example), its value is appended after the namespace name; otherwise the index name is the cluster prefix plus only the namespace_name. This provides flexibility in how Elasticsearch index names are derived from incoming log data, allowing custom index naming based on Kubernetes metadata.
  • Parsers Configuration:

    • Two parsers named cri are defined: one using a regex for plain CRI log lines and another for JSON-formatted logs. Both specify the timestamp format and how the log message is extracted.
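The index-naming logic of set_index can be sketched in Python (this is an illustration only, not part of the configuration; the label key "app" is a hypothetical stand-in for the label name, which is left blank in the Lua above):

```python
# Python sketch of the Lua set_index logic (illustrative only).
LABEL_KEY = "app"  # hypothetical label key; blank in the Lua script above

def set_index(record, cluster="myindexname-", label_key=LABEL_KEY):
    """Add an es_index field derived from Kubernetes metadata, if present."""
    k8s = record.get("kubernetes")
    if k8s and k8s.get("namespace_name"):
        label = (k8s.get("labels") or {}).get(label_key)
        if label:
            # Label present: cluster prefix + namespace + label value
            record["es_index"] = cluster + k8s["namespace_name"] + "-" + label
        else:
            # No label: cluster prefix + namespace only
            record["es_index"] = cluster + k8s["namespace_name"]
    return record

# With the label set: myindexname-default-frontend
print(set_index({"kubernetes": {"namespace_name": "default",
                                "labels": {"app": "frontend"}}})["es_index"])
# Without the label: myindexname-kube-system
print(set_index({"kubernetes": {"namespace_name": "kube-system"}})["es_index"])
```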

Overall, this Fluent Bit configuration is designed to process and forward container logs from Kubernetes pods to an Elasticsearch instance. The custom Lua script (fluentbit.lua) dynamically determines the Elasticsearch index name based on Kubernetes metadata, allowing for log separation by namespace. Ensure that this configuration aligns with your specific log processing and security requirements and that you properly handle authentication and certificate verification in your production environment.
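Because Logstash_Format is On and Logstash_Prefix_Key is es_index, the es output appends the record's date to the computed prefix. A minimal sketch of the resulting index name, assuming Fluent Bit's default Logstash date format of %Y.%m.%d:

```python
from datetime import date

def logstash_index(prefix: str, d: date) -> str:
    # The es output appends the record date as YYYY.MM.DD by default
    # when Logstash_Format is On.
    return f"{prefix}-{d.strftime('%Y.%m.%d')}"

print(logstash_index("myindexname-default", date(2024, 1, 31)))
# myindexname-default-2024.01.31
```

So a pod in the default namespace would write to a daily index such as myindexname-default-2024.01.31, which is what lets Elasticsearch manage retention and queries per namespace and per day.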