Docker and Kubernetes Log Collection Solutions

1. Docker Log Collection

1.1 Docker's Built-in Log Drivers

Docker currently ships with built-in support for a number of log drivers, including:

| Driver | Description | Note |
|--------|-------------|------|
| none | No logs will be available for the container and docker logs will not return any output. | |
| json-file | The logs are formatted as JSON. The default logging driver for Docker. | Default |
| syslog | Writes logging messages to the syslog facility. The syslog daemon must be running on the host machine. | |
| journald | Writes log messages to journald. The journald daemon must be running on the host machine. | |
| gelf | Writes log messages to a Graylog Extended Log Format (GELF) endpoint such as Graylog or Logstash. | |
| fluentd | Writes log messages to fluentd (forward input). The fluentd daemon must be running on the host machine. | |
| awslogs | Writes log messages to Amazon CloudWatch Logs. | |
| splunk | Writes log messages to splunk using the HTTP Event Collector. | |
| etwlogs | Writes log messages as Event Tracing for Windows (ETW) events. | Only available on Windows platforms. |
| gcplogs | Writes log messages to Google Cloud Platform (GCP) Logging. | |
| nats | NATS logging driver for Docker. Publishes log entries to a NATS server. | |

Note: the docker logs command only works with the json-file and journald drivers.

1.2 Configuring Log Drivers

Log drivers are configured with the --log-driver=<DRIVER> and --log-opt <NAME>=<VALUE> flags, where --log-opt sets options specific to the chosen driver. These can be applied to the docker daemon as a global default, or to a single container via docker run. If --log-driver is not specified, the default is json-file.
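
As a minimal sketch of both styles (the syslog driver and the /etc/docker/daemon.json file are standard docker mechanisms on newer versions; the address and option values here are purely illustrative):

# Per-container: run one container with the syslog driver.
$ docker run --log-driver=syslog --log-opt syslog-address=udp://127.0.0.1:514 alpine echo hello

# Daemon-wide default via /etc/docker/daemon.json (restart the daemon afterwards):
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}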

The current settings can be checked with the following commands:

# Check the docker daemon's global log driver
$ docker info | grep 'Logging Driver'

# Check a single container's log driver
$ docker inspect -f '{{.HostConfig.LogConfig.Type}}' <CONTAINER>

json-file Options

The json-file logging driver supports the following logging options:

| Option | Description | Example value |
|--------|-------------|---------------|
| max-size | The maximum size of the log before it is rolled. A positive integer plus a modifier representing the unit of measure (k, m, or g). | --log-opt max-size=10m |
| max-file | The maximum number of log files that can be present. If rolling the logs creates excess files, the oldest file is removed. Only effective when max-size is also set. A positive integer. | --log-opt max-file=3 |
| labels | Applies when starting the Docker daemon. A comma-separated list of logging-related labels this daemon will accept. Used for advanced log tag options. | --log-opt labels=production_status,geo |
| env | Applies when starting the Docker daemon. A comma-separated list of logging-related environment variables this daemon will accept. Used for advanced log tag options. | --log-opt env=os,customer |
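
As a hedged illustration of the labels option (per the json-file driver's documented behavior, label values show up under an "attrs" key in each log line; the production_status label name is just an example):

# Record the container's production_status label with every log message.
$ docker run --label production_status=testing \
    --log-opt labels=production_status alpine echo hello

# Each line of the container's *-json.log file then carries the label:
# {"log":"hello\n","stream":"stdout","attrs":{"production_status":"testing"},"time":"..."}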

1.3 Log Rotation with the Built-in json-file Driver

Setting a single container:

$ docker run -it --log-opt max-size=10m --log-opt max-file=3 busybox /bin/sh -c 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 0.001; done'

Setting the docker daemon globally:

$ dockerd \
     --log-driver=json-file \
     --log-opt max-size=10m \
     --log-opt max-file=3

On CentOS 7.2, docker is managed by systemd, so these flags must be added to the corresponding service file, /usr/lib/systemd/system/docker.service.
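
A minimal sketch using a systemd drop-in (standard systemd practice; the drop-in file name log-opts.conf is arbitrary), which avoids editing the vendor service file directly:

$ mkdir -p /etc/systemd/system/docker.service.d
$ cat > /etc/systemd/system/docker.service.d/log-opts.conf <<EOF
[Service]
# The empty ExecStart= clears the vendor definition before redefining it.
ExecStart=
ExecStart=/usr/bin/dockerd --log-driver=json-file --log-opt max-size=10m --log-opt max-file=3
EOF
$ systemctl daemon-reload && systemctl restart docker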

When the log-collection network is unavailable, the logs a container generates still need limits on individual file size and maximum file count. Since k8s does not support setting these options per pod, log rotation has to be configured on the docker daemon side.

1.4 Log Rotation with logrotate

If you do not use docker's built-in json-file rotation, the logrotate utility that ships with Linux can rotate the generated log files according to configured conditions.

The following sample is taken from a logrotate shell function in Google's internal setup scripts, and can serve as a reference for a logrotate configuration.

# Installs logrotate configuration files
function setup-logrotate() {
  mkdir -p /etc/logrotate.d/
  cat >/etc/logrotate.d/docker-containers <<EOF
/var/lib/docker/containers/*/*-json.log {
    rotate 5
    copytruncate
    missingok
    notifempty
    compress
    maxsize 10M
    daily
    dateext
    dateformat -%Y%m%d-%s
    create 0644 root root
}
EOF

  # Configure log rotation for all logs in /var/log, which is where k8s services
  # are configured to write their log files. Whenever logrotate is run, this
  # config will:
  # * rotate the log file if its size is > 100MB OR if one day has elapsed
  # * save rotated logs into a gzipped timestamped backup
  # * log file timestamp (controlled by 'dateformat') includes seconds too. This
  #   ensures that logrotate can generate unique logfiles during each rotation
  #   (otherwise it skips rotation if 'maxsize' is reached multiple times in a
  #   day).
  # * keep only 5 old (rotated) logs, and will discard older logs.
  cat > /etc/logrotate.d/allvarlogs <<EOF
/var/log/*.log {
    rotate 5
    copytruncate
    missingok
    notifempty
    compress
    maxsize 100M
    daily
    dateext
    dateformat -%Y%m%d-%s
    create 0644 root root
}
EOF

}

Because the application logrotate operates on may not provide any log-reopening mechanism, logrotate offers a directive named copytruncate. It copies the log file first and then truncates it in place; the file handle held by the application never changes throughout, so the application does not need to be told to reopen its log file. Note, however, that there is a window between the copy and the truncate, so some log data may be lost.
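
Before relying on a configuration, it can be exercised by hand (-d is logrotate's dry-run/debug flag and -f forces a rotation; both are standard):

# Dry run: show what would be rotated, without touching any files.
$ logrotate -d /etc/logrotate.d/docker-containers

# Force a rotation now and check that compressed, timestamped archives appear.
$ logrotate -f /etc/logrotate.d/docker-containers
$ ls /var/lib/docker/containers/*/*-json.log*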

1.5 Related Links

  1. Docker Logging-Fluentd: testing docker's built-in Fluentd log driver
  2. Managing docker logs with fluentd
  3. Configure logging drivers
  4. Logrotate for Docker container (Docker version > 1.8)
  5. Managing nginx log files with logrotate

2. k8s Log Collection

2.1 Overview

Logs for a pod in a k8s cluster can be viewed with kubectl logs <pod>:

counter.yaml

apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args: [/bin/sh, -c,
            'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']

Start the pod:

kubectl create -f  counter.yaml     

View the logs:

$ kubectl logs counter
0: Mon Jan  1 00:00:00 UTC 2001
1: Mon Jan  1 00:00:01 UTC 2001
2: Mon Jan  1 00:00:02 UTC 2001
...

The kubectl logs --previous flag can be used to retrieve the logs of a container that has already crashed. [to be confirmed]

Options for kubectl logs:

  -c, --container string    Print the logs of this container
  -f, --follow              Specify if the logs should be streamed.
      --limit-bytes int     Maximum bytes of logs to return. Defaults to no limit.
  -p, --previous            If true, print the logs for the previous instance of the container in a pod if it exists.
      --since duration      Only return logs newer than a relative duration like 5s, 2m, or 3h. Defaults to all logs. Only one of since-time / since may be used.
      --since-time string   Only return logs after a specific date (RFC3339). Defaults to all logs. Only one of since-time / since may be used.
      --tail int            Lines of recent log file to display. Defaults to -1, showing all log lines. (default -1)
      --timestamps          Include timestamps on each line in the log output
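
For example, combining the flags above (the pod name counter comes from the earlier example; the durations are arbitrary):

# Stream the last 100 lines of the counter pod, with timestamps.
$ kubectl logs -f counter --tail=100 --timestamps

# Logs from the previous (crashed) instance of the container.
$ kubectl logs -p counter

# Only logs from the last 10 minutes.
$ kubectl logs counter --since=10m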

Note: the docker json-file logging driver treats each line as a separate message. If you need to handle multi-line messages, this must be done in the logging agent (fluentd) or at a higher application layer.
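
One way to handle this in fluentd is in_tail's multiline format (a standard fluentd feature; the path, tag, and the leading-timestamp regex below are assumptions for illustration, not part of the stock k8s configuration):

cat >> /etc/td-agent/td-agent.conf <<'EOF'
<source>
  type tail
  path /var/log/myapp/app.log
  pos_file /var/log/myapp-app.log.pos
  tag myapp.multiline
  format multiline
  # A new message starts with a timestamp; lines without one are merged into it.
  format_firstline /^\d{4}-\d{2}-\d{2}/
  format1 /^(?<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) (?<message>.*)/
</source>
EOF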

Related cluster-level logging discussions in the k8s community:

  1. Defining log-driver and log-opt when specifying pod in RC and Pod
  2. Use docker log rotation mechanism instead of logrotate
  3. Kubernetes logging, journalD, fluentD, and Splunk, oh my!
  4. GCE log rotation approach

2.2 k8s Cluster Log Collection

Generally, the cluster log collection stack is EFK: Fluentd as the logging agent, Elasticsearch as the storage backend, and Kibana for web display. The overall structure is as follows:

[Figure: EFK logging architecture]

Because the k8s cluster already creates symlinks (xxxx.log) under /var/log/containers/ on each node pointing to the pods' /var/lib/docker/containers/xxxxx/*-json.log files, simply having fluentd watch the *.log files under /var/log/containers/ is enough to collect all pod logs. And since log rotation is configured at the docker layer, disk usage is bounded and expired logs failing to be cleaned up is generally not a problem.
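
A minimal sketch of the corresponding fluentd source, modeled on the shape of the fluentd-elasticsearch addon configuration (the pos_file location is illustrative):

cat >> /etc/td-agent/td-agent.conf <<'EOF'
<source>
  type tail
  # Each line of a *-json.log file is itself a JSON object: {"log": ..., "stream": ..., "time": ...}
  path /var/log/containers/*.log
  pos_file /var/log/es-containers.log.pos
  format json
  time_format %Y-%m-%dT%H:%M:%S.%NZ
  tag kubernetes.*
  read_from_head true
</source>
EOF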

2.3 Quick Verification

To install a single-node k8s cluster, use the official minikube tool. See the installation docs Running Kubernetes Locally via Minikube and Hello Minikube.

Help output for the minikube command:

minikube -h

Minikube is a CLI tool that provisions and manages single-node Kubernetes clusters optimized for development workflows.

Usage:
  minikube [command]

Available Commands:
  addons           Modify minikube's kubernetes addons
  completion       Outputs minikube shell completion for the given shell (bash)
  config           Modify minikube config
  dashboard        Opens/displays the kubernetes dashboard URL for your local cluster
  delete           Deletes a local kubernetes cluster.
  docker-env       sets up docker env variables; similar to '$(docker-machine env)'
  get-k8s-versions Gets the list of available kubernetes versions available for minikube.
  ip               Retrieve the IP address of the running cluster.
  logs             Gets the logs of the running localkube instance, used for debugging minikube, not user code.
  service          Gets the kubernetes URL(s) for the specified service in your local cluster
  ssh              Log into or run a command on a machine with SSH; similar to 'docker-machine ssh'
  start            Starts a local kubernetes cluster.
  status           Gets the status of a local kubernetes cluster.
  stop             Stops a running local kubernetes cluster.
  version          Print the version of minikube.

Flags:
      --alsologtostderr                  log to standard error as well as files
      --log_backtrace_at traceLocation   when logging hits line file:N, emit a stack trace (default :0)
      --log_dir string                   If non-empty, write log files in this directory (default "")
      --logtostderr                      log to standard error instead of files
      --show-libmachine-logs             Deprecated: To enable libmachine logs, set --v=3 or higher
      --stderrthreshold severity         logs at or above this threshold go to stderr (default 2)
  -v, --v Level                          log level for V logs
      --vmodule moduleSpec               comma-separated list of pattern=N settings for file-filtered logging

Create the resources following the samples in the official addons/fluentd-elasticsearch project.

Because the official fluentd yaml file sets a nodeSelector on a node label, the following steps are required:

kubectl label nodes minikube alpha.kubernetes.io/fluentd-ds-ready="true"
kubectl get nodes --show-labels=true

In addition, because minikube mounts its directories under a different root, fluentd's monitored log directory is slightly different and must be modified; see minikube/issues/876.
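
Before editing the yaml, the actual log location inside the minikube VM can be confirmed over ssh (minikube ssh appears in the help output above; the path is the one the issue reports):

$ minikube ssh "ls /mnt/sda1/var/lib/docker/containers"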

$ cat fluentd-es-ds.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd-es-v1.22
  namespace: kube-system
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    version: v1.22
spec:
  template:
    metadata:
      labels:
        k8s-app: fluentd-es
        kubernetes.io/cluster-service: "true"
        version: v1.22
    spec:
      containers:
      - name: fluentd-es
        image: gcr.io/google_containers/fluentd-elasticsearch:1.22
        command:
          - '/bin/sh'
          - '-c'
          - '/usr/sbin/td-agent 2>&1 >> /var/log/fluentd.log'
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /mnt/sda1/var/lib/docker/containers   # <-- changed for minikube
          readOnly: true
      nodeSelector:
        alpha.kubernetes.io/fluentd-ds-ready: "true"
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /mnt/sda1/var/lib/docker/containers   # <-- changed for minikube

The official k8s fluentd-elasticsearch sample repository.

If you want to access kibana through a NodePort, either skip the official kibana service file during creation and expose the deployment directly, or modify the service file as shown below:

$ kubectl expose deployment kibana-logging --type=NodePort --name=kibana-logging --namespace=kube-system

apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "Kibana"
spec:
  type: NodePort
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
    nodePort: 30001
  selector:
    k8s-app: kibana-logging

Accessing kibana: on the host where minikube is installed, run kubectl proxy to set up a proxy; it listens on http://127.0.0.1:8001 by default. The dashboard is then reachable at http://127.0.0.1:8001, and kibana at http://127.0.0.1:8001/api/v1/proxy/namespaces/kube-system/services/kibana-logging/app/kibana. The concrete addresses can be obtained via kubectl cluster-info, substituting http://127.0.0.1:8001 for the cluster address.
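
For example (minikube service is listed in the help output above; its --url flag is assumed to print the URL instead of opening a browser):

# Proxy the apiserver locally and fetch kibana through it.
$ kubectl proxy &
Starting to serve on 127.0.0.1:8001
$ curl -s http://127.0.0.1:8001/api/v1/proxy/namespaces/kube-system/services/kibana-logging/app/kibana

# Alternatively, with the NodePort service created above:
$ minikube service kibana-logging --namespace=kube-system --url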

A sample log entry as captured by Kibana:

{
  "_index": "logstash-2017.02.12",
  "_type": "fluentd",
  "_id": "AVoxb-CeCrbCIC9LpJHo",
  "_score": null,
  "_source": {
    "log": "Sun Feb 12 08:28:16 UTC 2017 2690: \n",
    "stream": "stdout",
    "docker": {
      "container_id": "f1f5e8a00a000c5c5cf8a811be9b0e073a960eae51ffe07c4b47da98f7cdc83c"
    },
    "kubernetes": {
      "namespace_name": "default",
      "pod_id": "7b4f5106-f0f1-11e6-960f-0800275f9957",
      "pod_name": "counter",
      "container_name": "count",
      "labels": {},
      "host": "minikube"
    },
    "tag": "kubernetes.var.log.containers.counter_default_count-f1f5e8a00a000c5c5cf8a811be9b0e073a960eae51ffe07c4b47da98f7cdc83c.log",
    "@timestamp": "2017-02-12T08:28:16+00:00"
  },
  "fields": {
    "@timestamp": [
      1486888096000
    ]
  },
  "sort": [
    1486888096000
  ]
}

2.4 References

  1. k8s addons-fluentd-elasticsearch
  2. Kubernetes Log Analysis with Fluentd, Elasticsearch and Kibana
  3. [Logging in Kubernetes with Fluentd and Elasticsearch](http://www.dasblinkenlichten.com/logging-in-kubernetes-with-fluentd-and-elasticsearch/)
  4. k8s user guide-Logging Overview

3. Other Log Collection Issues

Because pods are created and destroyed frequently, the log directories also need a periodic cleanup script.
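
A hedged sketch of such a cleanup job (the retention window, paths, and file patterns are assumptions to adapt; live *-json.log files are left to docker/logrotate, so only rotated archives and dangling symlinks are touched):

#!/bin/sh
# Illustrative /etc/cron.daily/clean-container-logs:
# drop rotated archives older than 7 days, and remove the dangling
# /var/log/containers symlinks left behind by deleted pods.
find /var/lib/docker/containers -name '*-json.log-*' -mtime +7 -delete
find /var/log/containers -xtype l -delete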
