2022-11-11
An EFK log collection stack for a Kubernetes cluster
Kubernetes itself does not ship with a log collection solution. There are three main approaches:
1. Run an agent on every node to collect logs. Since the agent has to run on every node, a DaemonSet controller is the natural way to deploy it. This approach only captures application logs written to stdout and stderr. In short, one log-agent container runs per node and collects the logs under that node's /var/log and /var/lib/docker/containers/ directories.
2. Include a sidecar container in every Pod to collect that application's logs (a rough sketch follows this list). Running a log agent as a sidecar consumes a lot of resources, because you need one agent for every Pod being collected, and the logs can no longer be read with kubectl logs.
3. Have the application push its logs directly to the collection backend.
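As a rough sketch of the sidecar approach (all names here are hypothetical, not from the original post): the application writes its log to a file on a shared emptyDir volume, and a sidecar container in the same Pod streams that file to stdout or ships it to a backend.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging-sidecar   # hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    # the "application": appends a line to a log file every second
    args: [/bin/sh, -c, 'i=0; while true; do echo "$i: $(date)" >> /var/log/app/app.log; i=$((i+1)); sleep 1; done']
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  - name: log-sidecar
    image: busybox
    # the sidecar: a real log agent (e.g. fluentd) would run here instead of tail
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/app/app.log']
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
  volumes:
  - name: app-logs
    emptyDir: {}
Note that every such Pod carries its own agent container, which is the resource overhead mentioned above.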
The most popular log collection solution on Kubernetes is the Elasticsearch, Fluentd and Kibana (EFK) stack, which is also the approach the official documentation currently recommends. Elasticsearch is a real-time, distributed, scalable search engine that supports full-text and structured queries; it is typically used to index and search large volumes of log data, but it can also be used to search many other kinds of documents.
Creating the Elasticsearch cluster. We generally run three Elasticsearch Pods to avoid the "split-brain" problem that can occur in a highly available multi-node cluster, and we create the Elasticsearch Pods with a StatefulSet controller. By referencing a StorageClass in the StatefulSet's PVC template, PVs and PVCs are provisioned automatically and the data is persisted; the nfs-client-provisioner has already been set up in advance.
1. Create a dedicated namespace
apiVersion: v1
kind: Namespace
metadata:
  name: logging
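Assuming the manifest is saved as namespace.yaml (the filename is illustrative; the same apply-and-verify pattern works for every manifest in this post):
$ kubectl apply -f namespace.yaml
$ kubectl get ns logging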
2. Create a StorageClass (an existing StorageClass can also be used)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: es-data-db
provisioner: fuseim.pri/ifs # this value must match the provisioner configured in nfs-client-provisioner
3. Create the headless Service required before creating the StatefulSet pods
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
  namespace: logging
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
  - port: 9200
    name: rest
  - port: 9300
    name: inter-node
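Because clusterIP is set to None this is a headless Service: each StatefulSet pod behind it gets a stable DNS record of the form <pod-name>.elasticsearch inside the logging namespace, which is exactly what the discovery.zen.ping.unicast.hosts setting in the next step relies on. Once the Elasticsearch pods are running, this can be verified from a throwaway pod (the name dns-test is illustrative):
$ kubectl run -n logging dns-test --rm -it --image=busybox --restart=Never -- nslookup es-cluster-0.elasticsearch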
4. Create the Elasticsearch StatefulSet pods. Pre-pull the required images:
$ docker pull docker.elastic.co/elasticsearch/elasticsearch-oss:6.4.3
$ docker pull busybox
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  namespace: logging
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.4.3
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        ports:
        - containerPort: 9200
          name: rest
          protocol: TCP
        - containerPort: 9300
          name: inter-node
          protocol: TCP
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
        env:
        - name: cluster.name
          value: k8s-logs
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: discovery.zen.ping.unicast.hosts
          value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
        - name: discovery.zen.minimum_master_nodes
          value: "2"
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx512m"
      initContainers:
      - name: fix-permissions
        image: busybox
        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
      - name: increase-vm-max-map
        image: busybox
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      - name: increase-fd-ulimit
        image: busybox
        command: ["sh", "-c", "ulimit -n 65536"]
        securityContext:
          privileged: true
  volumeClaimTemplates:
  - metadata:
      name: data
      labels:
        app: elasticsearch
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: es-data-db
      resources:
        requests:
          storage: 100Gi
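Assuming the StatefulSet manifest is saved as elasticsearch-statefulset.yaml (the filename is illustrative), create it and wait for the three pods to become ready:
$ kubectl apply -f elasticsearch-statefulset.yaml
$ kubectl rollout status statefulset/es-cluster -n logging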
$ kubectl get pod -n logging
NAME           READY   STATUS    RESTARTS   AGE
es-cluster-0   1/1     Running   0          42s
es-cluster-1   1/1     Running   0          10m
es-cluster-2   1/1     Running   0          9m49s
Three directories are created automatically on the NFS server, one for each pod's data:
$ cd /data/k8s
$ ls
logging-data-es-cluster-0-pvc-98c87fc5-c581-11e9-964d-000c29d8512b/
logging-data-es-cluster-1-pvc-07872570-c590-11e9-964d-000c29d8512b/
logging-data-es-cluster-2-pvc-27e15977-c590-11e9-964d-000c29d8512b/
Check the status of the Elasticsearch cluster:
$ kubectl port-forward es-cluster-0 9200:9200 --namespace=logging
In another window run:
$ curl http://localhost:9200/_cluster/state?pretty
Creating the Kibana service. Kibana is exposed through a NodePort Service so it can be reached from outside the cluster:
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: logging
  labels:
    app: kibana
spec:
  ports:
  - port: 5601
  type: NodePort
  selector:
    app: kibana
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: logging
  labels:
    app: kibana
spec:
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana-oss:6.4.3
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
        - name: ELASTICSEARCH_URL
          value: http://elasticsearch:9200
        ports:
        - containerPort: 5601
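Assuming the Kibana manifest above is saved as kibana.yaml (the filename is illustrative), deploy it:
$ kubectl apply -f kibana.yaml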
$ kubectl get svc -n logging | grep kibana
kibana   NodePort   10.111.239.0
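The NodePort that Kubernetes assigned to Kibana's port 5601 can be read directly with a JSONPath query, for example:
$ kubectl get svc kibana -n logging -o jsonpath='{.spec.ports[0].nodePort}'
Kibana is then reachable in a browser at http://<any-node-ip>:<nodePort>.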
Deploying Fluentd. 1. Specify the Fluentd configuration through a ConfigMap:
kind: ConfigMap
apiVersion: v1
metadata:
name: fluentd-config
namespace: logging
labels:
addonmanager.kubernetes.io/mode: Reconcile
data:
system.conf: |-
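The body of the Fluentd configuration is not reproduced above. Purely as an illustration of what the data section of such a ConfigMap typically contains (a simplified sketch, not the original configuration), there is usually a tail source reading the container log files and an Elasticsearch output:
containers.input.conf: |-
  <source>
    @id fluentd-containers.log
    @type tail
    path /var/log/containers/*.log
    pos_file /var/log/es-containers.log.pos
    tag raw.kubernetes.*
    read_from_head true
    format json
  </source>
output.conf: |-
  <match **>
    @id elasticsearch
    @type elasticsearch
    host elasticsearch
    port 9200
    logstash_format true
  </match>
A kubernetes metadata filter stage is also typically included; that is what adds fields such as kubernetes.pod_name, which is queried in Kibana at the end of this post.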
In the configuration file above we set up collection of the Docker container log directory as well as the logs of the docker and kubelet services; after processing, the collected data is sent to the elasticsearch:9200 service.
2. Create the Fluentd pods with a DaemonSet. Pre-pull the image and confirm the Docker root directory (the DaemonSet below mounts /var/lib/docker/containers from the host, so it should match):
$ docker pull cnych/fluentd-elasticsearch:v2.0.4
$ docker info
Docker Root Dir: /var/lib/docker
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd-es
  namespace: logging
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - "namespaces"
  - "pods"
  verbs:
  - "get"
  - "watch"
  - "list"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    k8s-app: fluentd-es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
- kind: ServiceAccount
  name: fluentd-es
  namespace: logging
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: fluentd-es
  apiGroup: ""
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-es
  namespace: logging
  labels:
    k8s-app: fluentd-es
    version: v2.0.4
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-es
      version: v2.0.4
  template:
    metadata:
      labels:
        k8s-app: fluentd-es
        kubernetes.io/cluster-service: "true"
        version: v2.0.4
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: fluentd-es
      containers:
      - name: fluentd-es
        image: cnych/fluentd-elasticsearch:v2.0.4
        env:
        - name: FLUENTD_ARGS
          value: --no-supervisor -q
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: config-volume
          mountPath: /etc/fluent/config.d
      nodeSelector:
        beta.kubernetes.io/fluentd-ds-ready: "true"
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: config-volume
        configMap:
          name: fluentd-config
Fluentd can collect the logs under /var/log, /var/log/containers, and /var/lib/docker/containers, and it can also collect the logs of the docker and kubelet services. To flexibly control which nodes' logs are collected, we also add a nodeSelector property:
nodeSelector:
  beta.kubernetes.io/fluentd-ds-ready: "true"
So we need to label all the nodes accordingly:
$ kubectl get node
$ kubectl label nodes server243.example.com beta.kubernetes.io/fluentd-ds-ready=true
$ kubectl get nodes --show-labels
Because our cluster was built with kubeadm, the master node is tainted by default, so to collect the master node's logs as well we need to add a toleration:
tolerations:
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: NoSchedule
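Assuming the ConfigMap and DaemonSet manifests above are saved as fluentd-configmap.yaml and fluentd-daemonset.yaml (filenames are illustrative), create them and check that a Fluentd pod is scheduled on every labeled node:
$ kubectl apply -f fluentd-configmap.yaml
$ kubectl apply -f fluentd-daemonset.yaml
$ kubectl get ds fluentd-es -n logging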
$ kubectl get pod -n logging
NAME                     READY   STATUS    RESTARTS   AGE
es-cluster-0             1/1     Running   0          10h
es-cluster-1             1/1     Running   0          10h
es-cluster-2             1/1     Running   0          10h
fluentd-es-rf6p6         1/1     Running   0          9h
fluentd-es-s99r2         1/1     Running   0          9h
fluentd-es-snmtt         1/1     Running   0          9h
kibana-bd6f49775-qsxb2   1/1     Running   0          11h
3. Configure an index pattern in Kibana: in the first step enter logstash-*, and in the second step select @timestamp.
4. Create a test pod and view its logs in Kibana
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args: [/bin/sh, -c, 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']
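Assuming the manifest is saved as counter.yaml (the filename is illustrative), create the pod and confirm it is writing to stdout before switching to Kibana:
$ kubectl create -f counter.yaml
$ kubectl logs counter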
Back on the Kibana dashboard, enter kubernetes.pod_name:counter in the search bar of the Discover page to see the logs of the test pod.