Deploying an ElasticSearch 7 Cluster on Kubernetes

Forming the ES cluster

When deploying ElasticSearch on a Kubernetes cluster, I initially followed guides found online, but the ES nodes could not discover each other and failed to form a cluster.

After comparing, I realized I was running ES 7.2, while most of the guides I had referenced were written for 6.8. Before 7.0, coordination was configured with discovery.zen.minimum_master_nodes, which told the cluster how many master-eligible nodes had to be visible before a master could be elected; in 7.x that setting is ignored, and a brand-new cluster must instead be bootstrapped with cluster.initial_master_nodes.
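For comparison, a minimal elasticsearch.yml-style sketch of both generations of settings, assuming three master-eligible nodes with placeholder names:

# 6.x and earlier: quorum threshold plus a static list of seed hosts
discovery.zen.minimum_master_nodes: 2
discovery.zen.ping.unicast.hosts: ["es-master-0", "es-master-1", "es-master-2"]

# 7.x: seed hosts for discovery plus a one-time bootstrap list of master-eligible node names
discovery.seed_hosts: ["es-master-0", "es-master-1", "es-master-2"]
cluster.initial_master_nodes: ["es-master-0", "es-master-1", "es-master-2"]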

If cluster.initial_master_nodes is not set, a brand-new node will try to discover an existing cluster on startup. If it cannot find a cluster to join, it periodically logs a warning:

master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster

When deploying on Kubernetes, the master nodes are run as a StatefulSet so that the Pods get stable, predictable names that can be listed in cluster.initial_master_nodes.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: elasticsearch
    role: master
  name: elasticsearch-master
  namespace: base
spec:
  replicas: 2
  serviceName: elasticsearch-master
  selector:
    matchLabels:
      app: elasticsearch
      role: master
  template:
    metadata:
      labels:
        app: elasticsearch
        role: master
    spec:
      serviceAccountName: elasticsearch-admin
      restartPolicy: Always
      securityContext:
        fsGroup: 1000
      containers:
        - name: elasticsearch-master
          image: elasticsearch:7.2.0
          imagePullPolicy: IfNotPresent
          securityContext:
            privileged: true
          ports:
            - containerPort: 9200
              protocol: TCP
            - containerPort: 9300
              protocol: TCP
          env:
            - name: cluster.name
              value: "es_cluster"
            - name: node.master
              value: "true"
            - name: node.data
              value: "false"
            - name: discovery.seed_hosts # older versions used discovery.zen.ping.unicast.hosts
              value: "elasticsearch-discovery" # the discovery Service
            - name: cluster.initial_master_nodes # initial master nodes; the related setting in older versions was discovery.zen.minimum_master_nodes
              value: "elasticsearch-master-0,elasticsearch-master-1" # match the replica count and Pod names
            - name: node.ingest
              value: "false"
            - name: ES_JAVA_OPTS
              value: "-Xms1g -Xmx1g" 

Configure cluster.initial_master_nodes with one entry per replica.
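For example, if replicas were raised to 3, the bootstrap list would grow accordingly (a sketch following the StatefulSet naming above):

            - name: cluster.initial_master_nodes
              value: "elasticsearch-master-0,elasticsearch-master-1,elasticsearch-master-2"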

The data node configuration is essentially the same as in earlier versions. Data nodes can be deployed with a DaemonSet, StatefulSet, or Deployment; just point discovery.seed_hosts at the discovery service. A sketch of the relevant env entries is shown below.
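A minimal sketch of a data node's env section, reusing the cluster name and discovery Service from above (the surrounding Deployment/StatefulSet boilerplate and volumes are omitted):

          env:
            - name: cluster.name
              value: "es_cluster"
            - name: node.master
              value: "false"
            - name: node.data
              value: "true"
            - name: discovery.seed_hosts
              value: "elasticsearch-discovery" # the transport discovery Service defined below
            - name: ES_JAVA_OPTS
              value: "-Xms1g -Xmx1g"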

Cluster Service definitions

apiVersion: v1
kind: Service
metadata:
  labels:
    app: elasticsearch
  name: elasticsearch-discovery
  namespace: base
spec:
  publishNotReadyAddresses: true
  ports:
  - name: transport
    port: 9300
    targetPort: 9300
  selector:
    app: elasticsearch
    role: master
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: elasticsearch
    role: data
  name: elasticsearch-svc
  namespace: base
spec:
  type: ClusterIP
  ports:
  - name: http
    protocol: TCP
    port: 9200
  selector:
    app: elasticsearch
    role: data

The memory lock problem

With the configuration above the ES cluster does form successfully, but after some time, network latency, Pod restarts, or similar events may trigger a split-brain, after which the data nodes can no longer rejoin the cluster.

In this situation it is best to enable memory lock to mitigate the problem.

Swapping is very bad for performance, for node stability, and should be avoided at all costs. It can cause garbage collections to last for minutes instead of milliseconds and can cause nodes to respond slowly or even to disconnect from the cluster. In a resilient distributed system, it’s more effective to let the operating system kill the node. — https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration-memory.html

GET /_nodes?filter_path=**.mlockall # check whether memlock is enabled on the cluster nodes
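In the manifests, enabling it is a single env entry added to the master and data containers (a sketch, matching the env list in the StatefulSet above):

            - name: bootstrap.memory_lock
              value: "true"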

After setting bootstrap.memory_lock to true in env and restarting the StatefulSet, the Pods fail to start:

ERROR: bootstrap checks failed
memory locking requested for elasticsearch process but memory is not locked

memlock requires raising the system ulimit, but Kubernetes support for modifying this kind of system parameter seems to have long-standing issues; I tried both a postStart hook and an initContainer, and neither took effect.

For now I work around this by modifying the image:

FROM elasticsearch:7.2.0
COPY run.sh /
RUN chmod 755 /run.sh
CMD ["/run.sh"]

The startup script just raises the limit before handing off to ES:

#!/bin/bash
# Raise the memlock limit so bootstrap.memory_lock can succeed,
# then switch to the elasticsearch user and hand off to the stock entrypoint.
ulimit -l unlimited
exec su elasticsearch /usr/local/bin/docker-entrypoint.sh

The complete YAML for the es-data node is available as a gist for reference.

The master nodes also need the modified image and the corresponding parameter changes; refer to the data node configuration above, and see the sketch below.
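For example, if the rebuilt image were pushed as my-registry/elasticsearch-memlock:7.2.0 (a hypothetical tag), the master container would change roughly as follows:

        - name: elasticsearch-master
          image: my-registry/elasticsearch-memlock:7.2.0 # hypothetical tag for the image built from the Dockerfile above
          env:
            # ...existing master env from the StatefulSet above...
            - name: bootstrap.memory_lock
              value: "true"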

Data nodes cannot rejoin the cluster after the master nodes restart

The master node configuration above (masters do not act as data nodes) can lead to another problem: after the master nodes restart (with multiple replicas this means all of them stopping at the same time), the masters' data directories are not persisted, so the restarted masters generate cluster metadata that no longer matches what the data nodes have stored, and the cluster fails to re-form.

"type": "server", "timestamp": "", "level": "WARN", "component": "o.e.c.c.Coordinator", "cluster.name": "es_cluster", "node.name": "elasticsearch-master-0", "cluster.uuid": "ASrDJ7I0SYG_C1JKraaDGg", "node.id": "1TbxjPVEQXeq81rzLV8vYQ",  "message": "failed to validate incoming join request from node [{elasticsearch-data-1}{QUfrAuFATKqjrObm7AQ0CA}{ddM-T7MiRwiVlpHdDaMGCA}{10.244.6.5}{10.244.6.5:9300}{ml.machine_memory=33565286400, ml.max_open_jobs=20, xpack.installed=true}]" ,
"stacktrace": ["org.elasticsearch.transport.RemoteTransportException: [elasticsearch-data-1][10.244.6.5:9300][internal:cluster/coordination/join/validate]",
"Caused by: org.elasticsearch.cluster.coordination.CoordinationStateRejectedException: join validation on cluster state with a different cluster uuid ASrDJ7I0SYG_C1JKraaDGg than local cluster uuid iJNvN3w6QdCJe-wPaf_9iQ, rejecting",

There are two solutions:

  1. Do not stop or restart all ES master nodes at the same time
  2. Persist the ES master nodes' data directories so that restarted masters reuse the original metadata (see the sketch below)
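For option 2, a minimal sketch of persisting the master data directory with volumeClaimTemplates in the master StatefulSet (the StorageClass and size are placeholders; the mount path is the default data directory of the official image):

          # inside the elasticsearch-master container spec:
          volumeMounts:
            - name: es-master-data
              mountPath: /usr/share/elasticsearch/data # default data dir of the official image
  # at the StatefulSet spec level, alongside replicas/serviceName:
  volumeClaimTemplates:
    - metadata:
        name: es-master-data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: standard # placeholder, use your cluster's StorageClass
        resources:
          requests:
            storage: 5Gi # placeholder size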

References