Elasticsearch Knowledge Summary: Elasticsearch Deployment
5 Elasticsearch Deployment
Elasticsearch can run on any Linux, macOS, or Windows machine, inside a Docker container, or on Kubernetes, where Elastic Cloud on Kubernetes (ECK) sets up and manages Elasticsearch, Kibana, Elastic Agent, and the rest of the Elastic Stack.
Elasticsearch is provided in the following package formats:
zip: for installing Elasticsearch on Windows.
tar.gz: for installation on any Linux distribution and on macOS.
deb: for Debian, Ubuntu, and other Debian-based systems; Debian packages can be downloaded from the Elasticsearch website or from the Debian repository.
rpm: for Red Hat, CentOS, SLES, openSUSE, and other RPM-based systems; RPMs can be downloaded from the Elasticsearch website or from the RPM repository.
docker: images for running Elasticsearch as a Docker container; they can be downloaded from the Elastic Docker Registry.
5.1 Bare-Metal Deployment
This deployment uses three servers, each acting as both a master node and a data node.
Server information:
| IP address | Specs | OS version | Role |
| --- | --- | --- | --- |
| 1xx.1xx.1xx.60 | 4C / 8G RAM / 100G SSD | CentOS 7.9 64-bit | master/data |
| 1xx.1xx.1xx.61 | 4C / 8G RAM / 100G SSD | CentOS 7.9 64-bit | master/data |
| 1xx.1xx.1xx.62 | 4C / 8G RAM / 100G SSD | CentOS 7.9 64-bit | master/data |
Adjust system parameters
The default Linux file-descriptor limit is too low for Elasticsearch's high-throughput workload, and bootstrap.memory_lock: true (set later in elasticsearch.yml) additionally requires an unlimited memlock limit:
vi /etc/security/limits.conf
# Append at the end; log in again (or reboot) for the change to take effect.
* soft nofile 131072
* hard nofile 131072
* soft memlock unlimited
* hard memlock unlimited
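Elasticsearch's bootstrap checks also require vm.max_map_count of at least 262144, which is higher than the CentOS default (the Kubernetes manifests in section 5.3 set the same value via an init container). A minimal sketch of the adjustment:
# Raise the mmap count limit required by Elasticsearch's bootstrap check
echo "vm.max_map_count=262144" >> /etc/sysctl.conf
sysctl -p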
Deployment procedure
All nodes
→ Install the JDK
Install JDK 1.8:
Upload jdk-8u271-linux-x64.tar.gz to the server and extract it:
tar -zvxf jdk-8u271-linux-x64.tar.gz
mv jdk1.8.0_271 /usr/local/jdk
Add the following environment variables to /etc/profile:
export JAVA_HOME=/usr/local/jdk
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
Reload the environment variables:
source /etc/profile
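Verify that the JDK is active:
java -version
# should report version 1.8.0_271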
→ Create directories (all nodes)
- Create the es user plus the data and log directories:
useradd elasticsearch
mkdir -pv /home/esdata
mkdir -pv /home/logs
chown -R elasticsearch:elasticsearch /home/esdata /home/logs
chown -R elasticsearch:elasticsearch /usr/local/elasticsearch/   # run this after Elasticsearch has been extracted in the next step
→ Install Elasticsearch
Upload elasticsearch-7.9.0-linux-x86_64.tar.gz to the server and extract it:
tar xf elasticsearch-7.9.0-linux-x86_64.tar.gz
mv elasticsearch-7.9.0 /usr/local/elasticsearch
Adjust the Elasticsearch configuration to the system and cluster layout.
Edit config/elasticsearch.yml:
cluster.name: logdata
node.name: elkmaster-1   # must be unique per node: elkmaster-1 / -2 / -3
node.roles: [data, master]
path.data: /home/esdata
path.logs: /home/logs
bootstrap.memory_lock: true
network.host: 1xx.1xx.1xx.60   # each node's own IP
http.port: 9200
transport.tcp.port: 9300
# Required so the three nodes can discover each other and form a single cluster
discovery.seed_hosts: ["1xx.1xx.1xx.60", "1xx.1xx.1xx.61", "1xx.1xx.1xx.62"]
cluster.initial_master_nodes: ["elkmaster-1", "elkmaster-2", "elkmaster-3"]
http.cors.enabled: true
http.cors.allow-origin: "*"
Edit config/jvm.options:
-Xms4g
-Xmx4g
Note: the heap is generally set to about half of the server's RAM, and never above ~31 GB; here 4 GB on an 8 GB machine.
Start the Elasticsearch service (-d runs it as a daemon):
su - elasticsearch -c "/usr/local/elasticsearch/bin/elasticsearch -d"
Install Kibana (optional component)
Upload the kibana-7.9.0.tar.gz package to the server and extract it:
tar xf kibana-7.9.0.tar.gz
mv kibana-7.9.0 /usr/local/kibana
Edit kibana.yml to point at the es cluster:
server.port: 5601
server.host: "1xx.1xx.1xx.60"
elasticsearch.hosts: ["http://1xx.1xx.1xx.60:9200","http://1xx.1xx.1xx.61:9200","http://1xx.1xx.1xx.62:9200"]
Start Kibana:
/usr/local/kibana/bin/kibana --allow-root &
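Kibana takes a minute to start; an unauthenticated request to its root URL should then answer with an HTTP redirect (the Docker Compose healthcheck in section 5.2 keys off the same 302 response):
curl -I http://1xx.1xx.1xx.60:5601
# expect an HTTP 302 redirect once Kibana has finished starting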
Open ports (only needed if the firewall is running; skip if it is off):
firewall-cmd --add-port=9200/tcp --permanent
firewall-cmd --add-port=9300/tcp --permanent
firewall-cmd --add-port=5601/tcp --permanent   # Kibana, if installed
firewall-cmd --reload
firewall-cmd --list-all
→ Check that startup succeeded
Use jps, or look for the elasticsearch process directly.
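To confirm the cluster actually formed, query the health API on any node (the masked IPs follow the server table above):
curl "http://1xx.1xx.1xx.60:9200/_cluster/health?pretty"
# "number_of_nodes" should be 3; "status" should be green (or yellow while replicas allocate)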
5.2 Deploying Elasticsearch + Kibana with Docker Compose
Deployment notes
docker-compose is used to quickly deploy an es cluster plus Kibana, with security enabled (self-signed certificates and username/password authentication).
In a clean directory, create a file named .env with the following content:
# Password for the elastic account (at least six characters)
ELASTIC_PASSWORD=************
# Password for the kibana_system account (at least six characters); this account is only used internally by Kibana and cannot be used to query es
KIBANA_PASSWORD=************
# es and Kibana version
STACK_VERSION=8.2.2
# Cluster name
CLUSTER_NAME=docker-cluster
# X-Pack license level: basic is the free tier; a trial license expires after 30 days
LICENSE=basic
#LICENSE=trial
# es port mapped onto the host
ES_PORT=9200
# Kibana port mapped onto the host
KIBANA_PORT=5601
# Memory limit for each es container; adjust to your hardware (1 GB is used for this test)
MEM_LIMIT=1073741824
# Project namespace, used as the prefix of container names
COMPOSE_PROJECT_NAME=demo
Next is the docker-compose.yaml file, which uses the .env file just created. It defines five containers: a one-shot setup container, three es nodes that form the cluster, and one Kibana.
version: "2.2"
services:
  setup:
    image: elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
    user: "0"
    command: >
      bash -c '
        if [ x${ELASTIC_PASSWORD} == x ]; then
          echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
          exit 1;
        elif [ x${KIBANA_PASSWORD} == x ]; then
          echo "Set the KIBANA_PASSWORD environment variable in the .env file";
          exit 1;
        fi;
        if [ ! -f config/certs/ca.zip ]; then
          echo "Creating CA";
          bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
          unzip config/certs/ca.zip -d config/certs;
        fi;
        if [ ! -f config/certs/certs.zip ]; then
          echo "Creating certs";
          echo -ne \
          "instances:\n"\
          "  - name: es01\n"\
          "    dns:\n"\
          "      - es01\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          "  - name: es02\n"\
          "    dns:\n"\
          "      - es02\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          "  - name: es03\n"\
          "    dns:\n"\
          "      - es03\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          > config/certs/instances.yml;
          bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
          unzip config/certs/certs.zip -d config/certs;
        fi;
        echo "Setting file permissions"
        chown -R root:root config/certs;
        find . -type d -exec chmod 750 \{\} \;;
        find . -type f -exec chmod 640 \{\} \;;
        echo "Waiting for Elasticsearch availability";
        until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
        echo "Setting kibana_system password";
        until curl -s -X POST --cacert config/certs/ca/ca.crt -u elastic:${ELASTIC_PASSWORD} -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
        echo "All done!";
      '
    healthcheck:
      test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
      interval: 1s
      timeout: 5s
      retries: 120
  es01:
    depends_on:
      setup:
        condition: service_healthy
    image: elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - ${ES_PORT}:9200
    environment:
      - node.name=es01
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es02,es03
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es01/es01.key
      - xpack.security.http.ssl.certificate=certs/es01/es01.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.http.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es01/es01.key
      - xpack.security.transport.ssl.certificate=certs/es01/es01.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120
  es02:
    depends_on:
      - es01
    image: elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata02:/usr/share/elasticsearch/data
    environment:
      - node.name=es02
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es01,es03
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es02/es02.key
      - xpack.security.http.ssl.certificate=certs/es02/es02.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.http.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es02/es02.key
      - xpack.security.transport.ssl.certificate=certs/es02/es02.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120
  es03:
    depends_on:
      - es02
    image: elasticsearch:${STACK_VERSION}
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata03:/usr/share/elasticsearch/data
    environment:
      - node.name=es03
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01,es02,es03
      - discovery.seed_hosts=es01,es02
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es03/es03.key
      - xpack.security.http.ssl.certificate=certs/es03/es03.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.http.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es03/es03.key
      - xpack.security.transport.ssl.certificate=certs/es03/es03.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120
  kibana:
    depends_on:
      es01:
        condition: service_healthy
      es02:
        condition: service_healthy
      es03:
        condition: service_healthy
    image: kibana:${STACK_VERSION}
    volumes:
      - certs:/usr/share/kibana/config/certs
      - kibanadata:/usr/share/kibana/data
    ports:
      - ${KIBANA_PORT}:5601
    environment:
      - SERVERNAME=kibana
      - ELASTICSEARCH_HOSTS=https://es01:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
    mem_limit: ${MEM_LIMIT}
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120
volumes:
  certs:
    driver: local
  esdata01:
    driver: local
  esdata02:
    driver: local
  esdata03:
    driver: local
  kibanadata:
    driver: local
Note: the .env and docker-compose.yaml files must be in the same directory.
Start the application
In the directory containing docker-compose.yaml, run docker-compose up -d to start all the containers:
[root@node212 es]# docker-compose up -d
Creating network "demo_default" with the default driver
Creating volume "demo_certs" with local driver
Creating volume "demo_esdata01" with local driver
Creating volume "demo_esdata02" with local driver
Creating volume "demo_esdata03" with local driver
Creating volume "demo_kibanadata" with local driver
Pulling setup (elasticsearch:8.2.2)...
8.2.2: Pulling from library/elasticsearch
d5fd17ec1767: Pull complete
960bdea67557: Pull complete
87e8a9ab5eb5: Pull complete
d1a41a1f6148: Pull complete
2f30a84c2b73: Pull complete
2c111419937d: Pull complete
a098105ec516: Pull complete
4c72f9050453: Pull complete
77b3d5560f6a: Pull complete
Digest: sha256:8c666cb1e76650306655b67644a01663f9c7a5422b2c51dd570524267f11ce3d
Status: Downloaded newer image for elasticsearch:8.2.2
Pulling kibana (kibana:8.2.2)...
8.2.2: Pulling from library/kibana
d5fd17ec1767: Already exists
0e13695e6282: Pull complete
f4c86adffcb8: Pull complete
37df8a7a2f1c: Pull complete
605b30158b0c: Pull complete
4f4fb700ef54: Pull complete
8789a463d8bc: Pull complete
6c1b4670a98a: Pull complete
787921eb6497: Pull complete
7833e8f6b5e0: Pull complete
60937e7413ca: Pull complete
a04fb33dd003: Pull complete
5fcdf8cb4a0b: Pull complete
929af379dbc3: Pull complete
Digest: sha256:cf34801f36a2e79c834b3cdeb0a3463ff34b8d8588c3ccdd47212c4e0753f8a5
Status: Downloaded newer image for kibana:8.2.2
Creating demo_setup_1 ... done
Creating demo_es01_1 ... done
Creating demo_es02_1 ... done
Creating demo_es03_1 ... done
Creating demo_kibana_1 ... done
Check the container status: demo_setup_1, which only performs the one-off initialization, has exited; the other containers are running normally. That is what a successful deployment looks like.
Check the log of demo_setup_1; it confirms the setup ran smoothly:
[root@node212 ~]# docker logs demo_setup_1
Setting file permissions
Waiting for Elasticsearch availability
Setting kibana_system password
All done!
To send requests to ES with curl, first copy the crt file out of the container:
docker cp demo_es01_1:/usr/share/elasticsearch/config/certs/es01/es01.crt .
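The secured endpoint can then be queried with the copied certificate (a sketch; use the elastic password from your .env file in place of the mask):
curl --cacert es01.crt -u elastic:************ https://localhost:9200
# returns the cluster info JSON when the certificate and credentials check out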
Verification
Now verify that the es cluster and Kibana work properly.
Open https://localhost:9200/ in a browser (note: https, not http). Because the certificate is self-signed, the browser first shows a warning page.
Proceed past the warning and log in with the elastic account and the password configured earlier; the browser then shows the cluster info JSON, proving that es responded successfully.
If the eshead plugin is installed in Chrome, the cluster can be inspected there as well (use https, not http, in the plugin's address bar): all three nodes are shown, and the star in front of es02 marks it as the elected master.
The es cluster is deployed and running normally; next, check whether Kibana is usable.
Open http://localhost:5601/ and log in with the account elastic and the password ********* configured earlier.
Cleanup
Running docker-compose down deletes the containers, but not the data: after the next docker-compose up -d, the new es cluster still contains the previously created test001 index, data included.
That is because docker-compose.yaml stores the cluster's key data in named volumes, which live on the host's disk:
❯ docker volume ls
DRIVER VOLUME NAME
local demo_certs
local demo_esdata01
local demo_esdata02
local demo_esdata03
local demo_kibanadata
Run docker volume rm demo_certs demo_esdata01 demo_esdata02 demo_esdata03 demo_kibanadata to remove them completely (docker-compose down -v does the same in one step).
That is the whole process of quickly deploying an es cluster plus Kibana.
5.3 Deploying an Elasticsearch Cluster on Kubernetes
The cluster consists of 3 master nodes and 3 data nodes, both run as StatefulSets. The masters do not persist data; the data nodes use persistent volumes backed by NFS.
5.3.1 Deploy NFS
yum install nfs-utils rpcbind
Create the shared directories and export them in /etc/exports:
mkdir -p /root/es/data{1,2,3}
cat /etc/exports
/root/es/data1 *(rw,sync,no_subtree_check,no_root_squash)
/root/es/data2 *(rw,sync,no_subtree_check,no_root_squash)
/root/es/data3 *(rw,sync,no_subtree_check,no_root_squash)
Start nfs and rpcbind:
systemctl restart nfs
systemctl restart rpcbind
Install nfs-utils on the other worker nodes and test-mount one of the shares:
yum install nfs-utils
mount -t nfs 1xx.1xx.0.4:/root/es/data1 /mnt
Check the mount information:
mount
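The exports can also be listed from a client without mounting anything (showmount ships with nfs-utils):
showmount -e 1xx.1xx.0.4
# should list /root/es/data1, /root/es/data2 and /root/es/data3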
5.3.2 Create the PVs
[root@test-1 es]# cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: es-store-0
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  capacity:
    storage: 1Gi
  mountOptions:
    - nolock
  nfs:
    path: /root/es/data1
    server: 1xx.1xx.0.4
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: es-store-1
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  capacity:
    storage: 1Gi
  mountOptions:
    - nolock
  nfs:
    path: /root/es/data2
    server: 1xx.1xx.0.4
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: es-store-2
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  capacity:
    storage: 1Gi
  mountOptions:
    - nolock
  nfs:
    path: /root/es/data3
    server: 1xx.1xx.0.4
kubectl apply -f pv.yaml
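Confirm that the three volumes are registered:
kubectl get pv
# es-store-0, es-store-1 and es-store-2 should each show 1Gi with STATUS Available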
5.3.3 Create the Services
[root@test-1 es]# cat svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: es-svc
  namespace: test
  labels:
    app: es-svc
spec:
  selector:
    app: es
  ports:
    - name: http
      port: 9200        # port exposed by the Service
      protocol: TCP
      targetPort: 9200  # port the backend container serves on
      nodePort: 30200   # only valid when type=NodePort: port mapped on the node (range 30000-32767)
  type: NodePort
  # clusterIP: ""
---
apiVersion: v1
kind: Service
metadata:
  name: es-master-svc
  namespace: test
  labels:
    app: es-master-svc
spec:
  selector:
    app: es
    role: master
  publishNotReadyAddresses: true
  ports:
    - protocol: TCP
      name: transport
      port: 9300          # port exposed by the Service
      # targetPort: 9300  # port the backend container serves on
      # nodePort: 30300   # only valid when type=NodePort (range 30000-32767)
  type: ClusterIP
  clusterIP: None
kubectl apply -f svc.yaml
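Check both Services in the test namespace:
kubectl get svc -n test
# es-svc should be of type NodePort on 30200; es-master-svc is headless (CLUSTER-IP None)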
5.3.4 Create es-master
[root@test-1 es]# cat es-master.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: es-master
  name: es-master
  namespace: test
spec:
  replicas: 3
  serviceName: es-master-svc
  selector:
    matchLabels:
      app: es
      role: master
  template:
    metadata:
      labels:
        app: es
        role: master
    spec:
      initContainers:
        - name: init-sysctl
          image: busybox
          command:
            - sysctl
            - -w
            - vm.max_map_count=262144
          securityContext:
            privileged: true
      containers:
        - name: es-master
          image: elasticsearch:7.16.2
          imagePullPolicy: Never   # the image must already be present on the node
          ports:
            - containerPort: 9200
              protocol: TCP
              name: http
            - containerPort: 9300
              protocol: TCP
              name: transport
          env:
            - name: "node.name"
              valueFrom:
                fieldRef:
                  fieldPath: "metadata.name"
            - name: "cluster.name"
              value: "es-cluster"
            - name: "cluster.remote.connect"
              value: "false"
            - name: "node.master"
              value: "true"
            - name: "node.data"
              value: "false"
            - name: "node.ingest"
              value: "false"
            - name: "network.host"
              value: "0.0.0.0"
            - name: "path.data"
              value: "/usr/share/elasticsearch/data"
            - name: "path.logs"
              value: "/usr/share/elasticsearch/logs"
            - name: "bootstrap.memory_lock"
              value: "false"
            - name: "http.compression"
              value: "true"
            - name: "http.cors.enabled"
              value: "true"
            - name: "http.cors.allow-origin"
              value: "*"
            - name: "cluster.initial_master_nodes"
              value: "es-master-0,es-master-1,es-master-2"
            - name: "discovery.seed_hosts"
              value: "es-master-svc"
            - name: "xpack.ml.enabled"
              value: "false"
            - name: "ES_JAVA_OPTS"
              value: "-Xms50m -Xmx50m"
          resources:
            requests:
              cpu: 100m
            limits:
              cpu: 500m
          livenessProbe:
            tcpSocket:
              port: transport
            initialDelaySeconds: 60
            periodSeconds: 10
kubectl apply -f es-master.yaml
Test: exec into one of the pods and check the cluster status.
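For example (a minimal check; the pod name and namespace follow the manifests above, and the Elasticsearch image ships with curl):
kubectl exec -it es-master-0 -n test -- curl "http://localhost:9200/_cat/nodes?v"
# every node that has joined is listed; the elected master is marked with *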
5.3.5 Create es-data
[root@test-1 es]# cat es-data.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: es-data
  name: es-data
  namespace: test
spec:
  selector:
    matchLabels:
      app: es
      role: data
  serviceName: es-svc
  replicas: 3   # three data nodes, one per PV created above
  template:
    metadata:
      labels:
        app: es
        role: data
    spec:
      initContainers:
        - name: init-sysctl
          image: busybox
          command:
            - sysctl
            - -w
            - vm.max_map_count=262144
          securityContext:
            privileged: true
      containers:
        - name: es-data
          image: elasticsearch:7.16.2
          imagePullPolicy: Never   # the image must already be present on the node
          env:
            - name: node.name
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: "node.max_local_storage_nodes"
              value: "2"
            - name: "cluster.name"
              value: "es-cluster"
            - name: "cluster.remote.connect"
              value: "false"
            - name: "node.master"
              value: "false"
            - name: "node.data"
              value: "true"
            - name: "node.ingest"
              value: "false"
            - name: "network.host"
              value: "0.0.0.0"
            - name: "path.data"
              value: "/usr/share/elasticsearch/data"
            - name: "path.logs"
              value: "/usr/share/elasticsearch/logs"
            - name: "bootstrap.memory_lock"
              value: "false"
            - name: "http.compression"
              value: "true"
            - name: "http.cors.enabled"
              value: "true"
            - name: "http.cors.allow-origin"
              value: "*"
            - name: "discovery.seed_hosts"
              value: "es-master-svc"
            - name: "xpack.ml.enabled"
              value: "false"
            - name: "ES_JAVA_OPTS"
              value: "-Xms50m -Xmx50m"
          resources:
            requests:
              cpu: 200m
            limits:
              cpu: 500m
          ports:
            - containerPort: 9200
              protocol: TCP
              name: http
            - containerPort: 9300
              protocol: TCP
              name: transport
          livenessProbe:
            tcpSocket:
              port: transport
            initialDelaySeconds: 60
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /_cluster/health
              port: http
              scheme: HTTP
            initialDelaySeconds: 60
            timeoutSeconds: 10
          volumeMounts:
            - mountPath: /usr/share/elasticsearch/data
              name: storage
  volumeClaimTemplates:
    - metadata:
        name: storage
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
kubectl apply -f es-data.yaml
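Once the data pods are Running, the cluster is reachable from outside through the NodePort service created earlier (substitute any worker node's IP for the placeholder):
curl "http://<node-ip>:30200/_cat/nodes?v"
# should eventually list the 3 master nodes and the 3 data nodes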