
Installing HDFS with Kerberos Enabled on Datasophon 1.2.1

Datasophon 1.2.1 is an excellent big data management platform, and installing and managing services through it is very intuitive. For company-level use, however, Kerberos must be enabled, and once Kerberos is turned on Datasophon can no longer complete the installation automatically; some manual work is required. Here is how we did it.

Installing Kerberos

First we install the Kerberos service ourselves. Datasophon can install Kerberos for us, but it cannot generate the credentials, so we end up doing that by hand anyway.
I installed the krb5 server on dmp-mng-svr3:

[root@dmp-mng-svr3 ~]#  yum install -y krb5-libs krb5-server krb5-workstation

The other machines each need the Kerberos client:

yum install -y krb5-libs krb5-workstation
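For the clients to find the KDC, every host also needs a matching /etc/krb5.conf. A minimal sketch, assuming the realm HADOOP.COM (the one used by all the principals below) and the KDC/admin server on dmp-mng-svr3; the domain_realm mapping is illustrative and should be adjusted to your DNS domain:

```ini
[libdefaults]
    default_realm = HADOOP.COM
    dns_lookup_kdc = false
    dns_lookup_realm = false
    ticket_lifetime = 24h
    renew_lifetime = 7d
    forwardable = true

[realms]
    HADOOP.COM = {
        kdc = dmp-mng-svr3
        admin_server = dmp-mng-svr3
    }

[domain_realm]
    .hadoop.com = HADOOP.COM
```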

Generating the keytabs

My installation uses five machines: two namenodes (dmp-hdfs-ns*) and three datanodes (dmp-hdfs-dt*), so the script is fairly long. It must be run on the Kerberos server:

kadmin.local -q "addprinc -randkey jn/dmp-hdfs-dt1@HADOOP.COM"
kadmin.local -q "xst -k /var/kerberos/krb5kdc/jn.service.keytab jn/dmp-hdfs-dt1@HADOOP.COM"
kadmin.local -q "addprinc -randkey jn/dmp-hdfs-dt2@HADOOP.COM"
kadmin.local -q "xst -k /var/kerberos/krb5kdc/jn.service.keytab jn/dmp-hdfs-dt2@HADOOP.COM"
kadmin.local -q "addprinc -randkey jn/dmp-hdfs-dt3@HADOOP.COM"
kadmin.local -q "xst -k /var/kerberos/krb5kdc/jn.service.keytab jn/dmp-hdfs-dt3@HADOOP.COM"
kadmin.local -q "addprinc -randkey jn/dmp-hdfs-ns1@HADOOP.COM"
kadmin.local -q "xst -k /var/kerberos/krb5kdc/jn.service.keytab jn/dmp-hdfs-ns1@HADOOP.COM"
kadmin.local -q "addprinc -randkey jn/dmp-hdfs-ns2@HADOOP.COM"
kadmin.local -q "xst -k /var/kerberos/krb5kdc/jn.service.keytab jn/dmp-hdfs-ns2@HADOOP.COM"
kadmin.local -q "addprinc -randkey nn/dmp-hdfs-dt1@HADOOP.COM"
kadmin.local -q "xst -k /var/kerberos/krb5kdc/nn.service.keytab nn/dmp-hdfs-dt1@HADOOP.COM"
kadmin.local -q "addprinc -randkey nn/dmp-hdfs-dt2@HADOOP.COM"
kadmin.local -q "xst -k /var/kerberos/krb5kdc/nn.service.keytab nn/dmp-hdfs-dt2@HADOOP.COM"
kadmin.local -q "addprinc -randkey nn/dmp-hdfs-dt3@HADOOP.COM"
kadmin.local -q "xst -k /var/kerberos/krb5kdc/nn.service.keytab nn/dmp-hdfs-dt3@HADOOP.COM"
kadmin.local -q "addprinc -randkey nn/dmp-hdfs-ns1@HADOOP.COM"
kadmin.local -q "xst -k /var/kerberos/krb5kdc/nn.service.keytab nn/dmp-hdfs-ns1@HADOOP.COM"
kadmin.local -q "addprinc -randkey nn/dmp-hdfs-ns2@HADOOP.COM"
kadmin.local -q "xst -k /var/kerberos/krb5kdc/nn.service.keytab nn/dmp-hdfs-ns2@HADOOP.COM"
kadmin.local -q "addprinc -randkey dn/dmp-hdfs-dt1@HADOOP.COM"
kadmin.local -q "xst -k /var/kerberos/krb5kdc/dn.service.keytab dn/dmp-hdfs-dt1@HADOOP.COM"
kadmin.local -q "addprinc -randkey dn/dmp-hdfs-dt2@HADOOP.COM"
kadmin.local -q "xst -k /var/kerberos/krb5kdc/dn.service.keytab dn/dmp-hdfs-dt2@HADOOP.COM"
kadmin.local -q "addprinc -randkey dn/dmp-hdfs-dt3@HADOOP.COM"
kadmin.local -q "xst -k /var/kerberos/krb5kdc/dn.service.keytab dn/dmp-hdfs-dt3@HADOOP.COM"
kadmin.local -q "addprinc -randkey dn/dmp-hdfs-ns1@HADOOP.COM"
kadmin.local -q "xst -k /var/kerberos/krb5kdc/dn.service.keytab dn/dmp-hdfs-ns1@HADOOP.COM"
kadmin.local -q "addprinc -randkey dn/dmp-hdfs-ns2@HADOOP.COM"
kadmin.local -q "xst -k /var/kerberos/krb5kdc/dn.service.keytab dn/dmp-hdfs-ns2@HADOOP.COM"
kadmin.local -q "addprinc -randkey HTTP/dmp-hdfs-dt1@HADOOP.COM"
kadmin.local -q "xst -k /var/kerberos/krb5kdc/nn.service.keytab HTTP/dmp-hdfs-dt1@HADOOP.COM"
kadmin.local -q "addprinc -randkey HTTP/dmp-hdfs-dt2@HADOOP.COM"
kadmin.local -q "xst -k /var/kerberos/krb5kdc/nn.service.keytab HTTP/dmp-hdfs-dt2@HADOOP.COM"
kadmin.local -q "addprinc -randkey HTTP/dmp-hdfs-dt3@HADOOP.COM"
kadmin.local -q "xst -k /var/kerberos/krb5kdc/nn.service.keytab HTTP/dmp-hdfs-dt3@HADOOP.COM"
kadmin.local -q "addprinc -randkey HTTP/dmp-hdfs-ns1@HADOOP.COM"
kadmin.local -q "xst -k /var/kerberos/krb5kdc/nn.service.keytab HTTP/dmp-hdfs-ns1@HADOOP.COM"
kadmin.local -q "addprinc -randkey HTTP/dmp-hdfs-ns2@HADOOP.COM"
kadmin.local -q "xst -k /var/kerberos/krb5kdc/nn.service.keytab HTTP/dmp-hdfs-ns2@HADOOP.COM"

This gives us three keytab files: nn.service.keytab, jn.service.keytab, and dn.service.keytab. To keep things simple, I merged the same service principal from all five machines into a single file, so three services mean three files. A zookeeper principal is also needed; my ZooKeeper service runs on the three machines dmp-mng-svr1-3:

kadmin.local -q "addprinc -randkey zookeeper/dmp-mng-svr1@HADOOP.COM"
kadmin.local -q "xst -k /var/kerberos/krb5kdc/zookeeper.keytab zookeeper/dmp-mng-svr1@HADOOP.COM"
kadmin.local -q "addprinc -randkey zookeeper/dmp-mng-svr2@HADOOP.COM"
kadmin.local -q "xst -k /var/kerberos/krb5kdc/zookeeper.keytab zookeeper/dmp-mng-svr2@HADOOP.COM"
kadmin.local -q "addprinc -randkey zookeeper/dmp-mng-svr3@HADOOP.COM"
kadmin.local -q "xst -k /var/kerberos/krb5kdc/zookeeper.keytab zookeeper/dmp-mng-svr3@HADOOP.COM"
scp /var/kerberos/krb5kdc/zookeeper.keytab dmp-mng-svr2:/etc/security/keytab/zkserver.service.keytab
scp /var/kerberos/krb5kdc/zookeeper.keytab dmp-mng-svr3:/etc/security/keytab/zkserver.service.keytab
scp /var/kerberos/krb5kdc/zookeeper.keytab dmp-mng-svr1:/etc/security/keytab/zkserver.service.keytab
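One thing the scp lines above assume is that /etc/security/keytab already exists on each target machine. A small dry-run loop to create it everywhere (a sketch; it only prints the ssh commands for review, so drop the `echo` to actually run them):

```shell
# Print the mkdir-over-ssh commands for review; remove "echo" to execute.
cmds=$(for h in dmp-mng-svr1 dmp-mng-svr2 dmp-mng-svr3; do
    echo "ssh $h mkdir -p /etc/security/keytab"
done)
printf '%s\n' "$cmds"
```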

Create the zkclient credentials for the machines as well:

kadmin.local -q "addprinc -randkey zkcli/dmp-mng-svr1@HADOOP.COM"
kadmin.local -q "xst -k /var/kerberos/krb5kdc/zkcli.keytab zkcli/dmp-mng-svr1@HADOOP.COM"
kadmin.local -q "addprinc -randkey zkcli/dmp-mng-svr2@HADOOP.COM"
kadmin.local -q "xst -k /var/kerberos/krb5kdc/zkcli.keytab zkcli/dmp-mng-svr2@HADOOP.COM"
kadmin.local -q "addprinc -randkey zkcli/dmp-mng-svr3@HADOOP.COM"
kadmin.local -q "xst -k /var/kerberos/krb5kdc/zkcli.keytab zkcli/dmp-mng-svr3@HADOOP.COM"
scp /var/kerberos/krb5kdc/zkcli.keytab dmp-mng-svr1:/etc/security/keytab/zkclient.service.keytab
scp /var/kerberos/krb5kdc/zkcli.keytab dmp-mng-svr2:/etc/security/keytab/zkclient.service.keytab
scp /var/kerberos/krb5kdc/zkcli.keytab dmp-mng-svr3:/etc/security/keytab/zkclient.service.keytab

Postscript: to make batch-generating the keytabs easier, I wrote two scripts that produce everything in one run. Feel free to use them as a reference; just replace the host names with your own:

[root@dmp-mng-svr3 keytab]# cat keytabs.sh
for unm in "jn" "dn" "nn" "rm" "nm" "zookeeper" "zcli"
do
    sh ./kt-hosts.sh $unm
done
[root@dmp-mng-svr3 keytab]# cat kt-hosts.sh
kt=$1
echo "=========== keytab for $kt =============="
for hnm in "dmp-hdfs-ns1" "dmp-hdfs-ns2" "dmp-hdfs-dt1" "dmp-hdfs-dt2" "dmp-hdfs-dt3" "dmp-mng-svr1" "dmp-mng-svr2" "dmp-mng-svr3" "dmp-rdb-svr1" "dmp-rdb-svr2"
do
    echo "The value is: $hnm"
    kadmin.local -q "addprinc -randkey ${kt}/${hnm}@HADOOP.COM"
    kadmin.local -q "xst -k ${kt}.service.keytab ${kt}/${hnm}@HADOOP.COM"
done

Now a single run of keytabs.sh generates everything:

 sh keytabs.sh
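If you prefer not to touch the live KDC until the commands look right, the same loop can be wrapped in a function that only prints the kadmin subcommands (a sketch; `gen_keytab_cmds` is our own name, not part of Datasophon or kadmin):

```shell
# Print the addprinc/xst command pairs for one service principal across a
# list of hosts. Review the output, then feed each line to kadmin.local, e.g.:
#   gen_keytab_cmds nn dmp-hdfs-ns1 dmp-hdfs-ns2 | while read -r c; do kadmin.local -q "$c"; done
gen_keytab_cmds() {
    kt=$1; shift
    for hnm in "$@"; do
        echo "addprinc -randkey ${kt}/${hnm}@HADOOP.COM"
        echo "xst -k ${kt}.service.keytab ${kt}/${hnm}@HADOOP.COM"
    done
}

# Example: the nn commands for the two namenodes
gen_keytab_cmds nn dmp-hdfs-ns1 dmp-hdfs-ns2
```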

Distributing the keytab files

Copy these keytab files into /etc/security/keytab/ on every machine. To keep Datasophon from erroring out, it is best to also create five directories named after the five machines under /etc/security/keytab/ on the manager machine and copy the files into each of them, so that Datasophon can download them. The directory layout looks like this:

[root@dmp-mng-svr1 ~]# ls /etc/security/keytab/ -l
total 52
drwxr-xr-x 2 hdfs hadoop  272 Jun 27 10:15 dmp-hdfs-dt1
drwxr-xr-x 2 hdfs hadoop  292 Jun 27 10:15 dmp-hdfs-dt2
drwxr-xr-x 2 hdfs hadoop  292 Jun 27 10:15 dmp-hdfs-dt3
drwxr-xr-x 2 hdfs hadoop  332 Jun 27 10:16 dmp-hdfs-ns1
drwxr-xr-x 2 hdfs hadoop  332 Jun 27 10:16 dmp-hdfs-ns2
drwxrwx--- 2 hdfs hadoop  232 Jun 27 10:08 dmp-mng-svr1
drwxrwx--- 2 hdfs hadoop 4096 Jun 27 10:16 dmp-mng-svr2
drwxr-xr-x 2 hdfs hadoop 4096 Jun 27 10:16 dmp-mng-svr3
-rwxrwx--- 1 hdfs hadoop 2882 Jun 27 10:05 dn.service.keytab
-rwxrwx--- 1 hdfs hadoop  108 Jun 27 09:59 downloadKeytab
-rwxrwx--- 1 hdfs hadoop 2962 Jun 27 10:05 HTTP.service.keytab
-rwxrwx--- 1 hdfs hadoop 2882 Jun 27 10:05 jn.service.keytab
-rw-rw---- 1 root hadoop 4821 Jun 27 16:06 keystore
-rwxrwx--- 1 hdfs hadoop 7010 Jun 27 10:05 nn.service.keytab
-rw-r--r-- 1 hdfs hadoop  990 Jun 27 16:06 truststore
-rwxrwx--- 1 hdfs hadoop 2426 Jun 27 10:05 zkclient.service.keytab
-rwxrwx--- 1 hdfs hadoop 1898 Jun 27 10:05 zkserver.service.keytab


The keystore and truststore files are the SSL certificates, covered below.
Now, on the master machine dmp-mng-svr1, copy the .keytab files into each host's subdirectory. When Datasophon installs services later it will try to download the credentials from those subdirectories, and pre-populating them avoids error messages.
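The per-host layout can be scripted. A sketch, demonstrated in a scratch directory with empty stand-in files; in production, `base` would be /etc/security/keytab and the keytabs would be the real ones:

```shell
# Build one subdirectory per host and copy every keytab into it.
base=$(mktemp -d)
hosts="dmp-hdfs-ns1 dmp-hdfs-ns2 dmp-hdfs-dt1 dmp-hdfs-dt2 dmp-hdfs-dt3"

# stand-in keytab files for the demonstration
touch "$base/nn.service.keytab" "$base/jn.service.keytab" "$base/dn.service.keytab"

for h in $hosts; do
    mkdir -p "$base/$h"
    cp "$base"/*.keytab "$base/$h/"
done

ls "$base/dmp-hdfs-dt1"
```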

Generating the SSL certificate

Generate the CA certificate on the master host:

openssl req -new -x509 -keyout hdfs_ca_key -out hdfs_ca_cert -days 36500 -subj "/C=CN/ST=jiangsu/S=xuzhou/L=yunlong/O=xcmg/OU=fin_tech/CN=$HOSTNAME"

When prompted for a password, enter admin123, the same one Datasophon expects, which saves editing config files by hand later.
Copy the generated hdfs_ca_key and hdfs_ca_cert files into the /etc/security/kerberos_https directory on every machine.
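Before distributing the CA pair it is worth sanity-checking the certificate. A sketch, run here against a throwaway CA in a temp directory (assumptions: `-nodes` skips the passphrase prompt for the demo, whereas the real CA above is passphrase-protected; the CN is a placeholder host name, and the nonstandard S= field is omitted):

```shell
# Generate a throwaway self-signed CA and inspect its subject and expiry.
tmp=$(mktemp -d)
openssl req -new -x509 -newkey rsa:2048 -nodes \
    -keyout "$tmp/hdfs_ca_key" -out "$tmp/hdfs_ca_cert" -days 36500 \
    -subj "/C=CN/ST=jiangsu/L=yunlong/O=xcmg/OU=fin_tech/CN=dmp-mng-svr1"
subject=$(openssl x509 -in "$tmp/hdfs_ca_cert" -noout -subject)
echo "$subject"
openssl x509 -in "$tmp/hdfs_ca_cert" -noout -enddate
```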

Importing the SSL certificate

First create a script file on the master host; I called it kimport.sh:

cd /etc/security/kerberos_https/
rm -f cert cert_signed hdfs_ca_cert.srl keystore keystore.old truststore
sname="CN="$HOSTNAME", OU=fintech, O=xcmg, L=yunlong, ST=xuzhou, C=CN"
echo $sname
# 1 Enter the password and confirmation: admin123; on success this produces the keystore file
keytool -keystore keystore -alias $HOSTNAME -validity 36500 -genkey -keyalg RSA -keysize 2048 -dname "$sname"
# 2 Enter the password and confirmation: admin123; answer yes when asked to trust the certificate; on success this produces the truststore file
keytool -keystore truststore -alias CARoot -import -file hdfs_ca_cert
# 3 Enter the password and confirmation: admin123; on success this produces the cert file
keytool -certreq -alias $HOSTNAME -keystore keystore -file cert
keytool -importkeystore -srckeystore keystore -destkeystore keystore -deststoretype pkcs12
# 4 On success this produces the cert_signed file
openssl x509 -req -CA hdfs_ca_cert -CAkey hdfs_ca_key -in cert -out cert_signed -days 36500 -CAcreateserial -passin pass:admin123
# 5 Enter the password and confirmation: admin123; answer yes to trust the certificate; on success this updates the keystore file
keytool -keystore keystore -alias CARoot -import -file hdfs_ca_cert
# 6 Enter the password and confirmation: admin123
keytool -keystore keystore -alias $HOSTNAME -import -file cert_signed
cp *store ../keytab/

Then copy it to the user home directory on every machine, and run it on each one:

sh kimport.sh

Enter the password admin123 when prompted.

That essentially completes the manual work. From here, install hdfs, yarn, zookeeper, and the rest normally through the Datasophon UI; whenever you see an "enable Kerberos authentication" (开启Kerberos认证) option, make sure to turn it on.
Other related posts:
https://blog.csdn.net/weixin_45357522/article/details/148851091
https://blog.csdn.net/weixin_45357522/article/details/148956032
