Building and Deploying an Integrated Spark, Hadoop, and Zeppelin Environment
Zeppelin official documentation: https://zeppelin.apache.org/docs/0.12.0/interpreter/spark.html
Aliyun Hadoop mirror: https://mirrors.aliyun.com/apache/hadoop/common/?spm=a2c6h.25603864.0.0.25594c98Zq721o
Spark downloads: https://spark.apache.org/downloads.html
Hadoop official download: https://hadoop.apache.org/release/3.4.0.html
This document describes in detail how to build and deploy a Dockerized environment that integrates Spark, Hadoop, and Zeppelin. The goal is to give data engineers an out-of-the-box big data analytics platform.
If you only want to deploy it, jump straight to the docker-compose-with-zeppelin.yml contents in this document and run docker-compose -f docker-compose-with-zeppelin.yml up -d. If you want to understand how the integrated image is built, read the whole document.
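For the quick-deployment path, the whole sequence boils down to the following; a minimal sketch, assuming the project has been obtained and the compose file lives in the spark-hadoop/ directory described in section 3:

# Quick start: bring up the full cluster (Spark master, two workers, Zeppelin)
cd spark-hadoop
docker-compose -f docker-compose-with-zeppelin.yml up -d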
1. Overview
Using Docker containerization, the three core components Spark, Hadoop, and Zeppelin are packaged together into a unified, portable development and analysis environment. The main advantages are:
- Environment consistency: avoids problems caused by environment differences between machines.
- Fast deployment: the whole cluster can be started with a single Docker Compose command.
- Resource isolation: each component runs in its own container without interfering with the others.
- Easy scaling: Spark worker nodes can be added or removed with little effort.
1.1. Core Components
- Spark: a fast, general-purpose big data processing engine supporting batch processing, stream processing, machine learning, and graph computation.
- Hadoop: a distributed systems foundation; this setup mainly uses HDFS (the distributed file system) and YARN (the resource manager).
- Zeppelin: a web-based notebook for interactive data analysis and visualization.
1.2. Architecture Diagram
2. Prerequisites
Before you start, make sure the following are installed on your system (a quick check is shown after this list):
- Docker: https://www.docker.com/get-started
- Docker Compose: usually installed together with Docker.
- The hadoop-3.4.0.tar.gz tarball, downloaded from https://hadoop.apache.org/release/3.4.0.html
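The following is a minimal sketch for verifying the prerequisites; it assumes the tarball has been placed in the project root next to the Dockerfile, as shown in the directory layout below:

# Verify Docker and Docker Compose are available
docker --version
docker-compose --version

# Verify the Hadoop tarball sits in the project root
ls -lh hadoop-3.4.0.tar.gz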
3. Directory Structure
.
├── Dockerfile
├── config
│   ├── core-site.xml
│   ├── hadoop-env.sh
│   ├── hdfs-site.xml
│   ├── mapred-site.xml
│   ├── ssh_config
│   ├── workers
│   └── yarn-site.xml
├── custom-entrypoint.sh
├── docker-compose.yml
├── hadoop-3.4.0.tar.gz
├── share
│   ├── my_script.py
│   └── words.txt
└── spark-hadoop
    ├── README.md
    ├── docker-compose-with-zeppelin.yml
    ├── docker-compose.yml
    └── start-cluster.sh
- Dockerfile: builds the custom Docker image that contains Spark and Hadoop.
- config/: holds the Hadoop configuration files.
- custom-entrypoint.sh: custom container entrypoint script that initializes the Hadoop services.
- docker-compose.yml: starts a basic Spark and Hadoop cluster.
- hadoop-3.4.0.tar.gz: the Hadoop binary distribution.
- share/: local directory shared with the containers.
- spark-hadoop/: files related to the Zeppelin integration.
  - docker-compose-with-zeppelin.yml: starts the full cluster, including Zeppelin.
  - start-cluster.sh: convenience script for starting the cluster.
4. Building the Docker Image
The image is the foundation of the whole environment. The Dockerfile is based on the bitnami/spark image, with Hadoop added on top.
4.1. Dockerfile Walkthrough
- Base image: FROM docker.io/bitnami/spark:3.5.6
- Install dependencies: install openssh-server and the other packages Hadoop needs.
- Configure SSH: generate SSH keys for passwordless login between the Hadoop master and worker nodes.
- Install Hadoop: copy hadoop-3.4.0.tar.gz into the container, extract it, and set the Hadoop environment variables such as HADOOP_HOME.
- Copy the Hadoop configuration: copy the files under config/ into the container's Hadoop configuration directory.
- Custom entrypoint: copy custom-entrypoint.sh into the container and set it as the ENTRYPOINT, so that the Hadoop initialization runs first when the container starts.
Dockerfile contents:
FROM docker.io/bitnami/spark:3.5.6
LABEL maintainer="kxr <kongxiaoranx@gmail.com>"
LABEL description="Docker image with Spark (3.5.6) and Hadoop (3.4.0), based on bitnami/spark:3.5.6 \
For more information, please visit https://github.com/kongxiaoran/spark-hadoop-docker."

USER root

ENV HADOOP_HOME="/opt/hadoop"
ENV HADOOP_CONF_DIR="$HADOOP_HOME/etc/hadoop"
ENV HADOOP_LOG_DIR="/var/log/hadoop"
ENV PATH="$HADOOP_HOME/hadoop/sbin:$HADOOP_HOME/bin:$PATH"

WORKDIR /opt

RUN apt-get update && apt-get install -y \
    openssh-server \
    curl \
    tar \
    gzip \
    sudo

RUN ssh-keygen -t rsa -f /root/.ssh/id_rsa -P '' && \
    cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys

COPY ./hadoop-3.4.0.tar.gz /opt/

RUN tar -xzvf hadoop-3.4.0.tar.gz && \
    mv hadoop-3.4.0 hadoop && \
    rm -rf hadoop-3.4.0.tar.gz && \
    mkdir /var/log/hadoop

RUN mkdir -p /root/hdfs/namenode && \
    mkdir -p /root/hdfs/datanode

COPY config/* /tmp/

RUN mv /tmp/ssh_config /root/.ssh/config && \
    mv /tmp/hadoop-env.sh $HADOOP_CONF_DIR/hadoop-env.sh && \
    mv /tmp/hdfs-site.xml $HADOOP_CONF_DIR/hdfs-site.xml && \
    mv /tmp/core-site.xml $HADOOP_CONF_DIR/core-site.xml && \
    mv /tmp/mapred-site.xml $HADOOP_CONF_DIR/mapred-site.xml && \
    mv /tmp/yarn-site.xml $HADOOP_CONF_DIR/yarn-site.xml && \
    mv /tmp/workers $HADOOP_CONF_DIR/workers

RUN chmod +x $HADOOP_HOME/sbin/start-dfs.sh && \
    chmod +x $HADOOP_HOME/sbin/start-yarn.sh
# sed -i 's/renice "\${HADOOP_NICENESS}" \$\$/echo "Skipping renice for \$\$" \&\& true/g' $HADOOP_HOME/libexec/hadoop-functions.sh

RUN hdfs namenode -format

# Use the custom entrypoint script (the clean approach)
COPY custom-entrypoint.sh /opt/custom-entrypoint.sh
RUN chmod +x /opt/custom-entrypoint.sh

ENTRYPOINT [ "/opt/custom-entrypoint.sh" ]
CMD ["/opt/bitnami/scripts/spark/run.sh"]
4.2. Build Command
Run the following command in the project root to build the image:
docker build -t kongxr7/spark-hadoop:3.5.6-hadoop3.4.0 .
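To confirm the build succeeded before moving on, a minimal sketch (the image tag matches the build command above; overriding the entrypoint with ls just peeks inside the image without starting anything):

# List the freshly built image
docker images kongxr7/spark-hadoop

# Optionally check that Hadoop was unpacked into /opt/hadoop inside the image
docker run --rm --entrypoint ls kongxr7/spark-hadoop:3.5.6-hadoop3.4.0 /opt/hadoop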
5. Configuration
5.1. Hadoop Configuration
The Hadoop configuration files live in the config/ directory. You can modify them as needed, for example to adjust the HDFS replication factor or the YARN memory allocation.
- core-site.xml: configures the default HDFS file system.
- hdfs-site.xml: configures the HDFS NameNode and DataNode.
- yarn-site.xml: configures the YARN ResourceManager and NodeManager.
- mapred-site.xml: configures MapReduce jobs.
- workers: lists the worker nodes of the Hadoop cluster.
- hadoop-env.sh: Hadoop's core environment configuration file. It sets the environment variables Hadoop needs at runtime, such as JAVA_HOME (the Java installation path), HADOOP_HOME (the Hadoop installation path), and HADOOP_CONF_DIR (the Hadoop configuration directory). It can also configure JVM options, log directories, run-as users, and other key parameters for the Hadoop daemons (NameNode, DataNode, ResourceManager, NodeManager).
- ssh_config: the SSH client configuration file. In this project it defines the default SSH behavior, in particular to enable passwordless SSH logins between the Hadoop and Spark cluster nodes. Setting StrictHostKeyChecking no and UserKnownHostsFile=/dev/null avoids the interactive host-key confirmation when nodes connect to each other for the first time, so cluster startup and management can be fully automated.
The full contents of these configuration files are listed in the appendix at the end of this document; a quick way to inspect the values the running cluster actually uses is shown below.
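The following is a minimal sketch for checking the effective configuration once the cluster from section 6 is running; it goes through the spark service defined in the compose file, so run it from the spark-hadoop/ directory:

# Print the effective values of two key settings inside the master container
docker-compose -f docker-compose-with-zeppelin.yml exec spark hdfs getconf -confKey fs.defaultFS
docker-compose -f docker-compose-with-zeppelin.yml exec spark hdfs getconf -confKey dfs.replication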
5.2. Docker Compose Configuration
The spark-hadoop/docker-compose-with-zeppelin.yml file defines all the services of the cluster.
version: '2'

services:
  spark:
    image: kongxr7/spark-hadoop:3.5.6-hadoop3.4.0
    hostname: master
    environment:
      - SPARK_MODE=master
      - SPARK_RPC_AUTHENTICATION_ENABLED=no
      - SPARK_RPC_ENCRYPTION_ENABLED=no
      - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no
      - SPARK_SSL_ENABLED=no
    volumes:
      - /Users/kxr/learning/docker/spark/share:/opt/share
      # Mount the Hadoop configuration directory onto a shared volume
      - hadoop_conf:/opt/hadoop/etc/hadoop
      # Mount the Spark libraries inside the container onto a shared volume
      - spark_lib:/opt/bitnami/spark
    ports:
      - '8090:8080'
      - '4040:4040'
      - '8088:8088'
      - '8042:8042'
      - '9870:9870'
      - '19888:19888'
      - '7077:7077'

  spark-worker-1:
    image: kongxr7/spark-hadoop:3.5.6-hadoop3.4.0
    hostname: worker1
    environment:
      - SPARK_MODE=worker
      - SPARK_MASTER_URL=spark://master:7077
      - SPARK_WORKER_MEMORY=1G
      - SPARK_WORKER_CORES=1
      - SPARK_RPC_AUTHENTICATION_ENABLED=no
      - SPARK_RPC_ENCRYPTION_ENABLED=no
      - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no
      - SPARK_SSL_ENABLED=no
    volumes:
      - /Users/kxr/learning/docker/spark/share:/opt/share
    ports:
      - '8081:8081'

  spark-worker-2:
    image: kongxr7/spark-hadoop:3.5.6-hadoop3.4.0
    hostname: worker2
    environment:
      - SPARK_MODE=worker
      - SPARK_MASTER_URL=spark://master:7077
      - SPARK_WORKER_MEMORY=1G
      - SPARK_WORKER_CORES=1
      - SPARK_RPC_AUTHENTICATION_ENABLED=no
      - SPARK_RPC_ENCRYPTION_ENABLED=no
      - SPARK_LOCAL_STORAGE_ENCRYPTION_ENABLED=no
      - SPARK_SSL_ENABLED=no
    volumes:
      - /Users/kxr/learning/docker/spark/share:/opt/share
    ports:
      - '8082:8081'

  zeppelin:
    image: apache/zeppelin:0.12.0
    hostname: zeppelin
    depends_on:
      - spark
    environment:
      - SPARK_HOME=/opt/spark
      - HADOOP_CONF_DIR=/opt/hadoop/etc/hadoop
      - YARN_CONF_DIR=/opt/hadoop/etc/hadoop
    volumes:
      # Use the Spark libraries from the shared volume
      - spark_lib:/opt/spark
      # Mount the Hadoop configuration directory shared from the spark container
      - hadoop_conf:/opt/hadoop/etc/hadoop:ro
      # Persist Zeppelin notebook data
      - zeppelin_notebooks:/opt/zeppelin/notebook
      # Persist Zeppelin configuration
      - zeppelin_conf:/opt/zeppelin/conf
    ports:
      - '8089:8080'

volumes:
  # Named volume for sharing the Hadoop configuration
  hadoop_conf:
    driver: local
  # Named volume for sharing the Spark libraries
  spark_lib:
    driver: local
  # Named volume for persisting Zeppelin notebooks
  zeppelin_notebooks:
    driver: local
  # Named volume for persisting Zeppelin configuration
  zeppelin_conf:
    driver: local
5.2.1. Shared Volumes
To integrate Zeppelin seamlessly with Spark and Hadoop, the setup relies on Docker named volumes:
- hadoop_conf:
  - In the spark service, the container's Hadoop configuration directory /opt/hadoop/etc/hadoop is mounted onto this volume.
  - In the zeppelin service, the same volume is mounted at /opt/hadoop/etc/hadoop, giving Zeppelin access to the Hadoop configuration.
- spark_lib:
  - In the spark service, the Spark installation directory /opt/bitnami/spark is mounted onto this volume.
  - In the zeppelin service, the volume is mounted at /opt/spark, so Zeppelin uses Spark libraries that exactly match the cluster version.
This design avoids installing Spark and Hadoop a second time inside the Zeppelin container and removes the need to copy configuration files by hand, which greatly simplifies the architecture. A quick way to see the sharing in action is sketched below.
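A minimal sketch for checking the shared volumes after the cluster is up; Docker Compose prefixes the volume names with the project name (by default the directory name), so the grep matches on the suffix:

# The named volumes are created with the Compose project name as a prefix
docker volume ls | grep -E 'hadoop_conf|spark_lib'

# The zeppelin container should see the Hadoop configuration written by the spark service
docker-compose -f docker-compose-with-zeppelin.yml exec zeppelin ls /opt/hadoop/etc/hadoop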
5.2.2. Environment Variables
The zeppelin service sets the following environment variables so that Zeppelin knows where to find Spark and Hadoop (a quick check is shown after the list):
- SPARK_HOME=/opt/spark
- HADOOP_CONF_DIR=/opt/hadoop/etc/hadoop
- YARN_CONF_DIR=/opt/hadoop/etc/hadoop
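A minimal sketch for reading these variables back from the running container:

# Check the Spark/Hadoop-related environment variables inside the zeppelin container
docker-compose -f docker-compose-with-zeppelin.yml exec zeppelin env | grep -E 'SPARK_HOME|HADOOP_CONF_DIR|YARN_CONF_DIR'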
6. Deployment
Change into the spark-hadoop/ directory and run the following script to start the whole cluster in one step:
cd spark-hadoop
./start-cluster.sh
The script runs docker-compose -f docker-compose-with-zeppelin.yml up -d, which starts all the services in the background.
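A minimal sketch for verifying the deployment from the same directory; all four services should report an "Up" state, and the Zeppelin log shows when its web UI is ready:

# All services (spark, spark-worker-1, spark-worker-2, zeppelin) should be "Up"
docker-compose -f docker-compose-with-zeppelin.yml ps

# Follow the Zeppelin logs while it initializes
docker-compose -f docker-compose-with-zeppelin.yml logs -f zeppelin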
7. Usage
7.1. Accessing the Services
- Spark UI: http://localhost:8090
- YARN ResourceManager: http://localhost:8088
- HDFS NameNode: http://localhost:9870
- Zeppelin: http://localhost:8089
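If you prefer the command line and curl is available on the host, the same endpoints can be probed directly; a minimal sketch (an HTTP 200 or 302 response means the service is reachable):

# Probe each web UI exposed on the host
for url in http://localhost:8090 http://localhost:8088 http://localhost:9870 http://localhost:8089; do
  echo -n "$url -> "
  curl -s -o /dev/null -w '%{http_code}\n' "$url"
done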
7.2. Using Spark in Zeppelin
- Open the Zeppelin web UI.
- Create a new note.
- In the note's interpreter settings, set the master property of the spark interpreter to yarn.
- You can now write Spark code in the note and have it run on the YARN cluster (an equivalent command-line check is sketched after this list).
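Independently of Zeppelin, you can confirm that Spark jobs reach YARN by submitting the bundled SparkPi example from the master container; a minimal sketch, assuming the examples jar sits in the usual bitnami location under /opt/bitnami/spark/examples/jars:

# Submit the SparkPi example to YARN from the master container
docker-compose -f docker-compose-with-zeppelin.yml exec spark bash -c \
  'spark-submit --master yarn --deploy-mode client \
     --class org.apache.spark.examples.SparkPi \
     /opt/bitnami/spark/examples/jars/spark-examples_*.jar 10'

The application should then show up in the YARN ResourceManager UI at http://localhost:8088.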
7.3. Example: Word Count
Run the following code in a Zeppelin note to walk through a complete word count on HDFS:
// 1. Write data to HDFS
// With the relative path "words", the output is saved under the Zeppelin user's HDFS home directory, /user/zeppelin/words
val text = "hello world hello spark hello hadoop"
sc.parallelize(text.split(" ")).saveAsTextFile("words")

// 2. Read the data back from HDFS and run the word count
val wordCounts = sc.textFile("words").flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)

// 3. Print the results
wordCounts.collect().foreach(println)
Important: HDFS permissions
To make sure the zeppelin user has the HDFS permissions it needs, the startup process automatically creates the /user/zeppelin directory and sets it as the zeppelin user's home directory.
When reading or writing HDFS from Zeppelin, therefore, use relative paths (such as words) or absolute paths that start with /user/zeppelin/ to avoid permission errors.
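After the note has run, a minimal sketch for confirming the output from the command line:

# List the word-count output written by the Zeppelin note
docker-compose -f docker-compose-with-zeppelin.yml exec spark hdfs dfs -ls /user/zeppelin/words

# Print the part files
docker-compose -f docker-compose-with-zeppelin.yml exec spark hdfs dfs -cat '/user/zeppelin/words/part-*'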
Appendix: Configuration File Contents
A.1 Hadoop Configuration
core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>
hadoop-env.sh
export JAVA_HOME=/opt/bitnami/java
export HADOOP_HOME=/opt/hadoop
export HADOOP_MAPRED_HOME=/opt/hadoop
export HADOOP_CONF_DIR=/opt/hadoop/etc/hadoop
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib --add-opens java.base/java.lang=ALL-UNNAMED --add-opens java.base/java.util=ALL-UNNAMED --add-opens java.base/java.lang.reflect=ALL-UNNAMED --add-opens java.base/java.text=ALL-UNNAMED --add-opens java.desktop/java.awt.font=ALL-UNNAMED"

export HDFS_NAMENODE_USER="root"
export HDFS_DATANODE_USER="root"
export HDFS_SECONDARYNAMENODE_USER="root"
export YARN_RESOURCEMANAGER_USER="root"
export YARN_NODEMANAGER_USER="root"
hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///root/hdfs/namenode</value>
    <description>NameNode directory for namespace and transaction logs storage.</description>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///root/hdfs/datanode</value>
    <description>DataNode directory</description>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.env</name>
    <value>HADOOP_MAPRED_HOME=/opt/hadoop</value>
  </property>
  <property>
    <name>mapreduce.map.env</name>
    <value>HADOOP_MAPRED_HOME=/opt/hadoop</value>
  </property>
  <property>
    <name>mapreduce.reduce.env</name>
    <value>HADOOP_MAPRED_HOME=/opt/hadoop</value>
  </property>
  <property>
    <name>mapreduce.application.classpath</name>
    <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*,$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*,$HADOOP_MAPRED_HOME/share/hadoop/common/*,$HADOOP_MAPRED_HOME/share/hadoop/common/lib/*,$HADOOP_MAPRED_HOME/share/hadoop/yarn/*,$HADOOP_MAPRED_HOME/share/hadoop/yarn/lib/*,$HADOOP_MAPRED_HOME/share/hadoop/hdfs/*,$HADOOP_MAPRED_HOME/share/hadoop/hdfs/lib/*</value>
  </property>
</configuration>
yarn-site.xml
<?xml version="1.0"?>
<!-- Site specific YARN configuration properties -->
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>
  <property>
    <name>yarn.nodemanager.env-whitelist</name>
    <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
  </property>
</configuration>
workers
worker1
worker2
ssh_config
Host localhost
  StrictHostKeyChecking no

Host 0.0.0.0
  StrictHostKeyChecking no

Host hadoop-*
  StrictHostKeyChecking no
  UserKnownHostsFile=/dev/null
start-cluster.sh
#!/bin/bash
# Cluster startup script
# This script starts the whole Spark-Hadoop-Zeppelin cluster

echo "=== Starting the Spark-Hadoop-Zeppelin cluster ==="

# Make sure any previous docker-compose deployment is stopped
docker-compose -f docker-compose-with-zeppelin.yml down

# Start docker-compose
docker-compose -f docker-compose-with-zeppelin.yml up -d

echo "=== Cluster startup complete ==="
echo "Spark UI: http://localhost:8090"
echo "HDFS UI: http://localhost:9870"
echo "YARN UI: http://localhost:8088"
echo "Zeppelin UI: http://localhost:8089"
custom-entrypoint.sh
#!/bin/bash
# Custom entrypoint script - a clean startup approach
# The original entrypoint.sh is left untouched; the startup flow is fully customized

echo "=== Starting Custom Entrypoint ==="

# Start the SSH service
echo "Starting SSH service..."
service ssh start

# Start the Hadoop services
echo "Starting Hadoop services..."

# Set Hadoop environment variables
export HADOOP_NICENESS=0
export HADOOP_SECURE_DN_USER=""

# Start HDFS (if not already running)
if ! pgrep -f "org.apache.hadoop.hdfs.server.namenode.NameNode" > /dev/null; then
    echo "Starting HDFS services..."
    $HADOOP_HOME/sbin/start-dfs.sh || true
else
    echo "HDFS services already running"
fi

# Start YARN (if not already running)
if ! pgrep -f "org.apache.hadoop.yarn.server.resourcemanager.ResourceManager" > /dev/null; then
    echo "Starting YARN services..."
    $HADOOP_HOME/sbin/start-yarn.sh || true
else
    echo "YARN services already running"
fi

# Create and authorize the HDFS directory for the Zeppelin user
echo "Creating HDFS directories for Zeppelin user..."
# Wait for HDFS to come up fully
sleep 5
# Create the zeppelin user's home directory
hdfs dfs -mkdir -p /user/zeppelin || true
# Change the directory ownership
hdfs dfs -chown zeppelin:zeppelin /user/zeppelin || true
# Set the directory permissions
hdfs dfs -chmod 777 /user/zeppelin || true
echo "HDFS directories for Zeppelin user created and authorized."

echo "=== All services started successfully! ==="
echo "=== Calling original Spark entrypoint ==="

# Call the original Spark entrypoint, passing along all arguments
exec /opt/bitnami/scripts/spark/entrypoint.sh "$@"