
ZooKeeper Java Clients and Distributed Applications in Practice

1. ZooKeeper Java Clients in Practice

ZooKeeper application development connects to and operates on a ZooKeeper cluster through a Java client API. There are two options: the official client and third-party clients.

1.1 The Native ZooKeeper Java Client

Adding the dependency

<dependency>
    <groupId>org.apache.zookeeper</groupId>
    <artifactId>zookeeper</artifactId>
    <version>3.8.0</version>
</dependency>

Note: keep the client version consistent with the server version to avoid compatibility problems.

Basic usage

public class ZkClientDemo {

    private static final String CLUSTER_CONNECT_STR =
            "192.168.22.156:2181,192.168.22.190:2181,192.168.22.200:2181";

    public static void main(String[] args) throws Exception {
        CountDownLatch countDownLatch = new CountDownLatch(1);
        ZooKeeper zooKeeper = new ZooKeeper(CLUSTER_CONNECT_STR, 4000, new Watcher() {
            @Override
            public void process(WatchedEvent event) {
                if (Event.KeeperState.SyncConnected == event.getState()
                        && event.getType() == Event.EventType.None) {
                    countDownLatch.countDown();
                    System.out.println("Connection established");
                }
            }
        });
        countDownLatch.await();
        System.out.println(zooKeeper.getState()); // CONNECTED
        // Create a persistent node
        zooKeeper.create("/user", "fox".getBytes(),
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
    }
}
Limitations of the native API
  • Watchers are one-shot and must be re-registered after every trigger
  • No automatic reconnection mechanism
  • Exception handling is complex
  • Only a byte[] interface is provided; no POJO serialization support
  • Node existence must be checked manually
  • No cascading (recursive) delete
Common methods
  • create(path, data, acl, createMode): create a node
  • delete(path, version): delete a node
  • exists(path, watch): check whether a node exists
  • getData(path, watch): read a node's data
  • setData(path, data, version): write a node's data
  • getChildren(path, watch): list a node's children
  • sync(path): synchronize the client's view with the leader

Every method comes in both a synchronous and an asynchronous variant, and updates can be made conditional through the version parameter.
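The version parameter implements optimistic concurrency control: a write succeeds only when the caller's expected version matches the node's current version, and every successful write bumps the version. The following plain-Java sketch (not the ZooKeeper API itself; the class name is illustrative) mimics the semantics behind setData(path, data, version):

```java
// Illustrative sketch of the version check behind setData(path, data, version).
// A write succeeds only when the expected version matches the node's current
// version; passing -1 skips the check, matching ZooKeeper's convention.
class VersionedNode {
    private byte[] data;
    private int version = 0;

    synchronized boolean setData(byte[] newData, int expectedVersion) {
        if (expectedVersion != -1 && expectedVersion != version) {
            return false; // in ZooKeeper this surfaces as a BadVersionException
        }
        data = newData;
        version++; // every successful write increments Stat.getVersion()
        return true;
    }

    synchronized int getVersion() {
        return version;
    }
}
```

A stale writer (one holding an old version) is rejected and must re-read the node before retrying, which is exactly how compare-and-set loops are built on top of getData/setData.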

Creating a node synchronously
@Test
public void createTest() throws KeeperException, InterruptedException {
    String path = zooKeeper.create(ZK_NODE, "data".getBytes(),
            ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
    log.info("created path: {}", path);
}
Creating a node asynchronously
@Test
public void createAsyncTest() throws InterruptedException {
    zooKeeper.create(ZK_NODE, "data".getBytes(),
            ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT,
            (rc, path, ctx, name) -> log.info("rc {}, path {}, ctx {}, name {}", rc, path, ctx, name),
            "context");
}
Modifying node data
@Test
public void setTest() throws KeeperException, InterruptedException {
    Stat stat = new Stat();
    byte[] data = zooKeeper.getData(ZK_NODE, false, stat);
    log.info("before update: {}", new String(data));
    zooKeeper.setData(ZK_NODE, "changed!".getBytes(), stat.getVersion());
    byte[] dataAfter = zooKeeper.getData(ZK_NODE, false, stat);
    log.info("after update: {}", new String(dataAfter));
}

1.2 The Curator Open-Source Client (Commonly Used)

Adding the dependencies (the ZooKeeper artifact pulled in transitively by curator-recipes is excluded so that the explicitly pinned 3.8.0 client is used):

<dependency>
    <groupId>org.apache.zookeeper</groupId>
    <artifactId>zookeeper</artifactId>
    <version>3.8.0</version>
</dependency>
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-recipes</artifactId>
    <version>5.1.0</version>
    <exclusions>
        <exclusion>
            <groupId>org.apache.zookeeper</groupId>
            <artifactId>zookeeper</artifactId>
        </exclusion>
    </exclusions>
</dependency>
Creating a client
// Option 1: the newClient factory method
RetryPolicy retryPolicy = new ExponentialBackoffRetry(1000, 3);
CuratorFramework client = CuratorFrameworkFactory.newClient(zookeeperConnectionString, retryPolicy);
client.start();

// Option 2: the builder API (recommended)
RetryPolicy retryPolicy = new ExponentialBackoffRetry(1000, 3);
CuratorFramework client = CuratorFrameworkFactory.builder()
        .connectString("192.168.128.129:2181")
        .sessionTimeoutMs(5000)
        .connectionTimeoutMs(5000)
        .retryPolicy(retryPolicy)
        .namespace("base") // namespace isolation
        .build();
client.start();
Retry policy types
  • ExponentialBackoffRetry: sleep time between retries grows exponentially
  • RetryNTimes: retry up to a maximum number of times
  • RetryOneTime: retry exactly once
  • RetryUntilElapsed: keep retrying until a time limit elapses
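The idea behind ExponentialBackoffRetry can be sketched as follows. This is a simplified, deterministic illustration, not Curator's exact formula: the real implementation additionally multiplies the base sleep by a random factor below this ceiling so that many clients do not retry in lock-step.

```java
// Simplified sketch of exponential backoff: the sleep ceiling doubles with
// each retry attempt. Curator's real ExponentialBackoffRetry randomizes the
// actual sleep below such a ceiling to avoid thundering-herd reconnects.
class BackoffSketch {
    static long maxSleepMs(int baseSleepMs, int retryCount) {
        // retryCount is 0-based: first retry waits up to 2x the base sleep
        return (long) baseSleepMs * (1L << (retryCount + 1));
    }
}
```

With a base sleep of 1000 ms, successive retries are bounded by 2 s, 4 s, 8 s, and so on, which is why this policy suits transient network partitions: early retries are quick, later ones back off.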
Basic operations
// Create a node
@Test
public void testCreate() throws Exception {
    String path = curatorFramework.create().forPath("/curator-node");
    log.info("curator create node :{} successfully.", path);
    // With an explicit mode and payload (note: creating the same path twice
    // throws NodeExistsException, so a different path is used here)
    curatorFramework.create().withMode(CreateMode.PERSISTENT)
            .forPath("/curator-node-2", "some-data".getBytes());
}

// Create a hierarchical node
@Test
public void testCreateWithParent() throws Exception {
    String pathWithParent = "/node-parent/sub-node-1";
    String path = curatorFramework.create().creatingParentsIfNeeded().forPath(pathWithParent);
    log.info("curator create node :{} successfully.", path);
}

// Read data
@Test
public void testGetData() throws Exception {
    byte[] bytes = curatorFramework.getData().forPath("/curator-node");
    log.info("get data from node :{} successfully.", new String(bytes));
}

// Update data
@Test
public void testSetData() throws Exception {
    curatorFramework.setData().forPath("/curator-node", "changed!".getBytes());
    byte[] bytes = curatorFramework.getData().forPath("/curator-node");
    log.info("get data from node /curator-node :{} successfully.", new String(bytes));
}

// Delete a node (guaranteed() keeps retrying in the background until the delete succeeds)
@Test
public void testDelete() throws Exception {
    String pathWithParent = "/node-parent";
    curatorFramework.delete().guaranteed().deletingChildrenIfNeeded().forPath(pathWithParent);
}
Asynchronous interface
@Test
public void testAsync() throws Exception {
    // By default the callback runs in the EventThread
    curatorFramework.getData().inBackground((client, event) -> {
        log.info("background: {}", event);
    }).forPath(ZK_NODE);
    // Alternatively, supply a custom thread pool for the callback
    ExecutorService executorService = Executors.newSingleThreadExecutor();
    curatorFramework.getData().inBackground((client, event) -> {
        log.info("background: {}", event);
    }, executorService).forPath(ZK_NODE);
}
Listener mechanism

Curator provides three cache-based listener types:

  1. NodeCache - watches a single node
public class NodeCacheTest {
    public static final String NODE_CACHE = "/node-cache";

    @Test
    public void testNodeCacheTest() throws Exception {
        createIfNeed(NODE_CACHE);
        NodeCache nodeCache = new NodeCache(curatorFramework, NODE_CACHE);
        nodeCache.getListenable().addListener(() -> {
            log.info("{} path nodeChanged: ", NODE_CACHE);
            printNodeData();
        });
        nodeCache.start();
    }
}
  2. PathChildrenCache - watches direct children (grandchildren are not included)
public class PathCacheTest {
    public static final String PATH = "/path-cache";

    @Test
    public void testPathCache() throws Exception {
        createIfNeed(PATH);
        PathChildrenCache pathChildrenCache = new PathChildrenCache(curatorFramework, PATH, true);
        pathChildrenCache.getListenable().addListener((client, event) -> {
            log.info("event: {}", event);
        });
        pathChildrenCache.start(true);
    }
}
  3. TreeCache - watches the node itself and all of its descendants recursively
public class TreeCacheTest {
    public static final String TREE_CACHE = "/tree-path";

    @Test
    public void testTreeCache() throws Exception {
        createIfNeed(TREE_CACHE);
        TreeCache treeCache = new TreeCache(curatorFramework, TREE_CACHE);
        treeCache.getListenable().addListener((client, event) -> {
            log.info("tree cache: {}", event);
        });
        treeCache.start();
    }
}

2. ZooKeeper in Distributed Naming Services

2.1 Distributed API Directory

The Dubbo framework uses ZooKeeper to implement JNDI-like distributed naming:

  • On startup, a service provider writes its API address under the /dubbo/${serviceName}/providers node
  • A service consumer subscribes to the URLs under that node to obtain the APIs of all providers

2.2 Distributed Node Naming

Options for naming nodes dynamically:

  1. Use a database's auto-increment ID feature
  2. Use the ordering property of ZooKeeper persistent sequential nodes

The ZooKeeper-based flow:

  • On startup, connect to ZooKeeper and check for / create the root node
  • Create an ephemeral sequential node under the root and take the returned sequence number as the NodeId
  • Delete the ephemeral sequential node when it is no longer needed
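The "take the returned sequence number" step relies on the fact that ZooKeeper appends a zero-padded 10-digit counter to a sequential node's name. Extracting the NodeId is then pure string work, sketched below (the path and prefix are illustrative, not from the original):

```java
// Sketch: recover the NodeId from the path ZooKeeper returns when creating a
// sequential node. ZooKeeper appends a zero-padded 10-digit counter to the
// requested prefix, e.g. "/cluster/node-" -> "/cluster/node-0000000007".
class NodeIdParser {
    static long parseNodeId(String createdPath, String prefix) {
        String seq = createdPath.substring(prefix.length());
        return Long.parseLong(seq); // parsing strips the zero padding
    }
}
```

Because the counter is maintained by the parent node on the server side, two clients creating nodes concurrently can never receive the same NodeId.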

2.3 Distributed ID Generators

Comparison of approaches
  1. Java UUID
  2. Redis INCR/INCRBY operations
  3. Twitter's SnowFlake algorithm
  4. ZooKeeper sequential nodes
  5. MongoDB ObjectId
A ZooKeeper-based implementation
public class IDMaker extends CuratorBaseOperations {

    private String createSeqNode(String pathPrefix) throws Exception {
        CuratorFramework curatorFramework = getCuratorFramework();
        // The returned path carries the sequence number appended by ZooKeeper
        String destPath = curatorFramework.create()
                .creatingParentsIfNeeded()
                .withMode(CreateMode.EPHEMERAL_SEQUENTIAL)
                .forPath(pathPrefix);
        return destPath;
    }

    public String makeId(String path) throws Exception {
        String str = createSeqNode(path);
        if (null != str) {
            // Strip the node-path prefix, leaving only the sequence number
            int index = str.lastIndexOf(path);
            if (index >= 0) {
                index += path.length();
                return index <= str.length() ? str.substring(index) : "";
            }
        }
        return str;
    }
}
An implementation based on the SnowFlake algorithm
public class SnowflakeIdGenerator {

    private static final long START_TIME = 1483200000000L; // custom epoch: 2017-01-01 00:00 (UTC+8)
    private static final int WORKER_ID_BITS = 13;
    private static final int SEQUENCE_BITS = 10;
    private static final long MAX_WORKER_ID = ~(-1L << WORKER_ID_BITS);
    private static final long MAX_SEQUENCE = ~(-1L << SEQUENCE_BITS);
    private static final long WORKER_ID_SHIFT = SEQUENCE_BITS;
    private static final long TIMESTAMP_LEFT_SHIFT = WORKER_ID_BITS + SEQUENCE_BITS;

    private long workerId;
    private long lastTimestamp = -1L;
    private long sequence = 0L;

    public synchronized void init(long workerId) {
        if (workerId > MAX_WORKER_ID) {
            throw new IllegalArgumentException("worker Id wrong: " + workerId);
        }
        this.workerId = workerId;
    }

    private synchronized long generateId() {
        long current = System.currentTimeMillis();
        if (current < lastTimestamp) {
            return -1; // clock moved backwards
        }
        if (current == lastTimestamp) {
            sequence = (sequence + 1) & MAX_SEQUENCE;
            if (sequence == MAX_SEQUENCE) {
                // sequence space exhausted for this millisecond; wait for the next one
                current = this.nextMs(lastTimestamp);
            }
        } else {
            sequence = 0L;
        }
        lastTimestamp = current;
        long time = (current - START_TIME) << TIMESTAMP_LEFT_SHIFT;
        long shiftedWorkerId = this.workerId << WORKER_ID_SHIFT;
        return time | shiftedWorkerId | sequence;
    }

    // Busy-wait until the next millisecond (referenced above but missing in the original)
    private long nextMs(long timestamp) {
        long current = System.currentTimeMillis();
        while (current <= timestamp) {
            current = System.currentTimeMillis();
        }
        return current;
    }
}

3. Implementing a Distributed Queue with ZooKeeper

3.1 Design

  1. Create a persistent node as the queue's root
  2. Enqueue: create an ephemeral sequential node under the root
  3. Dequeue: find the node with the smallest sequence number, read its data, then delete it
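The selection in step 3 can be sketched in isolation. getChildren() returns child names in no defined order, so the consumer must sort by the numeric suffix itself; the "qn-" prefix below is illustrative:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Sketch of the dequeue selection step: among sequential child names such as
// "qn-0000000002" (prefix illustrative), pick the smallest sequence number.
class MinNodeSelector {
    static Optional<String> head(List<String> children, String prefix) {
        return children.stream()
                .min(Comparator.comparingLong(
                        (String name) -> Long.parseLong(name.substring(prefix.length()))));
    }
}
```

Comparing the parsed numbers rather than the raw strings matters only if the counter ever overflows its 10-digit padding, but it also documents the intent: the queue is ordered by creation sequence, not lexicographically.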

3.2 Curator Implementation

public class CuratorDistributedQueueDemo {

    private static final String QUEUE_ROOT = "/curator_distributed_queue";

    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient("localhost:2181",
                new ExponentialBackoffRetry(1000, 3));
        client.start();

        // Serializer
        QueueSerializer<String> serializer = new QueueSerializer<String>() {
            @Override
            public byte[] serialize(String item) {
                return item.getBytes();
            }

            @Override
            public String deserialize(byte[] bytes) {
                return new String(bytes);
            }
        };

        // Consumer
        QueueConsumer<String> consumer = new QueueConsumer<String>() {
            @Override
            public void consumeMessage(String message) throws Exception {
                System.out.println("Consumed message: " + message);
            }

            @Override
            public void stateChanged(CuratorFramework curatorFramework, ConnectionState connectionState) {
            }
        };

        // Build the queue (an optional lock path makes consumption atomic)
        DistributedQueue<String> queue = QueueBuilder.builder(client, consumer, serializer, QUEUE_ROOT)
                .lockPath("/orderlock") // optional: distributed lock path
                .buildQueue();
        queue.start();

        // Produce messages
        for (int i = 0; i < 5; i++) {
            String message = "Task-" + i;
            System.out.println("Produced message: " + message);
            queue.put(message);
            Thread.sleep(1000);
        }

        Thread.sleep(10000);
        queue.close();
        client.close();
    }
}

3.3 Caveats

  • ZooKeeper is not suited to storing large volumes of data, and the project itself does not recommend using it as a queue
  • It is adequate for small systems with modest throughput
  • Setting a lock path (lockPath) guarantees atomicity and ordering of consumption
  • Omitting the lock path improves performance but can expose concurrency issues

Summary

ZooKeeper offers powerful distributed-coordination primitives, and through the native API or the Curator client it can back a wide range of distributed solutions: naming services, ID generation, and queues among them. When choosing an approach, weigh performance, consistency, and complexity against the concrete requirements; in high-concurrency scenarios in particular, keep ZooKeeper's applicability and limitations in mind.


