
Cluster Mode Overview

ClickHouse clustering works at the table level: a single ClickHouse instance can hold both distributed tables and local tables. This article walks through the deployment configuration for a 4-node ClickHouse cluster.

The data behind the distributed table is split into 2 shards, each with 2 replicas, for 4 copies of data in total:

  • Node 1: shard 1, replica 1
  • Node 2: shard 1, replica 2
  • Node 3: shard 2, replica 1
  • Node 4: shard 2, replica 2

Shard 1 and shard 2 together make up one complete copy of the data. With this layout, all 4 nodes can cooperate on a single SQL query, and no data is lost if any single node goes down.
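
To make the shard/replica layout concrete, here is a minimal table-definition sketch for such a cluster, assuming the cluster name bpi and the {cluster_shard}/{replica} macros defined in the configuration files below; the database, table, and column names (demo, events_local, events, event_date, user_id, payload) are placeholders, not part of the original text.

-- Replicated local table: one physical copy on every replica of a shard.
-- With default_replica_path/default_replica_name set (see configs below),
-- ReplicatedMergeTree can usually be declared without explicit Keeper path
-- arguments; this depends on the ClickHouse version in use.
CREATE DATABASE IF NOT EXISTS demo ON CLUSTER bpi;

CREATE TABLE demo.events_local ON CLUSTER bpi
(
    event_date Date,
    user_id    UInt64,
    payload    String
)
ENGINE = ReplicatedMergeTree
ORDER BY (event_date, user_id);

-- Distributed table over the local tables: reads fan out to both shards,
-- writes are routed to a shard by the sharding key (rand() here).
CREATE TABLE demo.events ON CLUSTER bpi AS demo.events_local
ENGINE = Distributed(bpi, demo, events_local, rand());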

Deploying ClickHouse in Cluster Mode

Cluster mode differs from single-node mode in that the nodes must be able to discover each other. Compared with a single-node deployment, you additionally need to:

  • Statically configure the information of the other nodes and set up ClickHouse Keeper (a separately deployed ZooKeeper also works).

Taking a 4-node cluster of 192.168.1.10, 192.168.1.11, 192.168.1.12 and 192.168.1.13 as an example, configure /etc/clickhouse-server/config.d/config.xml on each node as follows.

For an actual deployment, replace the 4 IP addresses above in bulk. Apart from the IPs, the four configurations differ only in the <macros> section (replica/shard assignment) and the Keeper <server_id>.

192.168.1.10

<?xml version="1.0"?>
<clickhouse>
    <listen_host>0.0.0.0</listen_host>
    <timezone>Asia/Shanghai</timezone>
    <logger><level>information</level></logger>
    <background_pool_size>4</background_pool_size>
    <background_schedule_pool_size>1</background_schedule_pool_size>
    <mark_cache_size>268435456</mark_cache_size>
    <merge_tree>
        <index_granularity>1024</index_granularity>
        <merge_max_block_size>1024</merge_max_block_size>
        <max_bytes_to_merge_at_max_space_in_pool>1610612736</max_bytes_to_merge_at_max_space_in_pool>
        <number_of_free_entries_in_pool_to_lower_max_size_of_merge>1</number_of_free_entries_in_pool_to_lower_max_size_of_merge>
        <number_of_free_entries_in_pool_to_execute_mutation>4</number_of_free_entries_in_pool_to_execute_mutation>
        <number_of_free_entries_in_pool_to_execute_optimize_entire_partition>4</number_of_free_entries_in_pool_to_execute_optimize_entire_partition>
    </merge_tree>
    <query_cache>
        <max_size_in_bytes>134217728</max_size_in_bytes>
        <max_entries>1024</max_entries>
        <max_entry_size_in_bytes>10485760</max_entry_size_in_bytes>
        <max_entry_size_in_rows>8192</max_entry_size_in_rows>
    </query_cache>
    <macros>
        <replica>replica1</replica>
        <cluster_shard>shard1</cluster_shard>
    </macros>
    <default_replica_path>/clickhouse/tables/{cluster_shard}/{database}/{table}</default_replica_path>
    <default_replica_name>{replica}</default_replica_name>
    <remote_servers>
        <!-- cluster name; here it is bpi -->
        <bpi>
            <secret>econage123</secret>
            <shard>
                <!-- replicate automatically within the shard -->
                <internal_replication>true</internal_replication>
                <replica><host>192.168.1.10</host><port>9000</port></replica> <!-- node host / TCP port -->
                <replica><host>192.168.1.11</host><port>9000</port></replica>
            </shard>
            <shard>
                <internal_replication>true</internal_replication>
                <replica><host>192.168.1.12</host><port>9000</port></replica>
                <replica><host>192.168.1.13</host><port>9000</port></replica>
            </shard>
        </bpi>
    </remote_servers>
    <zookeeper>
        <node><host>192.168.1.10</host><port>8181</port></node>
        <node><host>192.168.1.11</host><port>8181</port></node>
        <node><host>192.168.1.12</host><port>8181</port></node>
        <node><host>192.168.1.13</host><port>8181</port></node>
    </zookeeper>
    <keeper_server>
        <tcp_port>8181</tcp_port>
        <server_id>1</server_id>
        <log_storage_path>/var/lib/clickhouse/coordination/log</log_storage_path>
        <snapshot_storage_path>/var/lib/clickhouse/coordination/snapshots</snapshot_storage_path>
        <coordination_settings>
            <operation_timeout_ms>10000</operation_timeout_ms>
            <session_timeout_ms>30000</session_timeout_ms>
            <raft_logs_level>warning</raft_logs_level>
        </coordination_settings>
        <raft_configuration>
            <server><id>1</id><hostname>192.168.1.10</hostname><port>9444</port></server>
            <server><id>2</id><hostname>192.168.1.11</hostname><port>9444</port></server>
            <server><id>3</id><hostname>192.168.1.12</hostname><port>9444</port></server>
            <server><id>4</id><hostname>192.168.1.13</hostname><port>9444</port></server>
        </raft_configuration>
    </keeper_server>
</clickhouse>

192.168.1.11

<?xml version="1.0"?>
<clickhouse>
    <listen_host>0.0.0.0</listen_host>
    <timezone>Asia/Shanghai</timezone>
    <logger><level>information</level></logger>
    <background_pool_size>4</background_pool_size>
    <background_schedule_pool_size>1</background_schedule_pool_size>
    <mark_cache_size>268435456</mark_cache_size>
    <merge_tree>
        <index_granularity>1024</index_granularity>
        <merge_max_block_size>1024</merge_max_block_size>
        <max_bytes_to_merge_at_max_space_in_pool>1610612736</max_bytes_to_merge_at_max_space_in_pool>
        <number_of_free_entries_in_pool_to_lower_max_size_of_merge>1</number_of_free_entries_in_pool_to_lower_max_size_of_merge>
        <number_of_free_entries_in_pool_to_execute_mutation>4</number_of_free_entries_in_pool_to_execute_mutation>
        <number_of_free_entries_in_pool_to_execute_optimize_entire_partition>4</number_of_free_entries_in_pool_to_execute_optimize_entire_partition>
    </merge_tree>
    <query_cache>
        <max_size_in_bytes>134217728</max_size_in_bytes>
        <max_entries>1024</max_entries>
        <max_entry_size_in_bytes>10485760</max_entry_size_in_bytes>
        <max_entry_size_in_rows>8192</max_entry_size_in_rows>
    </query_cache>
    <macros>
        <replica>replica2</replica>
        <cluster_shard>shard1</cluster_shard>
    </macros>
    <default_replica_path>/clickhouse/tables/{cluster_shard}/{database}/{table}</default_replica_path>
    <default_replica_name>{replica}</default_replica_name>
    <remote_servers>
        <bpi>
            <secret>econage123</secret>
            <shard>
                <internal_replication>true</internal_replication>
                <replica><host>192.168.1.10</host><port>9000</port></replica> <!-- node host / TCP port -->
                <replica><host>192.168.1.11</host><port>9000</port></replica>
            </shard>
            <shard>
                <internal_replication>true</internal_replication>
                <replica><host>192.168.1.12</host><port>9000</port></replica>
                <replica><host>192.168.1.13</host><port>9000</port></replica>
            </shard>
        </bpi>
    </remote_servers>
    <zookeeper>
        <node><host>192.168.1.10</host><port>8181</port></node>
        <node><host>192.168.1.11</host><port>8181</port></node>
        <node><host>192.168.1.12</host><port>8181</port></node>
        <node><host>192.168.1.13</host><port>8181</port></node>
    </zookeeper>
    <keeper_server>
        <tcp_port>8181</tcp_port>
        <server_id>2</server_id>
        <log_storage_path>/var/lib/clickhouse/coordination/log</log_storage_path>
        <snapshot_storage_path>/var/lib/clickhouse/coordination/snapshots</snapshot_storage_path>
        <coordination_settings>
            <operation_timeout_ms>10000</operation_timeout_ms>
            <session_timeout_ms>30000</session_timeout_ms>
            <raft_logs_level>warning</raft_logs_level>
        </coordination_settings>
        <raft_configuration>
            <server><id>1</id><hostname>192.168.1.10</hostname><port>9444</port></server>
            <server><id>2</id><hostname>192.168.1.11</hostname><port>9444</port></server>
            <server><id>3</id><hostname>192.168.1.12</hostname><port>9444</port></server>
            <server><id>4</id><hostname>192.168.1.13</hostname><port>9444</port></server>
        </raft_configuration>
    </keeper_server>
</clickhouse>

192.168.1.12

<?xml version="1.0"?>
<clickhouse>
    <listen_host>0.0.0.0</listen_host>
    <timezone>Asia/Shanghai</timezone>
    <logger><level>information</level></logger>
    <background_pool_size>4</background_pool_size>
    <background_schedule_pool_size>1</background_schedule_pool_size>
    <mark_cache_size>268435456</mark_cache_size>
    <merge_tree>
        <index_granularity>1024</index_granularity>
        <merge_max_block_size>1024</merge_max_block_size>
        <max_bytes_to_merge_at_max_space_in_pool>1610612736</max_bytes_to_merge_at_max_space_in_pool>
        <number_of_free_entries_in_pool_to_lower_max_size_of_merge>1</number_of_free_entries_in_pool_to_lower_max_size_of_merge>
        <number_of_free_entries_in_pool_to_execute_mutation>4</number_of_free_entries_in_pool_to_execute_mutation>
        <number_of_free_entries_in_pool_to_execute_optimize_entire_partition>4</number_of_free_entries_in_pool_to_execute_optimize_entire_partition>
    </merge_tree>
    <query_cache>
        <max_size_in_bytes>134217728</max_size_in_bytes>
        <max_entries>1024</max_entries>
        <max_entry_size_in_bytes>10485760</max_entry_size_in_bytes>
        <max_entry_size_in_rows>8192</max_entry_size_in_rows>
    </query_cache>
    <macros>
        <replica>replica1</replica>
        <cluster_shard>shard2</cluster_shard>
    </macros>
    <default_replica_path>/clickhouse/tables/{cluster_shard}/{database}/{table}</default_replica_path>
    <default_replica_name>{replica}</default_replica_name>
    <remote_servers>
        <bpi>
            <secret>econage123</secret>
            <shard>
                <internal_replication>true</internal_replication>
                <replica><host>192.168.1.10</host><port>9000</port></replica> <!-- node host / TCP port -->
                <replica><host>192.168.1.11</host><port>9000</port></replica>
            </shard>
            <shard>
                <internal_replication>true</internal_replication>
                <replica><host>192.168.1.12</host><port>9000</port></replica>
                <replica><host>192.168.1.13</host><port>9000</port></replica>
            </shard>
        </bpi>
    </remote_servers>
    <zookeeper>
        <node><host>192.168.1.10</host><port>8181</port></node>
        <node><host>192.168.1.11</host><port>8181</port></node>
        <node><host>192.168.1.12</host><port>8181</port></node>
        <node><host>192.168.1.13</host><port>8181</port></node>
    </zookeeper>
    <keeper_server>
        <tcp_port>8181</tcp_port>
        <server_id>3</server_id>
        <log_storage_path>/var/lib/clickhouse/coordination/log</log_storage_path>
        <snapshot_storage_path>/var/lib/clickhouse/coordination/snapshots</snapshot_storage_path>
        <coordination_settings>
            <operation_timeout_ms>10000</operation_timeout_ms>
            <session_timeout_ms>30000</session_timeout_ms>
            <raft_logs_level>warning</raft_logs_level>
        </coordination_settings>
        <raft_configuration>
            <server><id>1</id><hostname>192.168.1.10</hostname><port>9444</port></server>
            <server><id>2</id><hostname>192.168.1.11</hostname><port>9444</port></server>
            <server><id>3</id><hostname>192.168.1.12</hostname><port>9444</port></server>
            <server><id>4</id><hostname>192.168.1.13</hostname><port>9444</port></server>
        </raft_configuration>
    </keeper_server>
</clickhouse>

192.168.1.13

<?xml version="1.0"?>
<clickhouse>
    <listen_host>0.0.0.0</listen_host>
    <timezone>Asia/Shanghai</timezone>
    <logger><level>information</level></logger>
    <background_pool_size>4</background_pool_size>
    <background_schedule_pool_size>1</background_schedule_pool_size>
    <mark_cache_size>268435456</mark_cache_size>
    <merge_tree>
        <index_granularity>1024</index_granularity>
        <merge_max_block_size>1024</merge_max_block_size>
        <max_bytes_to_merge_at_max_space_in_pool>1610612736</max_bytes_to_merge_at_max_space_in_pool>
        <number_of_free_entries_in_pool_to_lower_max_size_of_merge>1</number_of_free_entries_in_pool_to_lower_max_size_of_merge>
        <number_of_free_entries_in_pool_to_execute_mutation>4</number_of_free_entries_in_pool_to_execute_mutation>
        <number_of_free_entries_in_pool_to_execute_optimize_entire_partition>4</number_of_free_entries_in_pool_to_execute_optimize_entire_partition>
    </merge_tree>
    <query_cache>
        <max_size_in_bytes>134217728</max_size_in_bytes>
        <max_entries>1024</max_entries>
        <max_entry_size_in_bytes>10485760</max_entry_size_in_bytes>
        <max_entry_size_in_rows>8192</max_entry_size_in_rows>
    </query_cache>
    <macros>
        <replica>replica2</replica>
        <cluster_shard>shard2</cluster_shard>
    </macros>
    <default_replica_path>/clickhouse/tables/{cluster_shard}/{database}/{table}</default_replica_path>
    <default_replica_name>{replica}</default_replica_name>
    <remote_servers>
        <bpi>
            <secret>econage123</secret>
            <shard>
                <internal_replication>true</internal_replication>
                <replica><host>192.168.1.10</host><port>9000</port></replica> <!-- node host / TCP port -->
                <replica><host>192.168.1.11</host><port>9000</port></replica>
            </shard>
            <shard>
                <internal_replication>true</internal_replication>
                <replica><host>192.168.1.12</host><port>9000</port></replica>
                <replica><host>192.168.1.13</host><port>9000</port></replica>
            </shard>
        </bpi>
    </remote_servers>
    <zookeeper>
        <node><host>192.168.1.10</host><port>8181</port></node>
        <node><host>192.168.1.11</host><port>8181</port></node>
        <node><host>192.168.1.12</host><port>8181</port></node>
        <node><host>192.168.1.13</host><port>8181</port></node>
    </zookeeper>
    <keeper_server>
        <tcp_port>8181</tcp_port>
        <server_id>4</server_id>
        <log_storage_path>/var/lib/clickhouse/coordination/log</log_storage_path>
        <snapshot_storage_path>/var/lib/clickhouse/coordination/snapshots</snapshot_storage_path>
        <coordination_settings>
            <operation_timeout_ms>10000</operation_timeout_ms>
            <session_timeout_ms>30000</session_timeout_ms>
            <raft_logs_level>warning</raft_logs_level>
        </coordination_settings>
        <raft_configuration>
            <server><id>1</id><hostname>192.168.1.10</hostname><port>9444</port></server>
            <server><id>2</id><hostname>192.168.1.11</hostname><port>9444</port></server>
            <server><id>3</id><hostname>192.168.1.12</hostname><port>9444</port></server>
            <server><id>4</id><hostname>192.168.1.13</hostname><port>9444</port></server>
        </raft_configuration>
    </keeper_server>
</clickhouse>
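
After all four servers have been started with these configurations, the cluster topology and the per-node macros can be checked from clickhouse-client on any node. The queries below only read system tables; they are a suggested sanity check, not part of the original setup.

-- Should list 2 shards x 2 replicas for the bpi cluster.
SELECT cluster, shard_num, replica_num, host_name, port
FROM system.clusters
WHERE cluster = 'bpi';

-- Shows the replica / cluster_shard macros specific to the current node.
SELECT * FROM system.macros;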

Ports

  • 8123 (ClickHouse HTTP)
  • 9000 (ClickHouse TCP)
  • 8181 (ClickHouse Keeper)
  • 9444 (Raft)
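
Since port 8181 is used here as the ClickHouse Keeper client port (instead of ZooKeeper's conventional 2181), one quick way to confirm that the Keeper quorum is reachable through the <zookeeper> section above is to query the system.zookeeper table. This is a suggested check, not part of the original text; the query fails if no Keeper node answers on 8181.

-- Lists the top-level coordination nodes stored in Keeper.
SELECT name FROM system.zookeeper WHERE path = '/';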
