The cornerstone of StarRocks' CBO -- the source of statistics: StatisticAutoCollector
Background
This article analyzes, starting from the underlying code, how StarRocks gathers statistics. These statistics play an important role later on, when the CBO computes plan costs.
This article is based on StarRocks 3.3.5.
Conclusion
StarRocks collects statistics by periodically running a series of SQL statements at partition granularity (a non-partitioned table still has a single default partition). The results are inserted into the _statistics_.column_statistics table and cached in GlobalStateMgr.CachedStatisticStorage; all subsequent statistics lookups are served from that cache.
Analysis
Start directly with the StatisticAutoCollector class:
public StatisticAutoCollector() {
    super("AutoStatistic", Config.statistic_collect_interval_sec * 1000);
}
The default scheduling interval is statistic_collect_interval_sec (5 minutes).
@Override
protected void runAfterCatalogReady() {
    // update interval
    if (getInterval() != Config.statistic_collect_interval_sec * 1000) {
        setInterval(Config.statistic_collect_interval_sec * 1000);
    }
    if (!Config.enable_statistic_collect || FeConstants.runningUnitTest) {
        return;
    }
    if (!checkoutAnalyzeTime(LocalTime.now(TimeUtils.getTimeZone().toZoneId()))) {
        return;
    }
    // check statistic table state
    if (!StatisticUtils.checkStatisticTableStateNormal()) {
        return;
    }
    initDefaultJob();
    runJobs();
}
- On every run, the scheduling interval is re-synced with statistic_collect_interval_sec (5 minutes by default)
- The current time is checked against the allowed collection window; by default collection may run at any time of day, and a window can be configured via statistic_auto_analyze_start_time and statistic_auto_analyze_end_time
- Setting enable_statistic_collect to false disables statistics collection entirely
- initDefaultJob initializes the statistics collection job; enable_collect_full_statistic defaults to true, i.e. full collection
- runJobs runs the collection jobs, which is the core phase
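The time-window check above (checkoutAnalyzeTime) boils down to a time-range test. A minimal sketch, assuming a window that wraps past midnight when start > end; the class and method names here are illustrative, not StarRocks' actual implementation:

```java
import java.time.LocalTime;

// Illustrative analog of the analyze-window check: collection is allowed
// only when 'now' falls inside [start, end].
public class AnalyzeWindow {
    // When start > end the window is assumed to wrap past midnight
    // (e.g. 23:00 -> 05:00 covers the night hours).
    public static boolean inWindow(LocalTime now, LocalTime start, LocalTime end) {
        if (!start.isAfter(end)) {
            return !now.isBefore(start) && !now.isAfter(end);
        }
        // wrap-around window
        return !now.isBefore(start) || !now.isAfter(end);
    }

    public static void main(String[] args) {
        LocalTime start = LocalTime.parse("23:00:00");
        LocalTime end = LocalTime.parse("05:00:00");
        System.out.println(inWindow(LocalTime.parse("01:30:00"), start, end)); // true
        System.out.println(inWindow(LocalTime.parse("12:00:00"), start, end)); // false
    }
}
```

Setting statistic_auto_analyze_start_time and statistic_auto_analyze_end_time to the off-peak hours keeps collection SQL from competing with production queries.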
protected List<StatisticsCollectJob> runJobs() {
    ...
    Set<Long> analyzeTableSet = Sets.newHashSet();
    for (NativeAnalyzeJob nativeAnalyzeJob : allNativeAnalyzeJobs) {
        List<StatisticsCollectJob> jobs = nativeAnalyzeJob.instantiateJobs();
        result.addAll(jobs);
        ConnectContext statsConnectCtx = StatisticUtils.buildConnectContext();
        statsConnectCtx.setThreadLocalInfo();
        nativeAnalyzeJob.run(statsConnectCtx, STATISTIC_EXECUTOR, jobs);
        for (StatisticsCollectJob job : jobs) {
            if (job.isAnalyzeTable()) {
                analyzeTableSet.add(job.getTable().getId());
            }
        }
    }
    LOG.info("auto collect statistic on analyze job[{}] end", analyzeJobIds);
    if (Config.enable_collect_full_statistic) {
        LOG.info("auto collect full statistic on all databases start");
        List<StatisticsCollectJob> allJobs =
                StatisticsCollectJobFactory.buildStatisticsCollectJob(createDefaultJobAnalyzeAll());
        for (StatisticsCollectJob statsJob : allJobs) {
            // user-created analyze job has a higher priority
            if (statsJob.isAnalyzeTable() && analyzeTableSet.contains(statsJob.getTable().getId())) {
                continue;
            }
            result.add(statsJob);
            AnalyzeStatus analyzeStatus = new NativeAnalyzeStatus(GlobalStateMgr.getCurrentState().getNextId(),
                    statsJob.getDb().getId(), statsJob.getTable().getId(), statsJob.getColumnNames(),
                    statsJob.getType(), statsJob.getScheduleType(), statsJob.getProperties(), LocalDateTime.now());
            analyzeStatus.setStatus(StatsConstants.ScheduleStatus.FAILED);
            GlobalStateMgr.getCurrentState().getAnalyzeMgr().addAnalyzeStatus(analyzeStatus);
            ConnectContext statsConnectCtx = StatisticUtils.buildConnectContext();
            statsConnectCtx.setThreadLocalInfo();
            STATISTIC_EXECUTOR.collectStatistics(statsConnectCtx, statsJob, analyzeStatus, true);
        }
        LOG.info("auto collect full statistic on all databases end");
    }
    ...
    return result;
}
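Note the `continue` in the full-collection loop above: tables already covered by a user-created analyze job are skipped by the default full job. Sketching just that skip, with table ids standing in for the real StatisticsCollectJob objects (the method name here is hypothetical):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative analog of the dedup rule in runJobs: the default full-collection
// pass only targets tables not already claimed by a user-created analyze job.
public class JobDedup {
    public static List<Long> defaultJobTargets(Set<Long> userAnalyzedTables, List<Long> allTables) {
        List<Long> targets = new ArrayList<>();
        for (long tableId : allTables) {
            if (userAnalyzedTables.contains(tableId)) {
                continue; // user-created analyze job has a higher priority
            }
            targets.add(tableId);
        }
        return targets;
    }

    public static void main(String[] args) {
        Set<Long> userJobs = new HashSet<>(Arrays.asList(10L, 20L));
        System.out.println(defaultJobTargets(userJobs, Arrays.asList(10L, 20L, 30L))); // [30]
    }
}
```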
- nativeAnalyzeJob.instantiateJobs builds the statistics collection jobs
This calls the StatisticsCollectJobFactory.buildStatisticsCollectJob method. First, the statistic_exclude_pattern configuration can exclude tables that should not be analyzed (in db.table format). Second, the table's so-called health (derived from the proportion of recently updated partitions) is compared against statistic_auto_collect_ratio: if the health falls below that threshold, createFullStatsJob is called to create a full-collection task, which mainly uses buildStatisticsCollectJob to construct a FullStatisticsCollectJob
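A minimal sketch of this health-based trigger; the health formula and the 0.8 default threshold here are illustrative simplifications of what StatisticsCollectJobFactory actually computes:

```java
// Illustrative analog of the health check: when a table's statistics "health"
// (an estimate of how up to date the collected stats are) drops below
// statistic_auto_collect_ratio, a full collection job is created.
public class HealthCheck {
    static final double AUTO_COLLECT_RATIO = 0.8; // assumed default of statistic_auto_collect_ratio

    // Health modeled here as the fraction of partitions whose stats are still fresh.
    public static boolean needsFullCollection(int freshPartitions, int totalPartitions) {
        double health = totalPartitions == 0 ? 1.0 : (double) freshPartitions / totalPartitions;
        return health < AUTO_COLLECT_RATIO;
    }

    public static void main(String[] args) {
        System.out.println(needsFullCollection(5, 10)); // true: health 0.5 < 0.8
        System.out.println(needsFullCollection(9, 10)); // false: health 0.9 >= 0.8
    }
}
```

Lowering statistic_auto_collect_ratio therefore makes auto collection less aggressive, at the cost of staler statistics.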
- nativeAnalyzeJob.run runs the statistics collection jobs
This calls StatisticExecutor.collectStatistics, which eventually invokes the FullStatisticsCollectJob.collect method:
int parallelism = Math.max(1, context.getSessionVariable().getStatisticCollectParallelism());
List<List<String>> collectSQLList = buildCollectSQLList(parallelism);
long totalCollectSQL = collectSQLList.size();
...
Exception lastFailure = null;
for (List<String> sqlUnion : collectSQLList) {
    if (sqlUnion.size() < parallelism) {
        context.getSessionVariable().setPipelineDop(parallelism / sqlUnion.size());
    } else {
        context.getSessionVariable().setPipelineDop(1);
    }
    String sql = Joiner.on(" UNION ALL ").join(sqlUnion);
    try {
        collectStatisticSync(sql, context);
    } catch (Exception e) {
        ...
    }
    finishedSQLNum++;
    analyzeStatus.setProgress(finishedSQLNum * 100 / totalCollectSQL);
    GlobalStateMgr.getCurrentState().getAnalyzeMgr().addAnalyzeStatus(analyzeStatus);
}
...
flushInsertStatisticsData(context, true);
- First it sets the parallelism for running the statistics SQL, statistic_collect_parallel (default 1); this determines how many per-partition SQLs are merged into each statement, i.e. across how many executions the collection SQL is split
- buildCollectSQLList builds the concrete statistics SQLs, down to the individual partition
- collectStatisticSync executes the concrete SQL
The SQL looks like:
SELECT cast(4 as INT),
       cast($partitionId as BIGINT),
       '$columnNameStr',
       cast(COUNT(1) as BIGINT),
       cast($dataSize as BIGINT),
       hex(hll_serialize(IFNULL(hll_raw(column_key), hll_empty()))),
       cast((COUNT(1) - COUNT(column_key)) as BIGINT),
       MAX(column_key),
       MIN(column_key)
FROM (select $quoteColumnName as column_key from `$dbName`.`$tableName` partition `$partitionName`) tt
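The batching done by buildCollectSQLList can be sketched as follows: the per-partition SQLs are chunked into groups of size parallelism, and each group is joined with UNION ALL and executed as one statement (class and method names here are hypothetical):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative analog of buildCollectSQLList + the UNION ALL join in collect():
// chunk per-partition statistics SQLs into groups of 'parallelism', one
// executed statement per group.
public class SqlBatcher {
    public static List<String> batch(List<String> sqls, int parallelism) {
        List<String> result = new ArrayList<>();
        for (int i = 0; i < sqls.size(); i += parallelism) {
            List<String> group = sqls.subList(i, Math.min(i + parallelism, sqls.size()));
            result.add(String.join(" UNION ALL ", group));
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> sqls = Arrays.asList("SELECT ... p0", "SELECT ... p1", "SELECT ... p2");
        for (String s : batch(sqls, 2)) {
            System.out.println(s);
        }
        // SELECT ... p0 UNION ALL SELECT ... p1
        // SELECT ... p2
    }
}
```

This also explains the pipelineDop adjustment in collect(): a final, smaller-than-parallelism batch gets a higher per-statement degree of parallelism to compensate.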
- flushInsertStatisticsData writes the collected results into the _statistics_.column_statistics table
- analyzeMgr.refreshBasicStatisticsCache refreshes the statistics cached in CachedStatisticStorage
It mainly works through the refreshTableStatistic and getColumnStatistics methods, which call TableStatsCacheLoader and ColumnBasicStatsCacheLoader respectively to run SQL that fetches the corresponding statistics. The SQL looks like:
select cast(3 as INT), partition_id, any_value(row_count)
FROM column_statistics
WHERE table_id = $tableId and partition_id = $partitionId
GROUP BY partition_id;
SELECT cast(1 as INT), $updateTime, db_id, table_id, column_name,
       sum(row_count), cast(sum(data_size) as bigint), hll_union_agg(ndv), sum(null_count),
       cast(max(cast(max as $type)) as string), cast(min(cast(min as $type)) as string)
FROM column_statistics
WHERE table_id = $table_id and column_name in (xxx, xxx, xxx)
GROUP BY db_id, table_id, column_name;
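Functionally, CachedStatisticStorage behaves like a load-on-miss cache in front of those loader SQLs. A minimal sketch of that behavior only (the real implementation is an asynchronous loading cache backed by the loaders named above; everything here is illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Illustrative load-on-miss cache: on a miss, the loader (in StarRocks, a
// CacheLoader that runs the statistics SQL) fills the entry; later lookups
// for the same key never hit storage again.
public class StatsCache {
    private final Map<String, Double> cache = new HashMap<>();
    private final Function<String, Double> loader;
    public int loads = 0; // counts how often the loader actually ran

    public StatsCache(Function<String, Double> loader) {
        this.loader = loader;
    }

    public double ndv(String tableColumn) {
        return cache.computeIfAbsent(tableColumn, k -> {
            loads++;
            return loader.apply(k);
        });
    }

    public static void main(String[] args) {
        StatsCache c = new StatsCache(k -> 42.0); // stand-in for the SQL-backed loader
        c.ndv("t1.c1");
        c.ndv("t1.c1"); // served from cache, loader not called again
        System.out.println(c.loads); // 1
    }
}
```

This is why the optimizer can consult statistics on every query without re-reading _statistics_.column_statistics each time.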
Other notes
- StatisticAutoCollector collects statistics through a periodic background task
- Manual collection uses ANALYZE TABLE, for example:
ANALYZE [FULL|SAMPLE] TABLE tbl_name (col_name [,col_name]) [WITH SYNC | ASYNC MODE] PROPERTIES (property [,property])
- A custom automatic collection job can be created manually with CREATE ANALYZE, for example:
CREATE ANALYZE [FULL|SAMPLE] TABLE tbl_name (col_name [,col_name]) PROPERTIES (property [,property])
All of the above trigger statistics collection.