Notes: A TDengine Benchmarking Example
Table of Contents
- 1. Basic Information
- 2. Benchmark Strategy
- 3. Benchmark Process
  - 1. Simulating a high-concurrency write scenario
  - 2. Simulating a concurrent query scenario
- 4. Benchmark Conclusions
1. Basic Information
- TDengine version: 3.3.6.13 (the latest release at the time of writing)
- Server configuration: 16-core CPU, 32 GB RAM, 1000 GB high-IO storage
2. Benchmark Strategy
- Test database:
  - Database: test
  - Super table: meters (columns ts, current, voltage, phase; tags groupid, location)
- Test tool: taosBenchmark (the performance-testing tool that ships with TDengine); a configuration sketch follows this list
- Test goals:
  - Simulate a high-concurrency write scenario to evaluate write performance.
  - Simulate a concurrent query scenario to evaluate query performance.
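The write test below simply runs `taosBenchmark -y`, which executes the tool's built-in default scenario: database test, super table meters, 10,000 child tables (d0 ... d9999) with 10,000 rows each, written by 8 threads, for 100,000,000 rows in total. For readers who prefer an explicit configuration file, a roughly equivalent insert-mode setup is sketched below. This is an illustration only; the field names follow the taosBenchmark insert JSON format, but they and the defaults should be verified against the taosBenchmark documentation for the installed version.

```shell
# Hedged sketch: an insert-mode config roughly equivalent to the default
# `taosBenchmark -y` run; verify field names against the taosBenchmark JSON
# reference for your version before relying on it.
# Note: "drop": "yes" recreates the test database from scratch.
cat > insert.json <<'EOF'
{
  "filetype": "insert",
  "cfgdir": "/etc/taos",
  "host": "127.0.0.1",
  "port": 6030,
  "user": "root",
  "password": "taosdata",
  "confirm_parameter_prompt": "no",
  "thread_count": 8,
  "databases": [
    {
      "dbinfo": { "name": "test", "drop": "yes", "precision": "ms" },
      "super_tables": [
        {
          "name": "meters",
          "child_table_exists": "no",
          "childtable_count": 10000,
          "childtable_prefix": "d",
          "insert_mode": "taosc",
          "insert_rows": 10000,
          "timestamp_step": 1,
          "start_timestamp": "2025-07-21 00:00:00.000",
          "columns": [
            { "type": "FLOAT", "name": "current" },
            { "type": "INT", "name": "voltage" },
            { "type": "FLOAT", "name": "phase" }
          ],
          "tags": [
            { "type": "INT", "name": "groupid", "min": 1, "max": 10 },
            { "type": "BINARY", "name": "location", "len": 24 }
          ]
        }
      ]
    }
  ]
}
EOF
taosBenchmark -f insert.json
```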
3. Benchmark Process
1. Simulating a high-concurrency write scenario
root@sjfwq-v1-p-0-107:~# taosBenchmark -y
[07/21 11:48:25.593007] INFO: client version: 3.3.6.13Connect mode is : Native[07/21 11:48:26.910499] SUCC: host:(null) port:0 connect successfully.
[07/21 11:48:27.188987] INFO: command to create database: <CREATE DATABASE IF NOT EXISTS test PRECISION 'ms';>
[07/21 11:48:27.189020] SUCC: created database (test)
[07/21 11:48:27.189332] SUCC: host:(null) port:0 connect successfully.
[07/21 11:48:27.191098] INFO: stable meters does not exist, will create one
[07/21 11:48:27.191125] INFO: create stable: <CREATE TABLE IF NOT EXISTS test.meters (ts TIMESTAMP,current float,voltage int,phase float) TAGS (groupid int,location binary(24))>
[07/21 11:48:27.191389] SUCC: host:(null) port:0 connect successfully.
[07/21 11:48:27.200606] INFO: generate stable<meters> columns data with lenOfCols<80> * prepared_rand<20000>
[07/21 11:48:27.219827] INFO: restful connect -> convertServAddr host=localhost port:6041 to serv_addr=0x2ad77f24 iface=0
[07/21 11:48:27.219839] INFO: start creating 10000 table(s) with 8 thread(s)
[07/21 11:48:27.220192] SUCC: host:(null) port:0 connect successfully.
[07/21 11:48:27.220668] INFO: thread[0] start creating table from 0 to 1249
[07/21 11:48:27.220788] SUCC: host:(null) port:0 connect successfully.
[07/21 11:48:27.221215] INFO: thread[1] start creating table from 1250 to 2499
[07/21 11:48:27.221367] SUCC: host:(null) port:0 connect successfully.
[07/21 11:48:27.221881] INFO: thread[2] start creating table from 2500 to 3749
[07/21 11:48:27.221922] SUCC: host:(null) port:0 connect successfully.
[07/21 11:48:27.222315] INFO: thread[3] start creating table from 3750 to 4999
[07/21 11:48:27.222393] SUCC: host:(null) port:0 connect successfully.
[07/21 11:48:27.223102] INFO: thread[4] start creating table from 5000 to 6249
[07/21 11:48:27.223197] SUCC: host:(null) port:0 connect successfully.
[07/21 11:48:27.223581] INFO: thread[5] start creating table from 6250 to 7499
[07/21 11:48:27.223683] SUCC: host:(null) port:0 connect successfully.
[07/21 11:48:27.224055] INFO: thread[6] start creating table from 7500 to 8749
[07/21 11:48:27.224163] SUCC: host:(null) port:0 connect successfully.
[07/21 11:48:27.225133] INFO: thread[7] start creating table from 8750 to 9999
[07/21 11:48:27.662916] SUCC: Spent 0.4430 seconds to create 10000 table(s) with 8 thread(s) speed: 22573 tables/s, already exist 0 table(s), actual 10000 table(s) pre created, 0 table(s) will be auto created
[07/21 11:48:27.663173] INFO: init pthread_join 0 ...
[07/21 11:48:27.663240] SUCC: host:(null) port:0 connect successfully.
[07/21 11:48:27.663282] SUCC: host:(null) port:0 connect successfully.
[07/21 11:48:27.663320] SUCC: host:(null) port:0 connect successfully.
[07/21 11:48:27.663243] SUCC: host:(null) port:0 connect successfully.
[07/21 11:48:27.663362] SUCC: host:(null) port:0 connect successfully.
[07/21 11:48:27.663362] SUCC: host:(null) port:0 connect successfully.
[07/21 11:48:27.663409] SUCC: host:(null) port:0 connect successfully.
[07/21 11:48:27.663418] SUCC: host:(null) port:0 connect successfully.
[07/21 11:48:27.663853] INFO: init pthread_join 1 ...
[07/21 11:48:27.663864] INFO: init pthread_join 2 ...
[07/21 11:48:27.663949] INFO: init pthread_join 3 ...
[07/21 11:48:27.663956] INFO: init pthread_join 4 ...
[07/21 11:48:27.663983] INFO: init pthread_join 5 ...
[07/21 11:48:27.664106] INFO: init pthread_join 6 ...
[07/21 11:48:27.664125] INFO: init pthread_join 7 ...
[07/21 11:48:27.664188] INFO: Estimate memory usage: 12.20MB
[07/21 11:48:27.664196] INFO: run insert thread. real nthread=8
[07/21 11:48:27.664234] INFO: thread[0] start progressive inserting into table from 0 to 1250
[07/21 11:48:27.664255] INFO: thread[1] start progressive inserting into table from 1250 to 2500
[07/21 11:48:27.664278] INFO: thread[2] start progressive inserting into table from 2500 to 3750
[07/21 11:48:27.664295] INFO: thread[3] start progressive inserting into table from 3750 to 5000
[07/21 11:48:27.664323] INFO: thread[4] start progressive inserting into table from 5000 to 6250
[07/21 11:48:27.664358] INFO: thread[5] start progressive inserting into table from 6250 to 7500
[07/21 11:48:27.664381] INFO: thread[6] start progressive inserting into table from 7500 to 8750
[07/21 11:48:27.664388] INFO: pthread_join 0 ...
[07/21 11:48:27.664401] INFO: thread[7] start progressive inserting into table from 8750 to 10000
[07/21 11:48:52.467670] SUCC: thread[1] progressive mode, completed total inserted rows: 12500000, 537815.02 records/second
[07/21 11:48:52.528920] SUCC: thread[3] progressive mode, completed total inserted rows: 12500000, 535889.46 records/second
[07/21 11:48:52.644246] SUCC: thread[7] progressive mode, completed total inserted rows: 12500000, 534693.89 records/second
[07/21 11:48:52.693441] SUCC: thread[4] progressive mode, completed total inserted rows: 12500000, 533009.17 records/second
[07/21 11:48:52.753257] SUCC: thread[0] progressive mode, completed total inserted rows: 12500000, 532702.66 records/second
[07/21 11:48:52.753402] INFO: pthread_join 1 ...
[07/21 11:48:52.753416] INFO: pthread_join 2 ...
[07/21 11:48:52.772948] SUCC: thread[5] progressive mode, completed total inserted rows: 12500000, 532017.61 records/second
[07/21 11:48:52.879224] SUCC: thread[6] progressive mode, completed total inserted rows: 12500000, 529537.68 records/second
[07/21 11:48:53.156213] SUCC: thread[2] progressive mode, completed total inserted rows: 12500000, 524104.91 records/second
[07/21 11:48:53.156314] INFO: pthread_join 3 ...
[07/21 11:48:53.156329] INFO: pthread_join 4 ...
[07/21 11:48:53.156374] INFO: pthread_join 5 ...
[07/21 11:48:53.156401] INFO: pthread_join 6 ...
[07/21 11:48:53.156413] INFO: pthread_join 7 ...
[07/21 11:48:53.157329] SUCC: Spent 25.492230 (real 23.476738) seconds to insert rows: 100000000 with 8 thread(s) into test 3922763.92 (real 4259535.55) records/second
[07/21 11:48:53.157361] SUCC: insert delay, min: 8.7650ms, avg: 18.7814ms, p90: 17.1450ms, p95: 29.5320ms, p99: 366.1130ms, max: 670.7390ms
[07/21 11:48:53.157378] INFO: free resource and exit ...
Interpretation of the results:
- Database initialization and table creation: the super table plus 10,000 child tables were created in 0.443 s, i.e. 22,573 tables/s.
- Data write test: 100 million rows were inserted by 8 threads in a total of 25.49 s, a throughput of 3,922,764 rows/s (100,000,000 rows ÷ 25.49 s), with an average insert delay of 18.78 ms.
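As a quick sanity check after the write run (this is not part of the taosBenchmark output above), the row counts can be confirmed with the taos CLI, assuming it is installed on the same host:

```shell
# Confirm that all 100,000,000 rows landed in the super table.
taos -s "SELECT COUNT(*) FROM test.meters;"
# Spot-check a single child table (taosBenchmark names them d0 ... d9999 by default).
taos -s "SELECT COUNT(*) FROM test.d0;"
```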
2. Simulating a concurrent query scenario
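The query test is driven by a JSON configuration file. The query.json used in this run (its contents are echoed in the tool output below) looks like this when reformatted for readability. With query_times set to 10 and 3 threads, each statement is executed 10 times per thread, i.e. 30 executions per statement and 60 queries in total, which matches the summary line at the end of the log.

```json
{
  "filetype": "query",
  "cfgdir": "/etc/taos",
  "host": "127.0.0.1",
  "port": 6030,
  "user": "root",
  "password": "taosdata",
  "confirm_parameter_prompt": "no",
  "continue_if_fail": "yes",
  "databases": "test",
  "query_times": 10,
  "query_mode": "taosc",
  "specified_table_query": {
    "query_interval": 1,
    "threads": 3,
    "sqls": [
      { "sql": "select last_row(*) from meters", "result": "./query_res0.txt" },
      { "sql": "select count(*) from d0", "result": "./query_res1.txt" }
    ]
  }
}
```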
root@sjfwq-v1-p-0-107:~/taosBenchmark# taosBenchmark -f query.json
[07/21 17:02:22.358884] INFO: query.json
{"filetype": "query","cfgdir": "/etc/taos","host": "127.0.0.1","port": 6030,"user": "root","password": "taosdata","confirm_parameter_prompt": "no","continue_if_fail": "yes","databases": "test","query_times": 10,"query_mode": "taosc","specified_table_query": {"query_interval": 1,"threads": 3,"sqls": [{"sql": "select last_row(*) from meters","result": "./query_res0.txt"}, {"sql": "select count(*) from d0","result": "./query_res1.txt"}]}
}
[07/21 17:02:22.359004] INFO: read host from json: 127.0.0.1 .
[07/21 17:02:22.359011] INFO: read user from json: root .
[07/21 17:02:22.359013] INFO: read password from json: ** .
[07/21 17:02:22.408407] INFO: client version: 3.3.6.13Connect mode is : Native[07/21 17:02:22.411254] INFO: Set engine cfgdir successfully, dir:/etc/taos
[07/21 17:02:23.693357] SUCC: host:127.0.0.1 port:0 connect successfully.
[07/21 17:02:23.722231] SUCC: host:127.0.0.1 port:0 connect successfully.
[07/21 17:02:23.722993] SUCC: host:127.0.0.1 port:0 connect successfully.
complete query with 3 threads and 30 sql 1 spend 3.845735s QPS: 7.801 query delay avg: 0.382693s min: 0.367124s max: 0.419617s p90: 0.395527s p95: 0.416581s p99: 0.419617s SQL command: select last_row(*) from meters
[07/21 17:02:27.569303] SUCC: host:127.0.0.1 port:0 connect successfully.
[07/21 17:02:27.569746] SUCC: host:127.0.0.1 port:0 connect successfully.
[07/21 17:02:27.570143] SUCC: host:127.0.0.1 port:0 connect successfully.
[07/21 17:02:27.572642] INFO: do sleep 1ms ...
[07/21 17:02:27.573148] INFO: do sleep 1ms ...
[07/21 17:02:27.573166] INFO: do sleep 1ms ...
[07/21 17:02:27.574560] INFO: do sleep 1ms ...
[07/21 17:02:27.577159] INFO: do sleep 1ms ...
[07/21 17:02:27.577603] INFO: do sleep 1ms ...
[07/21 17:02:27.577804] INFO: do sleep 1ms ...
[07/21 17:02:27.579129] INFO: do sleep 1ms ...
[07/21 17:02:27.579714] INFO: do sleep 1ms ...
[07/21 17:02:27.581079] INFO: do sleep 1ms ...
[07/21 17:02:27.581663] INFO: do sleep 1ms ...
[07/21 17:02:27.582607] INFO: do sleep 1ms ...
[07/21 17:02:27.583555] INFO: do sleep 1ms ...
[07/21 17:02:27.584650] INFO: do sleep 1ms ...
[07/21 17:02:27.585559] INFO: do sleep 1ms ...
complete query with 3 threads and 30 sql 2 spend 0.016352s QPS: 1834.638 query delay avg: 0.000997s min: 0.000745s max: 0.001392s p90: 0.001370s p95: 0.001387s p99: 0.001392s SQL command: select count(*) from d0
[07/21 17:02:27.586718] INFO: Spend 5.1750 second completed total queries: 60, the QPS of all threads: 11.594 ,error 0 (rate:0.000%)[07/21 17:02:27.586729] INFO: free resource and exit ...
Interpretation of the results:
- Query test: SELECT COUNT(*) FROM d0 reached 1,834.6 QPS with an average latency of roughly 1 ms (0.000997 s).
- Query test: SELECT last_row(*) FROM meters reached only 7.801 QPS with an average latency of about 0.38 s. This query returns the most recent row of the meters super table; because meters is very large (100 million rows across 10,000 child tables), TDengine has to scan indexes to locate the latest data, which can involve disk I/O, hence the much higher latency.
4. Benchmark Conclusions
- Write performance:
  - 100 million rows were written in only 25.49 s, a throughput on the order of 4 million rows/s, far exceeding traditional time-series databases (e.g., roughly 100,000 to 500,000 rows/s for InfluxDB).
  - Child tables were created extremely quickly (22,573 tables/s), which suits scenarios where large numbers of devices come online.
- Query performance:
  - COUNT(*) query: very high QPS (1,834.6), since it only needs to count the rows of a single child table; well suited to high-frequency monitoring scenarios.
  - SELECT last_row(*): returns the most recent row of the meters super table (roughly equivalent to SELECT * FROM meters ORDER BY ts DESC LIMIT 1). When meters is very large (100 million rows here), last_row() has to scan all child tables or their indexes, which drives latency up. A possible mitigation is sketched below.
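One follow-up that was not part of this run: TDengine 3.x databases have a CACHEMODEL option that keeps the latest row (and/or the latest non-NULL values) of every child table in memory, which is the documented way to speed up last_row()/last() queries. A minimal sketch using the taos CLI is shown below; verify the option name and accepted values against the TDengine documentation for your version, and note that its effect on QPS is not measured in this article.

```shell
# Hedged sketch (not measured in this benchmark): enable last-row caching for
# the test database, then re-run the slow query. CACHEMODEL accepts 'none',
# 'last_row', 'last_value' or 'both' in TDengine 3.x; check your version's docs.
taos -s "ALTER DATABASE test CACHEMODEL 'last_row';"
taos -s "SELECT last_row(*) FROM test.meters;"
```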