An in-depth look at TiDB's SQL optimization flow, covering logical optimization (RBO) and physical optimization (CBO), to help you write more efficient SQL queries.
Original title: 数据库系列之TiDB的SQL优化流程 (Database Series: TiDB's SQL Optimization Flow)
Original author: 牧羊人的方向
As introduced in the previous article, this post continues with TiDB's SQL optimization flow, looking at how the TiDB optimizer is implemented across the logical optimization and physical optimization phases.
1. SQL Optimization Flow
SQL optimization in TiDB happens in two phases: logical optimization and physical optimization. During logical optimization, the optimizer iterates over a set of internally defined optimization rules and keeps adjusting the statement's logical execution plan, applying logically equivalent transformations to the query; this is rule-based optimization (RBO). Physical optimization then turns the rewritten logical plan into an executable physical plan. This step relies on statistics and decides how each operation is actually carried out, for example which index to read the table with, or which algorithm to use for a Join; this is cost-based optimization (CBO).
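The rule-application loop of the logical phase can be sketched as repeatedly rewriting the plan until it stops changing (a fixpoint). This is an illustrative sketch, not TiDB's actual Go implementation; the tuple plan representation and the merge_adjacent_selections rule are invented for the example.

```python
def logical_optimize(plan, rules):
    """Apply each rewrite rule in order until the plan stops changing."""
    changed = True
    while changed:
        changed = False
        for rule in rules:
            new_plan = rule(plan)
            if new_plan != plan:
                plan, changed = new_plan, True
    return plan

def merge_adjacent_selections(plan):
    # Selection(cond1, Selection(cond2, child)) -> Selection(cond1 AND cond2, child)
    if plan[0] == "Selection" and plan[2][0] == "Selection":
        return ("Selection", plan[1] + " AND " + plan[2][1], plan[2][2])
    return plan

plan = ("Selection", "a > 1", ("Selection", "b < 5", ("DataSource", "t")))
print(logical_optimize(plan, [merge_adjacent_selections]))
# -> ('Selection', 'a > 1 AND b < 5', ('DataSource', 't'))
```

Each rule is a pure plan-to-plan transformation, which is what makes it safe to apply the rule list repeatedly until a fixpoint.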
The main logical operators in TiDB are:
- DataSource: the data source, such as t in the SQL fragment from t
- Selection: a filter condition, such as the WHERE predicate in where a = 5
- Projection: the projection operation, also used for expression evaluation; in select c, a + b from t, the c and a + b parts are projection and expression evaluation
- Join: a join between two tables; in select t1.b, t2.c from t1 join t2 on t1.a = t2.a, the t1 join t2 on t1.a = t2.a part joins tables t1 and t2. Joins come in several forms: inner join, left join, right join, and so on.
Selection, Projection, and Join (SPJ for short) are the three most basic operators.
1) Column pruning
The basic idea of column pruning: when an operator does not actually use some columns, the optimizer removes them during optimization to cut down on I/O. Suppose table t has four columns a, b, c, d, and we run:
select a from t where b > 5
Only columns a and b are actually used by this query; the data in c and d is never needed. In the query plan, the Selection operator uses column b and the DataSource operator below it uses columns a and b, while the remaining columns c and d need not be read in at all.
For this reason, TiDB performs a top-down scan during logical optimization that prunes away unneeded columns to avoid wasting resources. This scan is called "column pruning" and corresponds to the columnPruner rule among the logical optimization rules.
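The top-down pass can be sketched roughly as follows. The dict-based plan representation and field names here are invented for illustration, not TiDB's internal structures.

```python
def prune_columns(plan, needed):
    """Top-down pass: each operator adds the columns it uses, and the
    DataSource keeps only the columns some ancestor actually needs."""
    op = plan["op"]
    if op == "Projection":
        needed = set(plan["exprs_cols"])          # columns referenced by the select list
        plan["child"] = prune_columns(plan["child"], needed)
    elif op == "Selection":
        needed = needed | set(plan["cond_cols"])  # the filter adds its own columns
        plan["child"] = prune_columns(plan["child"], needed)
    elif op == "DataSource":
        plan["columns"] = [c for c in plan["columns"] if c in needed]
    return plan

# select a from t where b > 5, with t(a, b, c, d)
plan = {"op": "Projection", "exprs_cols": ["a"],
        "child": {"op": "Selection", "cond_cols": ["b"],
                  "child": {"op": "DataSource", "columns": ["a", "b", "c", "d"]}}}
pruned = prune_columns(plan, set())
print(pruned["child"]["child"]["columns"])  # -> ['a', 'b']
```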
2) Partition pruning
Partition pruning is an optimization for partitioned tables: by analyzing the filter conditions in the query, only the partitions that could possibly satisfy them are selected, reducing the amount of data to process.
- Scenarios where partition pruning works on hash-partitioned tables
Only equality conditions support pruning on hash-partitioned tables, as shown below:
mysql> create table t1 (x int) partition by hash(x) partitions 4;
mysql> explain select * from t1 where x = 1;
+-------------------------+----------+-----------+------------------------+--------------------------------+
| id | estRows | task | access object | operator info |
+-------------------------+----------+-----------+------------------------+--------------------------------+
| TableReader_8 | 10.00 | root | | data:Selection_7 |
| └─Selection_7 | 10.00 | cop[tikv] | | eq(tango.t1.x, 1) |
| └─TableFullScan_6 | 10000.00 | cop[tikv] | table:t1, partition:p1 | keep order:false, stats:pseudo |
+-------------------------+----------+-----------+------------------------+--------------------------------+
From the condition x = 1 in the SQL above, we know all results lie in a single partition: after hashing, the value 1 is known to fall in partition p1. Only p1 needs to be scanned, while p2, p3 and p4, which cannot possibly contain matching rows, are skipped. In the execution plan there is a single TableFullScan operator with partition p1 listed in access object, confirming that partition pruning took effect.
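For an integer hash-partitioned column, the target partition is the partition expression modulo the partition count, which is why an equality condition pins the query to one partition. A minimal sketch, assuming the simple value % N case of the table above:

```python
def hash_partition(value, num_partitions=4):
    """For an INT hash-partitioned column, the target partition is
    value % num_partitions (matching hash partitioning on an integer
    column as in the t1 example above)."""
    return "p%d" % (value % num_partitions)

# With the equality condition x = 1, the optimizer prunes to one partition:
print(hash_partition(1))  # -> p1, the partition shown in the plan above
# A range condition such as x > 2 yields no single partition, so all must be scanned.
```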
- Scenarios where partition pruning cannot be used on hash partitions
Scenario 1: with conditions such as IN, BETWEEN, >, <, >=, <=, the query results cannot be pinned to a single partition, so partition pruning does not apply:
mysql> explain select * from t1 where x >2;
+------------------------------+----------+-----------+------------------------+--------------------------------+
| id | estRows | task | access object | operator info |
+------------------------------+----------+-----------+------------------------+--------------------------------+
| Union_10 | 13333.33 | root | | |
| ├─TableReader_13 | 3333.33 | root | | data:Selection_12 |
| │ └─Selection_12 | 3333.33 | cop[tikv] | | gt(tango.t1.x, 2) |
| │ └─TableFullScan_11 | 10000.00 | cop[tikv] | table:t1, partition:p0 | keep order:false, stats:pseudo |
| ├─TableReader_16 | 3333.33 | root | | data:Selection_15 |
| │ └─Selection_15 | 3333.33 | cop[tikv] | | gt(tango.t1.x, 2) |
| │ └─TableFullScan_14 | 10000.00 | cop[tikv] | table:t1, partition:p1 | keep order:false, stats:pseudo |
| ├─TableReader_19 | 3333.33 | root | | data:Selection_18 |
| │ └─Selection_18 | 3333.33 | cop[tikv] | | gt(tango.t1.x, 2) |
| │ └─TableFullScan_17 | 10000.00 | cop[tikv] | table:t1, partition:p2 | keep order:false, stats:pseudo |
| └─TableReader_22 | 3333.33 | root | | data:Selection_21 |
| └─Selection_21 | 3333.33 | cop[tikv] | | gt(tango.t1.x, 2) |
| └─TableFullScan_20 | 10000.00 | cop[tikv] | table:t1, partition:p3 | keep order:false, stats:pseudo |
+------------------------------+----------+-----------+------------------------+--------------------------------+
Scenario 2: because partition pruning runs while the query plan is generated, it cannot be used when the filter condition only becomes available at execution time. For example:
mysql> explain select * from t2 where x = (select * from t1 where t2.x = t1.x and t2.x < 2);
+--------------------------------------+----------+-----------+------------------------+------------------------------------------------+
| id | estRows | task | access object | operator info |
+--------------------------------------+----------+-----------+------------------------+------------------------------------------------+
| Projection_20 | 39960.00 | root | | tango.t2.x |
| └─Apply_22 | 39960.00 | root | | inner join, equal:[eq(tango.t2.x, tango.t1.x)] |
| ├─Union_23(Build) | 39960.00 | root | | |
| │ ├─TableReader_26 | 9990.00 | root | | data:Selection_25 |
| │ │ └─Selection_25 | 9990.00 | cop[tikv] | | not(isnull(tango.t2.x)) |
| │ │ └─TableFullScan_24 | 10000.00 | cop[tikv] | table:t2, partition:p0 | keep order:false, stats:pseudo |
| │ ├─TableReader_29 | 9990.00 | root | | data:Selection_28 |
| │ │ └─Selection_28 | 9990.00 | cop[tikv] | | not(isnull(tango.t2.x)) |
| │ │ └─TableFullScan_27 | 10000.00 | cop[tikv] | table:t2, partition:p1 | keep order:false, stats:pseudo |
| │ ├─TableReader_32 | 9990.00 | root | | data:Selection_31 |
| │ │ └─Selection_31 | 9990.00 | cop[tikv] | | not(isnull(tango.t2.x)) |
| │ │ └─TableFullScan_30 | 10000.00 | cop[tikv] | table:t2, partition:p2 | keep order:false, stats:pseudo |
| │ └─TableReader_35 | 9990.00 | root | | data:Selection_34 |
| │ └─Selection_34 | 9990.00 | cop[tikv] | | not(isnull(tango.t2.x)) |
| │ └─TableFullScan_33 | 10000.00 | cop[tikv] | table:t2, partition:p3 | keep order:false, stats:pseudo |
| └─Selection_36(Probe) | 0.80 | root | | not(isnull(tango.t1.x)) |
| └─MaxOneRow_37 | 1.00 | root | | |
| └─Union_38 | 2.00 | root | | |
| ├─TableReader_41 | 2.00 | root | | data:Selection_40 |
| │ └─Selection_40 | 2.00 | cop[tikv] | | eq(tango.t2.x, tango.t1.x), lt(tango.t2.x, 2) |
| │ └─TableFullScan_39 | 2500.00 | cop[tikv] | table:t1, partition:p0 | keep order:false, stats:pseudo |
| ├─TableReader_44 | 2.00 | root | | data:Selection_43 |
| │ └─Selection_43 | 2.00 | cop[tikv] | | eq(tango.t2.x, tango.t1.x), lt(tango.t2.x, 2) |
| │ └─TableFullScan_42 | 2500.00 | cop[tikv] | table:t1, partition:p1 | keep order:false, stats:pseudo |
| ├─TableReader_47 | 2.00 | root | | data:Selection_46 |
| │ └─Selection_46 | 2.00 | cop[tikv] | | eq(tango.t2.x, tango.t1.x), lt(tango.t2.x, 2) |
| │ └─TableFullScan_45 | 2500.00 | cop[tikv] | table:t1, partition:p2 | keep order:false, stats:pseudo |
| └─TableReader_50 | 2.00 | root | | data:Selection_49 |
| └─Selection_49 | 2.00 | cop[tikv] | | eq(tango.t2.x, tango.t1.x), lt(tango.t2.x, 2) |
| └─TableFullScan_48 | 2500.00 | cop[tikv] | table:t1, partition:p3 | keep order:false, stats:pseudo |
+--------------------------------------+----------+-----------+------------------------+------------------------------------------------+
- Scenarios where partition pruning works on range-partitioned tables
- Equality conditions, such as x = 1 or x in (1, 2)
- Interval conditions such as BETWEEN, >, <, =, >=, <=
- A partition expression of the simple form fn(col), where the condition is one of >, <, =, >=, <= and fn is a monotonic function
Taking the second case as an example:
mysql> create table t_r1 (x int) partition by range (x) (
-> partition p0 values less than (5),
-> partition p1 values less than (10),
-> partition p2 values less than (15)
-> );
Query OK, 0 rows affected (1.11 sec)
mysql> explain select * from t_r1 where x between 7 and 14;
+-----------------------------+----------+-----------+--------------------------+-------------------------------------------+
| id | estRows | task | access object | operator info |
+-----------------------------+----------+-----------+--------------------------+-------------------------------------------+
| Union_8 | 500.00 | root | | |
| ├─TableReader_11 | 250.00 | root | | data:Selection_10 |
| │ └─Selection_10 | 250.00 | cop[tikv] | | ge(tango.t_r1.x, 7), le(tango.t_r1.x, 14) |
| │ └─TableFullScan_9 | 10000.00 | cop[tikv] | table:t_r1, partition:p1 | keep order:false, stats:pseudo |
| └─TableReader_14 | 250.00 | root | | data:Selection_13 |
| └─Selection_13 | 250.00 | cop[tikv] | | ge(tango.t_r1.x, 7), le(tango.t_r1.x, 14) |
| └─TableFullScan_12 | 10000.00 | cop[tikv] | table:t_r1, partition:p2 | keep order:false, stats:pseudo |
+-----------------------------+----------+-----------+--------------------------+-------------------------------------------+
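The interval logic above can be sketched as a binary search over the VALUES LESS THAN bounds; this is an illustrative sketch, not TiDB's implementation:

```python
import bisect

def prune_range_partitions(upper_bounds, lo, hi):
    """Select the partitions whose value range can intersect the query
    interval [lo, hi]. upper_bounds are the VALUES LESS THAN limits,
    in ascending order."""
    first = bisect.bisect_right(upper_bounds, lo)  # first partition whose bound > lo
    last = bisect.bisect_right(upper_bounds, hi)   # first partition whose bound > hi
    last = min(last, len(upper_bounds) - 1)        # clamp to the last partition
    return ["p%d" % i for i in range(first, last + 1)]

# t_r1: p0 < 5, p1 < 10, p2 < 15; where x between 7 and 14
print(prune_range_partitions([5, 10, 15], 7, 14))  # -> ['p1', 'p2']
```

Just as in the plan above, the interval [7, 14] overlaps p1 and p2 but not p0.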
- Scenarios where range partition pruning cannot be used
Because partition pruning runs while the query plan is generated, it cannot be used when the filter condition is only available at execution time:
mysql> create table t_r2 (x int);
mysql> explain select * from t_r2 where x < (select * from t_r1 where t_r2.x < t_r1.x and t_r2.x < 2);
+--------------------------------------+----------+-----------+--------------------------+-----------------------------------------------------------------+
| id | estRows | task | access object | operator info |
+--------------------------------------+----------+-----------+--------------------------+-----------------------------------------------------------------+
| Projection_14 | 9990.00 | root | | tango.t_r2.x |
| └─Apply_16 | 9990.00 | root | | CARTESIAN inner join, other cond:lt(tango.t_r2.x, tango.t_r1.x) |
| ├─TableReader_19(Build) | 9990.00 | root | | data:Selection_18 |
| │ └─Selection_18 | 9990.00 | cop[tikv] | | not(isnull(tango.t_r2.x)) |
| │ └─TableFullScan_17 | 10000.00 | cop[tikv] | table:t_r2 | keep order:false, stats:pseudo |
| └─Selection_20(Probe) | 0.80 | root | | not(isnull(tango.t_r1.x)) |
| └─MaxOneRow_21 | 1.00 | root | | |
| └─Union_22 | 2.00 | root | | |
| ├─TableReader_25 | 2.00 | root | | data:Selection_24 |
| │ └─Selection_24 | 2.00 | cop[tikv] | | lt(tango.t_r2.x, 2), lt(tango.t_r2.x, tango.t_r1.x) |
| │ └─TableFullScan_23 | 2.50 | cop[tikv] | table:t_r1, partition:p0 | keep order:false, stats:pseudo |
| ├─TableReader_28 | 2.00 | root | | data:Selection_27 |
| │ └─Selection_27 | 2.00 | cop[tikv] | | lt(tango.t_r2.x, 2), lt(tango.t_r2.x, tango.t_r1.x) |
| │ └─TableFullScan_26 | 2.50 | cop[tikv] | table:t_r1, partition:p1 | keep order:false, stats:pseudo |
| └─TableReader_31 | 2.00 | root | | data:Selection_30 |
| └─Selection_30 | 2.00 | cop[tikv] | | lt(tango.t_r2.x, 2), lt(tango.t_r2.x, tango.t_r1.x) |
| └─TableFullScan_29 | 2.50 | cop[tikv] | table:t_r1, partition:p2 | keep order:false, stats:pseudo |
+--------------------------------------+----------+-----------+--------------------------+-----------------------------------------------------------------+
3) Max/Min elimination
When a SQL statement contains a max/min aggregate function with no corresponding GROUP BY clause, the query optimizer tries to rewrite the max/min aggregation into a TopN operator so that an index can be used effectively. For example:
select max(a) from t
The max/min elimination rule rewrites it as:
select max(a) from (select a from t where a is not null order by a desc limit 1) t
When there is an index on column a (or a is a prefix of some composite index), the new statement can use the index to read just one row to obtain the maximum or minimum, avoiding a scan of the whole table.
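The equivalence behind this rewrite can be illustrated with a small sketch: taking the first row of a NULL-filtered descending sort is the same as max, which is what lets an ordered index answer the query by reading a single row.

```python
def max_via_topn(rows):
    """Mimic the rewritten query: filter NULLs, sort descending, take one row.
    With an index on the column, the engine reads just one index entry
    instead of scanning the whole table."""
    non_null = [r for r in rows if r is not None]
    top1 = sorted(non_null, reverse=True)[:1]  # ORDER BY a DESC LIMIT 1
    return top1[0] if top1 else None           # max(a) over an empty set is NULL

rows = [3, None, 7, 1]
print(max_via_topn(rows))  # -> 7, the max of the non-NULL values
```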
mysql> create table tt1 (a int(11),b float,key idx1(a));
mysql> explain select max(a) from tt1;
+------------------------------+---------+-----------+--------------------------+-------------------------------------+
| id | estRows | task | access object | operator info |
+------------------------------+---------+-----------+--------------------------+-------------------------------------+
| StreamAgg_13 | 1.00 | root | | funcs:max(tango.tt1.a)->Column#4 |
| └─Limit_17 | 1.00 | root | | offset:0, count:1 |
| └─IndexReader_27 | 1.00 | root | | index:Limit_26 |
| └─Limit_26 | 1.00 | cop[tikv] | | offset:0, count:1 |
| └─IndexFullScan_25 | 1.00 | cop[tikv] | table:tt1, index:idx1(a) | keep order:true, desc, stats:pseudo |
+------------------------------+---------+-----------+--------------------------+-------------------------------------+
5 rows in set (0.01 sec)
If a GROUP BY clause is involved, by contrast, the rewrite does not apply:
mysql> explain select b,max(a) from tt1 group by b;
+-----------------------------+----------+-----------+---------------+-----------------------------------------------------------------------------------------------+
| id | estRows | task | access object | operator info |
+-----------------------------+----------+-----------+---------------+-----------------------------------------------------------------------------------------------+
| Projection_4 | 8000.00 | root | | tango.tt1.b, Column#4 |
| └─HashAgg_9 | 8000.00 | root | | group by:tango.tt1.b, funcs:max(Column#5)->Column#4, funcs:firstrow(tango.tt1.b)->tango.tt1.b |
| └─TableReader_10 | 8000.00 | root | | data:HashAgg_5 |
| └─HashAgg_5 | 8000.00 | cop[tikv] | | group by:tango.tt1.b, funcs:max(tango.tt1.a)->Column#5 |
| └─TableFullScan_8 | 10000.00 | cop[tikv] | table:tt1 | keep order:false, stats:pseudo |
+-----------------------------+----------+-----------+---------------+-----------------------------------------------------------------------------------------------+
4) Predicate pushdown
Predicate pushdown moves the evaluation of a query's filter expressions as close to the data source as possible, so that filtering happens as early as possible, markedly reducing data transfer and computation costs.
- Pushing predicates down to the storage layer
mysql> CREATE TABLE `tt01` (`id` int(11) NOT NULL AUTO_INCREMENT,a int,`city` varchar(20) NOT NULL DEFAULT '',PRIMARY KEY (`id`));
mysql> explain select * from tt01 where a<10 ;
+-------------------------+----------+-----------+---------------+--------------------------------+
| id | estRows | task | access object | operator info |
+-------------------------+----------+-----------+---------------+--------------------------------+
| TableReader_7 | 3323.33 | root | | data:Selection_6 |
| └─Selection_6 | 3323.33 | cop[tikv] | | lt(tango.tt01.a, 10) |
| └─TableFullScan_5 | 10000.00 | cop[tikv] | table:tt01 | keep order:false, stats:pseudo |
+-------------------------+----------+-----------+---------------+--------------------------------+
In this query, the predicate a < 10 is pushed down to TiKV, which filters the data there and reduces the overhead of network transfer.
- Pushing predicates below a join
mysql> explain select * from tt01 join tt02 on tt01.a= tt02.a where tt01.a < 1;
+------------------------------+----------+-----------+---------------+----------------------------------------------------+
| id | estRows | task | access object | operator info |
+------------------------------+----------+-----------+---------------+----------------------------------------------------+
| HashJoin_8 | 4154.17 | root | | inner join, equal:[eq(tango.tt01.a, tango.tt02.a)] |
| ├─TableReader_15(Build) | 3323.33 | root | | data:Selection_14 |
| │ └─Selection_14 | 3323.33 | cop[tikv] | | lt(tango.tt02.a, 1), not(isnull(tango.tt02.a)) |
| │ └─TableFullScan_13 | 10000.00 | cop[tikv] | table:tt02 | keep order:false, stats:pseudo |
| └─TableReader_12(Probe) | 3323.33 | root | | data:Selection_11 |
| └─Selection_11 | 3323.33 | cop[tikv] | | lt(tango.tt01.a, 1), not(isnull(tango.tt01.a)) |
| └─TableFullScan_10 | 10000.00 | cop[tikv] | table:tt01 | keep order:false, stats:pseudo |
+------------------------------+----------+-----------+---------------+----------------------------------------------------+
In this query, the predicate tt01.a < 1 is pushed below the join, filtering rows before the join runs and reducing its computation cost; note in the plan that the equality join condition also lets the optimizer derive and push down tt02.a < 1.
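The effect can be sketched in a few lines: filtering each side before the join gives the same result as filtering afterwards, while joining far fewer rows. The tables and predicate here are toy stand-ins for tt01/tt02:

```python
def join_then_filter(t1, t2):
    """Naive plan: join on t1.a = t2.a first, apply t1.a < 1 afterwards."""
    joined = [(a, b) for a in t1 for b in t2 if a is not None and a == b]
    return [(a, b) for (a, b) in joined if a < 1]

def pushdown_then_join(t1, t2):
    """Pushed-down plan: the equality join column lets the predicate
    (plus the implied NOT NULL) be applied to BOTH inputs before the join."""
    t1f = [a for a in t1 if a is not None and a < 1]
    t2f = [b for b in t2 if b is not None and b < 1]
    return [(a, b) for a in t1f for b in t2f if a == b]

t1, t2 = [0, 1, 2, None], [0, 2, 3]
print(join_then_filter(t1, t2) == pushdown_then_join(t1, t2))  # -> True
```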
After the logical rewrites, physical optimization turns the logical plan into a physical one. Among other things, it decides:
- whether to read data with a full table scan or through an index
- which index to choose when several are available
- the physical implementation of each logical operator, i.e. the actual algorithm used
- whether an operator can be pushed down to the storage layer to speed up execution
When choosing an index, TiDB estimates the cost of each table-reading operator (TableReader / IndexReader / IndexLookUp) and, on top of that, applies a heuristic rule called "Skyline-Pruning" to reduce the chance that estimation errors lead to a wrong index choice. Skyline-Pruning judges each candidate index along three dimensions:
- Whether a table lookup is needed (i.e. whether the index yields an IndexReader or an IndexLookupReader plan). An index that avoids the table lookup is better on this dimension than one that needs it.
- Whether it satisfies a required ordering. Since an index read guarantees the order of certain column sets, an index that satisfies the ordering the query asks for is better on this dimension than one that does not.
- How many access conditions the index's columns cover. An "access condition" is a WHERE condition that can be converted into a range on some column; the more access conditions an index's column set covers, the better it is on this dimension.
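The three dimensions form a partial order, and Skyline-Pruning drops any index that is dominated on all of them. A sketch of the dominance test follows; encoding the dimensions as numbers (bigger is better) is an assumption for illustration, not TiDB's representation:

```python
def dominates(x, y):
    """x dominates y if x is at least as good in every dimension and
    strictly better in at least one."""
    ge_all = all(a >= b for a, b in zip(x, y))
    gt_any = any(a > b for a, b in zip(x, y))
    return ge_all and gt_any

def skyline_prune(candidates):
    """Keep only candidates not dominated by any other candidate."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

# (no table lookup needed?, keeps required order?, #access conditions)
idx_a = (1, 0, 2)   # covering index, unordered, 2 access conditions
idx_b = (0, 0, 1)   # needs a table lookup, unordered, 1 access condition
idx_c = (0, 1, 1)   # needs a table lookup but keeps the required order
print(skyline_prune([idx_a, idx_b, idx_c]))  # -> [(1, 0, 2), (0, 1, 1)]
```

idx_b is pruned because idx_a beats it on every dimension; idx_a and idx_c are incomparable, so both survive and are then compared by estimated cost.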
For the candidates that survive Skyline-Pruning, the cost of reading the table is then estimated from factors such as:
- the average stored length of each index row in the storage layer
- the number of rows falling within the query ranges the index produces
- the index's table-lookup cost
- the number of ranges in the index query
Based on these factors and the cost model, the optimizer picks the lowest-cost index for reading the table.
Both index selection and cost estimation rely on statistics. The statistics TiDB maintains include each table's total row count, per-column equi-depth histograms, Count-Min Sketch, the number of NULL values, average value length, number of distinct values, and so on.
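Of these structures, the Count-Min Sketch is perhaps the least familiar; a minimal version can be sketched as follows. The width, depth and hashing scheme here are illustrative choices, not TiDB's:

```python
import hashlib

class CountMinSketch:
    """Minimal Count-Min Sketch: `depth` hash rows of `width` counters.
    Point queries never undercount; collisions can only inflate them."""
    def __init__(self, width=64, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _slot(self, item, row):
        digest = hashlib.md5(("%d:%s" % (row, item)).encode()).hexdigest()
        return int(digest, 16) % self.width

    def add(self, item):
        for row in range(self.depth):
            self.table[row][self._slot(item, row)] += 1

    def estimate(self, item):
        # Take the minimum over rows: the least-inflated counter.
        return min(self.table[row][self._slot(item, row)]
                   for row in range(self.depth))

cms = CountMinSketch()
for value in ["a"] * 5 + ["b"] * 2:
    cms.add(value)
print(cms.estimate("a") >= 5, cms.estimate("b") >= 2)  # -> True True
```

The optimizer uses this kind of sketch to estimate how many rows match an equality condition without storing exact per-value counts.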
1) Manual collection
Statistics are collected with the ANALYZE statement, either in full or incrementally.
- Full collection
# Collect statistics for all tables in TableNameList:
ANALYZE TABLE TableNameList
# Collect statistics for the index columns in IndexNameList on TableName:
ANALYZE TABLE TableName INDEX [IndexNameList]
# Collect statistics for the partitions in PartitionNameList of TableName:
ANALYZE TABLE TableName PARTITION PartitionNameList
# Collect index-column statistics for the partitions in PartitionNameList of TableName:
ANALYZE TABLE TableName PARTITION PartitionNameList INDEX [IndexNameList]
- Incremental collection
After a full collection, incremental collection can analyze only the newly added data, which speeds the analysis up. Currently only indexes support incremental collection.
# Incrementally collect statistics for the index columns in IndexNameList on TableName:
ANALYZE INCREMENTAL TABLE TableName INDEX [IndexNameList]
# Incrementally collect index-column statistics for the partitions in PartitionNameList of TableName:
ANALYZE INCREMENTAL TABLE TableName PARTITION PartitionNameList INDEX [IndexNameList]
2) Automatic update
As DML statements execute, TiDB automatically tracks each table's total row count and the number of modified rows. This information is persisted periodically, with a default period of 1 minute (20 * stats-lease). stats-lease defaults to 3s; setting it to 0 disables automatic updates. Three system variables govern automatic statistics updates:
mysql> show variables like 'tidb_auto_analyze%';
+------------------------------+-------------+
| Variable_name | Value |
+------------------------------+-------------+
| tidb_auto_analyze_ratio | 0.5 |
| tidb_auto_analyze_start_time | 00:00 +0000 |
| tidb_auto_analyze_end_time | 23:59 +0000 |
+------------------------------+-------------+
3) Controlling ANALYZE concurrency
The concurrency of ANALYZE tasks can be tuned through system variables such as tidb_build_stats_concurrency.
4) Viewing ANALYZE state
SHOW ANALYZE STATUS shows the progress and state of current and historical ANALYZE tasks:
mysql> show analyze status;
+--------------+------------+----------------+--------------------------+----------------+---------------------+----------+
| Table_schema | Table_name | Partition_name | Job_info | Processed_rows | Start_time | State |
+--------------+------------+----------------+--------------------------+----------------+---------------------+----------+
| bikeshare | trips | | analyze index start_date | 340000 | 2021-03-14 13:59:38 | finished |
| bikeshare | trips | | analyze columns | 340000 | 2021-03-14 13:59:38 | running |
+--------------+------------+----------------+--------------------------+----------------+---------------------+----------+
2 rows in set (0.00 sec)
5) Viewing statistics
- View a table's statistics meta information:
mysql> show stats_meta where table_name = 'trips';
+-----------+------------+----------------+---------------------+--------------+-----------+
| Db_name | Table_name | Partition_name | Update_time | Modify_count | Row_count |
+-----------+------------+----------------+---------------------+--------------+-----------+
| bikeshare | trips | | 2021-03-14 22:23:19 | 0 | 340000 |
+-----------+------------+----------------+---------------------+--------------+-----------+
1 row in set (0.08 sec)
- Table health
SHOW STATS_HEALTHY reports a table's statistics health, a rough estimate of how accurate its statistics still are. When modify_count >= row_count, the health is 0; when modify_count < row_count, the health is (1 - modify_count/row_count) * 100.
mysql> show stats_healthy where table_name = 'trips';
+-----------+------------+----------------+---------+
| Db_name | Table_name | Partition_name | Healthy |
+-----------+------------+----------------+---------+
| bikeshare | trips | | 100 |
+-----------+------------+----------------+---------+
1 row in set (0.00 sec)
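The health formula can be written directly as a small function (a restatement of the rule above, not TiDB's code):

```python
def stats_healthy(modify_count, row_count):
    """Health score: 0 once modifications reach the row count,
    otherwise (1 - modify_count / row_count) * 100."""
    if row_count == 0 or modify_count >= row_count:
        return 0
    return int((1 - modify_count / row_count) * 100)

print(stats_healthy(0, 340000))       # -> 100, matching the trips table above
print(stats_healthy(170000, 340000))  # -> 50
```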
- SHOW STATS_HISTOGRAMS shows per-column information such as the number of distinct values and the number of NULLs:
mysql> show stats_histograms where table_name = 'trips';
+-----------+------------+----------------+-------------+----------+---------------------+----------------+------------+--------------+-------------+
| Db_name | Table_name | Partition_name | Column_name | Is_index | Update_time | Distinct_count | Null_count | Avg_col_size | Correlation |
+-----------+------------+----------------+-------------+----------+---------------------+----------------+------------+--------------+-------------+
| bikeshare | trips | | trip_id | 0 | 2021-03-14 13:59:45 | 340000 | 0 | 8 | 0 |
| bikeshare | trips | | start_date | 1 | 2021-03-14 13:59:41 | 167954 | 0 | 0 | 0 |
+-----------+------------+----------------+-------------+----------+---------------------+----------------+------------+--------------+-------------+
2 rows in set (0.00 sec)
- SHOW STATS_BUCKETS shows the details of each histogram bucket:
mysql> show stats_buckets where table_name='tt03';
+---------+------------+----------------+-------------+----------+-----------+-------+---------+-------------+-------------+
| Db_name | Table_name | Partition_name | Column_name | Is_index | Bucket_id | Count | Repeats | Lower_Bound | Upper_Bound |
+---------+------------+----------------+-------------+----------+-----------+-------+---------+-------------+-------------+
| tango | tt03 | | a | 0 | 0 | 1 | 1 | 1 | 1 |
| tango | tt03 | | idx_a | 1 | 0 | 1 | 1 | 1 | 1 |
| tango | tt03 | | idx_b | 1 | 0 | 1 | 1 | 1 | 1 |
+---------+------------+----------------+-------------+----------+-----------+-------+---------+-------------+-------------+
3 rows in set (0.00 sec)
6) Deleting statistics
Statistics are deleted with the DROP STATS statement:
mysql> show stats_meta where table_name = 'tt03';
+---------+------------+----------------+---------------------+--------------+-----------+
| Db_name | Table_name | Partition_name | Update_time | Modify_count | Row_count |
+---------+------------+----------------+---------------------+--------------+-----------+
| tango | tt03 | | 2021-03-13 23:44:43 | 1 | 2 |
+---------+------------+----------------+---------------------+--------------+-----------+
1 row in set (0.01 sec)
mysql> drop stats tango.tt03;
Query OK, 0 rows affected (0.18 sec)
mysql> show stats_meta where table_name = 'tt03';
Empty set (0.00 sec)
7) Exporting and importing statistics
- Exporting statistics
The following interface returns the statistics of table ${table_name} in database ${db_name} in JSON format:
http://${tidb-server-ip}:${tidb-server-status-port}/stats/dump/${db_name}/${table_name}
For example, to export the statistics of the trips table in the bikeshare database:
curl -G "http://192.168.112.101:10080/stats/dump/bikeshare/trips" > trips.json
- Importing statistics
Load the JSON file produced by the export interface back into the database:
mysql> LOAD STATS 'file_name';
Beyond statistics, TiDB provides several mechanisms for controlling execution plans:
- Optimizer Hints: hints steer how TiDB generates a plan, but they intrusively change the SQL text.
- Execution plan management: SQL BINDING controls plan generation without touching the SQL, and Baseline Evolution automatically evolves plans in the background to mitigate the plan instability (and resulting cluster performance drops) caused by events such as version upgrades.
- Blocklists for optimization rules and expression pushdown: manually disable specific rules or pushed-down expressions.
1) USE INDEX
An index hint such as USE INDEX forces the optimizer to read a table through the given index. Compare the default plan with the hinted one:
mysql> explain select count(*) from trips where trip_id>1;
+-----------------------------+-----------+-----------+---------------+-----------------------------------+
| id | estRows | task | access object | operator info |
+-----------------------------+-----------+-----------+---------------+-----------------------------------+
| StreamAgg_17 | 1.00 | root | | funcs:count(Column#13)->Column#11 |
| └─TableReader_18 | 1.00 | root | | data:StreamAgg_9 |
| └─StreamAgg_9 | 1.00 | cop[tikv] | | funcs:count(1)->Column#13 |
| └─TableRangeScan_16 | 339999.00 | cop[tikv] | table:trips | range:(1,+inf], keep order:false |
+-----------------------------+-----------+-----------+---------------+-----------------------------------+
4 rows in set (0.01 sec)
mysql> explain select count(*) from trips use index(start_date) where trip_id>1 ;
+------------------------------+-----------+-----------+-------------------------------------------+-----------------------------------+
| id | estRows | task | access object | operator info |
+------------------------------+-----------+-----------+-------------------------------------------+-----------------------------------+
| StreamAgg_20 | 1.00 | root | | funcs:count(Column#16)->Column#11 |
| └─IndexReader_21 | 1.00 | root | | index:StreamAgg_9 |
| └─StreamAgg_9 | 1.00 | cop[tikv] | | funcs:count(1)->Column#16 |
| └─Selection_19 | 339999.00 | cop[tikv] | | gt(bikeshare.trips.trip_id, 1) |
| └─IndexFullScan_18 | 340000.00 | cop[tikv] | table:trips, index:start_date(start_date) | keep order:false |
+------------------------------+-----------+-----------+-------------------------------------------+-----------------------------------+
5 rows in set (0.03 sec)
2) MAX_EXECUTION_TIME(N)
A SELECT statement can include MAX_EXECUTION_TIME(N), which limits its execution time to N milliseconds; beyond that, the server terminates the statement. For example, with a 100 ms timeout:
mysql> SELECT /*+ MAX_EXECUTION_TIME(100) */ count(*) FROM trips;
ERROR 1317 (70100): Query execution was interrupted
In addition, the MAX_EXECUTION_TIME system variable also limits statement execution time. For highly available and latency-sensitive workloads, MAX_EXECUTION_TIME is recommended, so that a bad query plan or a bug cannot drag down the performance, or even the stability, of the whole TiDB cluster.
Execution plan management, also called SPM (SQL Plan Management), is a set of features for deliberately intervening in execution plans, including plan binding, automatic baseline capturing, and automatic baseline evolution. Unlike hints, SPM can influence plan selection without modifying the SQL statement.
1) Plan binding with SQL Binding
SQL Binding is the foundation of SPM. When a poor execution plan appears, SQL Binding can fix it quickly without changing the application's SQL.
- Creating a binding
CREATE [GLOBAL | SESSION] BINDING FOR SelectStmt USING SelectStmt;
This statement binds an execution plan to a SQL statement at GLOBAL or SESSION scope (SESSION by default when no scope is specified). The bound SQL is parameterized and stored in a system table; when an incoming query's parameterized form matches one of the bound SQL statements, the corresponding optimizer hints are applied.
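Parameterization can be sketched with a rough literal-replacing normalizer. TiDB's real normalizer works on the parsed AST, so this regex version is only an approximation for illustration:

```python
import re

def normalize_sql(sql):
    """Rough sketch of SQL parameterization: replace string and numeric
    literals with '?' and normalize case and whitespace, so structurally
    identical statements compare equal."""
    sql = re.sub(r"'[^']*'", "?", sql)            # string literals
    sql = re.sub(r"\b\d+(\.\d+)?\b", "?", sql)    # numeric literals
    return re.sub(r"\s+", " ", sql).strip().lower()

a = normalize_sql("select count(*) from trips where trip_id > 1")
b = normalize_sql("SELECT count(*) FROM trips WHERE trip_id > 42")
print(a == b)  # -> True: both map to the same parameterized form
```

This is why the binding created for trip_id > 1 also fires for trip_id > 42: matching happens on the parameterized form, not the literal text.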
# Create a binding
mysql> create binding for select count(*) from trips where trip_id>1 using select count(*) from trips use index(start_date) where trip_id>1;
Query OK, 0 rows affected (0.11 sec)
- Viewing bindings
mysql> show bindings;
+-------------------------------------------------+------------------------------------------------------------------+------------+--------+-------------------------+-------------------------+---------+-----------------+
| Original_sql | Bind_sql | Default_db | Status | Create_time | Update_time | Charset | Collation |
+-------------------------------------------------+------------------------------------------------------------------+------------+--------+-------------------------+-------------------------+---------+-----------------+
| select count ( ? ) from trips where trip_id > ? | select count(*) from trips use index(start_date) where trip_id>1 | bikeshare | using | 2021-03-15 18:49:43.069 | 2021-03-15 18:49:43.069 | utf8 | utf8_general_ci |
+-------------------------------------------------+------------------------------------------------------------------+------------+--------+-------------------------+-------------------------+---------+-----------------+
- Dropping a binding
mysql> drop binding for select count(*) from trips where trip_id>1;
Query OK, 0 rows affected (0.00 sec)
2) Automatic baseline capturing
Setting tidb_capture_plan_baselines to on (default: off) enables automatic binding capture. The feature relies on Statement Summary, so the Statement Summary switch must be turned on first. Once enabled, TiDB scans the historical SQL statements in the Statement Summary every bind-info-lease (default: 3s) and automatically captures a binding for every statement that has appeared at least twice.
# Enable statement summary
mysql> set tidb_enable_stmt_summary = 1;
Query OK, 0 rows affected (0.00 sec)
# Enable automatic capture
mysql> set tidb_capture_plan_baselines = 1;
Query OK, 0 rows affected (0.01 sec)
mysql> select count(*) from trips where trip_id>1;
+----------+
| count(*) |
+----------+
| 339999 |
+----------+
1 row in set (0.94 sec)
# Run the same query twice in a row and a binding is created for it automatically
mysql> select count(*) from trips where trip_id>1;
+----------+
| count(*) |
+----------+
| 339999 |
+----------+
1 row in set (0.33 sec)
# The automatically created binding now shows up in the global bindings
mysql> show global bindings;
+-------------------------------------------------+---------------------------------------------------------------------------------------------------------------------+------------+--------+-------------------------+-------------------------+---------+-----------+
| Original_sql | Bind_sql | Default_db | Status | Create_time | Update_time | Charset | Collation |
+-------------------------------------------------+---------------------------------------------------------------------------------------------------------------------+------------+--------+-------------------------+-------------------------+---------+-----------+
| select count ( ? ) from trips | SELECT /*+ use_index(@`sel_1` `bikeshare`.`trips` `start_date`), stream_agg(@`sel_1`)*/ COUNT(1) FROM `trips` | bikeshare | using | 2021-03-15 18:09:35.184 | 2021-03-15 18:09:35.184 | | |
| select count ( ? ) from trips where trip_id > ? | SELECT /*+ use_index(@`sel_1` `bikeshare`.`trips` ), stream_agg(@`sel_1`)*/ COUNT(1) FROM `trips` WHERE `trip_id`>1 | bikeshare | using | 2021-03-15 18:09:49.584 | 2021-03-15 18:09:49.584 | | |
| select * from trips limit ? | SELECT /*+ use_index(@`sel_1` `bikeshare`.`trips` )*/ * FROM `trips` LIMIT 1 | bikeshare | using | 2021-03-15 18:09:36.134 | 2021-03-15 18:09:36.134 | | |
+-------------------------------------------------+---------------------------------------------------------------------------------------------------------------------+------------+--------+-------------------------+-------------------------+---------+-----------+
3 rows in set (0.00 sec)
3) Automatic baseline evolution
After data changes, a previously bound execution plan may no longer be a good one. To handle this, TiDB introduced Baseline Evolution, which automatically improves bound execution plans; it also dampens, to some extent, the plan jitter caused by changes in statistics.
Automatic evolution is enabled by setting global tidb_evolve_plan_baselines = 1. If the plan the optimizer considers optimal is not among the bound plans, it is recorded as a plan awaiting verification. Every bind-info-lease (default: 3s), one pending plan is selected and its actual run time is compared with that of the cheapest currently bound plan; if the pending plan runs faster, it is marked as a usable binding.
To limit the impact of evolution on the cluster in practice, tidb_evolve_plan_task_max_time caps how long each candidate plan may run (default: 10 minutes), while tidb_evolve_plan_task_start_time and tidb_evolve_plan_task_end_time restrict the time window in which evolution tasks run (default: the whole day).
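One evolution round can be sketched as follows. The plan records, cost values and timing function here are hypothetical stand-ins, not TiDB's internals:

```python
def evolve_baseline(bound_plans, candidate, run):
    """Sketch of one evolution round: run the candidate plan against the
    cheapest currently-bound plan, and accept the candidate only if it is
    actually faster. `run` returns a plan's measured execution time."""
    best_bound = min(bound_plans, key=lambda p: p["cost"])
    if run(candidate) < run(best_bound):
        candidate["status"] = "using"   # mark the candidate as a usable binding
        bound_plans.append(candidate)
    return bound_plans

# Hypothetical timings: the candidate runs in 80 ms vs the bound plan's 120 ms.
timings = {"idx_scan": 0.12, "full_scan": 0.08}
run = lambda p: timings[p["name"]]
plans = [{"name": "idx_scan", "cost": 10.0, "status": "using"}]
plans = evolve_baseline(plans, {"name": "full_scan", "cost": 12.0}, run)
print([p["name"] for p in plans])  # -> ['idx_scan', 'full_scan']
```

The key design point mirrored here is that acceptance is based on measured run time, not estimated cost, which is exactly what protects against cost-model mistakes.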
References:
- https://book.tidb.io/session3/chapter1/optimizer-summary.html
- https://docs.pingcap.com/zh/tidb/stable/sql-logical-optimization
- https://docs.pingcap.com/zh/tidb/stable/sql-physical-optimization