A Comparison of TiDB 7.1 Resource Control and OceanBase 4.0 Multi-Tenancy


1. Background

TiDB and OceanBase are both excellent domestic distributed databases. Our company deployed its first production TiDB cluster in 2021, serving several different types of business. As the business iterated, the biggest problem with that first cluster was that different workloads on the same cluster interfered with each other, so we split it up by business and access load. After the split, however, we faced another problem: the resource utilization of the TiDB clusters. We had therefore long been waiting for TiDB to implement resource isolation; by now we run 10 TiDB clusters. With the arrival of TiDB 7.1, and with the company looking to cut costs, making sensible use of the cluster resource control feature can reduce the number of clusters and lower operational complexity and management cost. This article compares the use of TiDB 7.1's resource control feature with OceanBase 4.0's multi-tenancy, aiming for an objective view of how the two differ at the level of resource limiting.


The author's knowledge is limited; if this article contains technical or descriptive errors, please point them out. Thank you!

2. Using TiDB 7.1 Resource Control

2.1 TiDB 7.1 Test Environment

The test environment uses 7 hosts (3 hosts running PD and TiDB co-deployed, 3 TiKV hosts, plus 1 monitoring node), each configured with 16 cores, 16 GB of RAM, and 300 GB of disk. The TiDB cluster topology is as follows:

$ tiup cluster display tidb-test
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.12.2/tiup-cluster display tidb-test
Cluster type:       tidb
Cluster name:       tidb-test
Cluster version:    v7.1.0
Deploy user:        tidb
SSH type:           builtin
Dashboard URL:      http://10.xx.xx.xx:2379/dashboard
Grafana URL:        http://10.xx.xx.xx:3000
ID                 Role          Host          Ports        OS/Arch       Status   Data Dir                           Deploy Dir
--                 ----          ----          -----        -------       ------   --------                           ----------
10.xx.xx.xx:9093   alertmanager  10.22.128.17  9093/9094    linux/x86_64  Up       /data/tidb-data/alertmanager-9093  /data/tidb-deploy/alertmanager-9093
10.xx.xx.xx:3000   grafana       10.22.128.17  3000         linux/x86_64  Up       -                                  /data/tidb-deploy/grafana-3000
10.xx.xx.xx:2379   pd            10.22.128.29  2379/2380    linux/x86_64  Up       /data/tidb-data/pd-2379            /data/tidb-deploy/pd-2379
10.xx.xx.xx:2379   pd            10.22.128.57  2379/2380    linux/x86_64  Up|L|UI  /data/tidb-data/pd-2379            /data/tidb-deploy/pd-2379
10.xx.xx.xx:2379   pd            10.22.128.58  2379/2380    linux/x86_64  Up       /data/tidb-data/pd-2379            /data/tidb-deploy/pd-2379
10.xx.xx.xx:9090   prometheus    10.22.128.17  9090/12020   linux/x86_64  Up       /data/tidb-data/prometheus-9090    /data/tidb-deploy/prometheus-9090
10.xx.xx.xx:4000   tidb          10.22.128.29  4000/10080   linux/x86_64  Up       -                                  /data/tidb-deploy/tidb-4000
10.xx.xx.xx:4000   tidb          10.22.128.57  4000/10080   linux/x86_64  Up       -                                  /data/tidb-deploy/tidb-4000
10.xx.xx.xx:4000   tidb          10.22.128.58  4000/10080   linux/x86_64  Up       -                                  /data/tidb-deploy/tidb-4000
10.xx.xx.xx:20160  tikv          10.22.128.18  20160/20180  linux/x86_64  Up       /data/tidb-data/tikv-20160         /data/tidb-deploy/tikv-20160
10.xx.xx.xx:20160  tikv          10.22.128.19  20160/20180  linux/x86_64  Up       /data/tidb-data/tikv-20160         /data/tidb-deploy/tikv-20160
10.xx.xx.xx:20160  tikv          10.22.128.20  20160/20180  linux/x86_64  Up       /data/tidb-data/tikv-20160         /data/tidb-deploy/tikv-20160
Total nodes: 12

The resource control feature introduces two new global switches, both of which TiDB 7.1 enables by default:

TiDB: the global variable tidb_enable_resource_control controls whether resource-group flow control is enabled.

TiKV: the configuration item resource-control.enabled controls whether request scheduling is based on resource-group quotas.

mysql root@127.0.0.1:(none) 14:12:41> SHOW VARIABLES LIKE "tidb_enable_resource_control";
+------------------------------+-------+
| Variable_name                | Value |
+------------------------------+-------+
| tidb_enable_resource_control | ON    |
+------------------------------+-------+
1 row in set (0.00 sec)

mysql root@127.0.0.1:(none) 14:14:18> SHOW CONFIG WHERE NAME LIKE "resource-control.enabled";
+------+-------------------+--------------------------+-------+
| Type | Instance          | Name                     | Value |
+------+-------------------+--------------------------+-------+
| tikv | 10.xx.xx.xx:20160 | resource-control.enabled | true  |
| tikv | 10.xx.xx.xx:20160 | resource-control.enabled | true  |
| tikv | 10.xx.xx.xx:20160 | resource-control.enabled | true  |
+------+-------------------+--------------------------+-------+
3 rows in set (0.02 sec)

mysql root@127.0.0.1:(none) 14:24:30> select * from information_schema.resource_groups;
+---------+------------+----------+-----------+
| NAME    | RU_PER_SEC | PRIORITY | BURSTABLE |
+---------+------------+----------+-----------+
| default | UNLIMITED  | MEDIUM   | YES       |
+---------+------------+----------+-----------+
1 row in set (0.01 sec)
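For completeness, here is how these switches can be toggled; a minimal sketch, and whether resource-control.enabled accepts an online change via SET CONFIG is an assumption here (if not, it must be changed in the TiKV configuration file and rolled out with tiup):

-- Turn resource-group flow control off/on on the TiDB side (cluster-wide).
SET GLOBAL tidb_enable_resource_control = OFF;
SET GLOBAL tidb_enable_resource_control = ON;

-- Assumption: the TiKV-side scheduler switch can be changed online via SET CONFIG.
SET CONFIG tikv `resource-control.enabled` = true;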

2.2 Resource Control Configuration

1. Create resource groups, and allow applications in these groups to consume resources beyond their quota (BURSTABLE).

CREATE RESOURCE GROUP IF NOT EXISTS rg1 RU_PER_SEC = 500 BURSTABLE;
CREATE RESOURCE GROUP IF NOT EXISTS rg2 RU_PER_SEC = 5000 BURSTABLE;
CREATE RESOURCE GROUP IF NOT EXISTS rg3 RU_PER_SEC = 10000 BURSTABLE;
CREATE RESOURCE GROUP IF NOT EXISTS rg4 RU_PER_SEC = 20000 BURSTABLE;

mysql root@127.0.0.1:sbtest 15:56:42> select * from information_schema.resource_groups;
+---------+------------+----------+-----------+
| NAME    | RU_PER_SEC | PRIORITY | BURSTABLE |
+---------+------------+----------+-----------+
| default | UNLIMITED  | MEDIUM   | YES       |
| rg1     | 500        | MEDIUM   | YES       |
| rg2     | 5000       | MEDIUM   | YES       |
| rg3     | 10000      | MEDIUM   | YES       |
| rg4     | 20000      | MEDIUM   | YES       |
+---------+------------+----------+-----------+
5 rows in set (0.00 sec)

2. Create users bound to the resource groups

CREATE USER dev_usr1@'%' IDENTIFIED BY 'tidb' RESOURCE GROUP rg1;
GRANT all on *.* to dev_usr1@'%';
CREATE USER dev_usr2@'%' IDENTIFIED BY 'tidb' RESOURCE GROUP rg2;
GRANT all on *.* to dev_usr2@'%';
CREATE USER dev_usr3@'%' IDENTIFIED BY 'tidb' RESOURCE GROUP rg3;
GRANT all on *.* to dev_usr3@'%';
CREATE USER dev_usr4@'%' IDENTIFIED BY 'tidb' RESOURCE GROUP rg4;
GRANT all on *.* to dev_usr4@'%';

mysql root@127.0.0.1:sbtest 15:56:40> select user,host,User_attributes from mysql.user;
+----------+------+---------------------------+
| user     | host | User_attributes           |
+----------+------+---------------------------+
| root     | %    | NULL                      |
| dev_usr1 | %    | {"resource_group": "rg1"} |
| dev_usr2 | %    | {"resource_group": "rg2"} |
| dev_usr3 | %    | {"resource_group": "rg3"} |
| dev_usr4 | %    | {"resource_group": "rg4"} |
+----------+------+---------------------------+
5 rows in set (0.00 sec)
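If a user later needs different limits, the binding can be changed in place rather than recreating the user; a small sketch (as noted in the summary later, the new binding only applies to sessions opened after the change):

-- Rebind dev_usr1 from rg1 to rg2; existing connections keep their old binding.
ALTER USER dev_usr1@'%' RESOURCE GROUP rg2;

-- Confirm the new binding.
select user, host, User_attributes from mysql.user where user = 'dev_usr1';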

3. Run a stress test with sysbench

mysql-host=10.10.xx.xx.xx
mysql-port=4000
mysql-user=dev_usr2    # change to the user bound to the resource group under test
mysql-password=xxxxx
mysql-db=sbtest
time=300
threads=16
report-interval=10
db-driver=mysql

Test command:

# ./sysbench --config-file=config oltp_read_write --tables=10 --table-size=10000000 run

Test results:

16 threads; resource groups allowed to burst (BURSTABLE):

| RU_PER_SEC       | unlimited | rg1=500  | rg2=5000 | rg3=10000 | rg4=20000 |
|------------------|-----------|----------|----------|-----------|-----------|
| QPS              | 15852.04  | 16002.43 | 15359.68 | 15289.79  | 15248.92  |
| TPS              | 792.6     | 800.12   | 767.98   | 764.49    | 762.45    |
| 95% latency (ms) | 25.28     | 24.83    | 25.74    | 25.74     | 25.74     |

The results above show that when resources are sufficient and the resource groups are allowed to burst, the value of RU_PER_SEC makes little difference: every group achieves roughly the same throughput.

4. Modify the resource groups to disallow bursting

ALTER RESOURCE GROUP rg1 RU_PER_SEC = 500 PRIORITY = MEDIUM;
ALTER RESOURCE GROUP rg2 RU_PER_SEC = 5000 PRIORITY = MEDIUM;
ALTER RESOURCE GROUP rg3 RU_PER_SEC = 10000 PRIORITY = MEDIUM;
ALTER RESOURCE GROUP rg4 RU_PER_SEC = 20000 PRIORITY = MEDIUM;

mysql root@127.0.0.1:sbtest 16:22:53> select * from information_schema.resource_groups;
+---------+------------+----------+-----------+
| NAME    | RU_PER_SEC | PRIORITY | BURSTABLE |
+---------+------------+----------+-----------+
| default | UNLIMITED  | MEDIUM   | YES       |
| rg1     | 500        | MEDIUM   | NO        |
| rg2     | 5000       | MEDIUM   | NO        |
| rg3     | 10000      | MEDIUM   | NO        |
| rg4     | 20000      | MEDIUM   | NO        |
+---------+------------+----------+-----------+
5 rows in set (0.00 sec)

Test results:

16 threads; bursting disallowed:

| RU_PER_SEC       | unlimited | rg1=500 | rg2=5000 | rg3=10000 | rg4=20000 |
|------------------|-----------|---------|----------|-----------|-----------|
| QPS              | 15852.04  | 423.13  | 4257.37  | 8509.22   | 15278.77  |
| TPS              | 792.6     | 21.16   | 212.87   | 425.46    | 763.94    |
| 95% latency (ms) | 25.28     | 802.05  | 94.1     | 49.21     | 25.74     |

The results above show that with sufficient system resources but bursting disallowed, QPS stays within what the configured RU allows: the resource limit is enforced.

5. Bind individual statements to a resource group

Adding the RESOURCE_GROUP(resource_group_name) hint to a SQL statement binds that statement to the specified resource group. The hint supports SELECT, INSERT, UPDATE, and DELETE statements.

(1) Simulate a memory-hungry SQL statement (without binding a resource group)

mysql> select * from sbtest2 join sbtest3;
ERROR 1105 (HY000): probeWorker[2] meets error: Your query has been cancelled due to exceeding the allowed memory limit for a single SQL query. Please try narrowing your query scope or increase the tidb_mem_quota_query limit and try again.[conn=697884219405239279]

The SQL above exceeded the tidb_mem_quota_query limit and was terminated.
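As an aside, that memory cap comes from the tidb_mem_quota_query variable rather than from any resource group; a small sketch of inspecting and raising it for one session (the value is in bytes; 2147483648 = 2 GiB):

-- Check the current per-query memory quota.
SHOW VARIABLES LIKE 'tidb_mem_quota_query';

-- Raise it for the current session only.
SET SESSION tidb_mem_quota_query = 2147483648;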

(2) Create a resource group rg5 with a tiny RU quota

mysql> CREATE RESOURCE GROUP IF NOT EXISTS rg5 RU_PER_SEC = 5;
Query OK, 0 rows affected (0.53 sec)

(3) Execute the same SQL statement again, this time bound to rg5

mysql> select /*+ RESOURCE_GROUP(rg5) */ * from sbtest2 join sbtest3;
ERROR 8252 (HY000): Exceeded resource group quota limitation

Note that the error differs from the one in (1): this time the statement was terminated by the resource group's quota, which achieves the goal of temporarily capping the resource consumption of a particular SQL statement.

The tests above show that resource control, by mapping application users, sessions, and SQL statements to resource groups, can effectively govern resource consumption, maximize the use of system resources and thus raise overall utilization, and provide good isolation so that different businesses do not affect one another.
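Session-level binding, mentioned above alongside users and SQL statements, looks like the following; a minimal sketch, assuming the SET RESOURCE GROUP statement is available in this version:

-- Bind the current session to rg1; subsequent statements in this session
-- consume rg1's quota.
SET RESOURCE GROUP rg1;

-- Revert the session to the default group.
SET RESOURCE GROUP `default`;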

However, TiDB does not check capacity when a resource group is created. As long as the system has enough free resources, TiDB satisfies each group's quota. When demand exceeds system capacity, TiDB first satisfies requests from higher-priority (PRIORITY) groups; if requests of the same priority cannot all be satisfied, it allocates resources proportionally to their quotas (RU_PER_SEC).

As shown in the screenshot at the beginning of this article, the cluster's estimated RU capacity is 44659, yet resource groups whose RU exceeds 44659 can still be created successfully:

mysql> CREATE RESOURCE GROUP IF NOT EXISTS rg6 RU_PER_SEC = 50000;
Query OK, 0 rows affected (0.53 sec)
mysql> CREATE RESOURCE GROUP IF NOT EXISTS rg7 RU_PER_SEC = 100000;
Query OK, 0 rows affected (0.53 sec)
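For reference, the estimate the screenshot refers to comes from TiDB's capacity-estimation statement; a minimal sketch (CALIBRATE RESOURCE is experimental in this version, and the WORKLOAD presets include TPCC and OLTP_READ_WRITE):

-- Estimate RU capacity from the hardware specification (defaults to the TPCC model).
CALIBRATE RESOURCE;

-- Estimate against a specific workload profile instead.
CALIBRATE RESOURCE WORKLOAD OLTP_READ_WRITE;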

In my personal view, this lack of a capacity check makes it harder to divide resources accurately among different businesses; on this particular point, OceanBase does better.

3. Using OceanBase Multi-Tenancy

3.1 OceanBase Test Environment

The test environment uses 6 hosts (2 OBProxy, 3 OBServer, plus 1 monitoring node), with the same configuration as the TiDB hosts: 16C 16G 300G. The OceanBase cluster topology is as follows:

$ obd cluster display obtest
Get local repositories and plugins ok
Open ssh connection ok
Cluster status check ok
Connect to observer ok
Wait for observer init ok
+------------------------------------------------+
|                    observer                    |
+--------------+---------+------+-------+--------+
| ip           | version | port | zone  | status |
+--------------+---------+------+-------+--------+
| 10.xx.xx.xx  | 4.0.0.0 | 2881 | zone1 | ACTIVE |
| 10.xx.xx.xx  | 4.0.0.0 | 2881 | zone2 | ACTIVE |
| 10.xx.xx.xx  | 4.0.0.0 | 2881 | zone3 | ACTIVE |
+--------------+---------+------+-------+--------+
obclient -h10.22.124.74 -P2881 -uroot -poceanbase -Doceanbase -A

Connect to obproxy ok
+------------------------------------------------+
|                     obproxy                    |
+--------------+------+-----------------+--------+
| ip           | port | prometheus_port | status |
+--------------+------+-----------------+--------+
| 10.xx.xx.xx  | 2883 | 2884            | active |
+--------------+------+-----------------+--------+
obclient -h10.22.124.73 -P2883 -uroot -poceanbase -Doceanbase -A

+--------------------------------------------------+
|                      obagent                     |
+--------------+-------------+------------+--------+
| ip           | server_port | pprof_port | status |
+--------------+-------------+------------+--------+
| 10.xx.xx.xx  | 8088        | 8089       | active |
| 10.xx.xx.xx  | 8088        | 8089       | active |
| 10.xx.xx.xx  | 8088        | 8089       | active |
+--------------+-------------+------------+--------+
Connect to Prometheus ok
+-----------------------------------------------------+
|                      prometheus                     |
+--------------------------+------+----------+--------+
| url                      | user | password | status |
+--------------------------+------+----------+--------+
| http://10.xx.xx.xx:9090  |      |          | active |
+--------------------------+------+----------+--------+
Connect to grafana ok
+--------------------------------------------------------------------+
|                               grafana                              |
+--------------------------------------+-------+-----------+--------+
| url                                  | user  | password  | status |
+--------------------------------------+-------+-----------+--------+
| http://10.xx.xx.xx:3000/d/oceanbase  | admin | xxxxxxxxx | active |
+--------------------------------------+-------+-----------+--------+

3.2 OceanBase Multi-Tenant Configuration

OceanBase was designed for multi-tenancy from the start, pooling instance resources at the cluster level. In OceanBase, each tenant is an instance (comparable to a MySQL instance). Data, privileges, and resources are isolated between tenants; each tenant has its own access port and its own CPU and memory resources. A large cluster can host many tenants, with different businesses using different tenants; because resources are isolated between tenants, their workloads do not affect each other. OceanBase automatically creates a sys tenant, which handles part of the database's management work, can access the system metadata tables, and has a certain amount of resources reserved for it. In terms of resource usage, each tenant "exclusively owns" its resource quota. Overall, a tenant is both a container for database objects and a container for resources (CPU, memory, IO, and so on).

1. As the sys tenant's root user (root@sys), query the resource usage of each server in the cluster

obclient [oceanbase]> SELECT * FROM oceanbase.GV$OB_SERVERS;
+--------------+----------+-------+----------+--------------+------------------+--------------+------------------+--------------+--------------+-------------------+-------------------+-----------------+--------------------+------------------+-------------------------+--------------+-------------------------+-----------------------+
| SVR_IP       | SVR_PORT | ZONE  | SQL_PORT | CPU_CAPACITY | CPU_CAPACITY_MAX | CPU_ASSIGNED | CPU_ASSIGNED_MAX | MEM_CAPACITY | MEM_ASSIGNED | LOG_DISK_CAPACITY | LOG_DISK_ASSIGNED | LOG_DISK_IN_USE | DATA_DISK_CAPACITY | DATA_DISK_IN_USE | DATA_DISK_HEALTH_STATUS | MEMORY_LIMIT | DATA_DISK_ABNORMAL_TIME | SSL_CERT_EXPIRED_TIME |
+--------------+----------+-------+----------+--------------+------------------+--------------+------------------+--------------+--------------+-------------------+-------------------+-----------------+--------------------+------------------+-------------------------+--------------+-------------------------+-----------------------+
| 10.xx.xx.xx  | 2882     | zone3 | 2881     | 16           | 16               | 3            | 3                | 6442450944   | 6442450944   | 39728447488       | 15032385536       | 2080374784      | 268435456000       | 1356857344       | NORMAL                  | 12884901888  | NULL                    | NULL                  |
| 10.xx.xx.xx  | 2882     | zone1 | 2881     | 16           | 16               | 3            | 3                | 6442450944   | 6442450944   | 39728447488       | 15032385536       | 2080374784      | 268435456000       | 1323302912       | NORMAL                  | 12884901888  | NULL                    | NULL                  |
| 10.xx.xx.xx  | 2882     | zone2 | 2881     | 16           | 16               | 3            | 3                | 6442450944   | 6442450944   | 39728447488       | 15032385536       | 2080374784      | 268435456000       | 1356857344       | NORMAL                  | 12884901888  | NULL                    | NULL                  |
+--------------+----------+-------+----------+--------------+------------------+--------------+------------------+--------------+--------------+-------------------+-------------------+-----------------+--------------------+------------------+-------------------------+--------------+-------------------------+-----------------------+
3 rows in set (0.006 sec)

2. Create a resource unit

In OceanBase, a resource unit is the smallest logical unit of CPU and memory that a tenant uses, and also the basic unit of cluster scaling and load balancing. When nodes go online or offline, or the cluster is scaled out or in, the distribution of resource units across nodes is dynamically adjusted to balance resource usage.

obclient [oceanbase]> create resource unit S1 max_cpu=2, min_cpu=2, MEMORY_SIZE='4G', max_iops=10000, min_iops=10000;
Query OK, 0 rows affected (0.010 sec)

The S1 resource unit uses 2 CPU cores and 4 GB of memory, with IOPS set to 10000.

3. Create a resource pool

obclient [oceanbase]> create resource pool P1 unit='S1', unit_num=1;
Query OK, 0 rows affected (0.027 sec)

Where:

UNIT_NUM is the number of resource units the pool places in each zone of the cluster. It must be less than or equal to the number of OBServers in a zone.

ZONE_LIST is the pool's zone list, indicating in which zones the pool's resources are used (see the sketch after this list).

If no server in a zone has enough spare resources to host a resource unit, creating the resource pool fails.
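For example, a pool that explicitly pins one S1-sized unit into each of this cluster's three zones might look like the following; a sketch, with the pool name P2 being hypothetical:

-- One S1-sized unit per zone, across all three zones of this cluster.
CREATE RESOURCE POOL P2 UNIT='S1', UNIT_NUM=1, ZONE_LIST=('zone1','zone2','zone3');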

4. Create a tenant

CREATE TENANT IF NOT EXISTS test_tenant CHARSET='utf8mb4', replica_num=3, ZONE_LIST=('zone1','zone2','zone3'), PRIMARY_ZONE='RANDOM', RESOURCE_POOL_LIST=('P1') SET ob_tcp_invited_nodes='%';

5. View information about the created tenants

obclient [oceanbase]> select * from DBA_OB_TENANTS;
+-----------+-------------+-------------+----------------------------+----------------------------+--------------+---------------------------------------------+-------------------+--------------------+--------+---------------+--------+
| TENANT_ID | TENANT_NAME | TENANT_TYPE | CREATE_TIME                | MODIFY_TIME                | PRIMARY_ZONE | LOCALITY                                    | PREVIOUS_LOCALITY | COMPATIBILITY_MODE | STATUS | IN_RECYCLEBIN | LOCKED |
+-----------+-------------+-------------+----------------------------+----------------------------+--------------+---------------------------------------------+-------------------+--------------------+--------+---------------+--------+
|         1 | sys         | SYS         | 2023-03-07 15:51:43.510628 | 2023-03-07 15:51:43.510628 | RANDOM       | FULL{1}@zone1, FULL{1}@zone2, FULL{1}@zone3 | NULL              | MYSQL              | NORMAL | NO            | NO     |
|      1001 | META$1002   | META        | 2023-03-09 15:56:07.263000 | 2023-03-09 15:56:27.544858 | RANDOM       | FULL{1}@zone1, FULL{1}@zone2, FULL{1}@zone3 | NULL              | MYSQL              | NORMAL | NO            | NO     |
|      1002 | test_tenant | USER        | 2023-03-09 15:56:07.271520 | 2023-03-09 15:56:27.585238 | RANDOM       | FULL{1}@zone1, FULL{1}@zone2, FULL{1}@zone3 | NULL              | MYSQL              | NORMAL | NO            | NO     |
+-----------+-------------+-------------+----------------------------+----------------------------+--------------+---------------------------------------------+-------------------+--------------------+--------+---------------+--------+
3 rows in set (0.003 sec)

6. Log in as the newly created tenant test_tenant

$ obclient -h10.22.124.73 -P2883 -uroot@test_tenant#obcluster -p -A
Enter password:
Welcome to the OceanBase.  Commands end with ; or \g.
Your OceanBase connection id is 37
Server version: OceanBase_CE 4.0.0.0 (r103000022023011215-05bbad0279302d7274e1b5ab79323a2c915c1981) (Built Jan 12 2023 15:28:27)

Copyright (c) 2000, 2018, OceanBase and/or its affiliates. All rights reserved.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

obclient [(none)]>

7. Adjust the resource unit configuration

obclient [db1]> ALTER resource unit S1 max_cpu=5, min_cpu=5, memory_size='10G';
ERROR 4624 (HY000): zone zone1 server "10.xx.xx.xx:2882" MEMORY_SIZE resource is not enough to hold a new unit

obclient [(none)]> ALTER resource unit S1 max_cpu=5, min_cpu=5, memory_size='2G';
Query OK, 0 rows affected (0.010 sec)

obclient [(none)]> select * from oceanbase.DBA_OB_UNIT_CONFIGS;
+----------------+-----------------+----------------------------+----------------------------+---------+---------+-------------+---------------+----------+----------+-------------+
| UNIT_CONFIG_ID | NAME            | CREATE_TIME                | MODIFY_TIME                | MAX_CPU | MIN_CPU | MEMORY_SIZE | LOG_DISK_SIZE | MAX_IOPS | MIN_IOPS | IOPS_WEIGHT |
+----------------+-----------------+----------------------------+----------------------------+---------+---------+-------------+---------------+----------+----------+-------------+
|              1 | sys_unit_config | 2023-03-07 15:51:43.495994 | 2023-03-07 15:51:43.495994 |       1 |       1 |  2147483648 |    2147483648 |    10000 |    10000 |           1 |
|           1001 | S1              | 2023-03-09 15:54:50.703465 | 2023-03-13 18:59:08.668397 |       5 |       5 |  2147483648 |   12884901888 |    10000 |    10000 |           0 |
+----------------+-----------------+----------------------------+----------------------------+---------+---------+-------------+---------------+----------+----------+-------------+
2 rows in set (0.004 sec)

As shown above, adjustments to a tenant's resources take effect online and immediately. After the change succeeds, S1 has 5 CPU cores and 2 GB of memory. Through in-kernel virtualization, OceanBase applies CPU and memory changes to a tenant as soon as the configuration is altered, with no data migration or switchover and no impact on the business.
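Besides altering a unit config in place, a tenant's pool can also be switched to a different unit spec, which likewise takes effect online; a hedged sketch, with the larger unit config S2 being hypothetical:

-- A hypothetical larger unit config.
CREATE RESOURCE UNIT S2 MAX_CPU=4, MIN_CPU=4, MEMORY_SIZE='4G';

-- Point pool P1 at the new spec; the tenant picks it up online.
ALTER RESOURCE POOL P1 UNIT='S2';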

8. In OCP, open the Tenants page to view per-tenant performance monitoring.
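If OCP is not at hand, a rough equivalent can also be pulled from SQL in the sys tenant; a sketch assuming the GV$OB_UNITS view of OB 4.0 and these column names:

-- Per-unit CPU/memory quotas and placement, viewed from the sys tenant.
SELECT SVR_IP, TENANT_ID, ZONE, MIN_CPU, MAX_CPU, MEMORY_SIZE
FROM oceanbase.GV$OB_UNITS;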

4. Summary

1. TiDB's resource control estimates cluster RU capacity from the current cluster configuration combined with empirical values observed for different workloads. The estimated capacity changes with the cluster topology and the hardware/software configuration of each component, and the RU a cluster can actually consume also depends on the real workload, so the hardware-based estimate is only a reference and may deviate from the true maximum. For production environments, configure based on actual load. OceanBase, by contrast, carves resource units out of the actual CPU, memory, and IO of each OBServer node, which gives a more accurate estimate of business capacity in production.

2. With TiDB's resource control, a database user is bound to exactly one resource group, and the application needs no change in how it connects to the database. With OceanBase, the application must specify the tenant when connecting, e.g. dev_usr@test_tenant.

3. When resources are idle, TiDB resource groups can be allowed to burst beyond their quota. OceanBase has no burst option, but a resource unit's configuration can be adjusted at any time, taking effect immediately.

4. TiDB does not check capacity when creating a resource group, which is probably related to how its resource control is implemented underneath. OceanBase checks the allocation of available resources when a resource unit is created or modified, and reports an error such as the following if they are insufficient:

ERROR 4624 (HY000): zone zone1 server "10.xx.xx.xx:2882" MEMORY_SIZE resource is not enough to hold a new unit

5. After a TiDB resource group binding is adjusted, the change only takes effect for the user's newly created sessions; for long-lived connections it does not take effect. After OceanBase modifies a resource unit, in-kernel virtualization makes the change effective immediately, with no business restart or switchover, transparently to the application. On this point, OceanBase is better than TiDB.

6. TiDB resource control supports binding the current session and individual SQL statements to a resource group to enforce limits; OceanBase does not support session- or SQL-level resource binding. On this point, TiDB is better than OceanBase, and for DBA operations, being able to bind a session or a SQL statement to a resource group is genuinely convenient.

In short, both TiDB and OceanBase can help users utilize resources efficiently, optimize costs while guaranteeing availability and performance, and scale elastically on demand. I hope TiDB will eventually make resource group changes take effect immediately for already-created users as well, and that its load-based capacity estimation will become more accurate and convenient.

