TiDB Migration and Upgrade: A Case Study (TiDB v4.0.11 → v6.5.1)

Community contribution · 2023-11-29

Environment

Cluster version:    v4.0.11


Deployment layout

Node1: 1 tidb + 1 tikv + 1 pd
Node2: 1 tidb + 1 tikv + 1 pd + monitoring components
Node3: 1 tidb + 1 tikv + 1 pd
Node4: 4 tikv
Node5: 4 tikv

Data size

Main tasks

Upgrade the cluster to v6.5.1

Replace the aging machines Node1 through Node3 and optimize the deployment layout

Upgrade testing

Does the configuration file need changes?

Install v4.0.11 using the production configuration file trimmed down to 3 tidb + 3 pd + 3 tikv, then upgrade to v6.5.1.

This verifies whether anything in the configuration file needs to be changed for the new version.
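The rehearsal can be sketched as a short tiup sequence. This is a sketch only: the topology file name `topology-test.yaml` is hypothetical, and the commands are printed for review rather than executed here.

```shell
# Sketch of the config-compatibility rehearsal; topology-test.yaml is a
# hypothetical name for the trimmed 3 tidb + 3 pd + 3 tikv topology file.
# Commands are printed for review rather than executed.
steps="tiup cluster deploy tidb-test v4.0.11 ./topology-test.yaml --user root -p
tiup cluster start tidb-test
tiup cluster upgrade tidb-test v6.5.1
tiup cluster display tidb-test"
printf '%s\n' "$steps"
```

Running `display` at the end confirms every component reports v6.5.1 after the rehearsal upgrade.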

Monitoring migration test

We had never migrated the monitoring components on a production system, so to keep the process safe and controllable, we rehearsed the monitoring migration in the test environment.

Key commands:

tiup cluster scale-in tidb-test -N XXX.XXX.XXX.XXX:9093,XXX.XXX.XXX.XXX:3000,XXX.XXX.XXX.XXX:9090
tiup cluster check ./single-monitor.yaml --user root -p    # the check syntax differs between tiup versions
tiup cluster scale-out tidb-test single-monitor.yaml --user root -p

Problem encountered: some panels showed no data after the migration

Display before migration:

Display after migration:

Cause and fix: in the test environment, the clock on the server hosting the new monitoring components was about 5 minutes behind. Once time was synchronized, the display returned to normal.
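This class of problem can be caught before migrating by comparing epoch timestamps between the old and new monitor hosts. A minimal sketch: the helper below computes the absolute skew, and the host-collection commands in the comment are the assumed way to feed it real values.

```shell
# Absolute clock skew in seconds between two epoch timestamps.
# In practice, collect them with: date +%s (locally) and
#   ssh <new-monitor-host> date +%s   (host name is a placeholder)
skew_seconds() {
  a=$1; b=$2
  echo $(( a > b ? a - b : b - a ))
}

# The roughly 5-minute (300 s) drift seen in this case:
skew_seconds 1700000300 1700000000
```

Anything beyond a few seconds of skew is worth fixing with NTP before scaling out the new Prometheus/Grafana nodes.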

Upgrade test with data

Following the standard provided by the application team, we imported a subset of production data, pointed the application at the test environment, ran an upgrade, and watched for anything that did not adapt well.

Production machine resource adjustment

Migrating the TiKV servers via scale-out/scale-in

Key commands:

tiup cluster check ./scale-out20230301.yaml --user root -p
tiup cluster scale-out <cluster-name> scale-out20230301.yaml --user root -p
tiup cluster scale-in <cluster-name> --node XXX.XXX.XXX.60:20160,XXX.XXX.XXX.61:20160,XXX.XXX.XXX.62:20160

Monitoring changes during the decommission:

Problem encountered: `tiup cluster display` showed no Tombstone stores, but the monitoring still reported Tombstone stores.

Fix:

curl -X DELETE {pd_leader_ip}:2379/pd/api/v1/stores/remove-tombstone
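To confirm the cleanup worked, you can query the PD stores API and count how many stores still report Tombstone. A sketch using a plain grep; the sample response below is made up, and in practice you would pipe `curl -s http://{pd_leader_ip}:2379/pd/api/v1/stores` into the function instead.

```shell
# Count stores still reporting Tombstone in a PD /pd/api/v1/stores response.
# In practice: curl -s http://{pd_leader_ip}:2379/pd/api/v1/stores | count_tombstones
count_tombstones() {
  grep -o '"state_name": *"Tombstone"' | wc -l
}

# Made-up sample response with one Tombstone store and one Up store:
sample='{"stores":[{"store":{"state_name": "Tombstone"}},{"store":{"state_name": "Up"}}]}'
printf '%s' "$sample" | count_tombstones
```

A count of zero after the `remove-tombstone` call means the monitoring panels should clear on the next scrape.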

Migrating the TiDB servers via scale-out/scale-in

Since tidb is a stateless component, we scaled out and in directly.

Key commands:

tiup cluster check ./tidb-scale-out2023031701.yaml --user root -p
tiup cluster scale-out <cluster-name> tidb-scale-out2023031701.yaml --user root -p
tiup cluster scale-in <cluster-name> --node XXX.XXX.XXX.XXX:3306

Migrating the PD servers via scale-out/scale-in

Caution from the official docs on PD migration:

Key commands:

tiup cluster check ./pd-scale-out2023031601.yaml --user root -p
tiup cluster scale-out <cluster-name> pd-scale-out2023031601.yaml --user root -p
tiup cluster scale-in <cluster-name> --node XXX.XXX.XXX.60:2379

Scaling in the original leader:

Here I moved the leader to one of the new machines via a reload. Try a reload first; if the leader does not move automatically, transfer it manually.
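If the leader does not move on its own, pd-ctl (invoked through tiup) can transfer it explicitly. A sketch, printed for review rather than executed; the PD address and member name below are placeholders, so take the real member names from the `member` output first.

```shell
# Manual PD leader transfer via pd-ctl; the address and member name are
# placeholders - check `member leader show` / `member` output for real values.
pd_addr="http://XXX.XXX.XXX.97:2379"
new_leader="pd-XXX.XXX.XXX.97-2379"   # hypothetical member name
cmds="tiup ctl:v6.5.1 pd -u $pd_addr member leader show
tiup ctl:v6.5.1 pd -u $pd_addr member leader transfer $new_leader"
printf '%s\n' "$cmds"
```

Transfer the leader before scaling in the old PD node so the scale-in never has to evict an active leader.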

Monitoring component migration

Same steps as in the test scenario.

Deployment layout after adjustment

Node4: 4 tikv
Node5: 4 tikv
Node6: 4 tikv
Node7: 4 tikv
Node8: 4 tikv
Node9: 1 tidb + 1 pd
Node10: 1 tidb + 1 pd + monitoring components
Node11: 1 tidb + 1 pd

Production upgrade

First upgrade TiUP and the cluster component (a tiup version no lower than 1.11.0 is recommended):

tiup update --self
tiup --version
tiup update cluster
tiup cluster --version

Check the health of the current cluster:

tiup cluster check <cluster-name> --cluster

Other checks

- Change the configuration file, applying the parameter changes identified during testing
- Confirm no DDL or Backup jobs are running on the cluster
- Record table row counts so the result can be verified after the upgrade
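The DDL and row-count checks can be issued through the MySQL client. A sketch with the statements printed for review; the connection details are placeholders, and `table_rows` in information_schema is an estimate, which is enough for a before/after sanity comparison.

```shell
# Pre-upgrade SQL checks, printed for review; run each via:
#   mysql -h <tidb-host> -P 3306 -u root -p   (connection details are placeholders)
checks='ADMIN SHOW DDL JOBS;
SELECT table_schema, table_name, table_rows
  FROM information_schema.tables
 WHERE table_schema NOT IN ("mysql","INFORMATION_SCHEMA","PERFORMANCE_SCHEMA","METRICS_SCHEMA");'
printf '%s\n' "$checks"
```

`ADMIN SHOW DDL JOBS` should show no running jobs before the upgrade starts; rerun the row-count query afterwards and compare.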

Perform the rolling (no-downtime) upgrade:

tiup cluster upgrade <cluster-name> v6.5.1

[09:52:11][tidb@Node1 ~]$ tiup cluster check <cluster-name> --cluster
[09:52:11]tiup is checking updates for component cluster ...
[09:52:11]Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.11.3/tiup-cluster check <cluster-name> --cluster
[09:52:13]Run command on XXX.XXX.XXX.68(sudo:false): /tmp/tiup/bin/insight
[09:52:13]Run command on XXX.XXX.XXX.67(sudo:false): /tmp/tiup/bin/insight
[09:52:13]Run command on XXX.XXX.XXX.98(sudo:false): /tmp/tiup/bin/insight
[09:52:13]Run command on XXX.XXX.XXX.99(sudo:false): /tmp/tiup/bin/insight
[09:52:13]Run command on XXX.XXX.XXX.97(sudo:false): /tmp/tiup/bin/insight
[09:52:14]Run command on XXX.XXX.XXX.103(sudo:false): /tmp/tiup/bin/insight
[09:52:15]Run command on XXX.XXX.XXX.102(sudo:false): /tmp/tiup/bin/insight
[09:52:15]Run command on XXX.XXX.XXX.101(sudo:false): /tmp/tiup/bin/insight
[09:52:17]Run command on XXX.XXX.XXX.97(sudo:false): cat /etc/security/limits.conf
[09:52:17]Run command on XXX.XXX.XXX.98(sudo:false): cat /etc/security/limits.conf
[09:52:17]Run command on XXX.XXX.XXX.99(sudo:false): cat /etc/security/limits.conf
[09:52:17]Run command on XXX.XXX.XXX.97(sudo:true): sysctl -a
[09:52:17]Run command on XXX.XXX.XXX.98(sudo:true): sysctl -a
[09:52:17]Run command on XXX.XXX.XXX.67(sudo:false): cat /etc/security/limits.conf
[09:52:17]Run command on XXX.XXX.XXX.99(sudo:true): sysctl -a
[09:52:17]Run command on XXX.XXX.XXX.68(sudo:false): cat /etc/security/limits.conf
[09:52:18]Run command on XXX.XXX.XXX.67(sudo:true): sysctl -a
[09:52:18]Run command on XXX.XXX.XXX.68(sudo:true): sysctl -a
[09:52:19]Run command on XXX.XXX.XXX.101(sudo:false): cat /etc/security/limits.conf
[09:52:20]Run command on XXX.XXX.XXX.102(sudo:false): cat /etc/security/limits.conf
[09:52:20]Run command on XXX.XXX.XXX.103(sudo:false): cat /etc/security/limits.conf
[09:52:20]Run command on XXX.XXX.XXX.101(sudo:true): sysctl -a
[09:52:20]Run command on XXX.XXX.XXX.103(sudo:true): sysctl -a
[09:52:20]Run command on XXX.XXX.XXX.102(sudo:true): sysctl -a
[09:52:22]Node Check Result Message
[09:52:22]---- ----- ------ -------
[09:52:22]XXX.XXX.XXX.103 os-version Pass OS is CentOS Linux 7 (Core) 7.9.2009
[09:52:22]XXX.XXX.XXX.103 cpu-governor Warn Unable to determine current CPU frequency governor policy
[09:52:22]XXX.XXX.XXX.103 thp Pass THP is disabled
[09:52:22]XXX.XXX.XXX.103 timezone Pass time zone is the same as the first PD machine: Asia/Shanghai
[09:52:22]XXX.XXX.XXX.103 permission Pass /tikv3/tidb-deploy/tikv-20162 is writable
[09:52:22]XXX.XXX.XXX.103 permission Pass /tikv2/tidb-data/tikv-20161 is writable
[09:52:22]XXX.XXX.XXX.103 permission Pass /tikv4/tidb-data/tikv-20163 is writable
[09:52:22]XXX.XXX.XXX.103 permission Pass /tikv3/tidb-data/tikv-20162 is writable
[09:52:22]XXX.XXX.XXX.103 permission Pass /tikv1/tidb-deploy/tikv-20160 is writable
[09:52:22]XXX.XXX.XXX.103 permission Pass /tikv1/tidb-data/tikv-20160 is writable
[09:52:22]XXX.XXX.XXX.103 permission Pass /tikv2/tidb-deploy/tikv-20161 is writable
[09:52:22]XXX.XXX.XXX.103 permission Pass /tikv4/tidb-deploy/tikv-20163 is writable
[09:52:22]XXX.XXX.XXX.103 network Pass network speed of bond0 is 20000MB
[09:52:22]XXX.XXX.XXX.103 network Pass network speed of em1 is 10000MB
[09:52:22]XXX.XXX.XXX.103 network Pass network speed of em2 is 10000MB
[09:52:22]XXX.XXX.XXX.103 selinux Pass SELinux is disabled
[09:52:22]XXX.XXX.XXX.103 command Pass numactl: policy: default
[09:52:22]XXX.XXX.XXX.103 cpu-cores Pass number of CPU cores / threads: 72
[09:52:22]XXX.XXX.XXX.103 memory Pass memory size is 131072MB
[09:52:22]XXX.XXX.XXX.97 permission Pass /tidb-deploy/pd-2379 is writable
[09:52:22]XXX.XXX.XXX.97 permission Pass /tidb-data/pd-2379 is writable
[09:52:22]XXX.XXX.XXX.97 permission Pass /tidb-deploy/tidb-3306 is writable
[09:52:22]XXX.XXX.XXX.97 os-version Pass OS is CentOS Linux 7 (Core) 7.9.2009
[09:52:22]XXX.XXX.XXX.97 cpu-cores Pass number of CPU cores / threads: 56
[09:52:22]XXX.XXX.XXX.97 memory Pass memory size is 131072MB
[09:52:22]XXX.XXX.XXX.97 network Pass network speed of p2p2 is 1000MB
[09:52:22]XXX.XXX.XXX.97 network Pass network speed of bond0 is 20000MB
[09:52:22]XXX.XXX.XXX.97 network Pass network speed of em1 is 10000MB
[09:52:22]XXX.XXX.XXX.97 network Pass network speed of em2 is 10000MB
[09:52:22]XXX.XXX.XXX.97 network Pass network speed of p2p1 is 1000MB
[09:52:22]XXX.XXX.XXX.97 thp Pass THP is disabled
[09:52:22]XXX.XXX.XXX.97 command Pass numactl: policy: default
[09:52:22]XXX.XXX.XXX.97 cpu-governor Warn Unable to determine current CPU frequency governor policy
[09:52:22]XXX.XXX.XXX.97 selinux Pass SELinux is disabled
[09:52:22]XXX.XXX.XXX.98 timezone Pass time zone is the same as the first PD machine: Asia/Shanghai
[09:52:22]XXX.XXX.XXX.98 os-version Pass OS is CentOS Linux 7 (Core) 7.9.2009
[09:52:22]XXX.XXX.XXX.98 selinux Pass SELinux is disabled
[09:52:22]XXX.XXX.XXX.98 permission Pass /tidb-data/alertmanager-9093 is writable
[09:52:22]XXX.XXX.XXX.98 permission Pass /tidb-deploy/pd-2379 is writable
[09:52:22]XXX.XXX.XXX.98 permission Pass /tidb-data/pd-2379 is writable
[09:52:22]XXX.XXX.XXX.98 permission Pass /tidb-deploy/tidb-3306 is writable
[09:52:22]XXX.XXX.XXX.98 permission Pass /tidb-deploy/prometheus-9090 is writable
[09:52:22]XXX.XXX.XXX.98 permission Pass /tidb-deploy/grafana-3000 is writable
[09:52:22]XXX.XXX.XXX.98 permission Pass /tidb-deploy/alertmanager-9093 is writable
[09:52:22]XXX.XXX.XXX.98 permission Pass /tidb-data/prometheus-9090 is writable
[09:52:22]XXX.XXX.XXX.98 cpu-cores Pass number of CPU cores / threads: 56
[09:52:22]XXX.XXX.XXX.98 cpu-governor Warn Unable to determine current CPU frequency governor policy
[09:52:22]XXX.XXX.XXX.98 memory Pass memory size is 131072MB
[09:52:22]XXX.XXX.XXX.98 network Pass network speed of em2 is 10000MB
[09:52:22]XXX.XXX.XXX.98 network Pass network speed of p1p1 is 1000MB
[09:52:22]XXX.XXX.XXX.98 network Pass network speed of p1p2 is 1000MB
[09:52:22]XXX.XXX.XXX.98 network Pass network speed of p2p1 is 1000MB
[09:52:22]XXX.XXX.XXX.98 network Pass network speed of p2p2 is 1000MB
[09:52:22]XXX.XXX.XXX.98 network Pass network speed of bond0 is 20000MB
[09:52:22]XXX.XXX.XXX.98 network Pass network speed of em1 is 10000MB
[09:52:22]XXX.XXX.XXX.98 thp Pass THP is disabled
[09:52:22]XXX.XXX.XXX.98 command Pass numactl: policy: default
[09:52:22]XXX.XXX.XXX.99 os-version Pass OS is CentOS Linux 7 (Core) 7.9.2009
[09:52:22]XXX.XXX.XXX.99 memory Pass memory size is 131072MB
[09:52:22]XXX.XXX.XXX.99 selinux Pass SELinux is disabled
[09:52:22]XXX.XXX.XXX.99 timezone Pass time zone is the same as the first PD machine: Asia/Shanghai
[09:52:22]XXX.XXX.XXX.99 permission Pass /tidb-deploy/pd-2379 is writable
[09:52:22]XXX.XXX.XXX.99 permission Pass /tidb-data/pd-2379 is writable
[09:52:22]XXX.XXX.XXX.99 permission Pass /tidb-deploy/tidb-3306 is writable
[09:52:22]XXX.XXX.XXX.99 cpu-cores Pass number of CPU cores / threads: 56
[09:52:22]XXX.XXX.XXX.99 cpu-governor Warn Unable to determine current CPU frequency governor policy
[09:52:22]XXX.XXX.XXX.99 network Pass network speed of p2p1 is 1000MB
[09:52:22]XXX.XXX.XXX.99 network Pass network speed of p2p2 is 1000MB
[09:52:22]XXX.XXX.XXX.99 network Pass network speed of bond0 is 20000MB
[09:52:22]XXX.XXX.XXX.99 network Pass network speed of em1 is 10000MB
[09:52:22]XXX.XXX.XXX.99 network Pass network speed of em2 is 10000MB
[09:52:22]XXX.XXX.XXX.99 network Pass network speed of p1p1 is 1000MB
[09:52:22]XXX.XXX.XXX.99 network Pass network speed of p1p2 is 1000MB
[09:52:22]XXX.XXX.XXX.99 thp Pass THP is disabled
[09:52:22]XXX.XXX.XXX.99 command Pass numactl: policy: default
[09:52:22]XXX.XXX.XXX.67 os-version Pass OS is CentOS Linux 7 (Core) 7.7.1908
[09:52:22]XXX.XXX.XXX.67 memory Pass memory size is 131072MB
[09:52:22]XXX.XXX.XXX.67 network Pass network speed of em2 is 1000MB
[09:52:22]XXX.XXX.XXX.67 network Pass network speed of em3 is 1000MB
[09:52:22]XXX.XXX.XXX.67 network Pass network speed of em4 is 1000MB
[09:52:22]XXX.XXX.XXX.67 network Pass network speed of p1p1 is 1000MB
[09:52:22]XXX.XXX.XXX.67 network Pass network speed of bond0 is 1000MB
[09:52:22]XXX.XXX.XXX.67 network Pass network speed of em1 is 1000MB
[09:52:22]XXX.XXX.XXX.67 network Pass network speed of p1p4 is 1000MB
[09:52:22]XXX.XXX.XXX.67 network Pass network speed of p1p2 is 1000MB
[09:52:22]XXX.XXX.XXX.67 network Pass network speed of p1p3 is 1000MB
[09:52:22]XXX.XXX.XXX.67 selinux Pass SELinux is disabled
[09:52:22]XXX.XXX.XXX.67 command Pass numactl: policy: default
[09:52:22]XXX.XXX.XXX.67 timezone Pass time zone is the same as the first PD machine: Asia/Shanghai
[09:52:22]XXX.XXX.XXX.67 permission Pass /tikv1/tidb-deploy/tikv-20160 is writable
[09:52:22]XXX.XXX.XXX.67 permission Pass /tikv1/tidb-data/tikv-20160 is writable
[09:52:22]XXX.XXX.XXX.67 permission Pass /tikv2/tidb-data/tikv-20161 is writable
[09:52:22]XXX.XXX.XXX.67 permission Pass /tikv3/tidb-deploy/tikv-20162 is writable
[09:52:22]XXX.XXX.XXX.67 permission Pass /tikv4/tidb-deploy/tikv-20163 is writable
[09:52:22]XXX.XXX.XXX.67 permission Pass /tikv3/tidb-data/tikv-20162 is writable
[09:52:22]XXX.XXX.XXX.67 permission Pass /tikv4/tidb-data/tikv-20163 is writable
[09:52:22]XXX.XXX.XXX.67 permission Pass /tikv2/tidb-deploy/tikv-20161 is writable
[09:52:22]XXX.XXX.XXX.67 thp Pass THP is disabled
[09:52:22]XXX.XXX.XXX.67 cpu-cores Pass number of CPU cores / threads: 56
[09:52:22]XXX.XXX.XXX.67 cpu-governor Warn Unable to determine current CPU frequency governor policy
[09:52:22]XXX.XXX.XXX.68 network Pass network speed of p1p2 is 1000MB
[09:52:22]XXX.XXX.XXX.68 network Pass network speed of bond0 is 1000MB
[09:52:22]XXX.XXX.XXX.68 network Pass network speed of em1 is 1000MB
[09:52:22]XXX.XXX.XXX.68 network Pass network speed of em2 is 1000MB
[09:52:22]XXX.XXX.XXX.68 network Pass network speed of em3 is 1000MB
[09:52:22]XXX.XXX.XXX.68 network Pass network speed of em4 is 1000MB
[09:52:22]XXX.XXX.XXX.68 network Pass network speed of p1p1 is 1000MB
[09:52:22]XXX.XXX.XXX.68 network Pass network speed of p1p3 is 1000MB
[09:52:22]XXX.XXX.XXX.68 network Pass network speed of p1p4 is 1000MB
[09:52:22]XXX.XXX.XXX.68 command Pass numactl: policy: default
[09:52:22]XXX.XXX.XXX.68 timezone Pass time zone is the same as the first PD machine: Asia/Shanghai
[09:52:22]XXX.XXX.XXX.68 permission Pass /tikv2/tidb-data/tikv-20161 is writable
[09:52:22]XXX.XXX.XXX.68 permission Pass /tikv3/tidb-data/tikv-20162 is writable
[09:52:22]XXX.XXX.XXX.68 permission Pass /tikv1/tidb-data/tikv-20160 is writable
[09:52:22]XXX.XXX.XXX.68 permission Pass /tikv4/tidb-deploy/tikv-20163 is writable
[09:52:22]XXX.XXX.XXX.68 permission Pass /tikv4/tidb-data/tikv-20163 is writable
[09:52:22]XXX.XXX.XXX.68 permission Pass /tikv2/tidb-deploy/tikv-20161 is writable
[09:52:22]XXX.XXX.XXX.68 permission Pass /tikv3/tidb-deploy/tikv-20162 is writable
[09:52:22]XXX.XXX.XXX.68 permission Pass /tikv1/tidb-deploy/tikv-20160 is writable
[09:52:22]XXX.XXX.XXX.68 os-version Pass OS is CentOS Linux 7 (Core) 7.7.1908
[09:52:22]XXX.XXX.XXX.68 cpu-governor Warn Unable to determine current CPU frequency governor policy
[09:52:22]XXX.XXX.XXX.68 memory Pass memory size is 131072MB
[09:52:22]XXX.XXX.XXX.68 cpu-cores Pass number of CPU cores / threads: 56
[09:52:22]XXX.XXX.XXX.68 selinux Pass SELinux is disabled
[09:52:22]XXX.XXX.XXX.68 thp Pass THP is disabled
[09:52:22]XXX.XXX.XXX.101 permission Pass /tikv1/tidb-deploy/tikv-20160 is writable
[09:52:22]XXX.XXX.XXX.101 permission Pass /tikv2/tidb-deploy/tikv-20161 is writable
[09:52:22]XXX.XXX.XXX.101 permission Pass /tikv3/tidb-deploy/tikv-20162 is writable
[09:52:22]XXX.XXX.XXX.101 permission Pass /tikv4/tidb-deploy/tikv-20163 is writable
[09:52:22]XXX.XXX.XXX.101 permission Pass /tikv1/tidb-data/tikv-20160 is writable
[09:52:22]XXX.XXX.XXX.101 permission Pass /tikv4/tidb-data/tikv-20163 is writable
[09:52:22]XXX.XXX.XXX.101 permission Pass /tikv2/tidb-data/tikv-20161 is writable
[09:52:22]XXX.XXX.XXX.101 permission Pass /tikv3/tidb-data/tikv-20162 is writable
[09:52:22]XXX.XXX.XXX.101 os-version Pass OS is CentOS Linux 7 (Core) 7.9.2009
[09:52:22]XXX.XXX.XXX.101 cpu-governor Warn Unable to determine current CPU frequency governor policy
[09:52:22]XXX.XXX.XXX.101 network Pass network speed of em3 is 1000MB
[09:52:22]XXX.XXX.XXX.101 network Pass network speed of em4 is 1000MB
[09:52:22]XXX.XXX.XXX.101 network Pass network speed of p1p2 is 1000MB
[09:52:22]XXX.XXX.XXX.101 network Pass network speed of p2p4 is 10000MB
[09:52:22]XXX.XXX.XXX.101 network Pass network speed of bond0 is 20000MB
[09:52:22]XXX.XXX.XXX.101 network Pass network speed of em2 is 10000MB
[09:52:22]XXX.XXX.XXX.101 network Pass network speed of p1p1 is 1000MB
[09:52:22]XXX.XXX.XXX.101 network Pass network speed of p2p2 is 10000MB
[09:52:22]XXX.XXX.XXX.101 network Pass network speed of em1 is 10000MB
[09:52:22]XXX.XXX.XXX.101 selinux Pass SELinux is disabled
[09:52:22]XXX.XXX.XXX.101 command Pass numactl: policy: default
[09:52:22]XXX.XXX.XXX.101 timezone Pass time zone is the same as the first PD machine: Asia/Shanghai
[09:52:22]XXX.XXX.XXX.101 memory Pass memory size is 131072MB
[09:52:22]XXX.XXX.XXX.101 thp Pass THP is disabled
[09:52:22]XXX.XXX.XXX.101 cpu-cores Pass number of CPU cores / threads: 56
[09:52:22]XXX.XXX.XXX.102 timezone Pass time zone is the same as the first PD machine: Asia/Shanghai
[09:52:22]XXX.XXX.XXX.102 os-version Pass OS is CentOS Linux 7 (Core) 7.9.2009
[09:52:22]XXX.XXX.XXX.102 cpu-cores Pass number of CPU cores / threads: 56
[09:52:22]XXX.XXX.XXX.102 selinux Pass SELinux is disabled
[09:52:22]XXX.XXX.XXX.102 thp Pass THP is disabled
[09:52:22]XXX.XXX.XXX.102 command Pass numactl: policy: default
[09:52:22]XXX.XXX.XXX.102 permission Pass /tikv1/tidb-deploy/tikv-20160 is writable
[09:52:22]XXX.XXX.XXX.102 permission Pass /tikv4/tidb-data/tikv-20163 is writable
[09:52:22]XXX.XXX.XXX.102 permission Pass /tikv3/tidb-data/tikv-20162 is writable
[09:52:22]XXX.XXX.XXX.102 permission Pass /tikv2/tidb-data/tikv-20161 is writable
[09:52:22]XXX.XXX.XXX.102 permission Pass /tikv1/tidb-data/tikv-20160 is writable
[09:52:22]XXX.XXX.XXX.102 permission Pass /tikv3/tidb-deploy/tikv-20162 is writable
[09:52:22]XXX.XXX.XXX.102 permission Pass /tikv2/tidb-deploy/tikv-20161 is writable
[09:52:22]XXX.XXX.XXX.102 permission Pass /tikv4/tidb-deploy/tikv-20163 is writable
[09:52:22]XXX.XXX.XXX.102 cpu-governor Warn Unable to determine current CPU frequency governor policy
[09:52:22]XXX.XXX.XXX.102 memory Pass memory size is 131072MB
[09:52:22]XXX.XXX.XXX.102 network Pass network speed of p2p3 is 10000MB
[09:52:22]XXX.XXX.XXX.102 network Pass network speed of bond0 is 20000MB
[09:52:22]XXX.XXX.XXX.102 network Pass network speed of em1 is 10000MB
[09:52:22]XXX.XXX.XXX.102 network Pass network speed of em2 is 10000MB
[09:52:22]XXX.XXX.XXX.102 network Pass network speed of em4 is 1000MB
[09:52:22]XXX.XXX.XXX.102 network Pass network speed of p1p2 is 1000MB
[09:52:22]XXX.XXX.XXX.102 network Pass network speed of em3 is 1000MB
[09:52:22]XXX.XXX.XXX.102 network Pass network speed of p1p1 is 1000MB
[09:52:22]XXX.XXX.XXX.102 network Pass network speed of p2p4 is 10000MB
[09:52:22]Checking region status of the cluster <cluster-name>...
[09:52:22]All regions are healthy.
[09:53:35][tidb@Node1 ~]$ tiup cluster upgrade <cluster-name> v6.5.1
[09:53:35]tiup is checking updates for component cluster ...
[09:53:35]Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.11.3/tiup-cluster upgrade <cluster-name> v6.5.1
[09:53:35]Before the upgrade, it is recommended to read the upgrade guide at https://docs.pingcap.com/tidb/stable/upgrade-tidb-using-tiup and finish the preparation steps.
[09:53:35]This operation will upgrade tidb v4.0.11 cluster <cluster-name> to v6.5.1.
[09:53:59]Do you want to continue? [y/N]:(default=N) y
[09:53:59]Upgrading cluster...
[09:53:59]+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/ssh/id_rsa.pub
[09:53:59]+ [Parallel] - UserSSH: user=tidb, host=XXX.XXX.XXX.67
[09:53:59]+ [Parallel] - UserSSH: user=tidb, host=XXX.XXX.XXX.98
[09:53:59]+ [Parallel] - UserSSH: user=tidb, host=XXX.XXX.XXX.67
[09:53:59]+ [Parallel] - UserSSH: user=tidb, host=XXX.XXX.XXX.99
[09:53:59]+ [Parallel] - UserSSH: user=tidb, host=XXX.XXX.XXX.67
[09:53:59]+ [Parallel] - UserSSH: user=tidb, host=XXX.XXX.XXX.67
[09:53:59]+ [Parallel] - UserSSH: user=tidb, host=XXX.XXX.XXX.97
[09:53:59]+ [Parallel] - UserSSH: user=tidb, host=XXX.XXX.XXX.101
[09:53:59]+ [Parallel] - UserSSH: user=tidb, host=XXX.XXX.XXX.101
[09:53:59]+ [Parallel] - UserSSH: user=tidb, host=XXX.XXX.XXX.68
[09:53:59]+ [Parallel] - UserSSH: user=tidb, host=XXX.XXX.XXX.68
[09:53:59]+ [Parallel] - UserSSH: user=tidb, host=XXX.XXX.XXX.102
[09:53:59]+ [Parallel] - UserSSH: user=tidb, host=XXX.XXX.XXX.68
[09:53:59]+ [Parallel] - UserSSH: user=tidb, host=XXX.XXX.XXX.102
[09:53:59]+ [Parallel] - UserSSH: user=tidb, host=XXX.XXX.XXX.102
[09:53:59]+ [Parallel] - UserSSH: user=tidb, host=XXX.XXX.XXX.101
[09:53:59]+ [Parallel] - UserSSH: user=tidb, host=XXX.XXX.XXX.103
[09:53:59]+ [Parallel] - UserSSH: user=tidb, host=XXX.XXX.XXX.102
[09:53:59]+ [Parallel] - UserSSH: user=tidb, host=XXX.XXX.XXX.68
[09:53:59]+ [Parallel] - UserSSH: user=tidb, host=XXX.XXX.XXX.103
[09:53:59]+ [Parallel] - UserSSH: user=tidb, host=XXX.XXX.XXX.101
[09:53:59]+ [Parallel] - UserSSH: user=tidb, host=XXX.XXX.XXX.97
[09:53:59]+ [Parallel] - UserSSH: user=tidb, host=XXX.XXX.XXX.103
[09:53:59]+ [Parallel] - UserSSH: user=tidb, host=XXX.XXX.XXX.103
[09:53:59]+ [Parallel] - UserSSH: user=tidb, host=XXX.XXX.XXX.99
[09:53:59]+ [Parallel] - UserSSH: user=tidb, host=XXX.XXX.XXX.98
[09:53:59]+ [Parallel] - UserSSH: user=tidb, host=XXX.XXX.XXX.98
[09:53:59]+ [Parallel] - UserSSH: user=tidb, host=XXX.XXX.XXX.98
[09:53:59]+ [Parallel] - UserSSH: user=tidb, host=XXX.XXX.XXX.98
[09:53:59]+ [ Serial ] - Download: component=grafana, version=v6.5.1, os=linux, arch=amd64
[09:53:59]+ [ Serial ] - Download: component=tidb, version=v6.5.1, os=linux, arch=amd64
[09:53:59]+ [ Serial ] - Download: component=prometheus, version=v6.5.1, os=linux, arch=amd64
[09:53:59]+ [ Serial ] - Download: component=pd, version=v6.5.1, os=linux, arch=amd64
[09:53:59]+ [ Serial ] - Download: component=tikv, version=v6.5.1, os=linux, arch=amd64
[09:54:06]+ [ Serial ] - Download: component=alertmanager, version=, os=linux, arch=amd64
[09:54:21]+ [ Serial ] - Mkdir: host=XXX.XXX.XXX.67, directories=/tikv2/tidb-data/tikv-20161
[09:54:21]+ [ Serial ] - Mkdir: host=XXX.XXX.XXX.98, directories=/tidb-data/pd-2379
[09:54:21]+ [ Serial ] - Mkdir: host=XXX.XXX.XXX.99, directories=/tidb-data/pd-2379
[09:54:21]+ [ Serial ] - Mkdir: host=XXX.XXX.XXX.67, directories=/tikv1/tidb-data/tikv-20160
[09:54:21]+ [ Serial ] - Mkdir: host=XXX.XXX.XXX.97, directories=/tidb-data/pd-2379
[09:54:22]+ [ Serial ] - BackupComponent: component=pd, currentVersion=v4.0.11, remote=XXX.XXX.XXX.98:/tidb-deploy/pd-2379
[09:54:22]+ [ Serial ] - BackupComponent: component=pd, currentVersion=v4.0.11, remote=XXX.XXX.XXX.99:/tidb-deploy/pd-2379
[09:54:22]+ [ Serial ] - BackupComponent: component=pd, currentVersion=v4.0.11, remote=XXX.XXX.XXX.97:/tidb-deploy/pd-2379
[09:54:22]+ [ Serial ] - BackupComponent: component=tikv, currentVersion=v4.0.11, remote=XXX.XXX.XXX.67:/tikv2/tidb-deploy/tikv-20161
[09:54:22]+ [ Serial ] - BackupComponent: component=tikv, currentVersion=v4.0.11, remote=XXX.XXX.XXX.67:/tikv1/tidb-deploy/tikv-20160
[09:54:22]+ [ Serial ] - CopyComponent: component=pd, version=v6.5.1, remote=XXX.XXX.XXX.98:/tidb-deploy/pd-2379 os=linux, arch=amd64
[09:54:22]+ [ Serial ] - CopyComponent: component=pd, version=v6.5.1, remote=XXX.XXX.XXX.99:/tidb-deploy/pd-2379 os=linux, arch=amd64
[09:54:22]+ [ Serial ] - CopyComponent: component=pd, version=v6.5.1, remote=XXX.XXX.XXX.97:/tidb-deploy/pd-2379 os=linux, arch=amd64
[09:54:22]+ [ Serial ] - CopyComponent: component=tikv, version=v6.5.1, remote=XXX.XXX.XXX.67:/tikv2/tidb-deploy/tikv-20161 os=linux, arch=amd64
[09:54:22]+ [ Serial ] - CopyComponent: component=tikv, version=v6.5.1, remote=XXX.XXX.XXX.67:/tikv1/tidb-deploy/tikv-20160 os=linux, arch=amd64
[09:54:24]+ [ Serial ] - InitConfig: cluster=<cluster-name>, user=tidb, host=XXX.XXX.XXX.99, path=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache/pd-2379.service, deploy_dir=/tidb-deploy/pd-2379, data_dir=[/tidb-data/pd-2379], log_dir=/tidb-deploy/pd-2379/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache
[09:54:25]+ [ Serial ] - InitConfig: cluster=<cluster-name>, user=tidb, host=XXX.XXX.XXX.98, path=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache/pd-2379.service, deploy_dir=/tidb-deploy/pd-2379, data_dir=[/tidb-data/pd-2379], log_dir=/tidb-deploy/pd-2379/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache
[09:54:25]+ [ Serial ] - InitConfig: cluster=<cluster-name>, user=tidb, host=XXX.XXX.XXX.97, path=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache/pd-2379.service, deploy_dir=/tidb-deploy/pd-2379, data_dir=[/tidb-data/pd-2379], log_dir=/tidb-deploy/pd-2379/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache
[09:54:26]+ [ Serial ] - Mkdir: host=XXX.XXX.XXX.67, directories=/tikv3/tidb-data/tikv-20162
[09:54:27]+ [ Serial ] - Mkdir: host=XXX.XXX.XXX.67, directories=/tikv4/tidb-data/tikv-20163
[09:54:27]+ [ Serial ] - Mkdir: host=XXX.XXX.XXX.68, directories=/tikv1/tidb-data/tikv-20160
[09:54:28]+ [ Serial ] - BackupComponent: component=tikv, currentVersion=v4.0.11, remote=XXX.XXX.XXX.67:/tikv3/tidb-deploy/tikv-20162
[09:54:28]+ [ Serial ] - BackupComponent: component=tikv, currentVersion=v4.0.11, remote=XXX.XXX.XXX.68:/tikv1/tidb-deploy/tikv-20160
[09:54:28]+ [ Serial ] - BackupComponent: component=tikv, currentVersion=v4.0.11, remote=XXX.XXX.XXX.67:/tikv4/tidb-deploy/tikv-20163
[09:54:28]+ [ Serial ] - CopyComponent: component=tikv, version=v6.5.1, remote=XXX.XXX.XXX.67:/tikv3/tidb-deploy/tikv-20162 os=linux, arch=amd64
[09:54:29]+ [ Serial ] - CopyComponent: component=tikv, version=v6.5.1, remote=XXX.XXX.XXX.68:/tikv1/tidb-deploy/tikv-20160 os=linux, arch=amd64
[09:54:29]+ [ Serial ] - CopyComponent: component=tikv, version=v6.5.1, remote=XXX.XXX.XXX.67:/tikv4/tidb-deploy/tikv-20163 os=linux, arch=amd64
[09:54:31]+ [ Serial ] - InitConfig: cluster=<cluster-name>, user=tidb, host=XXX.XXX.XXX.67, path=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache/tikv-20160.service, deploy_dir=/tikv1/tidb-deploy/tikv-20160, data_dir=[/tikv1/tidb-data/tikv-20160], log_dir=/tikv1/tidb-deploy/tikv-20160/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache
[09:54:31]+ [ Serial ] - InitConfig: cluster=<cluster-name>, user=tidb, host=XXX.XXX.XXX.67, path=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache/tikv-20161.service, deploy_dir=/tikv2/tidb-deploy/tikv-20161, data_dir=[/tikv2/tidb-data/tikv-20161], log_dir=/tikv2/tidb-deploy/tikv-20161/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache
[09:54:35]+ [ Serial ] - Mkdir: host=XXX.XXX.XXX.68, directories=/tikv2/tidb-data/tikv-20161
[09:54:35]+ [ Serial ] - Mkdir: host=XXX.XXX.XXX.68, directories=/tikv3/tidb-data/tikv-20162
[09:54:36]+ [ Serial ] - BackupComponent: component=tikv, currentVersion=v4.0.11, remote=XXX.XXX.XXX.68:/tikv2/tidb-deploy/tikv-20161
[09:54:36]+ [ Serial ] - BackupComponent: component=tikv, currentVersion=v4.0.11, remote=XXX.XXX.XXX.68:/tikv3/tidb-deploy/tikv-20162
[09:54:37]+ [ Serial ] - CopyComponent: component=tikv, version=v6.5.1, remote=XXX.XXX.XXX.68:/tikv2/tidb-deploy/tikv-20161 os=linux, arch=amd64
[09:54:37]+ [ Serial ] - CopyComponent: component=tikv, version=v6.5.1, remote=XXX.XXX.XXX.68:/tikv3/tidb-deploy/tikv-20162 os=linux, arch=amd64
[09:54:37]+ [ Serial ] - InitConfig: cluster=<cluster-name>, user=tidb, host=XXX.XXX.XXX.67, path=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache/tikv-20162.service, deploy_dir=/tikv3/tidb-deploy/tikv-20162, data_dir=[/tikv3/tidb-data/tikv-20162], log_dir=/tikv3/tidb-deploy/tikv-20162/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache
[09:54:38]+ [ Serial ] - InitConfig: cluster=<cluster-name>, user=tidb, host=XXX.XXX.XXX.67, path=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache/tikv-20163.service, deploy_dir=/tikv4/tidb-deploy/tikv-20163, data_dir=[/tikv4/tidb-data/tikv-20163], log_dir=/tikv4/tidb-deploy/tikv-20163/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache
[09:54:39]+ [ Serial ] - InitConfig: cluster=<cluster-name>, user=tidb, host=XXX.XXX.XXX.68, path=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache/tikv-20160.service, deploy_dir=/tikv1/tidb-deploy/tikv-20160, data_dir=[/tikv1/tidb-data/tikv-20160], log_dir=/tikv1/tidb-deploy/tikv-20160/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache
[09:54:41]+ [ Serial ] - Mkdir: host=XXX.XXX.XXX.68, directories=/tikv4/tidb-data/tikv-20163
[09:54:42]+ [ Serial ] - BackupComponent: component=tikv, currentVersion=v4.0.11, remote=XXX.XXX.XXX.68:/tikv4/tidb-deploy/tikv-20163
[09:54:42]+ [ Serial ] - Mkdir: host=XXX.XXX.XXX.101, directories=/tikv1/tidb-data/tikv-20160
[09:54:42]+ [ Serial ] - CopyComponent: component=tikv, version=v6.5.1, remote=XXX.XXX.XXX.68:/tikv4/tidb-deploy/tikv-20163 os=linux, arch=amd64
[09:54:43]+ [ Serial ] - BackupComponent: component=tikv, currentVersion=v4.0.11, remote=XXX.XXX.XXX.101:/tikv1/tidb-deploy/tikv-20160
[09:54:43]+ [ Serial ] - Mkdir: host=XXX.XXX.XXX.101, directories=/tikv2/tidb-data/tikv-20161
[09:54:43]+ [ Serial ] - CopyComponent: component=tikv, version=v6.5.1, remote=XXX.XXX.XXX.101:/tikv1/tidb-deploy/tikv-20160 os=linux, arch=amd64
[09:54:44]+ [ Serial ] - BackupComponent: component=tikv, currentVersion=v4.0.11, remote=XXX.XXX.XXX.101:/tikv2/tidb-deploy/tikv-20161
[09:54:45]+ [ Serial ] - InitConfig: cluster=<cluster-name>, user=tidb, host=XXX.XXX.XXX.68, path=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache/tikv-20162.service, deploy_dir=/tikv3/tidb-deploy/tikv-20162, data_dir=[/tikv3/tidb-data/tikv-20162], log_dir=/tikv3/tidb-deploy/tikv-20162/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache
[09:54:45]+ [ Serial ] - InitConfig: cluster=<cluster-name>, user=tidb, host=XXX.XXX.XXX.68, path=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache/tikv-20161.service, deploy_dir=/tikv2/tidb-deploy/tikv-20161, data_dir=[/tikv2/tidb-data/tikv-20161], log_dir=/tikv2/tidb-deploy/tikv-20161/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache
[09:54:45]+ [ Serial ] - CopyComponent: component=tikv, version=v6.5.1, remote=XXX.XXX.XXX.101:/tikv2/tidb-deploy/tikv-20161 os=linux, arch=amd64
[09:54:49]+ [ Serial ] - Mkdir: host=XXX.XXX.XXX.101, directories=/tikv3/tidb-data/tikv-20162
[09:54:49]+ [ Serial ] - Mkdir: host=XXX.XXX.XXX.101, directories=/tikv4/tidb-data/tikv-20163
[09:54:50]+ [ Serial ] - InitConfig: cluster=<cluster-name>, user=tidb, host=XXX.XXX.XXX.68, path=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache/tikv-20163.service, deploy_dir=/tikv4/tidb-deploy/tikv-20163, data_dir=[/tikv4/tidb-data/tikv-20163], log_dir=/tikv4/tidb-deploy/tikv-20163/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache
[09:54:50]+ [ Serial ] - BackupComponent: component=tikv, currentVersion=v4.0.11, remote=XXX.XXX.XXX.101:/tikv3/tidb-deploy/tikv-20162
[09:54:50]+ [ Serial ] - BackupComponent: component=tikv, currentVersion=v4.0.11, remote=XXX.XXX.XXX.101:/tikv4/tidb-deploy/tikv-20163
[09:54:50]+ [ Serial ] - CopyComponent: component=tikv, version=v6.5.1, remote=XXX.XXX.XXX.101:/tikv3/tidb-deploy/tikv-20162 os=linux, arch=amd64
[09:54:50]+ [ Serial ] - CopyComponent: component=tikv, version=v6.5.1, remote=XXX.XXX.XXX.101:/tikv4/tidb-deploy/tikv-20163 os=linux, arch=amd64
[09:54:52]+ [ Serial ] - InitConfig: cluster=<cluster-name>, user=tidb, host=XXX.XXX.XXX.101, path=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache/tikv-20160.service, deploy_dir=/tikv1/tidb-deploy/tikv-20160, data_dir=[/tikv1/tidb-data/tikv-20160], log_dir=/tikv1/tidb-deploy/tikv-20160/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache
[09:54:53]+ [ Serial ] - InitConfig: cluster=<cluster-name>, user=tidb, host=XXX.XXX.XXX.101, path=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache/tikv-20161.service, deploy_dir=/tikv2/tidb-deploy/tikv-20161, data_dir=[/tikv2/tidb-data/tikv-20161], log_dir=/tikv2/tidb-deploy/tikv-20161/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache
[09:54:53]+ [ Serial ] - Mkdir: host=XXX.XXX.XXX.102, directories=/tikv1/tidb-data/tikv-20160
[09:54:54]+ [ Serial ] - BackupComponent: component=tikv, currentVersion=v4.0.11, remote=XXX.XXX.XXX.102:/tikv1/tidb-deploy/tikv-20160
[09:54:55]+ [ Serial ] - CopyComponent: component=tikv, version=v6.5.1, remote=XXX.XXX.XXX.102:/tikv1/tidb-deploy/tikv-20160 os=linux, arch=amd64
[09:54:56]+ [ Serial ] - Mkdir: host=XXX.XXX.XXX.102, directories=/tikv2/tidb-data/tikv-20161
[09:54:57]+ [ Serial ] - BackupComponent: component=tikv, currentVersion=v4.0.11, remote=XXX.XXX.XXX.102:/tikv2/tidb-deploy/tikv-20161
[09:54:57]+ [ Serial ] - Mkdir: host=XXX.XXX.XXX.102, directories=/tikv3/tidb-data/tikv-20162
[09:54:57]+ [ Serial ] - CopyComponent: component=tikv, version=v6.5.1, remote=XXX.XXX.XXX.102:/tikv2/tidb-deploy/tikv-20161 os=linux, arch=amd64
[09:54:58]+ [ Serial ] - InitConfig: cluster=<cluster-name>, user=tidb, host=XXX.XXX.XXX.101, path=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache/tikv-20162.service, deploy_dir=/tikv3/tidb-deploy/tikv-20162, data_dir=[/tikv3/tidb-data/tikv-20162], log_dir=/tikv3/tidb-deploy/tikv-20162/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache
[09:54:58]+ [ Serial ] - BackupComponent: component=tikv, currentVersion=v4.0.11, remote=XXX.XXX.XXX.102:/tikv3/tidb-deploy/tikv-20162
[09:54:58]+ [ Serial ] - InitConfig: cluster=<cluster-name>, user=tidb, host=XXX.XXX.XXX.101, path=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache/tikv-20163.service, deploy_dir=/tikv4/tidb-deploy/tikv-20163, data_dir=[/tikv4/tidb-data/tikv-20163], log_dir=/tikv4/tidb-deploy/tikv-20163/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache
[09:54:58]+ [ Serial ] - CopyComponent: component=tikv, version=v6.5.1, remote=XXX.XXX.XXX.102:/tikv3/tidb-deploy/tikv-20162 os=linux, arch=amd64
[09:55:01]+ [ Serial ] - InitConfig: cluster=<cluster-name>, user=tidb, host=XXX.XXX.XXX.102, path=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache/tikv-20160.service, deploy_dir=/tikv1/tidb-deploy/tikv-20160, data_dir=[/tikv1/tidb-data/tikv-20160], log_dir=/tikv1/tidb-deploy/tikv-20160/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache
[09:55:02]+ [ Serial ] - Mkdir: host=XXX.XXX.XXX.102, directories=/tikv4/tidb-data/tikv-20163
[09:55:03]+ [ Serial ] - Mkdir: host=XXX.XXX.XXX.103, directories=/tikv1/tidb-data/tikv-20160
[09:55:03]+ [ Serial ] - BackupComponent: component=tikv, currentVersion=v4.0.11, remote=XXX.XXX.XXX.102:/tikv4/tidb-deploy/tikv-20163
[09:55:03]+ [ Serial ] - CopyComponent: component=tikv, version=v6.5.1, remote=XXX.XXX.XXX.102:/tikv4/tidb-deploy/tikv-20163 os=linux, arch=amd64
[09:55:03]+ [ Serial ] - BackupComponent: component=tikv, currentVersion=v4.0.11, remote=XXX.XXX.XXX.103:/tikv1/tidb-deploy/tikv-20160
[09:55:04]+ [ Serial ] - InitConfig: cluster=<cluster-name>, user=tidb, host=XXX.XXX.XXX.102, path=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache/tikv-20161.service, deploy_dir=/tikv2/tidb-deploy/tikv-20161, data_dir=[/tikv2/tidb-data/tikv-20161], log_dir=/tikv2/tidb-deploy/tikv-20161/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache
[09:55:04]+ [ Serial ] - CopyComponent: component=tikv, version=v6.5.1, remote=XXX.XXX.XXX.103:/tikv1/tidb-deploy/tikv-20160 os=linux, arch=amd64
[09:55:05]+ [ Serial ] - Mkdir: host=XXX.XXX.XXX.103, directories=/tikv2/tidb-data/tikv-20161
[09:55:05]+ [ Serial ] - InitConfig: cluster=<cluster-name>, user=tidb, host=XXX.XXX.XXX.102, path=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache/tikv-20162.service, deploy_dir=/tikv3/tidb-deploy/tikv-20162, data_dir=[/tikv3/tidb-data/tikv-20162], log_dir=/tikv3/tidb-deploy/tikv-20162/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache
[09:55:06]+ [ Serial ] - BackupComponent: component=tikv, currentVersion=v4.0.11, remote=XXX.XXX.XXX.103:/tikv2/tidb-deploy/tikv-20161
[09:55:06]+ [ Serial ] - CopyComponent: component=tikv, version=v6.5.1, remote=XXX.XXX.XXX.103:/tikv2/tidb-deploy/tikv-20161 os=linux, arch=amd64
[09:55:08]+ [ Serial ] - Mkdir: host=XXX.XXX.XXX.103, directories=/tikv3/tidb-data/tikv-20162
[09:55:08]+ [ Serial ] - BackupComponent: component=tikv, currentVersion=v4.0.11, remote=XXX.XXX.XXX.103:/tikv3/tidb-deploy/tikv-20162
[09:55:09]+ [ Serial ] - CopyComponent: component=tikv, version=v6.5.1, remote=XXX.XXX.XXX.103:/tikv3/tidb-deploy/tikv-20162 os=linux, arch=amd64
[09:55:10]+ [ Serial ] - Mkdir: host=XXX.XXX.XXX.103, directories=/tikv4/tidb-data/tikv-20163
[09:55:10]+ [ Serial ] - InitConfig: cluster=<cluster-name>, user=tidb,
host=XXX.XXX.XXX.102, path=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache/tikv-20163.service, deploy_dir=/tikv4/tidb-deploy/tikv-20163, data_dir=[/tikv4/tidb-data/tikv-20163], log_dir=/tikv4/tidb-deploy/tikv-20163/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache [09:55:10]+ [ Serial ] - BackupComponent: component=tikv, currentVersion=v4.0.11, remote=XXX.XXX.XXX.103:/tikv4/tidb-deploy/tikv-20163 [09:55:11]+ [ Serial ] - CopyComponent: component=tikv, version=v6.5.1, remote=XXX.XXX.XXX.103:/tikv4/tidb-deploy/tikv-20163 os=linux, arch=amd64 [09:55:12]+ [ Serial ] - InitConfig: cluster=<cluster-name>, user=tidb, host=XXX.XXX.XXX.103, path=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache/tikv-20160.service, deploy_dir=/tikv1/tidb-deploy/tikv-20160, data_dir=[/tikv1/tidb-data/tikv-20160], log_dir=/tikv1/tidb-deploy/tikv-20160/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache [09:55:13]+ [ Serial ] - InitConfig: cluster=<cluster-name>, user=tidb, host=XXX.XXX.XXX.103, path=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache/tikv-20161.service, deploy_dir=/tikv2/tidb-deploy/tikv-20161, data_dir=[/tikv2/tidb-data/tikv-20161], log_dir=/tikv2/tidb-deploy/tikv-20161/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache [09:55:14]+ [ Serial ] - Mkdir: host=XXX.XXX.XXX.97, directories= [09:55:14]+ [ Serial ] - BackupComponent: component=tidb, currentVersion=v4.0.11, remote=XXX.XXX.XXX.97:/tidb-deploy/tidb-3306 [09:55:15]+ [ Serial ] - CopyComponent: component=tidb, version=v6.5.1, remote=XXX.XXX.XXX.97:/tidb-deploy/tidb-3306 os=linux, arch=amd64 [09:55:16]+ [ Serial ] - InitConfig: cluster=<cluster-name>, user=tidb, host=XXX.XXX.XXX.103, path=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache/tikv-20162.service, deploy_dir=/tikv3/tidb-deploy/tikv-20162, data_dir=[/tikv3/tidb-data/tikv-20162], 
log_dir=/tikv3/tidb-deploy/tikv-20162/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache [09:55:18]+ [ Serial ] - InitConfig: cluster=<cluster-name>, user=tidb, host=XXX.XXX.XXX.97, path=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache/tidb-3306.service, deploy_dir=/tidb-deploy/tidb-3306, data_dir=[], log_dir=/tidb-deploy/tidb-3306/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache [09:55:18]+ [ Serial ] - InitConfig: cluster=<cluster-name>, user=tidb, host=XXX.XXX.XXX.103, path=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache/tikv-20163.service, deploy_dir=/tikv4/tidb-deploy/tikv-20163, data_dir=[/tikv4/tidb-data/tikv-20163], log_dir=/tikv4/tidb-deploy/tikv-20163/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache [09:55:19]+ [ Serial ] - Mkdir: host=XXX.XXX.XXX.98, directories= [09:55:19]+ [ Serial ] - BackupComponent: component=tidb, currentVersion=v4.0.11, remote=XXX.XXX.XXX.98:/tidb-deploy/tidb-3306 [09:55:20]+ [ Serial ] - Mkdir: host=XXX.XXX.XXX.99, directories= [09:55:20]+ [ Serial ] - BackupComponent: component=tidb, currentVersion=v4.0.11, remote=XXX.XXX.XXX.99:/tidb-deploy/tidb-3306 [09:55:20]+ [ Serial ] - CopyComponent: component=tidb, version=v6.5.1, remote=XXX.XXX.XXX.98:/tidb-deploy/tidb-3306 os=linux, arch=amd64 [09:55:20]+ [ Serial ] - CopyComponent: component=tidb, version=v6.5.1, remote=XXX.XXX.XXX.99:/tidb-deploy/tidb-3306 os=linux, arch=amd64 [09:55:21]+ [ Serial ] - Mkdir: host=XXX.XXX.XXX.98, directories=/tidb-data/prometheus-9090 [09:55:22]+ [ Serial ] - BackupComponent: component=prometheus, currentVersion=v4.0.11, remote=XXX.XXX.XXX.98:/tidb-deploy/prometheus-9090 [09:55:22]+ [ Serial ] - CopyComponent: component=prometheus, version=v6.5.1, remote=XXX.XXX.XXX.98:/tidb-deploy/prometheus-9090 os=linux, arch=amd64 [09:55:22]+ [ Serial ] - InitConfig: cluster=<cluster-name>, user=tidb, 
host=XXX.XXX.XXX.98, path=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache/tidb-3306.service, deploy_dir=/tidb-deploy/tidb-3306, data_dir=[], log_dir=/tidb-deploy/tidb-3306/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache [09:55:23]+ [ Serial ] - InitConfig: cluster=<cluster-name>, user=tidb, host=XXX.XXX.XXX.99, path=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache/tidb-3306.service, deploy_dir=/tidb-deploy/tidb-3306, data_dir=[], log_dir=/tidb-deploy/tidb-3306/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache [09:55:24]+ [ Serial ] - Mkdir: host=XXX.XXX.XXX.98, directories= [09:55:24]+ [ Serial ] - BackupComponent: component=grafana, currentVersion=v4.0.11, remote=XXX.XXX.XXX.98:/tidb-deploy/grafana-3000 [09:55:24]+ [ Serial ] - Mkdir: host=XXX.XXX.XXX.98, directories=/tidb-data/alertmanager-9093 [09:55:24]+ [ Serial ] - CopyComponent: component=grafana, version=v6.5.1, remote=XXX.XXX.XXX.98:/tidb-deploy/grafana-3000 os=linux, arch=amd64 [09:55:25]+ [ Serial ] - BackupComponent: component=alertmanager, currentVersion=v4.0.11, remote=XXX.XXX.XXX.98:/tidb-deploy/alertmanager-9093 [09:55:25]+ [ Serial ] - CopyComponent: component=alertmanager, version=, remote=XXX.XXX.XXX.98:/tidb-deploy/alertmanager-9093 os=linux, arch=amd64 [09:55:26]+ [ Serial ] - InitConfig: cluster=<cluster-name>, user=tidb, host=XXX.XXX.XXX.98, path=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache/prometheus-9090.service, deploy_dir=/tidb-deploy/prometheus-9090, data_dir=[/tidb-data/prometheus-9090], log_dir=/tidb-deploy/prometheus-9090/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache [09:55:26]+ [ Serial ] - InitConfig: cluster=<cluster-name>, user=tidb, host=XXX.XXX.XXX.98, path=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache/alertmanager-9093.service, deploy_dir=/tidb-deploy/alertmanager-9093, 
data_dir=[/tidb-data/alertmanager-9093], log_dir=/tidb-deploy/alertmanager-9093/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache [09:55:27]+ [ Serial ] - InitConfig: cluster=<cluster-name>, user=tidb, host=XXX.XXX.XXX.98, path=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache/grafana-3000.service, deploy_dir=/tidb-deploy/grafana-3000, data_dir=[], log_dir=/tidb-deploy/grafana-3000/log, cache_dir=/home/tidb/.tiup/storage/cluster/clusters/<cluster-name>/config-cache [09:55:34]+ [ Serial ] - UpgradeCluster [09:55:34]Upgrading component pd [09:55:34] Restarting instance XXX.XXX.XXX.98:2379 [09:55:36] Restart instance XXX.XXX.XXX.98:2379 success [09:55:38] Restarting instance XXX.XXX.XXX.99:2379 [09:55:40] Restart instance XXX.XXX.XXX.99:2379 success [09:55:49] Restarting instance XXX.XXX.XXX.97:2379 [09:55:51] Restart instance XXX.XXX.XXX.97:2379 success [09:55:53]Upgrading component tikv [09:55:53] Evicting 7109 leaders from store XXX.XXX.XXX.67:20160... [09:55:53] Still waitting for 7109 store leaders to transfer... [09:55:55] Still waitting for 7109 store leaders to transfer... [09:55:57] Still waitting for 7109 store leaders to transfer... [09:56:00] Still waitting for 7109 store leaders to transfer... [09:56:02] Still waitting for 7109 store leaders to transfer... [09:56:04] Still waitting for 5342 store leaders to transfer... [09:56:06] Still waitting for 5342 store leaders to transfer... [09:56:08] Still waitting for 5342 store leaders to transfer... [09:56:10] Still waitting for 5342 store leaders to transfer... [09:56:12] Still waitting for 4140 store leaders to transfer... [09:56:14] Still waitting for 4140 store leaders to transfer... [09:56:16] Still waitting for 4140 store leaders to transfer... [09:56:18] Still waitting for 4140 store leaders to transfer... [09:56:21] Still waitting for 4140 store leaders to transfer... [09:56:23] Still waitting for 2942 store leaders to transfer... 
[09:56:25] Still waitting for 2942 store leaders to transfer... [09:56:27] Still waitting for 2942 store leaders to transfer... [09:56:29] Still waitting for 2942 store leaders to transfer... [09:56:31] Still waitting for 2942 store leaders to transfer... [09:56:33] Still waitting for 1810 store leaders to transfer... [09:56:35] Still waitting for 1810 store leaders to transfer... [09:56:38] Still waitting for 1810 store leaders to transfer... [09:56:40] Still waitting for 1810 store leaders to transfer... [09:56:42] Still waitting for 1810 store leaders to transfer... [09:56:44] Still waitting for 488 store leaders to transfer... [09:56:46] Still waitting for 488 store leaders to transfer... [09:56:48] Still waitting for 488 store leaders to transfer... [09:56:50] Still waitting for 488 store leaders to transfer... [09:56:52] Restarting instance XXX.XXX.XXX.67:20160 [09:57:17] Restart instance XXX.XXX.XXX.67:20160 success [09:57:17] Evicting 7469 leaders from store XXX.XXX.XXX.67:20161... [09:57:17] Still waitting for 7469 store leaders to transfer... [09:57:19] Still waitting for 7469 store leaders to transfer... [09:57:21] Still waitting for 7469 store leaders to transfer... [09:57:23] Still waitting for 7469 store leaders to transfer... [09:57:26] Still waitting for 5324 store leaders to transfer... [09:57:28] Still waitting for 5324 store leaders to transfer... [09:57:30] Still waitting for 5324 store leaders to transfer... [09:57:32] Still waitting for 5324 store leaders to transfer... [09:57:34] Still waitting for 5324 store leaders to transfer... [09:57:36] Still waitting for 2513 store leaders to transfer... [09:57:38] Still waitting for 2513 store leaders to transfer... [09:57:40] Still waitting for 2513 store leaders to transfer... [09:57:42] Still waitting for 2513 store leaders to transfer... [09:57:45] Still waitting for 2513 store leaders to transfer... 
[09:57:47] Restarting instance XXX.XXX.XXX.67:20161 [09:58:11] Restart instance XXX.XXX.XXX.67:20161 success [09:58:11] Evicting 7481 leaders from store XXX.XXX.XXX.67:20162... [09:58:11] Still waitting for 7481 store leaders to transfer... [09:58:13] Still waitting for 7481 store leaders to transfer... [09:58:15] Still waitting for 7481 store leaders to transfer... [09:58:18] Still waitting for 5796 store leaders to transfer... [09:58:20] Still waitting for 5796 store leaders to transfer... [09:58:22] Still waitting for 5796 store leaders to transfer... [09:58:24] Still waitting for 5796 store leaders to transfer... [09:58:26] Still waitting for 5796 store leaders to transfer... [09:58:28] Still waitting for 2902 store leaders to transfer... [09:58:30] Still waitting for 2902 store leaders to transfer... [09:58:32] Still waitting for 2902 store leaders to transfer... [09:58:34] Still waitting for 2902 store leaders to transfer... [09:58:37] Still waitting for 2902 store leaders to transfer... [09:58:39] Still waitting for 22 store leaders to transfer... [09:58:41] Still waitting for 22 store leaders to transfer... [09:58:43] Still waitting for 22 store leaders to transfer... [09:58:45] Still waitting for 22 store leaders to transfer... [09:58:47] Still waitting for 22 store leaders to transfer... [09:58:49] Restarting instance XXX.XXX.XXX.67:20162 [09:59:15] Restart instance XXX.XXX.XXX.67:20162 success [09:59:15] Evicting 7480 leaders from store XXX.XXX.XXX.67:20163... [09:59:15] Still waitting for 7480 store leaders to transfer... [09:59:17] Still waitting for 7480 store leaders to transfer... [09:59:20] Still waitting for 7480 store leaders to transfer... [09:59:22] Still waitting for 5687 store leaders to transfer... [09:59:24] Still waitting for 5687 store leaders to transfer... [09:59:26] Still waitting for 5687 store leaders to transfer... [09:59:28] Still waitting for 5687 store leaders to transfer... 
[09:59:30] Still waitting for 5687 store leaders to transfer... [09:59:32] Still waitting for 2842 store leaders to transfer... [09:59:34] Still waitting for 2842 store leaders to transfer... [09:59:36] Still waitting for 2842 store leaders to transfer... [09:59:39] Still waitting for 2842 store leaders to transfer... [09:59:41] Still waitting for 2842 store leaders to transfer... [09:59:43] Still waitting for 7 store leaders to transfer... [09:59:45] Still waitting for 7 store leaders to transfer... [09:59:47] Still waitting for 7 store leaders to transfer... [09:59:49] Still waitting for 7 store leaders to transfer... [09:59:51] Still waitting for 7 store leaders to transfer... [09:59:53] Restarting instance XXX.XXX.XXX.67:20163 [10:00:19] Restart instance XXX.XXX.XXX.67:20163 success [10:00:19] Evicting 7480 leaders from store XXX.XXX.XXX.68:20160... [10:00:19] Still waitting for 7480 store leaders to transfer... [10:00:22] Still waitting for 7063 store leaders to transfer... [10:00:24] Still waitting for 7063 store leaders to transfer... [10:00:26] Still waitting for 7063 store leaders to transfer... [10:00:28] Still waitting for 7063 store leaders to transfer... [10:00:30] Still waitting for 7063 store leaders to transfer... [10:00:32] Still waitting for 4180 store leaders to transfer... [10:00:34] Still waitting for 4180 store leaders to transfer... [10:00:36] Still waitting for 4180 store leaders to transfer... [10:00:38] Still waitting for 4180 store leaders to transfer... [10:00:41] Still waitting for 4180 store leaders to transfer... [10:00:43] Still waitting for 1318 store leaders to transfer... [10:00:45] Still waitting for 1318 store leaders to transfer... [10:00:47] Still waitting for 1318 store leaders to transfer... [10:00:49] Still waitting for 1318 store leaders to transfer... 
[10:00:51] Restarting instance XXX.XXX.XXX.68:20160 [10:01:16] Restart instance XXX.XXX.XXX.68:20160 success [10:01:16] Evicting 7481 leaders from store XXX.XXX.XXX.68:20161... [10:01:16] Still waitting for 7481 store leaders to transfer... [10:01:18] Still waitting for 7481 store leaders to transfer... [10:01:20] Still waitting for 7481 store leaders to transfer... [10:01:22] Still waitting for 7481 store leaders to transfer... [10:01:24] Still waitting for 7481 store leaders to transfer... [10:01:26] Still waitting for 4703 store leaders to transfer... [10:01:29] Still waitting for 4703 store leaders to transfer... [10:01:31] Still waitting for 4703 store leaders to transfer... [10:01:33] Still waitting for 4703 store leaders to transfer... [10:01:35] Still waitting for 4703 store leaders to transfer... [10:01:37] Still waitting for 1823 store leaders to transfer... [10:01:39] Still waitting for 1823 store leaders to transfer... [10:01:41] Still waitting for 1823 store leaders to transfer... [10:01:43] Still waitting for 1823 store leaders to transfer... [10:01:45] Still waitting for 1823 store leaders to transfer... [10:01:48] Restarting instance XXX.XXX.XXX.68:20161 [10:02:13] Restart instance XXX.XXX.XXX.68:20161 success [10:02:13] Evicting 7481 leaders from store XXX.XXX.XXX.68:20162... [10:02:13] Still waitting for 7481 store leaders to transfer... [10:02:16] Still waitting for 7481 store leaders to transfer... [10:02:18] Still waitting for 6401 store leaders to transfer... [10:02:20] Still waitting for 6401 store leaders to transfer... [10:02:22] Still waitting for 6401 store leaders to transfer... [10:02:24] Still waitting for 6401 store leaders to transfer... [10:02:26] Still waitting for 6401 store leaders to transfer... [10:02:27] [10:02:28] Still waitting for 3513 store leaders to transfer... [10:02:30] Still waitting for 3513 store leaders to transfer... [10:02:32] Still waitting for 3513 store leaders to transfer... 
[10:02:35] Still waitting for 3513 store leaders to transfer... [10:02:35] [10:02:36] [10:02:37] Still waitting for 3513 store leaders to transfer... [10:02:39] Still waitting for 647 store leaders to transfer... [10:02:41] Still waitting for 647 store leaders to transfer... [10:02:43] Still waitting for 647 store leaders to transfer... [10:02:45] Still waitting for 647 store leaders to transfer... [10:02:47] Restarting instance XXX.XXX.XXX.68:20162 [10:03:02] [10:03:12] Restart instance XXX.XXX.XXX.68:20162 success [10:03:12] Evicting 7482 leaders from store XXX.XXX.XXX.68:20163... [10:03:12] Still waitting for 7482 store leaders to transfer... [10:03:14] Still waitting for 7191 store leaders to transfer... [10:03:16] Still waitting for 7191 store leaders to transfer... [10:03:18] Still waitting for 7191 store leaders to transfer... [10:03:20] Still waitting for 7191 store leaders to transfer... [10:03:23] Still waitting for 7191 store leaders to transfer... [10:03:25] Still waitting for 4395 store leaders to transfer... [10:03:27] Still waitting for 4395 store leaders to transfer... [10:03:29] Still waitting for 4395 store leaders to transfer... [10:03:31] Still waitting for 4395 store leaders to transfer... [10:03:33] Still waitting for 1569 store leaders to transfer... [10:03:35] Still waitting for 1569 store leaders to transfer... [10:03:37] Still waitting for 1569 store leaders to transfer... [10:03:39] Still waitting for 1569 store leaders to transfer... [10:03:42] Still waitting for 1569 store leaders to transfer... [10:03:44] Restarting instance XXX.XXX.XXX.68:20163 [10:04:04] [10:04:08] Restart instance XXX.XXX.XXX.68:20163 success [10:04:09] Evicting 7472 leaders from store XXX.XXX.XXX.101:20160... [10:04:09] Still waitting for 7472 store leaders to transfer... [10:04:11] Still waitting for 7472 store leaders to transfer... [10:04:13] Still waitting for 6740 store leaders to transfer... [10:04:15] Still waitting for 6740 store leaders to transfer... 
[10:04:17] Still waitting for 6740 store leaders to transfer... [10:04:19] Still waitting for 6740 store leaders to transfer... [10:04:21] Still waitting for 3869 store leaders to transfer... [10:04:23] Still waitting for 3869 store leaders to transfer... [10:04:25] Still waitting for 3869 store leaders to transfer... [10:04:27] Still waitting for 3869 store leaders to transfer... [10:04:30] Still waitting for 3869 store leaders to transfer... [10:04:32] Still waitting for 949 store leaders to transfer... [10:04:34] Still waitting for 949 store leaders to transfer... [10:04:36] Still waitting for 949 store leaders to transfer... [10:04:38] Still waitting for 949 store leaders to transfer... [10:04:40] Still waitting for 949 store leaders to transfer... [10:04:42] Restarting instance XXX.XXX.XXX.101:20160 [10:05:05] Restart instance XXX.XXX.XXX.101:20160 success [10:05:05] Evicting 7474 leaders from store XXX.XXX.XXX.101:20161... [10:05:05] Still waitting for 7474 store leaders to transfer... [10:05:07] Still waitting for 7474 store leaders to transfer... [10:05:09] Still waitting for 7474 store leaders to transfer... [10:05:11] Still waitting for 7474 store leaders to transfer... [10:05:14] Still waitting for 7474 store leaders to transfer... [10:05:16] Still waitting for 4891 store leaders to transfer... [10:05:18] Still waitting for 4891 store leaders to transfer... [10:05:20] Still waitting for 4891 store leaders to transfer... [10:05:22] Still waitting for 4891 store leaders to transfer... [10:05:24] Still waitting for 4891 store leaders to transfer... [10:05:26] Still waitting for 1999 store leaders to transfer... [10:05:28] Still waitting for 1999 store leaders to transfer... [10:05:30] Still waitting for 1999 store leaders to transfer... [10:05:32] Still waitting for 1999 store leaders to transfer... 
[10:05:34] Restarting instance XXX.XXX.XXX.101:20161 [10:05:57] Restart instance XXX.XXX.XXX.101:20161 success [10:05:57] Evicting 7483 leaders from store XXX.XXX.XXX.101:20162... [10:05:57] Still waitting for 7483 store leaders to transfer... [10:06:00] Still waitting for 7483 store leaders to transfer... [10:06:02] Still waitting for 7483 store leaders to transfer... [10:06:04] Still waitting for 6184 store leaders to transfer... [10:06:06] Still waitting for 6184 store leaders to transfer... [10:06:08] Still waitting for 6184 store leaders to transfer... [10:06:10] Still waitting for 6184 store leaders to transfer... [10:06:12] Still waitting for 3305 store leaders to transfer... [10:06:14] Still waitting for 3305 store leaders to transfer... [10:06:16] Still waitting for 3305 store leaders to transfer... [10:06:18] Still waitting for 3305 store leaders to transfer... [10:06:20] Still waitting for 3305 store leaders to transfer... [10:06:23] Still waitting for 409 store leaders to transfer... [10:06:25] Still waitting for 409 store leaders to transfer... [10:06:27] Still waitting for 409 store leaders to transfer... [10:06:29] Still waitting for 409 store leaders to transfer... [10:06:31] Still waitting for 409 store leaders to transfer... [10:06:33] Restarting instance XXX.XXX.XXX.101:20162 [10:06:55] Restart instance XXX.XXX.XXX.101:20162 success [10:06:55] Evicting 7483 leaders from store XXX.XXX.XXX.101:20163... [10:06:55] Still waitting for 7483 store leaders to transfer... [10:06:57] Still waitting for 7441 store leaders to transfer... [10:06:59] Still waitting for 7441 store leaders to transfer... [10:07:01] Still waitting for 7441 store leaders to transfer... [10:07:03] Still waitting for 7441 store leaders to transfer... [10:07:05] Still waitting for 4564 store leaders to transfer... [10:07:07] Still waitting for 4564 store leaders to transfer... [10:07:09] Still waitting for 4564 store leaders to transfer... 
[10:07:12] Still waitting for 4564 store leaders to transfer... [10:07:14] Still waitting for 4564 store leaders to transfer... [10:07:16] Still waitting for 1694 store leaders to transfer... [10:07:18] Still waitting for 1694 store leaders to transfer... [10:07:20] Still waitting for 1694 store leaders to transfer... [10:07:22] Still waitting for 1694 store leaders to transfer... [10:07:24] Still waitting for 1694 store leaders to transfer... [10:07:26] Restarting instance XXX.XXX.XXX.101:20163 [10:07:49] Restart instance XXX.XXX.XXX.101:20163 success [10:07:49] Evicting 7478 leaders from store XXX.XXX.XXX.102:20160... [10:07:49] Still waitting for 7478 store leaders to transfer... [10:07:51] Still waitting for 7478 store leaders to transfer... [10:07:53] Still waitting for 7478 store leaders to transfer... [10:07:55] Still waitting for 5945 store leaders to transfer... [10:07:58] Still waitting for 5945 store leaders to transfer... [10:08:00] Still waitting for 5945 store leaders to transfer... [10:08:02] Still waitting for 5945 store leaders to transfer... [10:08:04] Still waitting for 5945 store leaders to transfer... [10:08:06] Still waitting for 3029 store leaders to transfer... [10:08:08] Still waitting for 3029 store leaders to transfer... [10:08:10] Still waitting for 3029 store leaders to transfer... [10:08:12] Still waitting for 3029 store leaders to transfer... [10:08:14] Still waitting for 3029 store leaders to transfer... [10:08:16] Still waitting for 138 store leaders to transfer... [10:08:18] Still waitting for 138 store leaders to transfer... [10:08:21] Still waitting for 138 store leaders to transfer... [10:08:23] Still waitting for 138 store leaders to transfer... [10:08:25] Restarting instance XXX.XXX.XXX.102:20160 [10:08:46] Restart instance XXX.XXX.XXX.102:20160 success [10:08:46] Evicting 7477 leaders from store XXX.XXX.XXX.102:20161... [10:08:47] Still waitting for 7477 store leaders to transfer... 
[10:08:49] Still waitting for 7477 store leaders to transfer... [10:08:51] Still waitting for 7477 store leaders to transfer... [10:08:53] Still waitting for 5774 store leaders to transfer... [10:08:55] Still waitting for 5774 store leaders to transfer... [10:08:57] Still waitting for 5774 store leaders to transfer... [10:08:59] Still waitting for 5774 store leaders to transfer... [10:09:01] Still waitting for 5774 store leaders to transfer... [10:09:03] Still waitting for 2885 store leaders to transfer... [10:09:05] Still waitting for 2885 store leaders to transfer... [10:09:07] Still waitting for 2885 store leaders to transfer... [10:09:10] Still waitting for 2885 store leaders to transfer... [10:09:12] Still waitting for 2885 store leaders to transfer... [10:09:14] Still waitting for 24 store leaders to transfer... [10:09:16] Still waitting for 24 store leaders to transfer... [10:09:18] Still waitting for 24 store leaders to transfer... [10:09:20] Still waitting for 24 store leaders to transfer... [10:09:22] Still waitting for 24 store leaders to transfer... [10:09:24] Restarting instance XXX.XXX.XXX.102:20161 [10:09:47] Restart instance XXX.XXX.XXX.102:20161 success [10:09:47] Evicting 7484 leaders from store XXX.XXX.XXX.102:20162... [10:09:47] Still waitting for 7484 store leaders to transfer... [10:09:49] Still waitting for 7484 store leaders to transfer... [10:09:51] Still waitting for 7484 store leaders to transfer... [10:09:54] Still waitting for 7484 store leaders to transfer... [10:09:56] Still waitting for 5564 store leaders to transfer... [10:09:58] Still waitting for 5564 store leaders to transfer... [10:10:00] Still waitting for 5564 store leaders to transfer... [10:10:02] Still waitting for 5564 store leaders to transfer... [10:10:04] Still waitting for 2748 store leaders to transfer... [10:10:06] Still waitting for 2748 store leaders to transfer... [10:10:08] Still waitting for 2748 store leaders to transfer... 
[10:10:10] Still waitting for 2748 store leaders to transfer... [10:10:12] Still waitting for 2748 store leaders to transfer... [10:10:14] Restarting instance XXX.XXX.XXX.102:20162 [10:10:36] Restart instance XXX.XXX.XXX.102:20162 success [10:10:36] Evicting 7483 leaders from store XXX.XXX.XXX.102:20163... [10:10:36] Still waitting for 7483 store leaders to transfer... [10:10:38] Still waitting for 7483 store leaders to transfer... [10:10:40] Still waitting for 7483 store leaders to transfer... [10:10:42] Still waitting for 7483 store leaders to transfer... [10:10:45] Still waitting for 5573 store leaders to transfer... [10:10:47] Still waitting for 5573 store leaders to transfer... [10:10:49] Still waitting for 5573 store leaders to transfer... [10:10:51] Still waitting for 5573 store leaders to transfer... [10:10:53] Still waitting for 5573 store leaders to transfer... [10:10:55] Still waitting for 3276 store leaders to transfer... [10:10:57] Still waitting for 3276 store leaders to transfer... [10:10:59] Still waitting for 3276 store leaders to transfer... [10:11:01] Still waitting for 3276 store leaders to transfer... [10:11:03] Still waitting for 396 store leaders to transfer... [10:11:05] Still waitting for 396 store leaders to transfer... [10:11:08] Still waitting for 396 store leaders to transfer... [10:11:10] Still waitting for 396 store leaders to transfer... [10:11:12] Still waitting for 396 store leaders to transfer... [10:11:14] Restarting instance XXX.XXX.XXX.102:20163 [10:11:28] [10:11:29] [10:11:37] Restart instance XXX.XXX.XXX.102:20163 success [10:11:37] Evicting 7476 leaders from store XXX.XXX.XXX.103:20160... [10:11:37] Still waitting for 7476 store leaders to transfer... [10:11:39] Still waitting for 7476 store leaders to transfer... [10:11:41] Still waitting for 6360 store leaders to transfer... [10:11:43] Still waitting for 6360 store leaders to transfer... [10:11:45] Still waitting for 6360 store leaders to transfer... 
[10:11:47] Still waitting for 6360 store leaders to transfer... [10:11:49] Still waitting for 6360 store leaders to transfer... [10:11:51] Still waitting for 3458 store leaders to transfer... [10:11:53] Still waitting for 3458 store leaders to transfer... [10:11:56] Still waitting for 3458 store leaders to transfer... [10:11:58] Still waitting for 3458 store leaders to transfer... [10:12:00] Still waitting for 3458 store leaders to transfer... [10:12:02] Still waitting for 572 store leaders to transfer... [10:12:04] Still waitting for 572 store leaders to transfer... [10:12:06] Still waitting for 572 store leaders to transfer... [10:12:08] Still waitting for 572 store leaders to transfer... [10:12:10] Still waitting for 572 store leaders to transfer... [10:12:12] Restarting instance XXX.XXX.XXX.103:20160 [10:12:37] Restart instance XXX.XXX.XXX.103:20160 success [10:12:37] Evicting 7476 leaders from store XXX.XXX.XXX.103:20161... [10:12:37] Still waitting for 7476 store leaders to transfer... [10:12:39] Still waitting for 7476 store leaders to transfer... [10:12:41] Still waitting for 6714 store leaders to transfer... [10:12:43] Still waitting for 6714 store leaders to transfer... [10:12:46] Still waitting for 6714 store leaders to transfer... [10:12:48] Still waitting for 6714 store leaders to transfer... [10:12:50] Still waitting for 6714 store leaders to transfer... [10:12:52] Still waitting for 3853 store leaders to transfer... [10:12:54] Still waitting for 3853 store leaders to transfer... [10:12:56] Still waitting for 3853 store leaders to transfer... [10:12:58] Still waitting for 3853 store leaders to transfer... [10:13:00] Still waitting for 966 store leaders to transfer... [10:13:02] Still waitting for 966 store leaders to transfer... [10:13:04] Still waitting for 966 store leaders to transfer... [10:13:06] Still waitting for 966 store leaders to transfer... [10:13:08] Still waitting for 966 store leaders to transfer... 
[10:13:11] Restarting instance XXX.XXX.XXX.103:20161
[10:13:36] Restart instance XXX.XXX.XXX.103:20161 success
[10:13:36] Evicting 7485 leaders from store XXX.XXX.XXX.103:20162...
[10:13:36] Still waitting for 7485 store leaders to transfer...
[10:13:38] Still waitting for 7485 store leaders to transfer...
[10:13:40] Still waitting for 7485 store leaders to transfer...
[10:13:42] Still waitting for 5883 store leaders to transfer...
[10:13:44] Still waitting for 5883 store leaders to transfer...
[10:13:47] Still waitting for 5883 store leaders to transfer...
[10:13:49] Still waitting for 5883 store leaders to transfer...
[10:13:51] Still waitting for 5883 store leaders to transfer...
[10:13:53] Still waitting for 3024 store leaders to transfer...
[10:13:55] Still waitting for 3024 store leaders to transfer...
[10:13:57] Still waitting for 3024 store leaders to transfer...
[10:13:59] Still waitting for 3024 store leaders to transfer...
[10:14:01] Still waitting for 3024 store leaders to transfer...
[10:14:03] Still waitting for 137 store leaders to transfer...
[10:14:05] Still waitting for 137 store leaders to transfer...
[10:14:07] Still waitting for 137 store leaders to transfer...
[10:14:09] Still waitting for 137 store leaders to transfer...
[10:14:12] Still waitting for 137 store leaders to transfer...
[10:14:14] Restarting instance XXX.XXX.XXX.103:20162
[10:14:38] Restart instance XXX.XXX.XXX.103:20162 success
[10:14:38] Evicting 7486 leaders from store XXX.XXX.XXX.103:20163...
[10:14:38] Still waitting for 7486 store leaders to transfer...
[10:14:40] Still waitting for 7486 store leaders to transfer...
[10:14:43] Still waitting for 6568 store leaders to transfer...
[10:14:45] Still waitting for 6568 store leaders to transfer...
[10:14:47] Still waitting for 6568 store leaders to transfer...
[10:14:49] Still waitting for 6568 store leaders to transfer...
[10:14:51] Still waitting for 6568 store leaders to transfer...
[10:14:53] Still waitting for 3666 store leaders to transfer...
[10:14:55] Still waitting for 3666 store leaders to transfer...
[10:14:57] Still waitting for 3666 store leaders to transfer...
[10:14:59] Still waitting for 3666 store leaders to transfer...
[10:15:01] Still waitting for 3666 store leaders to transfer...
[10:15:03] Still waitting for 780 store leaders to transfer...
[10:15:06] Still waitting for 780 store leaders to transfer...
[10:15:08] Still waitting for 780 store leaders to transfer...
[10:15:10] Still waitting for 780 store leaders to transfer...
[10:15:12] Restarting instance XXX.XXX.XXX.103:20163
[10:15:36] Restart instance XXX.XXX.XXX.103:20163 success
[10:15:36] Upgrading component tidb
[10:15:36] Restarting instance XXX.XXX.XXX.97:3306
[10:16:31] Restart instance XXX.XXX.XXX.97:3306 success
[10:16:31] Restarting instance XXX.XXX.XXX.98:3306
[10:16:34] Restart instance XXX.XXX.XXX.98:3306 success
[10:16:34] Restarting instance XXX.XXX.XXX.99:3306
[10:16:37] Restart instance XXX.XXX.XXX.99:3306 success
[10:16:37] Upgrading component prometheus
[10:16:37] Restarting instance XXX.XXX.XXX.98:9090
[10:16:38] Restart instance XXX.XXX.XXX.98:9090 success
[10:16:38] Upgrading component grafana
[10:16:38] Restarting instance XXX.XXX.XXX.98:3000
[10:16:41] Restart instance XXX.XXX.XXX.98:3000 success
[10:16:41] Upgrading component alertmanager
[10:16:41] Restarting instance XXX.XXX.XXX.98:9093
[10:16:41] Restart instance XXX.XXX.XXX.98:9093 success
[10:16:41] Stopping component node_exporter
[10:16:41] Stopping instance XXX.XXX.XXX.99
[10:16:41] Stopping instance XXX.XXX.XXX.102
[10:16:41] Stopping instance XXX.XXX.XXX.103
[10:16:41] Stopping instance XXX.XXX.XXX.98
[10:16:41] Stopping instance XXX.XXX.XXX.68
[10:16:41] Stopping instance XXX.XXX.XXX.101
[10:16:41] Stopping instance XXX.XXX.XXX.67
[10:16:41] Stopping instance XXX.XXX.XXX.97
[10:16:42] Stop XXX.XXX.XXX.103 success
[10:16:42] Stop XXX.XXX.XXX.98 success
[10:16:42] Stop XXX.XXX.XXX.102 success
[10:16:42] Stop XXX.XXX.XXX.101 success
[10:16:42] Stop XXX.XXX.XXX.67 success
[10:16:42] Stop XXX.XXX.XXX.97 success
[10:16:42] Stop XXX.XXX.XXX.99 success
[10:16:42] Stop XXX.XXX.XXX.68 success
[10:16:42] Stopping component blackbox_exporter
[10:16:42] Stopping instance XXX.XXX.XXX.99
[10:16:42] Stopping instance XXX.XXX.XXX.102
[10:16:42] Stopping instance XXX.XXX.XXX.98
[10:16:42] Stopping instance XXX.XXX.XXX.97
[10:16:42] Stopping instance XXX.XXX.XXX.103
[10:16:42] Stopping instance XXX.XXX.XXX.67
[10:16:42] Stopping instance XXX.XXX.XXX.101
[10:16:42] Stopping instance XXX.XXX.XXX.68
[10:16:42] Stop XXX.XXX.XXX.98 success
[10:16:42] Stop XXX.XXX.XXX.103 success
[10:16:42] Stop XXX.XXX.XXX.99 success
[10:16:42] Stop XXX.XXX.XXX.67 success
[10:16:42] Stop XXX.XXX.XXX.97 success
[10:16:42] Stop XXX.XXX.XXX.102 success
[10:16:42] Stop XXX.XXX.XXX.68 success
[10:16:42] Stop XXX.XXX.XXX.101 success
[10:16:42] Starting component node_exporter
[10:16:42] Starting instance XXX.XXX.XXX.99
[10:16:42] Starting instance XXX.XXX.XXX.98
[10:16:42] Starting instance XXX.XXX.XXX.103
[10:16:42] Starting instance XXX.XXX.XXX.101
[10:16:42] Starting instance XXX.XXX.XXX.67
[10:16:42] Starting instance XXX.XXX.XXX.97
[10:16:42] Starting instance XXX.XXX.XXX.102
[10:16:42] Starting instance XXX.XXX.XXX.68
[10:16:43] Start XXX.XXX.XXX.98 success
[10:16:43] Start XXX.XXX.XXX.99 success
[10:16:43] Start XXX.XXX.XXX.67 success
[10:16:43] Start XXX.XXX.XXX.68 success
[10:16:43] Start XXX.XXX.XXX.97 success
[10:16:44] Start XXX.XXX.XXX.103 success
[10:16:44] Start XXX.XXX.XXX.101 success
[10:16:44] Start XXX.XXX.XXX.102 success
[10:16:44] Starting component blackbox_exporter
[10:16:44] Starting instance XXX.XXX.XXX.99
[10:16:44] Starting instance XXX.XXX.XXX.68
[10:16:44] Starting instance XXX.XXX.XXX.67
[10:16:44] Starting instance XXX.XXX.XXX.98
[10:16:44] Starting instance XXX.XXX.XXX.97
[10:16:44] Starting instance XXX.XXX.XXX.101
[10:16:44] Starting instance XXX.XXX.XXX.103
[10:16:44] Starting instance XXX.XXX.XXX.102
[10:16:45] Start XXX.XXX.XXX.98 success
[10:16:45] Start XXX.XXX.XXX.99 success
[10:16:45] Start XXX.XXX.XXX.68 success
[10:16:45] Start XXX.XXX.XXX.97 success
[10:16:45] Start XXX.XXX.XXX.67 success
[10:16:46] Start XXX.XXX.XXX.103 success
[10:16:46] Start XXX.XXX.XXX.102 success
[10:16:46] Start XXX.XXX.XXX.101 success
[10:16:46] Upgraded cluster `<cluster-name>` successfully

[tidb@Node1 ~]$ tiup cluster display <cluster-name>
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.11.3/tiup-cluster display <cluster-name>
Cluster type:       tidb
Cluster name:       <cluster-name>
Cluster version:    v6.5.1
Deploy user:        tidb
SSH type:           builtin
Dashboard URL:      http://XXX.XXX.XXX.97:2379/dashboard
Grafana URL:        http://XXX.XXX.XXX.98:3000
ID                     Role          Host             Ports        OS/Arch       Status  Data Dir                      Deploy Dir
--                     ----          ----             -----        -------       ------  --------                      ----------
XXX.XXX.XXX.98:9093    alertmanager  XXX.XXX.XXX.98   9093/9094    linux/x86_64  Up      /tidb-data/alertmanager-9093  /tidb-deploy/alertmanager-9093
XXX.XXX.XXX.98:3000    grafana       XXX.XXX.XXX.98   3000         linux/x86_64  Up      -                             /tidb-deploy/grafana-3000
XXX.XXX.XXX.97:2379    pd            XXX.XXX.XXX.97   2379/2380    linux/x86_64  Up|UI   /tidb-data/pd-2379            /tidb-deploy/pd-2379
XXX.XXX.XXX.98:2379    pd            XXX.XXX.XXX.98   2379/2380    linux/x86_64  Up|L    /tidb-data/pd-2379            /tidb-deploy/pd-2379
XXX.XXX.XXX.99:2379    pd            XXX.XXX.XXX.99   2379/2380    linux/x86_64  Up      /tidb-data/pd-2379            /tidb-deploy/pd-2379
XXX.XXX.XXX.98:9090    prometheus    XXX.XXX.XXX.98   9090/12020   linux/x86_64  Up      /tidb-data/prometheus-9090    /tidb-deploy/prometheus-9090
XXX.XXX.XXX.97:3306    tidb          XXX.XXX.XXX.97   3306/10080   linux/x86_64  Up      -                             /tidb-deploy/tidb-3306
XXX.XXX.XXX.98:3306    tidb          XXX.XXX.XXX.98   3306/10080   linux/x86_64  Up      -                             /tidb-deploy/tidb-3306
XXX.XXX.XXX.99:3306    tidb          XXX.XXX.XXX.99   3306/10080   linux/x86_64  Up      -                             /tidb-deploy/tidb-3306
XXX.XXX.XXX.101:20160  tikv          XXX.XXX.XXX.101  20160/20180  linux/x86_64  Up      /tikv1/tidb-data/tikv-20160   /tikv1/tidb-deploy/tikv-20160
XXX.XXX.XXX.101:20161  tikv          XXX.XXX.XXX.101  20161/20181  linux/x86_64  Up      /tikv2/tidb-data/tikv-20161   /tikv2/tidb-deploy/tikv-20161
XXX.XXX.XXX.101:20162  tikv          XXX.XXX.XXX.101  20162/20182  linux/x86_64  Up      /tikv3/tidb-data/tikv-20162   /tikv3/tidb-deploy/tikv-20162
XXX.XXX.XXX.101:20163  tikv          XXX.XXX.XXX.101  20163/20183  linux/x86_64  Up      /tikv4/tidb-data/tikv-20163   /tikv4/tidb-deploy/tikv-20163
XXX.XXX.XXX.102:20160  tikv          XXX.XXX.XXX.102  20160/20180  linux/x86_64  Up      /tikv1/tidb-data/tikv-20160   /tikv1/tidb-deploy/tikv-20160
XXX.XXX.XXX.102:20161  tikv          XXX.XXX.XXX.102  20161/20181  linux/x86_64  Up      /tikv2/tidb-data/tikv-20161   /tikv2/tidb-deploy/tikv-20161
XXX.XXX.XXX.102:20162  tikv          XXX.XXX.XXX.102  20162/20182  linux/x86_64  Up      /tikv3/tidb-data/tikv-20162   /tikv3/tidb-deploy/tikv-20162
XXX.XXX.XXX.102:20163  tikv          XXX.XXX.XXX.102  20163/20183  linux/x86_64  Up      /tikv4/tidb-data/tikv-20163   /tikv4/tidb-deploy/tikv-20163
XXX.XXX.XXX.103:20160  tikv          XXX.XXX.XXX.103  20160/20180  linux/x86_64  Up      /tikv1/tidb-data/tikv-20160   /tikv1/tidb-deploy/tikv-20160
XXX.XXX.XXX.103:20161  tikv          XXX.XXX.XXX.103  20161/20181  linux/x86_64  Up      /tikv2/tidb-data/tikv-20161   /tikv2/tidb-deploy/tikv-20161
XXX.XXX.XXX.103:20162  tikv          XXX.XXX.XXX.103  20162/20182  linux/x86_64  Up      /tikv3/tidb-data/tikv-20162   /tikv3/tidb-deploy/tikv-20162
XXX.XXX.XXX.103:20163  tikv          XXX.XXX.XXX.103  20163/20183  linux/x86_64  Up      /tikv4/tidb-data/tikv-20163   /tikv4/tidb-deploy/tikv-20163
XXX.XXX.XXX.67:20160   tikv          XXX.XXX.XXX.67   20160/20180  linux/x86_64  Up      /tikv1/tidb-data/tikv-20160   /tikv1/tidb-deploy/tikv-20160
XXX.XXX.XXX.67:20161   tikv          XXX.XXX.XXX.67   20161/20181  linux/x86_64  Up      /tikv2/tidb-data/tikv-20161   /tikv2/tidb-deploy/tikv-20161
XXX.XXX.XXX.67:20162   tikv          XXX.XXX.XXX.67   20162/20182  linux/x86_64  Up      /tikv3/tidb-data/tikv-20162   /tikv3/tidb-deploy/tikv-20162
XXX.XXX.XXX.67:20163   tikv          XXX.XXX.XXX.67   20163/20183  linux/x86_64  Up      /tikv4/tidb-data/tikv-20163   /tikv4/tidb-deploy/tikv-20163
XXX.XXX.XXX.68:20160   tikv          XXX.XXX.XXX.68   20160/20180  linux/x86_64  Up      /tikv1/tidb-data/tikv-20160   /tikv1/tidb-deploy/tikv-20160
XXX.XXX.XXX.68:20161   tikv          XXX.XXX.XXX.68   20161/20181  linux/x86_64  Up      /tikv2/tidb-data/tikv-20161   /tikv2/tidb-deploy/tikv-20161
XXX.XXX.XXX.68:20162   tikv          XXX.XXX.XXX.68   20162/20182  linux/x86_64  Up      /tikv3/tidb-data/tikv-20162   /tikv3/tidb-deploy/tikv-20162
XXX.XXX.XXX.68:20163   tikv          XXX.XXX.XXX.68   20163/20183  linux/x86_64  Up      /tikv4/tidb-data/tikv-20163   /tikv4/tidb-deploy/tikv-20163
Total nodes: 29

I ran the upgrade command inside a screen session, in case a network problem between my local machine and the data center interrupted the upgrade.
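Detaching the long-running command from the SSH session is the key precaution here. Below is a minimal sketch of the same pattern using nohup (screen works equally well); the sleep/echo command is only a stand-in for the real tiup invocation:

```shell
# Run a long command detached from the terminal, so a dropped SSH
# connection cannot kill it. The quoted command is a placeholder for:
#   tiup cluster upgrade <cluster-name> v6.5.1
nohup sh -c 'sleep 1; echo "upgrade finished"' > upgrade.log 2>&1 &
UPGRADE_PID=$!

# Follow progress from any terminal with: tail -f upgrade.log
wait "$UPGRADE_PID"
cat upgrade.log
```

With screen instead, the equivalent is `screen -S tidb-upgrade`, run the command, then detach with Ctrl-a d and reattach later with `screen -r tidb-upgrade`.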

Post-upgrade verification

Compared the current row counts of each table against the counts recorded before the upgrade.
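This comparison is easy to mechanize. A minimal sketch, assuming the before/after counts are dumped to two files (the table names and counts here are illustrative; in practice each line would come from a SELECT COUNT(*) run):

```shell
# One "table count" line per table, sorted by table name.
printf 'orders 1200345\nusers 88410\n' > counts_before.txt
printf 'orders 1200345\nusers 88409\n' > counts_after.txt

# Join the two files on table name and print only the tables whose
# counts changed; no output means everything matches.
join counts_before.txt counts_after.txt | awk '$2 != $3 {print $1, $2, "->", $3}'
```

For the data above this prints `users 88410 -> 88409`, flagging the one table whose count drifted.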

Migrating the control machine (TiUP host)

Issues encountered

auto analyze table sometimes failed, as seen in:

show analyze status;

Solution: I ran the collection manually, which is indeed much faster. My commands were as follows:

Stop automatic collection (takes effect immediately):
set global tidb_auto_analyze_end_time = '01:00 +0000';
Takes effect immediately; auto analyze jobs started before the change that run past this limit are also killed:
set global tidb_max_auto_analyze_time = 600;
Set the concurrency used when executing ANALYZE statements:
set global tidb_build_stats_concurrency = 8;
Run the collection manually:
analyze table `Table_schema`.`Table_name`;

Manual collection is much faster than auto analyze: in 15 minutes it had processed more data than auto analyze did in 3 hours.

PD logs report the error ["transport: Got too many pings from the client, closing the connection."]

So far this does not appear to affect data or cluster availability; it has been reported to the official team for investigation.

