Changing Cluster IPs for a Data Center Relocation

Community contribution · 2023-04-04

1. Check the current cluster status

```
[tidb@vm172-16-201-64 ~]$ tiup cluster display tidb-dev
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.9.5/tiup-cluster /home/tidb/.tiup/components/cluster/v1.9.5/tiup-cluster display tidb-dev
Cluster type:       tidb
Cluster name:       tidb-dev
Cluster version:    v5.4.1
Deploy user:        tidb
SSH type:           builtin
Dashboard URL:      http://172.16.201.151:2379/dashboard
ID                    Role          Host            Ports                            OS/Arch       Status  Data Dir                            Deploy Dir
--                    ----          ----            -----                            -------       ------  --------                            ----------
172.16.201.150:9093   alertmanager  172.16.201.150  9093/9094                        linux/x86_64  Up      /data1/tidb-data/alertmanager-9093  /data1/tidb-deploy/alertmanager-9093
172.16.201.150:8300   cdc           172.16.201.150  8300                             linux/x86_64  Up      /data1/tidb-data/cdc-8300           /data1/tidb-deploy/cdc-8300
172.16.201.152:8300   cdc           172.16.201.152  8300                             linux/x86_64  Up      /data1/tidb-data/cdc-8300           /data1/tidb-deploy/cdc-8300
172.16.201.150:8249   drainer       172.16.201.150  8249                             linux/x86_64  Up      /data1/binlog                       /data1/tidb-deploy/drainer-8249
172.16.201.150:3000   grafana       172.16.201.150  3000                             linux/x86_64  Up      -                                   /data1/tidb-deploy/grafana-3000
172.16.201.150:2379   pd            172.16.201.150  2379/2380                        linux/x86_64  Up      /data1/tidb-data/pd-2379            /data1/tidb-deploy/pd-2379
172.16.201.151:2379   pd            172.16.201.151  2379/2380                        linux/x86_64  Up|UI   /data1/tidb-data/pd-2379            /data1/tidb-deploy/pd-2379
172.16.201.152:2379   pd            172.16.201.152  2379/2380                        linux/x86_64  Up|L    /data1/tidb-data/pd-2379            /data1/tidb-deploy/pd-2379
172.16.201.150:9090   prometheus    172.16.201.150  9090/12020                       linux/x86_64  Up      /data1/tidb-data/prometheus-9090    /data1/tidb-deploy/prometheus-9090
172.16.201.150:8250   pump          172.16.201.150  8250                             linux/x86_64  Up      /data1/tidb-data/pump-8250          /data1/tidb-deploy/pump-8250
172.16.201.151:8250   pump          172.16.201.151  8250                             linux/x86_64  Up      /data1/tidb-data/pump-8250          /data1/tidb-deploy/pump-8250
172.16.201.152:8250   pump          172.16.201.152  8250                             linux/x86_64  Up      /data1/tidb-data/pump-8250          /data1/tidb-deploy/pump-8250
172.16.201.150:4000   tidb          172.16.201.150  4000/10080                       linux/x86_64  Up      -                                   /data1/tidb-deploy/tidb-4000
172.16.201.151:4000   tidb          172.16.201.151  4000/10080                       linux/x86_64  Up      -                                   /data1/tidb-deploy/tidb-4000
172.16.201.152:4000   tidb          172.16.201.152  4000/10080                       linux/x86_64  Up      -                                   /data1/tidb-deploy/tidb-4000
172.16.201.150:9000   tiflash       172.16.201.150  9000/8123/3930/20170/20292/8234  linux/x86_64  Up      /data1/tidb-data/tiflash-9000       /data1/tidb-deploy/tiflash-9000
172.16.201.152:9000   tiflash       172.16.201.152  9000/8123/3930/20170/20292/8234  linux/x86_64  Up      /data1/tidb-data/tiflash-9000       /data1/tidb-deploy/tiflash-9000
172.16.201.150:20160  tikv          172.16.201.150  20160/20180                      linux/x86_64  Up      /data1/tidb-data/tikv-20160         /data1/tidb-deploy/tikv-20160
172.16.201.151:20160  tikv          172.16.201.151  20160/20180                      linux/x86_64  Up      /data1/tidb-data/tikv-20160         /data1/tidb-deploy/tikv-20160
172.16.201.152:20160  tikv          172.16.201.152  20160/20180                      linux/x86_64  Up      /data1/tidb-data/tikv-20160         /data1/tidb-deploy/tikv-20160
Total nodes: 20
[tidb@vm172-16-201-64 ~]$
```

2. IP mapping

| Old IP         | New IP         | Note            |
| -------------- | -------------- | --------------- |
| 172.16.201.150 | 172.16.201.153 | control machine |
| 172.16.201.151 | 172.16.201.154 |                 |
| 172.16.201.152 | 172.16.201.155 |                 |

3. Stop the cluster

tiup cluster stop tidb-dev

4. Relocate the machines and change their IP addresses

5. Edit the meta.yaml file and replace the corresponding IPs

First back up the original .tiup directory, so that a mistake during editing can be rolled back.
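A minimal sketch of the backup step, assuming the deploy user's TiUP home is `/home/tidb/.tiup` (adjust the path for your environment):

```shell
# Sketch: snapshot a directory before editing files under it.
# backup_dir <dir> copies <dir> to <dir>.bak-YYYYMMDD, with -a preserving
# permissions, ownership, timestamps and symlinks.
backup_dir() {
  cp -a "$1" "$1.bak-$(date +%Y%m%d)"
}

# e.g. on the control machine:
# backup_dir /home/tidb/.tiup
```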

```
[tidb@vm172-16-201-64 tidb-dev]$ more meta.yaml
user: tidb
tidb_version: v5.4.1
topology:
  global:
    user: tidb
    ssh_port: 22
    ssh_type: builtin
    deploy_dir: /data1/tidb-deploy
    data_dir: /data1/tidb-data
    os: linux
    arch: amd64
  monitored:
    node_exporter_port: 9100
    blackbox_exporter_port: 9115
    deploy_dir: /data1/tidb-deploy/monitor-9100
    data_dir: /data1/tidb-data/monitor-9100
    log_dir: /data1/tidb-deploy/monitor-9100/log
  server_configs:
    tidb:
      binlog.enable: false
      binlog.ignore-error: true
      log.level: error
      new_collations_enabled_on_first_bootstrap: true
      performance.txn-total-size-limit: 2147483648
      pessimistic-txn.max-retry-count: 0
      prepared-plan-cache.enabled: true
    tikv:
      server.snap-max-write-bytes-per-sec: 200MB
    pd: {}
    tidb_dashboard: {}
    tiflash: {}
    tiflash-learner: {}
    pump: {}
    drainer: {}
    cdc:
      debug.enable-db-sorter: true
      per-table-memory-quota: 1073741824
      sorter.chunk-size-limit: 268435456
      sorter.max-memory-consumption: 30
      sorter.max-memory-percentage: 70
      sorter.num-workerpool-goroutine: 30
    kvcdc: {}
    grafana: {}
  tidb_servers:
  - host: 172.16.201.155
    ssh_port: 22
    port: 4000
    status_port: 10080
    deploy_dir: /data1/tidb-deploy/tidb-4000
    log_dir: /data1/tidb-deploy/tidb-4000/log
    arch: amd64
    os: linux
  - host: 172.16.201.153
    ssh_port: 22
    port: 4000
    status_port: 10080
    deploy_dir: /data1/tidb-deploy/tidb-4000
    log_dir: /data1/tidb-deploy/tidb-4000/log
    arch: amd64
    os: linux
  - host: 172.16.201.154
    ssh_port: 22
    port: 4000
    status_port: 10080
    deploy_dir: /data1/tidb-deploy/tidb-4000
    log_dir: /data1/tidb-deploy/tidb-4000/log
    arch: amd64
    os: linux
  tikv_servers:
  - host: 172.16.201.155
    ssh_port: 22
    port: 20160
    status_port: 20180
    deploy_dir: /data1/tidb-deploy/tikv-20160
    data_dir: /data1/tidb-data/tikv-20160
    log_dir: /data1/tidb-deploy/tikv-20160/log
    arch: amd64
    os: linux
  - host: 172.16.201.153
    ssh_port: 22
    port: 20160
    status_port: 20180
    deploy_dir: /data1/tidb-deploy/tikv-20160
    data_dir: /data1/tidb-data/tikv-20160
    log_dir: /data1/tidb-deploy/tikv-20160/log
    arch: amd64
    os: linux
  - host: 172.16.201.154
    ssh_port: 22
    port: 20160
    status_port: 20180
    deploy_dir: /data1/tidb-deploy/tikv-20160
    data_dir: /data1/tidb-data/tikv-20160
    log_dir: /data1/tidb-deploy/tikv-20160/log
    arch: amd64
    os: linux
  tiflash_servers:
  - host: 172.16.201.155
    ssh_port: 22
    tcp_port: 9000
    http_port: 8123
    flash_service_port: 3930
    flash_proxy_port: 20170
    flash_proxy_status_port: 20292
    metrics_port: 8234
    deploy_dir: /data1/tidb-deploy/tiflash-9000
    data_dir: /data1/tidb-data/tiflash-9000
    log_dir: /data1/tidb-deploy/tiflash-9000/log
    arch: amd64
    os: linux
  - host: 172.16.201.153
    ssh_port: 22
    tcp_port: 9000
    http_port: 8123
    flash_service_port: 3930
    flash_proxy_port: 20170
    flash_proxy_status_port: 20292
    metrics_port: 8234
    deploy_dir: /data1/tidb-deploy/tiflash-9000
    data_dir: /data1/tidb-data/tiflash-9000
    log_dir: /data1/tidb-deploy/tiflash-9000/log
    arch: amd64
    os: linux
  pd_servers:
  - host: 172.16.201.153
    ssh_port: 22
    name: pd-172.16.201.153-2379
    client_port: 2379
    peer_port: 2380
    deploy_dir: /data1/tidb-deploy/pd-2379
    data_dir: /data1/tidb-data/pd-2379
    log_dir: /data1/tidb-deploy/pd-2379/log
    arch: amd64
    os: linux
  - host: 172.16.201.154
    ssh_port: 22
    name: pd-172.16.201.154-2379
    client_port: 2379
    peer_port: 2380
    deploy_dir: /data1/tidb-deploy/pd-2379
    data_dir: /data1/tidb-data/pd-2379
    log_dir: /data1/tidb-deploy/pd-2379/log
    arch: amd64
    os: linux
  - host: 172.16.201.155
    ssh_port: 22
    name: pd-172.16.201.155-2379
    client_port: 2379
    peer_port: 2380
    deploy_dir: /data1/tidb-deploy/pd-2379
    data_dir: /data1/tidb-data/pd-2379
    log_dir: /data1/tidb-deploy/pd-2379/log
    arch: amd64
    os: linux
  pump_servers:
  - host: 172.16.201.153
    ssh_port: 22
    port: 8250
    deploy_dir: /data1/tidb-deploy/pump-8250
    data_dir: /data1/tidb-data/pump-8250
    log_dir: /data1/tidb-deploy/pump-8250/log
    arch: amd64
    os: linux
  - host: 172.16.201.155
    ssh_port: 22
    port: 8250
    deploy_dir: /data1/tidb-deploy/pump-8250
    data_dir: /data1/tidb-data/pump-8250
    log_dir: /data1/tidb-deploy/pump-8250/log
    arch: amd64
    os: linux
  - host: 172.16.201.154
    ssh_port: 22
    port: 8250
    deploy_dir: /data1/tidb-deploy/pump-8250
    data_dir: /data1/tidb-data/pump-8250
    log_dir: /data1/tidb-deploy/pump-8250/log
    arch: amd64
    os: linux
  drainer_servers:
  - host: 172.16.201.153
    ssh_port: 22
    port: 8249
    deploy_dir: /data1/tidb-deploy/drainer-8249
    data_dir: /data1/binlog
    log_dir: /data1/tidb-deploy/drainer-8249/log
    config:
      syncer.db-type: file
    arch: amd64
    os: linux
  cdc_servers:
  - host: 172.16.201.153
    ssh_port: 22
    port: 8300
    deploy_dir: /data1/tidb-deploy/cdc-8300
    data_dir: /data1/tidb-data/cdc-8300
    log_dir: /data1/tidb-deploy/cdc-8300/log
    ticdc_cluster_id: ""
    arch: amd64
    os: linux
  - host: 172.16.201.155
    ssh_port: 22
    port: 8300
    deploy_dir: /data1/tidb-deploy/cdc-8300
    data_dir: /data1/tidb-data/cdc-8300
    log_dir: /data1/tidb-deploy/cdc-8300/log
    ticdc_cluster_id: ""
    arch: amd64
    os: linux
  monitoring_servers:
  - host: 172.16.201.153
    ssh_port: 22
    port: 9090
    ng_port: 12020
    deploy_dir: /data1/tidb-deploy/prometheus-9090
    data_dir: /data1/tidb-data/prometheus-9090
    log_dir: /data1/tidb-deploy/prometheus-9090/log
    external_alertmanagers: []
    arch: amd64
    os: linux
  grafana_servers:
  - host: 172.16.201.153
    ssh_port: 22
    port: 3000
    deploy_dir: /data1/tidb-deploy/grafana-3000
    arch: amd64
    os: linux
    username: admin
    password: admin
    anonymous_enable: false
    root_url: ""
    domain: ""
  alertmanager_servers:
  - host: 172.16.201.153
    ssh_port: 22
    web_port: 9093
    cluster_port: 9094
    deploy_dir: /data1/tidb-deploy/alertmanager-9093
    data_dir: /data1/tidb-data/alertmanager-9093
    log_dir: /data1/tidb-deploy/alertmanager-9093/log
    arch: amd64
    os: linux
```
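Rather than editing every occurrence by hand, the IP substitution from the mapping table in step 2 can be batch-applied with sed. A hedged sketch (the meta.yaml path below is the default TiUP layout; verify it on your control machine):

```shell
# Sketch: rewrite the old IPs to the new ones in place, keeping a .bak copy.
# remap_ips <file> applies the step-2 mapping to <file>.
remap_ips() {
  cp -a "$1" "$1.bak"            # pristine copy next to the edited file
  sed -i \
    -e 's/172\.16\.201\.150/172.16.201.153/g' \
    -e 's/172\.16\.201\.151/172.16.201.154/g' \
    -e 's/172\.16\.201\.152/172.16.201.155/g' \
    "$1"
}

# e.g. remap_ips /home/tidb/.tiup/storage/cluster/clusters/tidb-dev/meta.yaml
```

Note the replacements must not overlap: here the old range (.150-.152) and the new range (.153-.155) are disjoint, so the order of the `-e` expressions does not matter.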

6. Get the Cluster ID

```
[root@vm172-16-201-64 ~]# cat /data1/tidb-deploy/pd-2379/log/pd.log | grep "init cluster id"
[2022/10/25 15:56:05.470 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158355689478888528]
[2022/10/25 15:58:40.012 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158355689478888528]
[2022/10/25 17:52:28.338 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158385694626677062]
[2022/10/25 17:58:14.750 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158355689478888528]
[2022/10/25 18:00:42.983 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158355689478888528]
[root@vm172-16-201-63 ~]# cat /data1/tidb-deploy/pd-2379/log/pd.log | grep "init cluster id"
[2022/10/25 15:56:46.475 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158355689478888528]
[2022/10/25 15:58:37.002 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158355689478888528]
[2022/10/25 17:52:28.307 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158385694626677062]
[2022/10/25 17:58:14.758 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158355689478888528]
[2022/10/25 18:00:42.981 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158355689478888528]
[root@vm172-16-201-95 ~]# cat /data1/tidb-deploy/pd-2379/log/pd.log | grep "init cluster id"
[2022/10/25 15:56:02.450 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158355689478888528]
[2022/10/25 15:58:36.990 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158355689478888528]
[2022/10/25 17:52:28.213 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158385694626677062]
[2022/10/25 17:58:14.658 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158355689478888528]
[2022/10/25 18:00:42.879 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158355689478888528]
```
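A log can contain more than one cluster-id (the 7158385... entry above appears to be from a separate bootstrap); the value to use is the one from the newest "init cluster id" line. A hedged sketch of pulling it out automatically, assuming the `[cluster-id=NNN]` log format shown above:

```shell
# Sketch: print the cluster-id from the most recent "init cluster id" line.
latest_cluster_id() {
  grep 'init cluster id' "$1" \
    | tail -n 1 \
    | sed 's/.*cluster-id=\([0-9]*\).*/\1/'
}

# e.g. latest_cluster_id /data1/tidb-deploy/pd-2379/log/pd.log
```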

7. Get the largest allocated ID

```
[root@vm172-16-201-95 ~]# cat /data1/tidb-deploy/pd-2379/log/pd.log | grep "idAllocator allocates a new id" | awk -F= '{print $2}' | awk -F] '{print $1}' | sort -r | head -n 1
4000
[root@vm172-16-201-95 ~]#
[root@vm172-16-201-63 ~]# cat /data1/tidb-deploy/pd-2379/log/pd.log | grep "idAllocator allocates a new id" | awk -F= '{print $2}' | awk -F] '{print $1}' | sort -r | head -n 1
5000
[root@vm172-16-201-63 ~]#
[root@vm172-16-201-64 ~]# cat /data1/tidb-deploy/pd-2379/log/pd.log | grep "idAllocator allocates a new id" | awk -F= '{print $2}' | awk -F] '{print $1}' | sort -r | head -n 1
3000
[root@vm172-16-201-64 ~]#
```

(Note: the awk programs must be quoted, as above, or the shell will mangle the braces. Run this on every PD node and keep the overall maximum.)
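The `-alloc-id` passed to pd-recover later must be larger than any ID PD has already handed out, so what matters is the maximum across all PD logs (5000 above, hence the comfortably larger 20000 in step 11). A sketch that computes this over several log files, assuming the `[...=NNNN]` field layout the grep/awk pipeline above relies on; `sort -n` (numeric) also avoids lexical-sort surprises when IDs have different digit counts:

```shell
# Sketch: largest allocated ID across one or more PD logs.
max_alloc_id() {
  grep -h 'idAllocator allocates a new id' "$@" \
    | sed 's/.*=\([0-9]*\)].*/\1/' \
    | sort -n | tail -n 1
}

# e.g. max_alloc_id pd63.log pd64.log pd95.log
```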

8. Move aside the old PD data directory on every PD node

```
[tidb@vm172-16-201-95 tidb-data]$ mv pd-2379/ pd-2379_bak
[tidb@vm172-16-201-95 tidb-data]$ ll
total 20
drwxr-xr-x 2 tidb tidb 4096 Aug  9 16:46 drainer-8249
drwxr-xr-x 2 tidb tidb 4096 Oct 25 15:55 monitor-9100
drwx------ 5 tidb tidb 4096 Oct 25 20:41 pd-2379_bak
drwxr-xr-x 4 tidb tidb 4096 Oct 25 15:56 pump-8250
drwxr-xr-x 6 tidb tidb 4096 Oct 25 18:00 tikv-20160
[tidb@vm172-16-201-95 tidb-data]$
```

9. Start the new PD cluster

```
[tidb@vm172-16-201-64 tidb-dev]$ tiup cluster display tidb-dev
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.9.5/tiup-cluster /home/tidb/.tiup/components/cluster/v1.9.5/tiup-cluster display tidb-dev
Cluster type:       tidb
Cluster name:       tidb-dev
Cluster version:    v5.4.1
Deploy user:        tidb
SSH type:           builtin
ID                    Role          Host            Ports                            OS/Arch       Status  Data Dir                            Deploy Dir
--                    ----          ----            -----                            -------       ------  --------                            ----------
172.16.201.153:9093   alertmanager  172.16.201.153  9093/9094                        linux/x86_64  Down    /data1/tidb-data/alertmanager-9093  /data1/tidb-deploy/alertmanager-9093
172.16.201.153:8300   cdc           172.16.201.153  8300                             linux/x86_64  Down    /data1/tidb-data/cdc-8300           /data1/tidb-deploy/cdc-8300
172.16.201.155:8300   cdc           172.16.201.155  8300                             linux/x86_64  Down    /data1/tidb-data/cdc-8300           /data1/tidb-deploy/cdc-8300
172.16.201.153:8249   drainer       172.16.201.153  8249                             linux/x86_64  Down    /data1/binlog                       /data1/tidb-deploy/drainer-8249
172.16.201.153:3000   grafana       172.16.201.153  3000                             linux/x86_64  Down    -                                   /data1/tidb-deploy/grafana-3000
172.16.201.154:2379   pd            172.16.201.154  2379/2380                        linux/x86_64  Down    /data1/tidb-data/pd-2379            /data1/tidb-deploy/pd-2379
172.16.201.155:2379   pd            172.16.201.155  2379/2380                        linux/x86_64  Down    /data1/tidb-data/pd-2379            /data1/tidb-deploy/pd-2379
172.16.201.153:9090   prometheus    172.16.201.153  9090/12020                       linux/x86_64  Down    /data1/tidb-data/prometheus-9090    /data1/tidb-deploy/prometheus-9090
172.16.201.153:8250   pump          172.16.201.153  8250                             linux/x86_64  Down    /data1/tidb-data/pump-8250          /data1/tidb-deploy/pump-8250
172.16.201.154:8250   pump          172.16.201.154  8250                             linux/x86_64  Down    /data1/tidb-data/pump-8250          /data1/tidb-deploy/pump-8250
172.16.201.155:8250   pump          172.16.201.155  8250                             linux/x86_64  Down    /data1/tidb-data/pump-8250          /data1/tidb-deploy/pump-8250
172.16.201.153:4000   tidb          172.16.201.153  4000/10080                       linux/x86_64  Down    -                                   /data1/tidb-deploy/tidb-4000
172.16.201.154:4000   tidb          172.16.201.154  4000/10080                       linux/x86_64  Down    -                                   /data1/tidb-deploy/tidb-4000
172.16.201.155:4000   tidb          172.16.201.155  4000/10080                       linux/x86_64  Down    -                                   /data1/tidb-deploy/tidb-4000
172.16.201.153:9000   tiflash       172.16.201.153  9000/8123/3930/20170/20292/8234  linux/x86_64  N/A     /data1/tidb-data/tiflash-9000       /data1/tidb-deploy/tiflash-9000
172.16.201.155:9000   tiflash       172.16.201.155  9000/8123/3930/20170/20292/8234  linux/x86_64  N/A     /data1/tidb-data/tiflash-9000       /data1/tidb-deploy/tiflash-9000
172.16.201.154:20160  tikv          172.16.201.154  20160/20180                      linux/x86_64  N/A     /data1/tidb-data/tikv-20160         /data1/tidb-deploy/tikv-20160
172.16.201.155:20160  tikv          172.16.201.155  20160/20180                      linux/x86_64  N/A     /data1/tidb-data/tikv-20160         /data1/tidb-deploy/tikv-20160
Total nodes: 18
[tidb@vm172-16-201-64 tidb-dev]$
[tidb@vm172-16-201-64 tidb-dev]$ tiup cluster start tidb-dev -R pd
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.9.5/tiup-cluster /home/tidb/.tiup/components/cluster/v1.9.5/tiup-cluster start tidb-dev -R pd
Starting cluster tidb-dev...
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/tidb-dev/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/tidb-dev/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.154
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.154
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.154
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.154
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [ Serial ] - StartCluster
Starting component pd
        Starting instance 172.16.201.155:2379
        Starting instance 172.16.201.154:2379
        Starting instance 172.16.201.153:2379
        Start instance 172.16.201.153:2379 success
        Start instance 172.16.201.154:2379 success
        Start instance 172.16.201.155:2379 success
Starting component node_exporter
        Starting instance 172.16.201.155
        Starting instance 172.16.201.153
        Starting instance 172.16.201.154
        Start 172.16.201.153 success
        Start 172.16.201.154 success
        Start 172.16.201.155 success
Starting component blackbox_exporter
        Starting instance 172.16.201.155
        Starting instance 172.16.201.153
        Starting instance 172.16.201.154
        Start 172.16.201.153 success
        Start 172.16.201.154 success
        Start 172.16.201.155 success
+ [ Serial ] - UpdateTopology: cluster=tidb-dev
Started cluster `tidb-dev` successfully
[tidb@vm172-16-201-64 tidb-dev]$ tiup cluster reload tidb-dev -R pd
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.9.5/tiup-cluster /home/tidb/.tiup/components/cluster/v1.9.5/tiup-cluster reload tidb-dev -R pd
Will reload the cluster tidb-dev with restart policy is true, nodes: , roles: pd.
Do you want to continue? [y/N]:(default=N) y
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/tidb-dev/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/tidb-dev/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.154
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.154
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.154
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.154
+ [ Serial ] - UpdateTopology: cluster=tidb-dev
+ Refresh instance configs
  - Generate config pd -> 172.16.201.153:2379 ... Done
  - Generate config pd -> 172.16.201.154:2379 ... Done
  - Generate config pd -> 172.16.201.155:2379 ... Done
  - Generate config tikv -> 172.16.201.155:20160 ... Done
  - Generate config tikv -> 172.16.201.153:20160 ... Done
  - Generate config tikv -> 172.16.201.154:20160 ... Done
  - Generate config pump -> 172.16.201.153:8250 ... Done
  - Generate config pump -> 172.16.201.155:8250 ... Done
  - Generate config pump -> 172.16.201.154:8250 ... Done
  - Generate config tidb -> 172.16.201.155:4000 ... Done
  - Generate config tidb -> 172.16.201.153:4000 ... Done
  - Generate config tidb -> 172.16.201.154:4000 ... Done
  - Generate config tiflash -> 172.16.201.155:9000 ... Done
  - Generate config tiflash -> 172.16.201.153:9000 ... Done
  - Generate config drainer -> 172.16.201.153:8249 ... Done
  - Generate config cdc -> 172.16.201.153:8300 ... Done
  - Generate config cdc -> 172.16.201.155:8300 ... Done
  - Generate config prometheus -> 172.16.201.153:9090 ... Done
  - Generate config grafana -> 172.16.201.153:3000 ... Done
  - Generate config alertmanager -> 172.16.201.153:9093 ... Done
+ Refresh monitor configs
  - Generate config node_exporter -> 172.16.201.153 ... Done
  - Generate config node_exporter -> 172.16.201.154 ... Done
  - Generate config node_exporter -> 172.16.201.155 ... Done
  - Generate config blackbox_exporter -> 172.16.201.153 ... Done
  - Generate config blackbox_exporter -> 172.16.201.154 ... Done
  - Generate config blackbox_exporter -> 172.16.201.155 ... Done
+ [ Serial ] - Upgrade Cluster
Upgrading component pd
        Restarting instance 172.16.201.153:2379
        Restart instance 172.16.201.153:2379 success
        Restarting instance 172.16.201.155:2379
        Restart instance 172.16.201.155:2379 success
        Restarting instance 172.16.201.154:2379
        Restart instance 172.16.201.154:2379 success
Stopping component node_exporter
        Stopping instance 172.16.201.155
        Stopping instance 172.16.201.153
        Stopping instance 172.16.201.154
        Stop 172.16.201.153 success
        Stop 172.16.201.154 success
        Stop 172.16.201.155 success
Stopping component blackbox_exporter
        Stopping instance 172.16.201.155
        Stopping instance 172.16.201.153
        Stopping instance 172.16.201.154
        Stop 172.16.201.153 success
        Stop 172.16.201.154 success
        Stop 172.16.201.155 success
Starting component node_exporter
        Starting instance 172.16.201.155
        Starting instance 172.16.201.153
        Starting instance 172.16.201.154
        Start 172.16.201.153 success
        Start 172.16.201.154 success
        Start 172.16.201.155 success
Starting component blackbox_exporter
        Starting instance 172.16.201.155
        Starting instance 172.16.201.153
        Starting instance 172.16.201.154
        Start 172.16.201.153 success
        Start 172.16.201.154 success
        Start 172.16.201.155 success
Reloaded cluster `tidb-dev` successfully
```

10. Confirm that PD is up

```
[tidb@vm172-16-201-64 tidb-dev]$ tiup cluster display tidb-dev
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.9.5/tiup-cluster /home/tidb/.tiup/components/cluster/v1.9.5/tiup-cluster display tidb-dev
Cluster type:       tidb
Cluster name:       tidb-dev
Cluster version:    v5.4.1
Deploy user:        tidb
SSH type:           builtin
Dashboard URL:      http://172.16.201.154:2379/dashboard
ID                    Role          Host            Ports                            OS/Arch       Status  Data Dir                            Deploy Dir
--                    ----          ----            -----                            -------       ------  --------                            ----------
172.16.201.153:9093   alertmanager  172.16.201.153  9093/9094                        linux/x86_64  Down    /data1/tidb-data/alertmanager-9093  /data1/tidb-deploy/alertmanager-9093
172.16.201.153:8300   cdc           172.16.201.153  8300                             linux/x86_64  Down    /data1/tidb-data/cdc-8300           /data1/tidb-deploy/cdc-8300
172.16.201.155:8300   cdc           172.16.201.155  8300                             linux/x86_64  Down    /data1/tidb-data/cdc-8300           /data1/tidb-deploy/cdc-8300
172.16.201.153:8249   drainer       172.16.201.153  8249                             linux/x86_64  Down    /data1/binlog                       /data1/tidb-deploy/drainer-8249
172.16.201.153:3000   grafana       172.16.201.153  3000                             linux/x86_64  Down    -                                   /data1/tidb-deploy/grafana-3000
172.16.201.153:2379   pd            172.16.201.153  2379/2380                        linux/x86_64  Up|L    /data1/tidb-data/pd-2379            /data1/tidb-deploy/pd-2379
172.16.201.154:2379   pd            172.16.201.154  2379/2380                        linux/x86_64  Up|UI   /data1/tidb-data/pd-2379            /data1/tidb-deploy/pd-2379
172.16.201.155:2379   pd            172.16.201.155  2379/2380                        linux/x86_64  Up      /data1/tidb-data/pd-2379            /data1/tidb-deploy/pd-2379
172.16.201.153:9090   prometheus    172.16.201.153  9090/12020                       linux/x86_64  Down    /data1/tidb-data/prometheus-9090    /data1/tidb-deploy/prometheus-9090
172.16.201.153:8250   pump          172.16.201.153  8250                             linux/x86_64  Down    /data1/tidb-data/pump-8250          /data1/tidb-deploy/pump-8250
172.16.201.154:8250   pump          172.16.201.154  8250                             linux/x86_64  Down    /data1/tidb-data/pump-8250          /data1/tidb-deploy/pump-8250
172.16.201.155:8250   pump          172.16.201.155  8250                             linux/x86_64  Down    /data1/tidb-data/pump-8250          /data1/tidb-deploy/pump-8250
172.16.201.153:4000   tidb          172.16.201.153  4000/10080                       linux/x86_64  Down    -                                   /data1/tidb-deploy/tidb-4000
172.16.201.154:4000   tidb          172.16.201.154  4000/10080                       linux/x86_64  Down    -                                   /data1/tidb-deploy/tidb-4000
172.16.201.155:4000   tidb          172.16.201.155  4000/10080                       linux/x86_64  Down    -                                   /data1/tidb-deploy/tidb-4000
172.16.201.153:9000   tiflash       172.16.201.153  9000/8123/3930/20170/20292/8234  linux/x86_64  Down    /data1/tidb-data/tiflash-9000       /data1/tidb-deploy/tiflash-9000
172.16.201.155:9000   tiflash       172.16.201.155  9000/8123/3930/20170/20292/8234  linux/x86_64  Down    /data1/tidb-data/tiflash-9000       /data1/tidb-deploy/tiflash-9000
172.16.201.153:20160  tikv          172.16.201.153  20160/20180                      linux/x86_64  Down    /data1/tidb-data/tikv-20160         /data1/tidb-deploy/tikv-20160
172.16.201.154:20160  tikv          172.16.201.154  20160/20180                      linux/x86_64  Down    /data1/tidb-data/tikv-20160         /data1/tidb-deploy/tikv-20160
172.16.201.155:20160  tikv          172.16.201.155  20160/20180                      linux/x86_64  Down    /data1/tidb-data/tikv-20160         /data1/tidb-deploy/tikv-20160
Total nodes: 20
```

11. Recover the PD cluster with pd-recover

Note: point pd-recover at one of the new PD IPs, and use a pd-recover version that matches the cluster version (v5.4.1 here). The -cluster-id is the value found in step 6, and the -alloc-id must be safely larger than the largest allocated ID found in step 7 (5000 here, so 20000 is used).

```
[tidb@vm172-16-201-64 tidb-dev]$ tiup pd-recover -endpoints http://172.16.201.153:2379 -cluster-id 7158355689478888528 -alloc-id 20000
tiup is checking updates for component pd-recover ...
Starting component `pd-recover`: /home/tidb/.tiup/components/pd-recover/v5.4.1/pd-recover /home/tidb/.tiup/components/pd-recover/v5.4.1/pd-recover -endpoints http://172.16.201.153:2379 -cluster-id 7158355689478888528 -alloc-id 20000
recover success! please restart the PD cluster
[tidb@vm172-16-201-64 tidb-dev]$
```

12. Reload the new cluster configuration

```
[tidb@vm172-16-201-64 tidb-dev]$ tiup cluster reload tidb-dev --skip-restart
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.9.5/tiup-cluster /home/tidb/.tiup/components/cluster/v1.9.5/tiup-cluster reload tidb-dev --skip-restart
Will reload the cluster tidb-dev with restart policy is false, nodes: , roles: .
Do you want to continue? [y/N]:(default=N) y
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/tidb-dev/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/tidb-dev/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.154
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.154
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.154
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.154
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ Refresh instance configs
  - Generate config pd -> 172.16.201.153:2379 ... Done
  - Generate config pd -> 172.16.201.154:2379 ... Done
  - Generate config pd -> 172.16.201.155:2379 ... Done
  - Generate config tikv -> 172.16.201.155:20160 ... Done
  - Generate config tikv -> 172.16.201.153:20160 ... Done
  - Generate config tikv -> 172.16.201.154:20160 ... Done
  - Generate config pump -> 172.16.201.153:8250 ... Done
  - Generate config pump -> 172.16.201.155:8250 ... Done
  - Generate config pump -> 172.16.201.154:8250 ... Done
  - Generate config tidb -> 172.16.201.155:4000 ... Done
  - Generate config tidb -> 172.16.201.153:4000 ... Done
  - Generate config tidb -> 172.16.201.154:4000 ... Done
  - Generate config tiflash -> 172.16.201.155:9000 ... Done
  - Generate config tiflash -> 172.16.201.153:9000 ... Done
  - Generate config drainer -> 172.16.201.153:8249 ... Done
  - Generate config cdc -> 172.16.201.153:8300 ... Done
  - Generate config cdc -> 172.16.201.155:8300 ... Done
  - Generate config prometheus -> 172.16.201.153:9090 ... Done
  - Generate config grafana -> 172.16.201.153:3000 ... Done
  - Generate config alertmanager -> 172.16.201.153:9093 ... Done
+ Refresh monitor configs
  - Generate config node_exporter -> 172.16.201.153 ... Done
  - Generate config node_exporter -> 172.16.201.154 ... Done
  - Generate config node_exporter -> 172.16.201.155 ... Done
  - Generate config blackbox_exporter -> 172.16.201.153 ... Done
  - Generate config blackbox_exporter -> 172.16.201.154 ... Done
  - Generate config blackbox_exporter -> 172.16.201.155 ... Done
Reloaded cluster `tidb-dev` successfully
```

13. Restart the cluster

```
[tidb@vm172-16-201-64 tidb-dev]$ tiup cluster restart tidb-dev
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.9.5/tiup-cluster /home/tidb/.tiup/components/cluster/v1.9.5/tiup-cluster restart tidb-dev
Will restart the cluster tidb-dev with nodes:  roles: .
Cluster will be unavailable
Do you want to continue? [y/N]:(default=N) y
+ [ Serial ] - SSHKeySet: privateKey=/home/tidb/.tiup/storage/cluster/clusters/tidb-dev/ssh/id_rsa, publicKey=/home/tidb/.tiup/storage/cluster/clusters/tidb-dev/ssh/id_rsa.pub
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.154
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.154
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.154
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.154
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.155
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [Parallel] - UserSSH: user=tidb, host=172.16.201.153
+ [ Serial ] - RestartCluster
Stopping component alertmanager
        Stopping instance 172.16.201.153
        Stop alertmanager 172.16.201.153:9093 success
Stopping component grafana
        Stopping instance 172.16.201.153
        Stop grafana 172.16.201.153:3000 success
Stopping component prometheus
        Stopping instance 172.16.201.153
        Stop prometheus 172.16.201.153:9090 success
Stopping component cdc
        Stopping instance 172.16.201.155
        Stopping instance 172.16.201.153
        Stop cdc 172.16.201.153:8300 success
        Stop cdc 172.16.201.155:8300 success
Stopping component drainer
        Stopping instance 172.16.201.153
        Stop drainer 172.16.201.153:8249 success
Stopping component tiflash
        Stopping instance 172.16.201.153
        Stopping instance 172.16.201.155
        Stop tiflash 172.16.201.153:9000 success
        Stop tiflash 172.16.201.155:9000 success
Stopping component tidb
        Stopping instance 172.16.201.154
        Stopping instance 172.16.201.153
        Stopping instance 172.16.201.155
        Stop tidb 172.16.201.153:4000 success
        Stop tidb 172.16.201.154:4000 success
        Stop tidb 172.16.201.155:4000 success
Stopping component pump
        Stopping instance 172.16.201.154
        Stopping instance 172.16.201.153
        Stopping instance 172.16.201.155
        Stop pump 172.16.201.153:8250 success
        Stop pump 172.16.201.154:8250 success
        Stop pump 172.16.201.155:8250 success
Stopping component tikv
        Stopping instance 172.16.201.154
        Stopping instance 172.16.201.155
        Stopping instance 172.16.201.153
        Stop tikv 172.16.201.153:20160 success
        Stop tikv 172.16.201.154:20160 success
        Stop tikv 172.16.201.155:20160 success
Stopping component pd
        Stopping instance 172.16.201.155
        Stopping instance 172.16.201.154
        Stopping instance 172.16.201.153
        Stop pd 172.16.201.154:2379 success
        Stop pd 172.16.201.153:2379 success
        Stop pd 172.16.201.155:2379 success
Stopping component node_exporter
        Stopping instance 172.16.201.155
        Stopping instance 172.16.201.153
        Stopping instance 172.16.201.154
        Stop 172.16.201.153 success
        Stop 172.16.201.154 success
        Stop 172.16.201.155 success
Stopping component blackbox_exporter
        Stopping instance 172.16.201.155
        Stopping instance 172.16.201.154
        Stopping instance 172.16.201.153
        Stop 172.16.201.153 success
        Stop 172.16.201.154 success
        Stop 172.16.201.155 success
Starting component pd
        Starting instance 172.16.201.155:2379
        Starting instance 172.16.201.153:2379
        Starting instance 172.16.201.154:2379
        Start instance 172.16.201.153:2379 success
        Start instance 172.16.201.154:2379 success
        Start instance 172.16.201.155:2379 success
Starting component tikv
        Starting instance 172.16.201.154:20160
        Starting instance 172.16.201.155:20160
        Starting instance 172.16.201.153:20160
        Start instance 172.16.201.153:20160 success
        Start instance 172.16.201.154:20160 success
        Start instance 172.16.201.155:20160 success
Starting component pump
        Starting instance 172.16.201.154:8250
        Starting instance 172.16.201.153:8250
        Starting instance 172.16.201.155:8250
        Start instance 172.16.201.153:8250 success
        Start instance 172.16.201.154:8250 success
        Start instance 172.16.201.155:8250 success
Starting component tidb
        Starting instance 172.16.201.154:4000
        Starting instance 172.16.201.155:4000
        Starting instance 172.16.201.153:4000
        Start instance 172.16.201.153:4000 success
        Start instance 172.16.201.154:4000 success
        Start instance 172.16.201.155:4000 success
Starting component tiflash
        Starting instance 172.16.201.155:9000
        Starting instance 172.16.201.153:9000
        Start instance 172.16.201.153:9000 success
        Start instance 172.16.201.155:9000 success
Starting component drainer
        Starting instance 172.16.201.153:8249
        Start instance 172.16.201.153:8249 success
Starting component cdc
        Starting instance 172.16.201.155:8300
        Starting instance 172.16.201.153:8300
        Start instance 172.16.201.153:8300 success
        Start instance 172.16.201.155:8300 success
Starting component prometheus
        Starting instance 172.16.201.153:9090
        Start instance 172.16.201.153:9090 success
Starting component grafana
        Starting instance 172.16.201.153:3000
        Start instance 172.16.201.153:3000 success
Starting component alertmanager
        Starting instance 172.16.201.153:9093
        Start instance 172.16.201.153:9093 success
Starting component node_exporter
        Starting instance 172.16.201.154
        Starting instance 172.16.201.155
        Starting instance 172.16.201.153
        Start 172.16.201.153 success
        Start 172.16.201.154 success
        Start 172.16.201.155 success
Starting component blackbox_exporter
        Starting
```
instance 172.16.201.154 Starting instance 172.16.201.155 Starting instance 172.16.201.153 Start 172.16.201.153 success Start 172.16.201.154 success Start 172.16.201.155 success Restarted cluster `tidb-dev` successfully

14. Confirm the cluster status

[tidb@vm172-16-201-64 tidb-dev]$ tiup cluster display tidb-dev
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.9.5/tiup-cluster /home/tidb/.tiup/components/cluster/v1.9.5/tiup-cluster display tidb-dev
Cluster type:    tidb
Cluster name:    tidb-dev
Cluster version: v5.4.1
Deploy user:     tidb
SSH type:        builtin
Dashboard URL:   http://172.16.201.154:2379/dashboard
ID                    Role          Host            Ports                            OS/Arch       Status  Data Dir                            Deploy Dir
--                    ----          ----            -----                            -------       ------  --------                            ----------
172.16.201.153:9093   alertmanager  172.16.201.153  9093/9094                        linux/x86_64  Up      /data1/tidb-data/alertmanager-9093  /data1/tidb-deploy/alertmanager-9093
172.16.201.153:8300   cdc           172.16.201.153  8300                             linux/x86_64  Up      /data1/tidb-data/cdc-8300           /data1/tidb-deploy/cdc-8300
172.16.201.155:8300   cdc           172.16.201.155  8300                             linux/x86_64  Up      /data1/tidb-data/cdc-8300           /data1/tidb-deploy/cdc-8300
172.16.201.153:8249   drainer       172.16.201.153  8249                             linux/x86_64  Up      /data1/binlog                       /data1/tidb-deploy/drainer-8249
172.16.201.153:3000   grafana       172.16.201.153  3000                             linux/x86_64  Up      -                                   /data1/tidb-deploy/grafana-3000
172.16.201.153:2379   pd            172.16.201.153  2379/2380                        linux/x86_64  Up      /data1/tidb-data/pd-2379            /data1/tidb-deploy/pd-2379
172.16.201.154:2379   pd            172.16.201.154  2379/2380                        linux/x86_64  Up|UI   /data1/tidb-data/pd-2379            /data1/tidb-deploy/pd-2379
172.16.201.155:2379   pd            172.16.201.155  2379/2380                        linux/x86_64  Up|L    /data1/tidb-data/pd-2379            /data1/tidb-deploy/pd-2379
172.16.201.153:9090   prometheus    172.16.201.153  9090/12020                       linux/x86_64  Down    /data1/tidb-data/prometheus-9090    /data1/tidb-deploy/prometheus-9090
172.16.201.153:8250   pump          172.16.201.153  8250                             linux/x86_64  Up      /data1/tidb-data/pump-8250          /data1/tidb-deploy/pump-8250
172.16.201.154:8250   pump          172.16.201.154  8250                             linux/x86_64  Up      /data1/tidb-data/pump-8250          /data1/tidb-deploy/pump-8250
172.16.201.155:8250   pump          172.16.201.155  8250                             linux/x86_64  Up      /data1/tidb-data/pump-8250          /data1/tidb-deploy/pump-8250
172.16.201.153:4000   tidb          172.16.201.153  4000/10080                       linux/x86_64  Up      -                                   /data1/tidb-deploy/tidb-4000
172.16.201.154:4000   tidb          172.16.201.154  4000/10080                       linux/x86_64  Up      -                                   /data1/tidb-deploy/tidb-4000
172.16.201.155:4000   tidb          172.16.201.155  4000/10080                       linux/x86_64  Up      -                                   /data1/tidb-deploy/tidb-4000
172.16.201.153:9000   tiflash       172.16.201.153  9000/8123/3930/20170/20292/8234  linux/x86_64  Up      /data1/tidb-data/tiflash-9000       /data1/tidb-deploy/tiflash-9000
172.16.201.155:9000   tiflash       172.16.201.155  9000/8123/3930/20170/20292/8234  linux/x86_64  Up      /data1/tidb-data/tiflash-9000       /data1/tidb-deploy/tiflash-9000
172.16.201.153:20160  tikv          172.16.201.153  20160/20180                      linux/x86_64  Up      /data1/tidb-data/tikv-20160         /data1/tidb-deploy/tikv-20160
172.16.201.154:20160  tikv          172.16.201.154  20160/20180                      linux/x86_64  Up      /data1/tidb-data/tikv-20160         /data1/tidb-deploy/tikv-20160
172.16.201.155:20160  tikv          172.16.201.155  20160/20180                      linux/x86_64  Up      /data1/tidb-data/tikv-20160         /data1/tidb-deploy/tikv-20160
Total nodes: 20
[tidb@vm172-16-201-64 tidb-dev]$
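Note that in the display output above, Prometheus on 172.16.201.153 reports `Down` even though every other component came back `Up`. A minimal sketch for spotting any non-`Up` instance and then restarting only the affected node (the awk column index assumes the default `tiup cluster display` layout shown above):

```shell
# Print the ID and Status of every instance whose Status column
# (field 6 in the default display layout) is not Up.
tiup cluster display tidb-dev \
  | awk '$6 ~ /Down|Offline|Tombstone/ {print $1, $6}'

# Restart just the affected instance instead of the whole cluster:
tiup cluster restart tidb-dev -N 172.16.201.153:9090
```

Alternatively, `tiup cluster restart tidb-dev -R prometheus` restarts every instance of that role at once.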

15. Check the cluster

Verify monitoring (Grafana/Prometheus), the TiDB Dashboard, and data queries against the new addresses.
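Beyond eyeballing Grafana and the Dashboard, a quick scripted reachability pass over the relocated endpoints can catch a forgotten firewall rule or a stale address. This is a minimal sketch, with hosts and ports taken from the display output above; `check_port` is a hypothetical helper, and the endpoint list should be adjusted to your topology:

```shell
# Return 0 if a TCP connection to host $1, port $2 succeeds within 2s.
check_port() {
  timeout 2 bash -c "</dev/tcp/$1/$2" 2>/dev/null
}

# TiDB SQL ports, the PD endpoint serving the Dashboard, and Grafana.
for ep in 172.16.201.153:4000 172.16.201.154:4000 172.16.201.155:4000 \
          172.16.201.154:2379 172.16.201.153:3000; do
  host=${ep%:*}
  port=${ep#*:}
  if check_port "$host" "$port"; then
    echo "$ep open"
  else
    echo "$ep CLOSED"
  fi
done
```

For an end-to-end check, a sanity query through any of the new TiDB endpoints (e.g. `mysql -h 172.16.201.153 -P 4000 -u root -p -e "SELECT TIDB_VERSION();"`) confirms SQL access as well.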

