Data Center Migration: TiDB Cluster IP Change Plan

Community contribution · 2024-04-09



1. Check the current cluster status

[tidb@vm172-16-201-64 ~]$ tiup cluster display tidb-dev
tiup is checking updates for component cluster ...
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.9.5/tiup-cluster /home/tidb/.tiup/components/cluster/v1.9.5/tiup-cluster display tidb-dev
Cluster type:       tidb
Cluster name:       tidb-dev
Cluster version:    v5.4.1
Deploy user:        tidb
SSH type:           builtin
Dashboard URL:      http://172.16.201.151:2379/dashboard
ID                    Role          Host            Ports                            OS/Arch       Status  Data Dir                            Deploy Dir
--                    ----          ----            -----                            -------       ------  --------                            ----------
172.16.201.150:9093   alertmanager  172.16.201.150  9093/9094                        linux/x86_64  Up      /data1/tidb-data/alertmanager-9093  /data1/tidb-deploy/alertmanager-9093
172.16.201.150:8300   cdc           172.16.201.150  8300                             linux/x86_64  Up      /data1/tidb-data/cdc-8300           /data1/tidb-deploy/cdc-8300
172.16.201.152:8300   cdc           172.16.201.152  8300                             linux/x86_64  Up      /data1/tidb-data/cdc-8300           /data1/tidb-deploy/cdc-8300
172.16.201.150:8249   drainer       172.16.201.150  8249                             linux/x86_64  Up      /data1/binlog                       /data1/tidb-deploy/drainer-8249
172.16.201.150:3000   grafana       172.16.201.150  3000                             linux/x86_64  Up      -                                   /data1/tidb-deploy/grafana-3000
172.16.201.150:2379   pd            172.16.201.150  2379/2380                        linux/x86_64  Up      /data1/tidb-data/pd-2379            /data1/tidb-deploy/pd-2379
172.16.201.151:2379   pd            172.16.201.151  2379/2380                        linux/x86_64  Up|UI   /data1/tidb-data/pd-2379            /data1/tidb-deploy/pd-2379
172.16.201.152:2379   pd            172.16.201.152  2379/2380                        linux/x86_64  Up|L    /data1/tidb-data/pd-2379            /data1/tidb-deploy/pd-2379
172.16.201.150:9090   prometheus    172.16.201.150  9090/12020                       linux/x86_64  Up      /data1/tidb-data/prometheus-9090    /data1/tidb-deploy/prometheus-9090
172.16.201.150:8250   pump          172.16.201.150  8250                             linux/x86_64  Up      /data1/tidb-data/pump-8250          /data1/tidb-deploy/pump-8250
172.16.201.151:8250   pump          172.16.201.151  8250                             linux/x86_64  Up      /data1/tidb-data/pump-8250          /data1/tidb-deploy/pump-8250
172.16.201.152:8250   pump          172.16.201.152  8250                             linux/x86_64  Up      /data1/tidb-data/pump-8250          /data1/tidb-deploy/pump-8250
172.16.201.150:4000   tidb          172.16.201.150  4000/10080                       linux/x86_64  Up      -                                   /data1/tidb-deploy/tidb-4000
172.16.201.151:4000   tidb          172.16.201.151  4000/10080                       linux/x86_64  Up      -                                   /data1/tidb-deploy/tidb-4000
172.16.201.152:4000   tidb          172.16.201.152  4000/10080                       linux/x86_64  Up      -                                   /data1/tidb-deploy/tidb-4000
172.16.201.150:9000   tiflash       172.16.201.150  9000/8123/3930/20170/20292/8234  linux/x86_64  Up      /data1/tidb-data/tiflash-9000       /data1/tidb-deploy/tiflash-9000
172.16.201.152:9000   tiflash       172.16.201.152  9000/8123/3930/20170/20292/8234  linux/x86_64  Up      /data1/tidb-data/tiflash-9000       /data1/tidb-deploy/tiflash-9000
172.16.201.150:20160  tikv          172.16.201.150  20160/20180                      linux/x86_64  Up      /data1/tidb-data/tikv-20160         /data1/tidb-deploy/tikv-20160
172.16.201.151:20160  tikv          172.16.201.151  20160/20180                      linux/x86_64  Up      /data1/tidb-data/tikv-20160         /data1/tidb-deploy/tikv-20160
172.16.201.152:20160  tikv          172.16.201.152  20160/20180                      linux/x86_64  Up      /data1/tidb-data/tikv-20160         /data1/tidb-deploy/tikv-20160
Total nodes: 20
[tidb@vm172-16-201-64 ~]$

2. IP mapping

IP before change    IP after change     Note
172.16.201.150      172.16.201.153      Control machine
172.16.201.151      172.16.201.154
172.16.201.152      172.16.201.155

3. Stop the cluster

tiup cluster stop tidb-dev
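
Before powering the hosts off, it can be worth re-running display to confirm that every component has actually stopped (no component should still report Up); this check is an addition to the original steps:

tiup cluster display tidb-dev
# All components should now show a stopped/down status before the machines are shut down.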

4. Relocate the machines to the new data center and change their IP addresses
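
How the OS-level IP change is done depends on the distribution. As one hedged example, on a host managed by NetworkManager it could look like the following; the connection name eth0 and the /24 prefix are placeholders, not taken from this environment:

# Example for the host that moves from 172.16.201.150 to 172.16.201.153.
nmcli connection modify eth0 ipv4.method manual ipv4.addresses 172.16.201.153/24
nmcli connection up eth0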

5. Modify the meta.yaml file and replace the corresponding IPs

Back up the original .tiup directory first, so a mistake during editing can be rolled back.
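
A minimal sketch of the backup and the batch replacement, assuming the tidb user's home directory and the usual tiup metadata path .tiup/storage/cluster/clusters/tidb-dev/meta.yaml (adjust paths to your environment):

# Back up the whole .tiup directory before editing anything (the backup name is an assumption).
cp -a /home/tidb/.tiup /home/tidb/.tiup.bak
# Replace the old IPs with the new ones in meta.yaml, following the mapping from step 2.
cd /home/tidb/.tiup/storage/cluster/clusters/tidb-dev
sed -i -e 's/172\.16\.201\.150/172.16.201.153/g' \
       -e 's/172\.16\.201\.151/172.16.201.154/g' \
       -e 's/172\.16\.201\.152/172.16.201.155/g' meta.yaml

The edited file, with the new addresses in place, is shown below: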

[tidb@vm172-16-201-64 tidb-dev]$ more meta.yaml
user: tidb
tidb_version: v5.4.1
topology:
  global:
    user: tidb
    ssh_port: 22
    ssh_type: builtin
    deploy_dir: /data1/tidb-deploy
    data_dir: /data1/tidb-data
    os: linux
    arch: amd64
  monitored:
    node_exporter_port: 9100
    blackbox_exporter_port: 9115
    deploy_dir: /data1/tidb-deploy/monitor-9100
    data_dir: /data1/tidb-data/monitor-9100
    log_dir: /data1/tidb-deploy/monitor-9100/log
  server_configs:
    tidb:
      binlog.enable: false
      binlog.ignore-error: true
      log.level: error
      new_collations_enabled_on_first_bootstrap: true
      performance.txn-total-size-limit: 2147483648
      pessimistic-txn.max-retry-count: 0
      prepared-plan-cache.enabled: true
    tikv:
      server.snap-max-write-bytes-per-sec: 200MB
    pd: {}
    tidb_dashboard: {}
    tiflash: {}
    tiflash-learner: {}
    pump: {}
    drainer: {}
    cdc:
      debug.enable-db-sorter: true
      per-table-memory-quota: 1073741824
      sorter.chunk-size-limit: 268435456
      sorter.max-memory-consumption: 30
      sorter.max-memory-percentage: 70
      sorter.num-workerpool-goroutine: 30
    kvcdc: {}
    grafana: {}
  tidb_servers:
  - host: 172.16.201.155
    ssh_port: 22
    port: 4000
    status_port: 10080
    deploy_dir: /data1/tidb-deploy/tidb-4000
    log_dir: /data1/tidb-deploy/tidb-4000/log
    arch: amd64
    os: linux
  - host: 172.16.201.153
    ssh_port: 22
    port: 4000
    status_port: 10080
    deploy_dir: /data1/tidb-deploy/tidb-4000
    log_dir: /data1/tidb-deploy/tidb-4000/log
    arch: amd64
    os: linux
  - host: 172.16.201.154
    ssh_port: 22
    port: 4000
    status_port: 10080
    deploy_dir: /data1/tidb-deploy/tidb-4000
    log_dir: /data1/tidb-deploy/tidb-4000/log
    arch: amd64
    os: linux
  tikv_servers:
  - host: 172.16.201.155
    ssh_port: 22
    port: 20160
    status_port: 20180
    deploy_dir: /data1/tidb-deploy/tikv-20160
    data_dir: /data1/tidb-data/tikv-20160
    log_dir: /data1/tidb-deploy/tikv-20160/log
    arch: amd64
    os: linux
  - host: 172.16.201.153
    ssh_port: 22
    port: 20160
    status_port: 20180
    deploy_dir: /data1/tidb-deploy/tikv-20160
    data_dir: /data1/tidb-data/tikv-20160
    log_dir: /data1/tidb-deploy/tikv-20160/log
    arch: amd64
    os: linux
  - host: 172.16.201.154
    ssh_port: 22
    port: 20160
    status_port: 20180
    deploy_dir: /data1/tidb-deploy/tikv-20160
    data_dir: /data1/tidb-data/tikv-20160
    log_dir: /data1/tidb-deploy/tikv-20160/log
    arch: amd64
    os: linux
  tiflash_servers:
  - host: 172.16.201.155
    ssh_port: 22
    tcp_port: 9000
    http_port: 8123
    flash_service_port: 3930
    flash_proxy_port: 20170
    flash_proxy_status_port: 20292
    metrics_port: 8234
    deploy_dir: /data1/tidb-deploy/tiflash-9000
    data_dir: /data1/tidb-data/tiflash-9000
    log_dir: /data1/tidb-deploy/tiflash-9000/log
    arch: amd64
    os: linux
  - host: 172.16.201.153
    ssh_port: 22
    tcp_port: 9000
    http_port: 8123
    flash_service_port: 3930
    flash_proxy_port: 20170
    flash_proxy_status_port: 20292
    metrics_port: 8234
    deploy_dir: /data1/tidb-deploy/tiflash-9000
    data_dir: /data1/tidb-data/tiflash-9000
    log_dir: /data1/tidb-deploy/tiflash-9000/log
    arch: amd64
    os: linux
  pd_servers:
  - host: 172.16.201.153
    ssh_port: 22
    name: pd-172.16.201.153-2379
    client_port: 2379
    peer_port: 2380
    deploy_dir: /data1/tidb-deploy/pd-2379
    data_dir: /data1/tidb-data/pd-2379
    log_dir: /data1/tidb-deploy/pd-2379/log
    arch: amd64
    os: linux
  - host: 172.16.201.154
    ssh_port: 22
    name: pd-172.16.201.154-2379
    client_port: 2379
    peer_port: 2380
    deploy_dir: /data1/tidb-deploy/pd-2379
    data_dir: /data1/tidb-data/pd-2379
    log_dir: /data1/tidb-deploy/pd-2379/log
    arch: amd64
    os: linux
  - host: 172.16.201.155
    ssh_port: 22
    name: pd-172.16.201.155-2379
    client_port: 2379
    peer_port: 2380
    deploy_dir: /data1/tidb-deploy/pd-2379
    data_dir: /data1/tidb-data/pd-2379
    log_dir: /data1/tidb-deploy/pd-2379/log
    arch: amd64
    os: linux
  pump_servers:
  - host: 172.16.201.153
    ssh_port: 22
    port: 8250
    deploy_dir: /data1/tidb-deploy/pump-8250
    data_dir: /data1/tidb-data/pump-8250
    log_dir: /data1/tidb-deploy/pump-8250/log
    arch: amd64
    os: linux
  - host: 172.16.201.155
    ssh_port: 22
    port: 8250
    deploy_dir: /data1/tidb-deploy/pump-8250
    data_dir: /data1/tidb-data/pump-8250
    log_dir: /data1/tidb-deploy/pump-8250/log
    arch: amd64
    os: linux
  - host: 172.16.201.154
    ssh_port: 22
    port: 8250
    deploy_dir: /data1/tidb-deploy/pump-8250
    data_dir: /data1/tidb-data/pump-8250
    log_dir: /data1/tidb-deploy/pump-8250/log
    arch: amd64
    os: linux
  drainer_servers:
  - host: 172.16.201.153
    ssh_port: 22
    port: 8249
    deploy_dir: /data1/tidb-deploy/drainer-8249
    data_dir: /data1/binlog
    log_dir: /data1/tidb-deploy/drainer-8249/log
    config:
      syncer.db-type: file
    arch: amd64
    os: linux
  cdc_servers:
  - host: 172.16.201.153
    ssh_port: 22
    port: 8300
    deploy_dir: /data1/tidb-deploy/cdc-8300
    data_dir: /data1/tidb-data/cdc-8300
    log_dir: /data1/tidb-deploy/cdc-8300/log
    ticdc_cluster_id: ""
    arch: amd64
    os: linux
  - host: 172.16.201.155
    ssh_port: 22
    port: 8300
    deploy_dir: /data1/tidb-deploy/cdc-8300
    data_dir: /data1/tidb-data/cdc-8300
    log_dir: /data1/tidb-deploy/cdc-8300/log
    ticdc_cluster_id: ""
    arch: amd64
    os: linux
  monitoring_servers:
  - host: 172.16.201.153
    ssh_port: 22
    port: 9090
    ng_port: 12020
    deploy_dir: /data1/tidb-deploy/prometheus-9090
    data_dir: /data1/tidb-data/prometheus-9090
    log_dir: /data1/tidb-deploy/prometheus-9090/log
    external_alertmanagers: []
    arch: amd64
    os: linux
  grafana_servers:
  - host: 172.16.201.153
    ssh_port: 22
    port: 3000
    deploy_dir: /data1/tidb-deploy/grafana-3000
    arch: amd64
    os: linux
    username: admin
    password: admin
    anonymous_enable: false
    root_url: ""
    domain: ""
  alertmanager_servers:
  - host: 172.16.201.153
    ssh_port: 22
    web_port: 9093
    cluster_port: 9094
    deploy_dir: /data1/tidb-deploy/alertmanager-9093
    data_dir: /data1/tidb-data/alertmanager-9093
    log_dir: /data1/tidb-deploy/alertmanager-9093/log
    arch: amd64
    os: linux
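
An optional sanity check, assuming the same metadata path as above: confirm that none of the old addresses remain anywhere under the cluster's metadata directory.

# No output means every old IP has been replaced.
grep -rn '172\.16\.201\.15[012]' /home/tidb/.tiup/storage/cluster/clusters/tidb-dev/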

6. Obtain the Cluster ID

[root@vm172-16-201-64 ~]# cat /data1/tidb-deploy/pd-2379/log/pd.log | grep "init cluster id"
[2022/10/25 15:56:05.470 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158355689478888528]
[2022/10/25 15:58:40.012 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158355689478888528]
[2022/10/25 17:52:28.338 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158385694626677062]
[2022/10/25 17:58:14.750 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158355689478888528]
[2022/10/25 18:00:42.983 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158355689478888528]
[root@vm172-16-201-63 ~]# cat /data1/tidb-deploy/pd-2379/log/pd.log | grep "init cluster id"
[2022/10/25 15:56:46.475 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158355689478888528]
[2022/10/25 15:58:37.002 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158355689478888528]
[2022/10/25 17:52:28.307 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158385694626677062]
[2022/10/25 17:58:14.758 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158355689478888528]
[2022/10/25 18:00:42.981 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158355689478888528]
[root@vm172-16-201-95 ~]# cat /data1/tidb-deploy/pd-2379/log/pd.log | grep "init cluster id"
[2022/10/25 15:56:02.450 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158355689478888528]
[2022/10/25 15:58:36.990 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158355689478888528]
[2022/10/25 17:52:28.213 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158385694626677062]
[2022/10/25 17:58:14.658 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158355689478888528]
[2022/10/25 18:00:42.879 +08:00] [INFO] [server.go:358] ["init cluster id"] [cluster-id=7158355689478888528]
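
The log can contain more than one cluster ID if PD was ever re-initialized; the value to record is the one from the latest "init cluster id" entry, which here is 7158355689478888528 on all three nodes. A one-liner such as the following can pull just that value:

grep "init cluster id" /data1/tidb-deploy/pd-2379/log/pd.log | tail -n 1 | awk -F'cluster-id=' '{print $2}' | tr -d ']'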

7. Obtain the allocated ID

[root@vm172-16-201-95 ~]# cat /data1/tidb-deploy/pd-2379/log/pd.log | grep "idAllocator allocates a new id" | awk -F= '{print $2}' | awk -F] '{print $1}' | sort -r | head -n 1
4000
[root@vm172-16-201-95 ~]#
[root@vm172-16-201-63 ~]# cat /data1/tidb-deploy/pd-2379/log/pd.log | grep "idAllocator allocates a new id" | awk -F= '{print $2}' | awk -F] '{print $1}' | sort -r | head -n 1
5000
[root@vm172-16-201-63 ~]#
[root@vm172-16-201-64 ~]# cat /data1/tidb-deploy/pd-2379/log/pd.log | grep "idAllocator allocates a new id" | awk -F= '{print $2}' | awk -F] '{print $1}' | sort -r | head -n 1
3000
[root@vm172-16-201-64 ~]#
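
The largest value across all PD nodes is the one that matters (5000 in this case). A numeric sort is safer than a plain reverse sort once IDs grow past four digits, so a combined check run from the control machine might look like this sketch (the new IP addresses are assumed to be reachable over SSH):

# Collect the largest allocated ID across all three PD nodes.
for h in 172.16.201.153 172.16.201.154 172.16.201.155; do
  ssh tidb@"$h" "grep 'idAllocator allocates a new id' /data1/tidb-deploy/pd-2379/log/pd.log"
done | awk -F'=' '{print $2}' | awk -F']' '{print $1}' | sort -nr | head -n 1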

8. Remove all old PD data directories

[tidb@vm172-16-201-95 tidb-data]$ mv pd-2379/
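
On each PD node this means moving the old data directory out of the way rather than deleting it outright; a sketch, where the .bak target name is an assumption:

# Run on every PD node; keep the old directory as a backup until the cluster is confirmed healthy.
cd /data1/tidb-data
mv pd-2379/ pd-2379.bak/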
