[Beginner's Upgrade Guide] Hands-on Steps for Upgrading TiDB v6.5 to v7.5

Contributed by a community user · 2024-03-29



TiDB 7.5 has been released, with support for running multiple ADD INDEX statements in parallel and compatibility with MySQL 8.0. Time to give it a try, and trying it means upgrading first. What follows is the upgrade process, carried out according to the official documentation.


Upgrade notes: this upgrade was done in a test environment with a single-machine deployment.

OS version: CentOS Linux release 7.8.2003 (Core)

Original TiDB version: 6.5.2

Target version: 7.5.0

First, a look at the official release introduction:

On December 1, the long-awaited TiDB v7.5.0 LTS was released. TiDB 7.5.0 Release Notes

As the second long-term support (LTS) release in the TiDB 7 series, TiDB 7.5 focuses on improving the stability of critical applications at scale. The new version brings continued improvements in scalability and performance, stability and high availability, SQL, and observability. TiDB 7.5 LTS includes the new features, improvements, and bug fixes from the previously released 7.2.0-DMR, 7.3.0-DMR, and 7.4.0-DMR versions, totaling more than 70 optimizations and fixes.

Step 1: Upgrade reference

Upgrade TiDB Using TiUP | PingCAP Documentation Center

Step 2: Check the TiUP version

Check the tiup version.

Check the tiup cluster version.

Make sure the versions of tiup and tiup cluster are no lower than 1.11.3.
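The checks above can be run as follows; if either component is too old, update it first:

```shell
# Print the version of the tiup binary itself
tiup --version
# Print the version of the tiup cluster component
tiup cluster --version

# Update if needed (both must be >= v1.11.3)
tiup update --self
tiup update cluster
```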

Step 3: Edit the TiUP Cluster topology configuration file (optional)

Note: this step can be skipped in the following cases:

The original cluster's configuration parameters were never modified, or were modified through tiup cluster and need no further adjustment.

After the upgrade, you want unmodified configuration items to use the 7.5.0 defaults.

To keep the old parameter settings, or to change the defaults of parameters newly added in 7.5, edit the topology configuration file.
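The topology is edited through tiup itself (the cluster name `mytidb` is the one used later in this post):

```shell
# Opens the cluster topology in an editor; saved changes are picked up
# by the subsequent upgrade
tiup cluster edit-config mytidb
```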

Step 4: Check the health of the current cluster

In the test environment, some OS kernel parameters did not match the recommended settings.

You can first try an automatic fix with --apply.

For example: tiup cluster check mytidb --cluster --apply

tiup will attempt to repair the issues automatically.

Whatever cannot be fixed automatically must be fixed by hand, and some of those operations may require a reboot.
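In command form, first a dry check to see the report, then a second pass with automatic repair:

```shell
# Report problems only
tiup cluster check mytidb --cluster
# Try to fix what can be fixed automatically
tiup cluster check mytidb --cluster --apply
```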

For example:

numactl not usable, bash: numactl: command not found
The numactl tool shows the server's NUMA node configuration and status, and can bind a process to specific CPU cores so that the process runs only on those cores.
FIX: yum -y install numactl.x86_64

THP is enabled, please disable it for best performance
Transparent Huge Pages (THP) is a Linux memory-management mechanism that uses larger memory pages to reduce the overhead of Translation Lookaside Buffer (TLB) lookups on machines with large amounts of memory. Database workloads, however, usually perform poorly with THP, because their memory access patterns tend to be sparse rather than contiguous, so THP should be disabled on the server to get the best performance out of ***; databases such as ***, MariaDB, ***, and VoltDB all require it to be turned off.
FIX: vim /etc/rc.d/rc.local
1. Add:
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then echo never > /sys/kernel/mm/transparent_hugepage/enabled; fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then echo never > /sys/kernel/mm/transparent_hugepage/defrag; fi
2. Make it executable: chmod +x /etc/rc.d/rc.local
3. Reboot: reboot
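After the reboot, both fixes can be verified quickly:

```shell
# Should print the NUMA topology instead of "command not found"
numactl --hardware
# "[never]" should be the selected value
cat /sys/kernel/mm/transparent_hugepage/enabled
```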

Step 5: Check the DDL and backup status of the current cluster

Make sure no DDL statements are running and no backup or restore tasks are in progress.
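These can be confirmed from any SQL client with standard TiDB statements:

```sql
-- Running DDL jobs; the running-job list should be empty
ADMIN SHOW DDL JOBS;
-- BR backup/restore tasks in progress, if any
SHOW BACKUPS;
SHOW RESTORES;
```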

Step 6: Upgrade TiDB

tiup cluster upgrade mytidb v7.5.0

Because of network bandwidth limits, the upgrade failed several times; it is advisable to extend tiup's default file-transfer timeout.
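One way to do that is the `--transfer-timeout` flag of `tiup cluster upgrade` (default 600 seconds); the value below is only an example:

```shell
# Allow up to 30 minutes per file transfer on a slow network
tiup cluster upgrade mytidb v7.5.0 --transfer-timeout 1800
```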

Step 7: Output of a successful upgrade

[root@zabbix_server ~]# tiup cluster upgrade mytidb v7.5.0
tiup is checking updates for component cluster ...
A new version of cluster is available:
The latest version: v1.14.0
Local installed version: v1.12.1
Update current component: tiup update cluster
Update all components: tiup update --all

Starting component cluster: /root/.tiup/components/cluster/v1.12.1/tiup-cluster upgrade mytidb v7.5.0
Before the upgrade, it is recommended to read the upgrade guide at https://docs.pingcap.com/tidb/stable/upgrade-tidb-using-tiup

[ Serial ] - SSHKeySet: privateKey=/root/.tiup/storage/cluster/clusters/mytidb/ssh/id_rsa, publicKey=/root/.tiup/storage/cluster/clusters/mytidb/ssh/id_rsa.pub

[Parallel] - UserSSH: user=tidb, host= 192.11.117.15

[Parallel] - UserSSH: user=tidb, host= 192.11.117.15

[Parallel] - UserSSH: user=tidb, host= 192.11.117.15

[Parallel] - UserSSH: user=tidb, host= 192.11.117.15

[Parallel] - UserSSH: user=tidb, host= 192.11.117.15

[Parallel] - UserSSH: user=tidb, host= 192.11.117.15

[Parallel] - UserSSH: user=tidb, host= 192.11.117.15

[Parallel] - UserSSH: user=tidb, host= 192.11.117.15

[ Serial ] - Download: component=tikv, version=v7.5.0, os=linux, arch=amd64

[ Serial ] - Download: component=prometheus, version=v7.5.0, os=linux, arch=amd64

[ Serial ] - Download: component=tiflash, version=v7.5.0, os=linux, arch=amd64

[ Serial ] - Download: component=pd, version=v7.5.0, os=linux, arch=amd64

[ Serial ] - Download: component=tidb, version=v7.5.0, os=linux, arch=amd64

[ Serial ] - Download: component=grafana, version=v7.5.0, os=linux, arch=amd64

[ Serial ] - Mkdir: host= 192.11.117.15, directories=/tidb-data/tikv-20162

[ Serial ] - Mkdir: host= 192.11.117.15, directories=/tidb-data/tiflash-9000

[ Serial ] - Mkdir: host= 192.11.117.15, directories=/tidb-data/pd-2379

[ Serial ] - Mkdir: host= 192.11.117.15, directories=/tidb-data/tikv-20161

[ Serial ] - Mkdir: host= 192.11.117.15, directories=/tidb-data/tikv-20160

[ Serial ] - BackupComponent: component=tikv, currentVersion=v6.5.2, remote= 192.11.117.15:/tidb-deploy/tikv-20162

[ Serial ] - BackupComponent: component=tiflash, currentVersion=v6.5.2, remote= 192.11.117.15:/tidb-deploy/tiflash-9000

[ Serial ] - BackupComponent: component=pd, currentVersion=v6.5.2, remote= 192.11.117.15:/tidb-deploy/pd-2379

[ Serial ] - BackupComponent: component=tikv, currentVersion=v6.5.2, remote= 192.11.117.15:/tidb-deploy/tikv-20160

[ Serial ] - BackupComponent: component=tikv, currentVersion=v6.5.2, remote= 192.11.117.15:/tidb-deploy/tikv-20161

[ Serial ] - CopyComponent: component=tikv, version=v7.5.0, remote= 192.11.117.15:/tidb-deploy/tikv-20162 os=linux, arch=amd64

[ Serial ] - CopyComponent: component=pd, version=v7.5.0, remote= 192.11.117.15:/tidb-deploy/pd-2379 os=linux, arch=amd64

[ Serial ] - CopyComponent: component=tikv, version=v7.5.0, remote= 192.11.117.15:/tidb-deploy/tikv-20160 os=linux, arch=amd64

[ Serial ] - CopyComponent: component=tiflash, version=v7.5.0, remote= 192.11.117.15:/tidb-deploy/tiflash-9000 os=linux, arch=amd64

[ Serial ] - CopyComponent: component=tikv, version=v7.5.0, remote= 192.11.117.15:/tidb-deploy/tikv-20161 os=linux, arch=amd64

[ Serial ] - InitConfig: cluster=mytidb, user=tidb, host=192.11.117.15, path=/root/.tiup/storage/cluster/clusters/mytidb/config-cache/tikv-20162.service, deploy_dir=/tidb-deploy/tikv-20162, data_dir=[/tidb-data/tikv-20162], log_dir=/tidb-deploy/tikv-20162/log, cache_dir=/root/.tiup/storage/cluster/clusters/mytidb/config-cache
[ Serial ] - InitConfig: cluster=mytidb, user=tidb, host=192.11.117.15, path=/root/.tiup/storage/cluster/clusters/mytidb/config-cache/pd-2379.service, deploy_dir=/tidb-deploy/pd-2379, data_dir=[/tidb-data/pd-2379], log_dir=/tidb-deploy/pd-2379/log, cache_dir=/root/.tiup/storage/cluster/clusters/mytidb/config-cache
[ Serial ] - InitConfig: cluster=mytidb, user=tidb, host=192.11.117.15, path=/root/.tiup/storage/cluster/clusters/mytidb/config-cache/tikv-20160.service, deploy_dir=/tidb-deploy/tikv-20160, data_dir=[/tidb-data/tikv-20160], log_dir=/tidb-deploy/tikv-20160/log, cache_dir=/root/.tiup/storage/cluster/clusters/mytidb/config-cache
[ Serial ] - InitConfig: cluster=mytidb, user=tidb, host=192.11.117.15, path=/root/.tiup/storage/cluster/clusters/mytidb/config-cache/tiflash-9000.service, deploy_dir=/tidb-deploy/tiflash-9000, data_dir=[/tidb-data/tiflash-9000], log_dir=/tidb-deploy/tiflash-9000/log, cache_dir=/root/.tiup/storage/cluster/clusters/mytidb/config-cache
[ Serial ] - InitConfig: cluster=mytidb, user=tidb, host=192.11.117.15, path=/root/.tiup/storage/cluster/clusters/mytidb/config-cache/tikv-20161.service, deploy_dir=/tidb-deploy/tikv-20161, data_dir=[/tidb-data/tikv-20161], log_dir=/tidb-deploy/tikv-20161/log, cache_dir=/root/.tiup/storage/cluster/clusters/mytidb/config-cache
[ Serial ] - Mkdir: host=192.11.117.15, directories=

[ Serial ] - BackupComponent: component=tidb, currentVersion=v6.5.2, remote= 192.11.117.15:/tidb-deploy/tidb-4000

[ Serial ] - Mkdir: host= 192.11.117.15, directories=/tidb-data/prometheus-9090

[ Serial ] - Mkdir: host= 192.11.117.15, directories=

[ Serial ] - BackupComponent: component=grafana, currentVersion=v6.5.2, remote= 192.11.117.15:/tidb-deploy/grafana-3000

[ Serial ] - CopyComponent: component=grafana, version=v7.5.0, remote= 192.11.117.15:/tidb-deploy/grafana-3000 os=linux, arch=amd64

[ Serial ] - InitConfig: cluster=mytidb, user=tidb, host=192.11.117.15, path=/root/.tiup/storage/cluster/clusters/mytidb/config-cache/grafana-3000.service, deploy_dir=/tidb-deploy/grafana-3000, data_dir=[], log_dir=/tidb-deploy/grafana-3000/log, cache_dir=/root/.tiup/storage/cluster/clusters/mytidb/config-cache
[ Serial ] - CopyComponent: component=tidb, version=v7.5.0, remote=192.11.117.15:/tidb-deploy/tidb-4000 os=linux, arch=amd64

[ Serial ] - BackupComponent: component=prometheus, currentVersion=v6.5.2, remote= 192.11.117.15:/tidb-deploy/prometheus-9090

[ Serial ] - CopyComponent: component=prometheus, version=v7.5.0, remote= 192.11.117.15:/tidb-deploy/prometheus-9090 os=linux, arch=amd64

[ Serial ] - InitConfig: cluster=mytidb, user=tidb, host=192.11.117.15, path=/root/.tiup/storage/cluster/clusters/mytidb/config-cache/prometheus-9090.service, deploy_dir=/tidb-deploy/prometheus-9090, data_dir=[/tidb-data/prometheus-9090], log_dir=/tidb-deploy/prometheus-9090/log, cache_dir=/root/.tiup/storage/cluster/clusters/mytidb/config-cache
[ Serial ] - InitConfig: cluster=mytidb, user=tidb, host=192.11.117.15, path=/root/.tiup/storage/cluster/clusters/mytidb/config-cache/tidb-4000.service, deploy_dir=/tidb-deploy/tidb-4000, data_dir=[], log_dir=/tidb-deploy/tidb-4000/log, cache_dir=/root/.tiup/storage/cluster/clusters/mytidb/config-cache
[ Serial ] - UpgradeCluster
Upgrading component tiflash
Restarting instance 192.11.117.15:9000
Restart instance 192.11.117.15:9000 success
Upgrading component pd
Restarting instance 192.11.117.15:2379
Restart instance 192.11.117.15:2379 success
Upgrading component tikv
Restarting instance 192.11.117.15:20160
Restart instance 192.11.117.15:20160 success
Evicting 1 leaders from store 192.11.117.15:20161...
Still waitting for 1 store leaders to transfer... (repeated)
Restarting instance 192.11.117.15:20161
Restart instance 192.11.117.15:20161 success
Evicting 1 leaders from store 192.11.117.15:20162...
Still waitting for 1 store leaders to transfer... (repeated)
Restarting instance 192.11.117.15:20162
Restart instance 192.11.117.15:20162 success
Upgrading component tidb
Restarting instance 192.11.117.15:4000
Restart instance 192.11.117.15:4000 success
Upgrading component prometheus
Restarting instance 192.11.117.15:9090
Restart instance 192.11.117.15:9090 success
Upgrading component grafana
Restarting instance 192.11.117.15:3000
Restart instance 192.11.117.15:3000 success
Stopping component node_exporter
Stopping instance 192.11.117.15
Stop 192.11.117.15 success
Stopping component blackbox_exporter
Stopping instance 192.11.117.15
Stop 192.11.117.15 success
Starting component node_exporter
Starting instance 192.11.117.15
Start 192.11.117.15 success
Starting component blackbox_exporter
Starting instance 192.11.117.15
Start 192.11.117.15 success
Upgraded cluster `mytidb` successfully

Check the version
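For example, via tiup and a MySQL client (the host and port are the ones from the log above):

```shell
# Every component in the cluster should now report v7.5.0
tiup cluster display mytidb
# Or ask the server itself
mysql -h 192.11.117.15 -P 4000 -u root -e "SELECT tidb_version();"
```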

Closing remarks:

Even though this was a simple upgrade, a few issues still came up in practice; doing teaches more than reading.

The whole upgrade is recorded here for reference.

