Huang Dongxu Analyzes TiDB's Core Advantages
2024-04-25
The deployment roadmap for this document is:
Offline deployment of TiDB v5.3.0 (TiDB*3, PD*3, TiKV*3);
Deployment of HAProxy v2.5.0 from source;
Offline upgrade from TiDB v5.3.0 to TiDB v5.4.2;
Scaling TiDB Server and PD in and out;
Scaling TiKV and TiFlash out and in;
Deployment of TiSpark (TiSpark*3);
Offline upgrade from TiDB v5.4.2 to TiDB v6.1
Here, 192.168.3.221 serves as the control machine, on which the TiUP tool, the TiDB offline mirror package, and the ToolKit mirror package are deployed offline. Unless otherwise noted, all subsequent operations are executed on the control machine (192.168.3.221) as the root user.
Mounting the system image
~]# mkdir -p /mnt/yum
~]# mount -o loop /dev/cdrom /mnt/yum
If the system image is an ISO file, mount it with mount -o loop /home/hhrs/CentOS-7.9-x86_64-dvd.iso /mnt/yum instead.
Configuring a local repo source
~]# cat > /etc/yum.repos.d/local.repo << EOF
[Packages]
name=Redhat Enterprise Linux 7.9
baseurl=file:///mnt/yum/
enabled=1
gpgcheck=0
gpgkey=file:///mnt/yum/RPM-GPG-KEY-redhat-release
EOF
Generate the YUM cache
~]# yum clean all
~]# yum makecache
Create an SSH key on the control machine (192.168.3.221), then set up mutual trust so that root can log in to every node without a password.
Generating and distributing the SSH key
~]# ssh-keygen -t rsa
~]# ssh-copy-id root@192.168.3.221
~]# ssh-copy-id root@192.168.3.222
~]# ssh-copy-id root@192.168.3.223
~]# ssh-copy-id root@192.168.3.224
~]# ssh-copy-id root@192.168.3.225
~]# ssh-copy-id root@192.168.3.226
Test passwordless login
for node_ip in 192.168.3.221 192.168.3.222 192.168.3.223 192.168.3.224 192.168.3.225 192.168.3.226
do
  echo ">>> ${node_ip} Start Login"
  ssh root@${node_ip} "date"
done
Output like the following indicates that passwordless login is configured correctly.
>>> 192.168.3.221 Start Login
Fri Aug 12 20:44:03 CST 2022
>>> 192.168.3.222 Start Login
Fri Aug 12 20:44:03 CST 2022
>>> 192.168.3.223 Start Login
Fri Aug 12 20:44:03 CST 2022
>>> 192.168.3.224 Start Login
Fri Aug 12 20:44:03 CST 2022
>>> 192.168.3.225 Start Login
Fri Aug 12 20:44:04 CST 2022
>>> 192.168.3.226 Start Login
Fri Aug 12 20:44:04 CST 2022
Perform the following on every TiKV node. This document uses /dev/sdb as the data disk for these optimizations.
Partitioning and formatting
~]# fdisk -l
Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
~]# parted -s -a optimal /dev/sdb mklabel gpt -- mkpart primary ext4 1 -1
[root@localhost ~]# mkfs.ext4 /dev/sdb1
mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
1310720 inodes, 5242368 blocks
262118 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2153775104
160 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
Check the partition UUID
Here the UUID of /dev/sdb1 is 49e00d02-2f5b-4b05-8e0e-ac2f524a97ae.
[root@localhost ~]# lsblk -f
NAME            FSTYPE      LABEL           UUID                                   MOUNTPOINT
sda
├─sda1          ext4                        8e0e85e5-fa82-4f2b-a871-26733d6d2995   /boot
└─sda2          LVM2_member                 KKs6SL-IzU3-62b3-KXZd-a2GR-1tvQ-icleoe
  └─centos-root ext4                        91645e3c-486c-4bd3-8663-aa425bf8d89d   /
sdb
└─sdb1          ext4                        49e00d02-2f5b-4b05-8e0e-ac2f524a97ae
sr0             iso9660     CentOS 7 x86_64 2020-11-04-11-36-43-00
Mounting the partition
Append the mount entry for the data-disk partition /dev/sdb1 to /etc/fstab; be sure to include the nodelalloc mount option.
~]# echo "UUID=49e00d02-2f5b-4b05-8e0e-ac2f524a97ae /tidb-data ext4 defaults,nodelalloc,noatime 0 2" >> /etc/fstab
~]# mkdir /tidb-data && mount /tidb-data
~]# mount -t ext4
/dev/mapper/centos-root on / type ext4 (rw,relatime,seclabel,data=ordered)
/dev/sda1 on /boot type ext4 (rw,relatime,seclabel,data=ordered)
/dev/sdb1 on /tidb-data type ext4 (rw,noatime,seclabel,nodelalloc,data=ordered)
Execute as root on the control machine (192.168.3.221). Since passwordless login has been set up, swap can be disabled on all hosts in batch with the following commands.
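To avoid mistyping the UUID or the mount options, the /etc/fstab line can also be generated from the partition's UUID. A minimal sketch; the make_fstab_entry helper is a hypothetical name, not part of the original procedure:

```shell
# Hypothetical helper: build the /etc/fstab line for a data partition,
# keeping the nodelalloc and noatime options consistent.
make_fstab_entry() {
  local uuid="$1" mountpoint="$2"
  printf 'UUID=%s %s ext4 defaults,nodelalloc,noatime 0 2\n' "$uuid" "$mountpoint"
}

# On a real node the UUID would come from: lsblk -no UUID /dev/sdb1
make_fstab_entry "49e00d02-2f5b-4b05-8e0e-ac2f524a97ae" "/tidb-data"
```

The generated line can then be appended to /etc/fstab with a redirect, exactly as in the echo command above.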
for node_ip in 192.168.3.221 192.168.3.222 192.168.3.223 192.168.3.224 192.168.3.225 192.168.3.226
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "echo \"vm.swappiness = 0\" >> /etc/sysctl.conf"
  ssh root@${node_ip} "swapoff -a && swapon -a"
  ssh root@${node_ip} "sysctl -p"
done
Running swapoff -a followed by swapon -a flushes swap: the data held in swap is moved back into memory and the swap space is emptied.
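To confirm that swap is really off on a host, /proc/swaps should contain no device lines (only its header). A sketch of such a check over captured /proc/swaps content; the count_swap_devices helper is hypothetical:

```shell
# Hypothetical helper: count active swap devices from /proc/swaps
# content on stdin (every line after the header is one device).
count_swap_devices() {
  awk 'NR > 1 { n++ } END { print n + 0 }'
}

# 0 means swap is fully disabled; on a live host you would run:
#   count_swap_devices < /proc/swaps
printf 'Filename\tType\tSize\tUsed\tPriority\n' | count_swap_devices
```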
Disabling SELinux on all hosts in batch
for node_ip in 192.168.3.221 192.168.3.222 192.168.3.223 192.168.3.224 192.168.3.225 192.168.3.226
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "setenforce 0"
  ssh root@${node_ip} "sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config"
done
Verify that the change took effect
for node_ip in 192.168.3.221 192.168.3.222 192.168.3.223 192.168.3.224 192.168.3.225 192.168.3.226
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "getenforce"
done
Output like the following indicates SELinux has been disabled successfully.
>>> 192.168.3.221
Disabled
>>> 192.168.3.222
Disabled
>>> 192.168.3.223
Disabled
>>> 192.168.3.224
Disabled
>>> 192.168.3.225
Disabled
>>> 192.168.3.226
Disabled
Check the firewall status
for node_ip in 192.168.3.221 192.168.3.222 192.168.3.223 192.168.3.224 192.168.3.225 192.168.3.226
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "firewall-cmd --state"
  ssh root@${node_ip} "systemctl status firewalld.service"
done
Disable the firewall
for node_ip in 192.168.3.221 192.168.3.222 192.168.3.223 192.168.3.224 192.168.3.225 192.168.3.226
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "systemctl stop firewalld.service"
  ssh root@${node_ip} "systemctl disable firewalld.service"
done
Expected output on each host:
not running
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
Confirming the time zone
Adjust the time zone to UTC+8 (Beijing time)
for node_ip in 192.168.3.221 192.168.3.222 192.168.3.223 192.168.3.224 192.168.3.225 192.168.3.226
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime"
done
Verify the time zone. Each host should print the date in the form "weekday month day time CST year", e.g. Fri Aug 12 21:01:34 CST 2022.
for node_ip in 192.168.3.221 192.168.3.222 192.168.3.223 192.168.3.224 192.168.3.225 192.168.3.226
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "date"
done
Clock synchronization
TiDB is a distributed database system, and its nodes must keep their clocks synchronized to guarantee the linear consistency of transactions under the ACID model. You can synchronize against the public pool.ntp.org time service, or against an NTP server you run yourself in an offline environment.
This example synchronizes against the public pool.ntp.org time server; for an internal NTP server the procedure is the same, simply replace pool.ntp.org with the IP of your NTP server host.
for node_ip in 192.168.3.221 192.168.3.222 192.168.3.223 192.168.3.224 192.168.3.225 192.168.3.226
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "yum install -y ntp ntpdate"
  ssh root@${node_ip} "ntpdate pool.ntp.org"
  ssh root@${node_ip} "systemctl start ntpd.service"
  ssh root@${node_ip} "systemctl enable ntpd.service"
done
You can also add the ntpdate pool.ntp.org synchronization command to each host's crond schedule.
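If you go the crond route, a root crontab entry along these lines could be used; the hourly schedule is an assumption, not from the original document:

```shell
# Hypothetical root crontab entry (added via: crontab -e), syncing hourly.
# The schedule is an assumption; adjust it to your environment.
0 * * * * /usr/sbin/ntpdate pool.ntp.org > /dev/null 2>&1
```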
The following operations are executed as root on all nodes.
Disabling Transparent Huge Pages (THP)
~]# cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never
The returned value must be never.
Optimizing the I/O scheduler
Assuming the data disk is /dev/sdb, change its scheduler to noop.
~]# cat /sys/block/sdb/queue/scheduler
noop [deadline] cfq
Check the unique identifier (ID_SERIAL) of the data disk.
~]# udevadm info --name=/dev/sdb | grep ID_SERIAL
E: ID_SERIAL=0QEMU_QEMU_HARDDISK_drive-scsi1
E: ID_SERIAL_SHORT=drive-scsi1
CPU power-saving policy
The governor "powersave" shows that cpufreq is using the powersave policy; it needs to be switched to the performance policy. On a virtual machine or cloud host no adjustment is needed, and the command usually prints Unable to determine current policy.
~]# cpupower frequency-info --policy
analyzing CPU 0:
  current policy: frequency should be within 1.20 GHz and 3.10 GHz.
                  The governor "powersave" may decide which speed to use within this range.
1.3.8.1. Using tuned (recommended)
The following operations are executed as root on all nodes.
View the current tuned profile
~]# tuned-adm list
Available profiles:
- balanced                    - General non-specialized tuned profile
- desktop                     - Optimize for the desktop use-case
- hpc-compute                 - Optimize for HPC compute workloads
- latency-performance         - Optimize for deterministic performance at the cost of increased power consumption
- network-latency             - Optimize for deterministic performance at the cost of increased power consumption, focused on low latency network performance
- network-throughput          - Optimize for streaming network throughput, generally only necessary on older CPUs or 40G+ networks
- powersave                   - Optimize for low power consumption
- throughput-performance      - Broadly applicable tuning that provides excellent performance across a variety of common server workloads
- virtual-guest               - Optimize for running inside a virtual guest
- virtual-host                - Optimize for running KVM guests
Current active profile: virtual-guest
Create a new tuned profile
Create a new profile on top of the balanced tuned profile.
~]# mkdir /etc/tuned/balanced-tidb-optimal/
~]# vi /etc/tuned/balanced-tidb-optimal/tuned.conf
[main]
include=balanced
[cpu]
governor=performance
[vm]
transparent_hugepages=never
[disk]
devices_udev_regex=(ID_SERIAL=0QEMU_QEMU_HARDDISK_drive-scsi1)
elevator=noop
Separate the ID_SERIAL values of multiple disks with a vertical bar, for example:
[disk]
devices_udev_regex=(ID_SERIAL=0QEMU_QEMU_HARDDISK_drive-scsi1)|(ID_SERIAL=36d0946606d79f90025f3e09a0c1f9e81)
elevator=noop
Apply the new profile
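With several data disks, the devices_udev_regex value can be assembled from the list of ID_SERIAL values rather than edited by hand. A sketch; make_udev_regex is a hypothetical helper, not part of tuned:

```shell
# Hypothetical helper: join ID_SERIAL values into the alternation
# regex used by tuned's [disk] devices_udev_regex setting.
make_udev_regex() {
  local out="" s
  for s in "$@"; do
    out="${out:+${out}|}(ID_SERIAL=${s})"
  done
  printf '%s\n' "$out"
}

make_udev_regex 0QEMU_QEMU_HARDDISK_drive-scsi1 36d0946606d79f90025f3e09a0c1f9e81
# → (ID_SERIAL=0QEMU_QEMU_HARDDISK_drive-scsi1)|(ID_SERIAL=36d0946606d79f90025f3e09a0c1f9e81)
```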
~]# tuned-adm profile balanced-tidb-optimal
Verify the optimization results
cat /sys/kernel/mm/transparent_hugepage/enabled && \
cat /sys/block/sdb/queue/scheduler && \
cpupower frequency-info --policy
Note: if tuned fails to disable THP, it can be disabled as follows.
Check the default boot kernel
~]# grubby --default-kernel
/boot/vmlinuz-3.10.0-1160.71.1.el7.x86_64
Append the kernel argument that disables THP
~]# grubby --args="transparent_hugepage=never" --update-kernel /boot/vmlinuz-3.10.0-1160.71.1.el7.x86_64
~]# grubby --info /boot/vmlinuz-3.10.0-1160.71.1.el7.x86_64
index=0
kernel=/boot/vmlinuz-3.10.0-1160.71.1.el7.x86_64
args="ro crashkernel=auto spectre_v2=retpoline rd.lvm.lv=centos/root rhgb quiet LANG=en_US.UTF-8 transparent_hugepage=never"
root=/dev/mapper/centos-root
initrd=/boot/initramfs-3.10.0-1160.71.1.el7.x86_64.img
title=CentOS Linux (3.10.0-1160.71.1.el7.x86_64) 7 (Core)
Disable THP immediately
~]# echo never > /sys/kernel/mm/transparent_hugepage/enabled
~]# echo never > /sys/kernel/mm/transparent_hugepage/defrag
1.3.8.2. Kernel optimization
Executed as root on the control machine (192.168.3.221).
for node_ip in 192.168.3.221 192.168.3.222 192.168.3.223 192.168.3.224 192.168.3.225 192.168.3.226
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "echo \"fs.file-max = 1000000\" >> /etc/sysctl.conf"
  ssh root@${node_ip} "echo \"net.core.somaxconn = 32768\" >> /etc/sysctl.conf"
  ssh root@${node_ip} "echo \"net.ipv4.tcp_tw_recycle = 0\" >> /etc/sysctl.conf"
  ssh root@${node_ip} "echo \"net.ipv4.tcp_syncookies = 0\" >> /etc/sysctl.conf"
  ssh root@${node_ip} "echo \"vm.overcommit_memory = 1\" >> /etc/sysctl.conf"
  ssh root@${node_ip} "sysctl -p"
done
The following operations are executed on the control machine (192.168.3.221) as the root user.
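To check that the appended entries are what sysctl will actually load, the values can be parsed back out of /etc/sysctl.conf. A sketch over sample content; get_sysctl_value is a hypothetical helper:

```shell
# Hypothetical helper: print the configured value for a key from
# sysctl.conf-style text on stdin (the last assignment wins).
get_sysctl_value() {
  awk -F= -v k="$1" '{ gsub(/[ \t]/, "") } $1 == k { v = $2 } END { print v }'
}

# On a node you would run: get_sysctl_value fs.file-max < /etc/sysctl.conf
printf 'fs.file-max = 1000000\nnet.core.somaxconn = 32768\n' | get_sysctl_value fs.file-max
```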
1.3.9.1. Creating the user
for node_ip in 192.168.3.221 192.168.3.222 192.168.3.223 192.168.3.224 192.168.3.225 192.168.3.226
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "useradd tidb && passwd tidb"
done
The tidb user's password is tidb123.
1.3.9.2. Resource limits
for node_ip in 192.168.3.221 192.168.3.222 192.168.3.223 192.168.3.224 192.168.3.225 192.168.3.226
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "echo \"tidb soft nofile 1000000\" >> /etc/security/limits.conf"
  ssh root@${node_ip} "echo \"tidb hard nofile 1000000\" >> /etc/security/limits.conf"
  ssh root@${node_ip} "echo \"tidb soft stack 32768\" >> /etc/security/limits.conf"
  ssh root@${node_ip} "echo \"tidb hard stack 32768\" >> /etc/security/limits.conf"
done
1.3.9.3. sudo privileges
Grant the tidb user passwordless sudo.
for node_ip in 192.168.3.221 192.168.3.222 192.168.3.223 192.168.3.224 192.168.3.225 192.168.3.226
do
  echo ">>> ${node_ip}"
  ssh root@${node_ip} "echo \"tidb ALL=(ALL) NOPASSWD: ALL\" >> /etc/sudoers"
done
Log in to each target node as the tidb user and confirm that sudo su - root works without a password prompt; this indicates passwordless sudo is configured successfully.
1.3.9.4. Passwordless login for tidb
Log in to the control machine (192.168.3.221) as the tidb user and execute:
Create an SSH key for the tidb user and distribute it
~]$ id
uid=1000(tidb) gid=1000(tidb) groups=1000(tidb) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
~]$ ssh-keygen -t rsa
~]$ ssh-copy-id tidb@192.168.3.221
~]$ ssh-copy-id tidb@192.168.3.222
~]$ ssh-copy-id tidb@192.168.3.223
~]$ ssh-copy-id tidb@192.168.3.224
~]$ ssh-copy-id tidb@192.168.3.225
~]$ ssh-copy-id tidb@19