2024-03-21
Testing TiDB v7.1.0: Deployment, Online Scale-Out, and Data Migration
1. Server Information
| No. | Server Model | Hostname | Configuration | IP Address | Username | Password |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | Dell R610 | 192_168_31_50_TiDB | E5-2609 v3 @ 1.90GHz × 2 / 32GB RAM / 2TB HDD / 10000 × 2 network | 192.168.31.50 | root | P@ssw0rd |
| 2 | Dell R610 | 192_168_31_51_PD1 | E5-2609 v3 @ 1.90GHz × 2 / 32GB RAM / 2TB HDD / 10000 × 2 network | 192.168.31.51 | root | P@ssw0rd |
| 3 | Dell R610 | 192_168_31_52_PD2 | E5-2609 v3 @ 1.90GHz × 2 / 32GB RAM / 2TB HDD / 10000 × 2 network | 192.168.31.52 | root | P@ssw0rd |
| 4 | Dell R610 | 192_168_31_53_TiKV1 | E5-2609 v3 @ 1.90GHz × 2 / 32GB RAM / 2TB HDD / 10000 × 2 network | 192.168.31.53 | root | P@ssw0rd |
| 5 | Dell R610 | 192_168_31_54_TiKV2 | E5-2609 v3 @ 1.90GHz × 2 / 32GB RAM / 2TB HDD / 10000 × 2 network | 192.168.31.54 | root | P@ssw0rd |
| 6 | Dell R610 | 192_168_31_55_TiKV3 | E5-2609 v3 @ 1.90GHz × 2 / 32GB RAM / 2TB HDD / 10000 × 2 network | 192.168.31.55 | root | P@ssw0rd |
| 7 | Dell R610 | 192_168_31_56_TiKV4 | E5-2609 v3 @ 1.90GHz × 2 / 32GB RAM / 2TB HDD / 10000 × 2 network | 192.168.31.56 | root | P@ssw0rd |
| 8 | Dell R610 | 192_168_31_57_Dashboard | E5-2609 v3 @ 1.90GHz × 2 / 32GB RAM / 2TB HDD / 10000 × 2 network | 192.168.31.57 | root | P@ssw0rd |

2. Downloading the TiDB v7.1.0 Offline Installation Packages
For this test, the version-matched TiDB server offline image package (which includes the TiUP offline component package) is taken from the official download page. Both the TiDB-community-server package and the TiDB-community-toolkit package must be downloaded (a download sketch follows the links below).
TiDB-community-server package: https://download.pingcap.org/tidb-community-server-v7.1.0-linux-amd64.tar.gz
TiDB-community-toolkit package: https://download.pingcap.org/tidb-community-toolkit-v7.1.0-linux-amd64.tar.gz
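A minimal download sketch, assuming it runs on a machine with internet access and that wget is available (the checksum step is an extra precaution not in the original, but it makes the later upload easy to verify):

```bash
# Download both v7.1.0 offline packages.
wget https://download.pingcap.org/tidb-community-server-v7.1.0-linux-amd64.tar.gz
wget https://download.pingcap.org/tidb-community-toolkit-v7.1.0-linux-amd64.tar.gz

# Optional: record checksums so the copies on the target host can be verified.
sha256sum tidb-community-*-v7.1.0-linux-amd64.tar.gz > tidb-v7.1.0-checksums.txt
```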
3. Uploading the Installation Packages to the /home Directory on Host 192_168_31_50_TiDB
Either of the following two methods can be used:
(1) Command line: upload the files to the target server with scp, using the form `scp <package file> root@<target server address>:/<destination path>` (see the example after this list).
(2) GUI tools: WinSCP, Xftp, or similar file-transfer tools.
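A concrete scp sketch for this environment, assuming both packages are copied from the download machine to /home on the TiDB host (you will be prompted for the root password of 192.168.31.50):

```bash
# Copy both offline packages to /home on the TiDB host.
scp tidb-community-server-v7.1.0-linux-amd64.tar.gz  root@192.168.31.50:/home/
scp tidb-community-toolkit-v7.1.0-linux-amd64.tar.gz root@192.168.31.50:/home/
```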
After the upload completes, extract the two package files.
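A minimal sketch of the extraction step, assuming both archives sit in /home; the mirror path in the install log below suggests the author actually worked under /tidb-setup, so adjust the directory to match your upload location:

```bash
# On 192.168.31.50: unpack both offline packages and enter the server package directory.
cd /home
tar -xzf tidb-community-server-v7.1.0-linux-amd64.tar.gz
tar -xzf tidb-community-toolkit-v7.1.0-linux-amd64.tar.gz
cd tidb-community-server-v7.1.0-linux-amd64
```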
Then run local_install.sh from the extracted server package directory and load the updated shell profile:

```
[root@192_168_31_50_TiDB tidb-community-server-v7.1.0-linux-amd64]# sh local_install.sh
Disable telemetry success
Successfully set mirror to /tidb-setup/tidb-community-server-v7.1.0-linux-amd64
Detected shell: bash
Shell profile:  /root/.bash_profile
Installed path: /root/.tiup/bin/tiup
===============================================
1. source /root/.bash_profile
2. Have a try:   tiup playground
===============================================
[root@192_168_31_50_TiDB tidb-community-server-v7.1.0-linux-amd64]# source /root/.bash_profile
```
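As an optional sanity check (not part of the original steps), TiUP should now be on the PATH and pointed at the local offline mirror:

```bash
# Confirm TiUP is installed and using the offline mirror set by local_install.sh.
tiup --version
tiup mirror show   # should print the local mirror directory
```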
Initialize the cluster topology file. (For the initial deployment, 192.168.31.56 is left out; it will be added afterwards through the scale-out operation.)
```
[root@192_168_31_50_TiDB tidb-community-server-v7.1.0-linux-amd64]# tiup cluster template > tidb_install.yaml
```
Edit the topology file so that its contents are as follows:
```yaml
# Global variables are applied to all deployments and used as the default value of
# the deployments if a specific deployment value is missing.
global:
  # The user who runs the tidb cluster.
  user: "tidb"
  # group is used to specify the group name the user belong to if it's not the same as user.
  # group: "tidb"
  # SSH port of servers in the managed cluster.
  ssh_port: 22
  # Storage directory for cluster deployment files, startup scripts, and configuration files.
  deploy_dir: "/tidb-deploy"
  # TiDB Cluster data storage directory
  data_dir: "/tidb-data"
  # Supported values: "amd64", "arm64" (default: "amd64")
  arch: "amd64"
  # Resource Control is used to limit the resource of an instance.
  # See: https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html
  # Supports using instance-level `resource_control` to override global `resource_control`.
  # resource_control:
  #   # See: https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html#MemoryLimit=bytes
  #   memory_limit: "2G"
  #   # See: https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html#CPUQuota=
  #   # The percentage specifies how much CPU time the unit shall get at maximum, relative to the total
  #   # CPU time available on one CPU. Use values > 100% for allotting CPU time on more than one CPU.
  #   # Example: CPUQuota=200% ensures that the executed processes will never get more than two CPU time.
  #   cpu_quota: "200%"
  #   # See: https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html#IOReadBandwidthMax=device%20bytes
  #   io_read_bandwidth_max: "/dev/disk/by-path/pci-0000:00:1f.2-scsi-0:0:0:0 100M"
  #   io_write_bandwidth_max: "/dev/disk/by-path/pci-0000:00:1f.2-scsi-0:0:0:0 100M"

# Monitored variables are applied to all the machines.
monitored:
  # The communication port for reporting system information of each node in the TiDB cluster.
  node_exporter_port: 9100
  # Blackbox_exporter communication port, used for TiDB cluster port monitoring.
  blackbox_exporter_port: 9115
  # Storage directory for deployment files, startup scripts, and configuration files of monitoring components.
  # deploy_dir: "/tidb-deploy/monitored-9100"
  # Data storage directory of monitoring components.
  # data_dir: "/tidb-data/monitored-9100"
  # Log storage directory of the monitoring component.
  # log_dir: "/tidb-deploy/monitored-9100/log"

# Server configs are used to specify the runtime configuration of TiDB components.
# All configuration items can be found in TiDB docs:
# - TiDB: https://pingcap.com/docs/stable/reference/configuration/tidb-server/configuration-file/
# - TiKV: https://pingcap.com/docs/stable/reference/configuration/tikv-server/configuration-file/
# - PD: https://pingcap.com/docs/stable/reference/configuration/pd-server/configuration-file/
# - TiFlash: https://docs.pingcap.com/tidb/stable/tiflash-configuration
#
# All configuration items use points to represent the hierarchy, e.g:
#   readpool.storage.use-unified-pool
#
# - example: https://github.com/pingcap/tiup/blob/master/examples/topology.example.yaml
# You can overwrite this configuration via the instance-level `config` field.
# server_configs:
#   tidb:
#   tikv:
#   pd:
#   tiflash:
#   tiflash-learner:

# Server configs are used to specify the configuration of PD Servers.
pd_servers:
  # The ip address of the PD Server.
  - host: 192.168.31.51
    # SSH port of the server.
    # ssh_port: 22
    # PD Server name
    # name: "pd-1"
    # communication port for TiDB Servers to connect.
    # client_port: 2379
    # Communication port among PD Server nodes.
    # peer_port: 2380
    # PD Server deployment file, startup script, configuration file storage directory.
    # deploy_dir: "/tidb-deploy/pd-2379"
    # PD Server data storage directory.
    # data_dir: "/tidb-data/pd-2379"
    # PD Server log file storage directory.
    # log_dir: "/tidb-deploy/pd-2379/log"
    # numa node bindings.
    # numa_node: "0,1"
    # The following configs are used to overwrite the `server_configs.pd` values.
    # config:
    #   schedule.max-merge-region-size: 20
    #   schedule.max-merge-region-keys: 200000
  - host: 192.168.31.52
    # ssh_port: 22
    # name: "pd-1"
    # client_port: 2379
    # peer_port: 2380
    # deploy_dir: "/tidb-deploy/pd-2379"
    # data_dir: "/tidb-data/pd-2379"
    # log_dir: "/tidb-deploy/pd-2379/log"
    # numa_node: "0,1"
    # config:
    #   schedule.max-merge-region-size: 20
    #   schedule.max-merge-region-keys: 200000
  # - host: 10.0.1.13
  #   ssh_port: 22
  #   name: "pd-1"
  #   client_port: 2379
  #   peer_port: 2380
  #   deploy_dir: "/tidb-deploy/pd-2379"
  #   data_dir: "/tidb-data/pd-2379"
  #   log_dir: "/tidb-deploy/pd-2379/log"
  #   numa_node: "0,1"
  #   config:
  #     schedule.max-merge-region-size: 20
  #     schedule.max-merge-region-keys: 200000

# Server configs are used to specify the configuration of TiDB Servers.
tidb_servers:
  # The ip address of the TiDB Server.
  - host: 192.168.31.50
    # SSH port of the server.
    # ssh_port: 22
    # The port for clients to access the TiDB cluster.
    # port: 4000
    # TiDB Server status API port.
    # status_port: 10080
    # TiDB Server deployment file, startup script, configuration file storage directory.
    # deploy_dir: "/tidb-deploy/tidb-4000"
    # TiDB Server log file storage directory.
    # log_dir: "/tidb-deploy/tidb-4000/log"
  # The ip address of the TiDB Server.
  # - host: 10.0.1.15
  #   ssh_port: 22
  #   port: 4000
  #   status_port: 10080
  #   deploy_dir: "/tidb-deploy/tidb-4000"
  #   log_dir: "/tidb-deploy/tidb-4000/log"
  # - host: 10.0.1.16
  #   ssh_port: 22
  #   port: 4000
  #   status_port: 10080
  #   deploy_dir: "/tidb-deploy/tidb-4000"
  #   log_dir: "/tidb-deploy/tidb-4000/log"

# Server configs are used to specify the configuration of TiKV Servers.
tikv_servers:
  # The ip address of the TiKV Server.
  - host: 192.168.31.53
    # SSH port of the server.
    # ssh_port: 22
    # TiKV Server communication port.
    # port: 20160
    # TiKV Server status API port.
    # status_port: 20180
    # TiKV Server deployment file, startup script, configuration file storage directory.
    # deploy_dir: "/tidb-deploy/tikv-20160"
    # TiKV Server data storage directory.
    # data_dir: "/tidb-data/tikv-20160"
    # TiKV Server log file storage directory.
    # log_dir: "/tidb-deploy/tikv-20160/log"
    # The following configs are used to overwrite the `server_configs.tikv` values.
    # config:
    #   log.level: warn
  # The ip address of the TiKV Server.
  - host: 192.168.31.54
    # ssh_port: 22
    # port: 20160
    # status_port: 20180
    # deploy_dir: "/tidb-deploy/tikv-20160"
    # data_dir: "/tidb-data/tikv-20160"
    # log_dir: "/tidb-deploy/tikv-20160/log"
    # config:
    #   log.level: warn
  - host: 192.168.31.55
    # ssh_port: 22
    # port: 20160
    # status_port: 20180
    # deploy_dir: "/tidb-deploy/tikv-20160"
    # data_dir: "/tidb-data/tikv-20160"
    # log_dir: "/tidb-deploy/tikv-20160/log"
    # config:
    #   log.level: warn

# Server configs are used to specify the configuration of TiFlash Servers.
tiflash_servers:
  # The ip address of the TiFlash Server.
  - host: 192.168.31.57
    # SSH port of the server.
    # ssh_port: 22
    # TiFlash TCP Service port.
    # tcp_port: 9000
    # TiFlash raft service and coprocessor service listening address.
    # flash_service_port: 3930
    # TiFlash Proxy service port.
    # flash_proxy_port: 20170
    # TiFlash Proxy metrics port.
    # flash_proxy_status_port: 20292
    # TiFlash metrics port.
    # metrics_port: 8234
    # TiFlash Server deployment file, startup script, configuration file storage directory.
    # deploy_dir: /tidb-deploy/tiflash-9000
    # With cluster version >= v4.0.9 and you want to deploy a multi-disk TiFlash node, it is recommended to
    # check config.storage.* for details. The data_dir will be ignored if you defined those configurations.
    # Setting data_dir to a ,-joined string is still supported but deprecated.
    # Check https://docs.pingcap.com/tidb/stable/tiflash-configuration#multi-disk-deployment for more details.
    # TiFlash Server data storage directory.
    # data_dir: /tidb-data/tiflash-9000
    # TiFlash Server log file storage directory.
    # log_dir: /tidb-deploy/tiflash-9000/log
  # The ip address of the TiFlash Server.
  # - host: 10.0.1.21
  #   ssh_port: 22
  #   tcp_port: 9000
  #   flash_service_port: 3930
  #   flash_proxy_port: 20170
  #   flash_proxy_status_port: 20292
  #   metrics_port: 8234
  #   deploy_dir: /tidb-deploy/tiflash-9000
  #   data_dir: /tidb-data/tiflash-9000
  #   log_dir: /tidb-deploy/tiflash-9000/log

# Server configs are used to specify the configuration of Prometheus Server.
monitoring_servers:
  # The ip address of the Monitoring Server.
  - host: 192.168.31.57
    # SSH port of the server.
    # ssh_port: 22
    # Prometheus Service communication port.
    # port: 9090
    # ng-monitoring service communication port
    # ng_port: 12020
    # Prometheus deployment file, startup script, configuration file storage directory.
    # deploy_dir: "/tidb-deploy/prometheus-8249"
    # Prometheus data storage directory.
    # data_dir: "/tidb-data/prometheus-8249"
    # Prometheus log file storage directory.
    # log_dir: "/tidb-deploy/prometheus-8249/log"

# Server configs are used to specify the configuration of Grafana Servers.
grafana_servers:
  # The ip address of the Grafana Server.
  - host: 192.168.31.57
    # Grafana web port (browser access)
    # port: 3000
    # Grafana deployment file, startup script, configuration file storage directory.
    # deploy_dir: /tidb-deploy/grafana-3000

# Server configs are used to specify the configuration of Alertmanager Servers.
alertmanager_servers:
  # The ip address of the Alertmanager Server.
  - host: 192.168.31.57
    # SSH port of the server.
    # ssh_port: 22
    # Alertmanager web service port.
    # web_port: 9093
    # Alertmanager communication port.
    # cluster_port: 9094
    # Alertmanager deployment file, startup script, configuration file storage directory.
    # deploy_dir: "/tidb-deploy/alertmanager-9093"
    # Alertmanager data storage directory.
    # data_dir: "/tidb-data/alertmanager-9093"
    # Alertmanager log file storage directory.
    # log_dir: "/tidb-deploy/alertmanager-9093/log"
```

Next, check whether the target servers meet the deployment requirements according to the cluster topology file:
```
[root@192_168_31_50_TiDB tidb-community-server-v7.1.0-linux-amd64]# tiup cluster check ./tidb_install.yaml --user root -p
Input SSH password:
```
The pre-installation check results are shown below. Several items come back as Fail or Warn; most of them can be repaired automatically by re-running the check with the --apply flag (see the sketch after the results).
```
Node           Check         Result  Message
----           -----         ------  -------
192.168.31.54  disk          Warn    mount point / does not have noatime option set
192.168.31.54  cpu-governor  Warn    Unable to determine current CPU frequency governor policy
192.168.31.54  service       Fail    service firewalld is running but should be stopped
192.168.31.54  swap          Warn    swap is enabled, please disable it for best performance
192.168.31.54  thp           Fail    THP is enabled, please disable it for best performance
192.168.31.55  swap          Warn    swap is enabled, please disable it for best performance
192.168.31.55  disk          Warn    mount point / does not have noatime option set
192.168.31.55  service       Fail    service firewalld is running but should be stopped
192.168.31.55  cpu-governor  Warn    Unable to determine current CPU frequency governor policy
192.168.31.55  thp           Fail    THP is enabled, please disable it for best performance
192.168.31.50  service       Fail    service firewalld is running but should be stopped
192.168.31.50  cpu-governor  Warn    Unable to determine current CPU frequency governor policy
192.168.31.50  swap          Warn    swap is enabled, please disable it for best performance
192.168.31.50  thp           Fail    THP is enabled, please disable it for best performance
192.168.31.57  command       Fail    numactl not usable, bash: numactl: command not found
192.168.31.57  disk          Warn    mount point / does not have noatime option set
192.168.31.57  sysctl        Fail    net.core.somaxconn = 128, should be greater than 32768
192.168.31.57  sysctl        Fail    net.ipv4.tcp_syncookies = 1, should be 0
192.168.31.57  sysctl        Fail    vm.swappiness = 30, should be 0
192.168.31.57  thp           Fail    THP is enabled, please disable it for best performance
192.168.31.57  cpu-governor  Warn    Unable to determine current CPU frequency governor policy
192.168.31.57  swap          Warn    swap is enabled, please disable it for best performance
192.168.31.57  selinux       Fail    SELinux is not disabled
192.168.31.57  service       Fail    service firewalld is running but should be stopped
192.168.31.57  limits        Fail    soft limit of nofile for user tidb is not set or too low
192.168.31.57  limits        Fail    hard limit of nofile for user tidb is not set or too low
192.168.31.57  limits        Fail    soft limit of stack for user tidb is not set or too low
192.168.31.51  cpu-governor  Warn    Unable to determine current CPU frequency governor policy
192.168.31.51  disk          Warn    mount point / does not have noatime option set
192.168.31.51  service       Fail    service firewalld is running but should be stopped
192.168.31.51  swap          Warn    swap is enabled, please disable it for best performance
192.168.31.51  thp           Fail    THP is enabled, please disable it for best performance
192.168.31.52  service       Fail    service firewalld is running but should be stopped
192.168.31.52  cpu-governor  Warn    Unable to determine current CPU frequency governor policy
```
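A hedged sketch of applying the automatic fixes mentioned above: `tiup cluster check` accepts an `--apply` flag that repairs many of the reported items (firewalld, THP, sysctl values, limits), while items it cannot repair, likely including the missing numactl package on 192.168.31.57, still need to be handled by hand:

```bash
# Let TiUP attempt to repair the Fail/Warn items, then re-run the check.
tiup cluster check ./tidb_install.yaml --apply --user root -p
tiup cluster check ./tidb_install.yaml --user root -p

# Example of a manual fix --apply does not cover (assuming a CentOS/RHEL host):
# yum install -y numactl
```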