Deploying and Verifying TiDB 6.1 on openEuler 20.03 SP3
2024-03-18
I have recently taken an interest in domestic Chinese operating systems, and I have some scenarios that need to be validated for real use. TiDB officially supports Kylin V10 (x86, ARM), UnionTech UOS, and other domestic systems, but none of these are open source, which makes them awkward to work with. On a friend's recommendation, I chose ***'s openEuler for verification testing.
openEuler currently has two main LTS versions, 20.03 and 22.03. 20.03 ships gcc 7+ and Python 2.x, while 22.03 ships gcc 10+ and Python 3.x, so in theory 20.03 is closer to the CentOS 7 environment we use today; a sibling team also ran into problems compiling Doris on 22.03. I therefore chose openEuler 20.03 SP3 for this test. The test is functional only, covering the deployment process and basic SQL queries, with no performance testing. openEuler supports both x86 and ARM, as does TiDB; the hardware used here is x86_64. TiDB 6.1 is the version under test, and the results are not guaranteed to reproduce on versions earlier than 6.x.
Following the official documentation, this post verifies that TiDB runs on openEuler, as a reference for anyone facing a similar platform choice. It records the standard output of the entire deployment, which should also be useful to readers who simply want to see what a TiDB installation looks like. The conclusion of this verification: deploying and using TiDB on openEuler is essentially the same as on CentOS 7. Problems hit along the way are covered in the troubleshooting notes at the end.
Test environment: openEuler 20.03 SP3, VMware virtual machine with 4 vCPUs / 8 GB RAM, x86_64.
Because of differences in the base system, the following package needs to be installed in advance:
yum -y install bc
The minimal TiDB cluster topology:

Instance   Count   IP                Configuration
TiKV       3       192.168.180.140   Ports modified
TiDB       1       192.168.180.140   Default
PD         1       192.168.180.140   Default
TiFlash    1       192.168.180.140   Default
Monitor    1       192.168.180.140   Default
Download and install TiUP:

[root@localhost tidb]# curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 6968k  100 6968k    0     0   600k      0  0:00:11  0:00:11 --:--:-- 1129k
WARN: adding root certificate via internet: https://tiup-mirrors.pingcap.com/root.json
You can revoke this by remove /root/.tiup/bin/7b8e153f2e2d0928.root.json
Successfully set mirror to https://tiup-mirrors.pingcap.com
Detected shell: bash
Shell profile:  /root/.bash_profile
/root/.bash_profile has been modified to add tiup to PATH
open a new terminal or source /root/.bash_profile to use it
Installed path: /root/.tiup/bin/tiup
===============================================
Have a try:     tiup playground
===============================================
Declare the global environment variables:

[root@localhost tidb]# source /root/.bash_profile
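With the profile sourced, tiup should now be on PATH. A quick sanity check (not part of the original transcript; the version printed depends on when you install):

[root@localhost tidb]# tiup --version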
Install the cluster component of TiUP:

[root@localhost tidb]# tiup cluster
tiup is checking updates for component cluster ...timeout!
The component `cluster` version  is not installed; downloading from repository.
download https://tiup-mirrors.pingcap.com/cluster-v1.10.1-linux-amd64.tar.gz 8.28 MiB / 8.28 MiB 100.00% 1.05 MiB/s
Starting component `cluster`: /root/.tiup/components/cluster/v1.10.1/tiup-cluster
Deploy a TiDB cluster for production
......
Use "tiup cluster help [command]" for more information about a command.
Increase the connection limit of the sshd service:

[root@localhost tidb]# sed -i 's/#MaxSessions 10/MaxSessions 20/g' /etc/ssh/sshd_config
[root@localhost tidb]# service sshd restart
Redirecting to /bin/systemctl restart sshd.service
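To confirm the change took effect, sshd can print its effective configuration; this check is standard OpenSSH and was not part of the original run:

[root@localhost tidb]# sshd -T | grep -i maxsessions
maxsessions 20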
Create and start the cluster

Create the configuration file topo.yaml:

# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"

# # Monitored variables are applied to all the machines.
monitored:
  node_exporter_port: 9100
  blackbox_exporter_port: 9115

server_configs:
  tidb:
    log.slow-threshold: 300
  tikv:
    readpool.storage.use-unified-pool: false
    readpool.coprocessor.use-unified-pool: true
  pd:
    replication.enable-placement-rules: true
    replication.location-labels: ["host"]
  tiflash:
    logger.level: "info"

pd_servers:
  - host: 192.168.180.140

tidb_servers:
  - host: 192.168.180.140

tikv_servers:
  - host: 192.168.180.140
    port: 20160
    status_port: 20180
    config:
      server.labels: {host: "logic-host-1"}
  - host: 192.168.180.140
    port: 20161
    status_port: 20181
    config:
      server.labels: {host: "logic-host-2"}
  - host: 192.168.180.140
    port: 20162
    status_port: 20182
    config:
      server.labels: {host: "logic-host-3"}

tiflash_servers:
  - host: 192.168.180.140

monitoring_servers:
  - host: 192.168.180.140

grafana_servers:
  - host: 192.168.180.140
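Optionally, before deploying, TiUP can audit the target hosts against this topology and flag missing packages or risky kernel settings. A minimal invocation, assuming the same password-based root SSH as the deploy step below (this check was not part of the original run):

[root@localhost tidb]# tiup cluster check ./topo.yaml --user root -p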
Deploy the cluster:

[root@localhost tidb]# tiup cluster deploy tidb61 v6.1.0 ./topo.yaml --user root -p
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.10.1/tiup-cluster deploy tidb61 v6.1.0 ./topo.yaml --user root -p
Input SSH password:
+ Detect CPU Arch Name
  - Detecting node 192.168.180.140 Arch info ... Done
+ Detect CPU OS Name
  - Detecting node 192.168.180.140 OS info ... Done
Please confirm your topology:
Cluster type:    tidb
Cluster name:    tidb61
Cluster version: v6.1.0
......
Attention:
    1. If the topology is not what you expected, check your yaml file.
    2. Please confirm there is no port/directory conflicts in same host.
+ Deploy TiDB instance
+ Copy certificate to remote host
+ Init instance configs
+ Init monitor configs
+ Check status
Enabling component pd
Enabling component tikv
Enabling component tidb
Enabling component tiflash
Enabling component prometheus
Enabling component grafana
Enabling component node_exporter
Enabling component blackbox_exporter
Cluster `tidb61` deployed successfully, you can start it with command: `tiup cluster start tidb61 --init`
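At this point the cluster is deployed but not yet started. As a quick sanity check (not from the original transcript), TiUP can list the clusters it manages:

[root@localhost tidb]# tiup cluster list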
Start the cluster:

[root@ecs-5842 ~]# tiup cluster start tidb61 --init
tiup is checking updates for component cluster ...
Starting component `cluster`: /root/.tiup/components/cluster/v1.10.1/tiup-cluster start tidb61 --init
Starting cluster tidb61...
......
Started cluster `tidb61` successfully
The root password of TiDB database has been changed.
The new password is: 5%thkE=sL6^-1382wV.
Copy and record it to somewhere safe, it is only displayed once, and will not be stored.
The generated password can NOT be get and shown again.
Check the cluster status:

[root@ecs-5842 ~]# tiup cluster display tidb61
tiup is checking updates for component cluster ...timeout!
Starting component `cluster`: /root/.tiup/components/cluster/v1.10.1/tiup-cluster display tidb61
Cluster type:       tidb
Cluster name:       tidb61
Cluster version:    v6.1.0
Deploy user:        tidb
SSH type:           builtin
Dashboard URL:      http://192.168.0.141:2379/dashboard
Grafana URL:        http://192.168.0.141:3000
ID                   Role        Host           Ports                            OS/Arch       Status   Data Dir                    Deploy Dir
--                   ----        ----           -----                            -------       ------   --------                    ----------
192.168.0.141:3000   grafana     192.168.0.141  3000                             linux/x86_64  Up       -                           /tidb-deploy/grafana-3000
192.168.0.141:2379   pd          192.168.0.141  2379/2380                        linux/x86_64  Up|L|UI  /tidb-data/pd-2379          /tidb-deploy/pd-2379
192.168.0.141:9090   prometheus  192.168.0.141  9090/12020                       linux/x86_64  Up       /tidb-data/prometheus-9090  /tidb-deploy/prometheus-9090
192.168.0.141:4000   tidb        192.168.0.141  4000/10080                       linux/x86_64  Up       -                           /tidb-deploy/tidb-4000
192.168.0.141:9000   tiflash     192.168.0.141  9000/8123/3930/20170/20292/8234  linux/x86_64  Up       /tidb-data/tiflash-9000     /tidb-deploy/tiflash-9000
192.168.0.141:20160  tikv        192.168.0.141  20160/20180                      linux/x86_64  Up       /tidb-data/tikv-20160       /tidb-deploy/tikv-20160
192.168.0.141:20161  tikv        192.168.0.141  20161/20181                      linux/x86_64  Up       /tidb-data/tikv-20161       /tidb-deploy/tikv-20161
192.168.0.141:20162  tikv        192.168.0.141  20162/20182                      linux/x86_64  Up       /tidb-data/tikv-20162       /tidb-deploy/tikv-20162
Total nodes: 8
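As an extra liveness probe (not in the original output), the TiDB status port reported above can be queried over HTTP; the /status endpoint returns version and connection counts as JSON:

[root@ecs-5842 ~]# curl -s http://192.168.0.141:10080/status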
Download and install the following RPM packages:

mysql-community-client-5.7.35-1.el7.x86_64.rpm
mysql-community-common-5.7.35-1.el7.x86_64.rpm
mysql-community-libs-5.7.35-1.el7.x86_64.rpm

[root@ecs-5842 ~]# rpm -ivh mysql-community-*
warning: mysql-community-client-5.7.35-1.el7.x86_64.rpm: Header V3 DSA/SHA256 Signature, key ID 5072e1f5: NOKEY
Verifying...                          ################################# [100%]
Preparing...                          ################################# [100%]
Updating / installing...
   1:mysql-community-common-5.7.35-1.e################################# [33%]
   2:mysql-community-libs-5.7.35-1.el7################################# [67%]
   3:mysql-community-client-5.7.35-1.e################################# [100%]

Check the execution plan: analysis shows the query is served by TiFlash, since the ExchangeSender and ExchangeReceiver operators appear in the plan, indicating that MPP has taken effect.
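The query and plan output themselves did not survive into this post. As a rough sketch of what that check looks like, the steps below use a hypothetical table test.t; a TiFlash replica must be created and become available before the optimizer can choose an MPP plan:

[root@ecs-5842 ~]# mysql -h 192.168.0.141 -P 4000 -u root -p
mysql> ALTER TABLE test.t SET TIFLASH REPLICA 1;
mysql> SELECT table_name, available FROM information_schema.tiflash_replica WHERE table_schema = 'test';
mysql> EXPLAIN SELECT COUNT(*) FROM test.t;

In the EXPLAIN output, ExchangeSender and ExchangeReceiver operators on TiFlash nodes are the sign that MPP execution is in effect.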
Troubleshooting

When you hit the following error during deployment:
Error: executor.ssh.execute_failed: Failed to execute command over SSH for tidb@192.168.0.141:22 {ssh_stderr:
We trust you have received the usual lecture from the local System Administrator. It usually boils down to these three things:
    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.
sudo: no tty present and no askpass program specified
, ssh_stdout: , ssh_command: export LANG=C; PATH=$PATH:/bin:/sbin:/usr/bin:/usr/sbin /usr/bin/sudo -H bash -c "test -d /tidb-deploy || (mkdir -p /tidb-deploy && chown tidb:$(id -g -n tidb) /tidb-deploy)"}, cause: Process exited with status 1
The tidb user needs sudo privileges; add them with visudo:

visudo

Add the line:

tidb ALL=(ALL) NOPASSWD: ALL
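To verify the rule before retrying the deploy, sudo can list the privileges granted to the tidb user (standard sudo tooling, not from the original output):

[root@localhost tidb]# sudo -l -U tidb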
When you see the following error:

goroutine 1 [running]:
runtime/debug.Stack()
    /usr/local/go/src/runtime/debug/stack.go:24 +0x65
runtime/debug.PrintStack()
    /usr/local/go/src/runtime/debug/stack.go:16 +0
github.com/pingcap/tidb/session.insertBuiltinBindInfoRow(...)
    /home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb/session/bootstrap.go:1461
github.com/pingcap/tidb/session.initBindInfoTable({0x42f39c0, 0xc001078480})
    /home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb/session/bootstrap.go:1457 +0xb1
github.com/pingcap/tidb/session.doDDLWorks({0x42f39c0, 0xc001078480})
    /home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb/session/bootstrap.go:445 +0x2ab
github.com/pingcap/tidb/session.runInBootstrapSession({0x42b5ff0, 0xc000b825a0}, 0x3e27620)
    /home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb/session/session.go:2941 +0x1ff
github.com/pingcap/tidb/session.BootstrapSession({0x42b5ff0, 0xc000b825a0})
    /home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb/session/session.go:2829 +0x216
main.createStoreAndDomain()
    /home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb/tidb-server/main.go:296 +0x114
main.main()
    /home/jenkins/agent/workspace/build-common/go/src/github.com/pingcap/tidb/tidb-server/main.go:202 +0x4ca

The likely cause is insufficient memory: after growing the VM's memory from the original 8 GB to 32 GB, the problem did not recur.
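One way to confirm that the crash was memory-related is to look for OOM-killer activity in the kernel log and check the current headroom; these are generic Linux commands offered as a hint, not taken from the original run:

[root@localhost tidb]# dmesg | grep -i -E 'out of memory|killed process'
[root@localhost tidb]# free -h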
The above walked through, in the simplest possible way, deploying and installing TiDB on openEuler 20.03 SP3 and verifying its basic functionality. Environment constraints ruled out broader functional and performance testing; after fuller use and validation, I will write that up and share it.