A Guide to TiDB Backups with S3 Storage on Public Clouds in China and in Private Deployments

Contributed article · 507 views · 2024-02-29



Background

S3 storage services, with their easy-to-use RESTful API, scalability, and pay-as-you-go billing model, quickly became popular in the cloud era. TiDB deployments on public clouds in China commonly use S3 storage for backups, and as open-source S3-compatible stores such as MinIO gain traction in private, on-premises scenarios, on-premises TiDB deployments are also gradually moving toward open-source S3 storage.


Key points and recommended practices for using S3 storage

Path style vs. virtual hosted style

Path style and virtual hosted style are the two ways of constructing URLs for an S3 storage service.

Path Style

The URL structure is <Schema>://<S3 Endpoint>/<Bucket>/<Object>, where Schema is HTTP or HTTPS, Bucket is the storage bucket name, S3 Endpoint is the endpoint through which the bucket's data store is accessed, and Object is the access path of the file, for example: https://minio.example.com:9000/examplebucket/destfolder/example.txt.

Virtual hosted style

Virtual hosted style refers to an access scheme that places the bucket in the Host header. The URL structure is <Schema>://<Bucket>.<Public Endpoint>/<Object>, with the path components meaning the same as above, for example: https://examplebucket.oss-cn-hangzhou.aliyuncs.com/destfolder/example.txt.

When objects are referenced by URL, DNS resolution maps the subdomain to an IP address. With path style, the subdomain is always the public cloud's domain name or one of its regional endpoints. With virtual hosted style, the subdomain is specific to the bucket. Path style is the earlier S3 addressing scheme; configured on a public cloud, it routes all S3 users to the same domain name, such as s3.Region.amazonaws.com, which makes traffic management and access control increasingly hard to manage.
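The difference between the two addressing schemes can be illustrated with a small sketch; the bucket and endpoint names below are the examples from the text, not real services you should contact:

```python
# Build the two S3 URL styles for the same object.
def path_style(schema, endpoint, bucket, obj):
    # Path style: the bucket appears in the URL path,
    # so every bucket shares the endpoint's domain name.
    return f"{schema}://{endpoint}/{bucket}/{obj}"

def virtual_hosted_style(schema, endpoint, bucket, obj):
    # Virtual hosted style: the bucket becomes part of the host name,
    # so DNS resolution is bucket-specific.
    return f"{schema}://{bucket}.{endpoint}/{obj}"

print(path_style("https", "minio.example.com:9000",
                 "examplebucket", "destfolder/example.txt"))
# https://minio.example.com:9000/examplebucket/destfolder/example.txt

print(virtual_hosted_style("https", "oss-cn-hangzhou.aliyuncs.com",
                           "examplebucket", "destfolder/example.txt"))
# https://examplebucket.oss-cn-hangzhou.aliyuncs.com/destfolder/example.txt
```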

Currently, public-cloud S3 object storage services such as *** OSS support only virtual hosted access, most privately deployed S3 stores support only path style access, and some services support both. When using S3 storage, the backup client must be configured according to the service provider's requirements; otherwise, with public-cloud S3 services such as OSS, you will hit the error "Please use virtual hosted style to access".

References:

https://help.aliyun.com/document_detail/31834.html

https://cloud.tencent.com/document/product/436/41284

https://docs.aws.amazon.com/zh_cn/AmazonS3/latest/userguide/VirtualHosting.html

Internal endpoints

When using a public cloud's S3 storage service, you need to distinguish whether an S3 endpoint domain name refers to the internal network or the internet.

The internal network is the internal communication network between *** products in the same region, for example: an ECS cloud server accessing the OSS service in the same region.

The external network is the public internet, for example: a private on-premises datacenter downloading backup files from the OSS service over HTTP. If internet access is required, you also need to configure the bucket's public-read or public-read-write permissions.

Depending on the cloud vendor's pricing policy, internal and internet traffic incur different charges. For example: both inbound and outbound traffic on the cloud's internal network is free, but requests are still billed by count; over the internet, inbound traffic (writes) is free, while outbound traffic (reads) is charged.
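The billing asymmetry above can be made concrete with a toy estimator; all unit prices here are hypothetical placeholders for illustration only and must be taken from your vendor's price sheet:

```python
# Hypothetical unit prices (placeholders, NOT real vendor prices).
PRICE_INTERNET_OUT_PER_GB = 0.50  # internet egress (reads) is billed
PRICE_INTERNET_IN_PER_GB = 0.0    # internet ingress (writes) is free
PRICE_INTERNAL_PER_GB = 0.0       # internal traffic is free both ways

def traffic_cost(gb, direction, internal):
    """Cost of transferring `gb` gigabytes; direction is 'in' or 'out'."""
    if internal:
        return gb * PRICE_INTERNAL_PER_GB
    if direction == "in":
        return gb * PRICE_INTERNET_IN_PER_GB
    return gb * PRICE_INTERNET_OUT_PER_GB

# Uploading a 100 GB backup over the internet is free under this model...
print(traffic_cost(100, "in", internal=False))   # 0.0
# ...but downloading (restoring) it over the internet is billed.
print(traffic_cost(100, "out", internal=False))  # 50.0
```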

Cloud vendors usually document the different domain names in their product documentation; you should configure them according to the regional relationship between the backup server and the OSS service. *** OSS requires explicitly specifying different internal and internet domain names to distinguish the access path, while *** COS distinguishes them automatically through DNS resolution, which returns an internal or internet IP.

References:

https://help.aliyun.com/document_detail/31837.htm

https://cloud.tencent.com/document/product/436/6224

Bucket data archiving and retention

When using an S3 storage service, you can use the bucket lifecycle management capability to implement tiered storage and automatic deletion of backup data, for example: move backup data from the warm tier to the cold tier after 7 days and delete it after 30 days.
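The 7-day/30-day example above can be sketched as a lifecycle rule. The dict below follows the shape of the S3 PutBucketLifecycleConfiguration API; the prefix and the cold-tier storage-class name are assumptions and vary by vendor:

```python
# A lifecycle rule for the example in the text: transition backups to a
# colder tier after 7 days and delete them after 30 days.
# "br/" prefix and "GLACIER" tier name are illustrative assumptions.
lifecycle = {
    "Rules": [
        {
            "ID": "backup-tiering",
            "Filter": {"Prefix": "br/"},   # apply only to backup objects
            "Status": "Enabled",
            "Transitions": [
                # Vendor-specific cold-tier name goes here.
                {"Days": 7, "StorageClass": "GLACIER"}
            ],
            "Expiration": {"Days": 30},    # auto-delete after 30 days
        }
    ]
}

# With boto3 this could be applied roughly as:
# s3.put_bucket_lifecycle_configuration(
#     Bucket="examplebucket", LifecycleConfiguration=lifecycle)
print(lifecycle["Rules"][0]["Expiration"]["Days"])  # 30
```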

References:

https://help.aliyun.com/document_detail/169911.html

https://cloud.tencent.com/document/product/436/14605

https://min.io/docs/minio/kubernetes/upstream/administration/object-management/object-lifecycle-management.html

Bucket replication

When using an S3 storage service, and especially when a single-datacenter private MinIO cluster has no disaster recovery in place, you can use cross-region or cross-cluster bucket replication to give backup data site-level redundancy, for example: replicate backup data from the MinIO cluster in local datacenter A to the MinIO cluster in local datacenter B.

References:

https://www.minio.org.cn/docs/minio/kubernetes/upstream/administration/bucket-replication.html

https://help.aliyun.com/document_detail/254865.html

https://cloud.tencent.com/document/product/436/19235

Orchestrating backup jobs

Public clouds usually provide operations orchestration platforms, such as *** OOS and ***'s BlueKing platform. With an orchestration platform and node scripts, you can run backup jobs on a schedule.

Verifying with s3cmd

When backing up to an S3 bucket, it is strongly recommended to first verify bucket access permissions with s3cmd before configuring the backup command.

Backing up with AK/SK

The following uses *** OSS as an example of backing up TiDB with AK/SK authentication. Note that, following rclone's s3-force-path-style convention, path style is the default; this can be controlled through the parameters of the S3 URL or the command-line option s3.provider="alibaba".

See the specific control logic at: https://github.com/pingcap/tidb/blob/master/br/pkg/storage/s3.go#L167

```go
// In some cases, we need to set ForcePathStyle to false.
// Refer to: https://rclone.org/s3/#s3-force-path-style
if options.Provider == "alibaba" || options.Provider == "netease" || options.UseAccelerateEndpoint {
	options.ForcePathStyle = false
}
```

br full backup script

```shell
# vi brbackupfull.sh
AccessKey=
SecretKey=
Bucket=
Endpoint=oss-cn-REGION-internal.aliyuncs.com
PDIP=
export AWS_ACCESS_KEY_ID=$AccessKey
export AWS_SECRET_ACCESS_KEY=$SecretKey
CURDATE=$(date +%Y%m%d%H%M%S)
/root/.tiup/components/br/{version}/br backup full \
  --pd "${PDIP}" \
  --storage "s3://${Bucket}/br/${CURDATE}" \
  --s3.endpoint="http://${Endpoint}" \
  --s3.provider="alibaba" \
  --send-credentials-to-tikv=true \
  --ratelimit 128 \
  --log-file brbackupfull.log
echo s3://${Bucket}/br/${CURDATE}
```

br full restore script

```shell
# vi brrestorefull.sh
AccessKey=
SecretKey=
Bucket=
Endpoint=oss-cn-REGION-internal.aliyuncs.com
PDIP=
dir=
export AWS_ACCESS_KEY_ID=$AccessKey
export AWS_SECRET_ACCESS_KEY=$SecretKey
/root/.tiup/components/br/{version}/br restore full \
  --pd "${PDIP}" \
  --storage "s3://${Bucket}/br/${dir}" \
  --s3.endpoint="http://${Endpoint}" \
  --s3.provider="alibaba" \
  --send-credentials-to-tikv=true \
  --ratelimit 128 \
  --log-file brrestorefull.log
```

dumpling full backup script

```shell
# vi dumplingfull.sh
AccessKey=
SecretKey=
Bucket=
Endpoint=oss-cn-REGION-internal.aliyuncs.com
TIDBIP=
BAKUSER=
BAKPW=
export AWS_ACCESS_KEY_ID=$AccessKey
export AWS_SECRET_ACCESS_KEY=$SecretKey
CURDATE=$(date +%Y%m%d%H%M%S)
/root/.tiup/components/dumpling/{version}/dumpling \
  -u "${BAKUSER}" -P 4000 -h ${TIDBIP} -p "${BAKPW}" \
  --filetype sql -t 8 \
  -o "s3://${Bucket}/dumpling/${CURDATE}" \
  --s3.endpoint="http://${Endpoint}" \
  --s3.provider="alibaba" \
  -r 200000 -F 256MiB
```

A brief log excerpt:

```shell
[2022/03/24 13:57:42.941 +08:00] [INFO] [dump.go:103] ["begin to run Dump"] [conf="{\"s3\":{\"endpoint\":\"http://oss-cn-hangzhou-internal.aliyuncs.com\",\"region\":\"us-east-1\",\"storage-class\":\"\",\"sse\":\"\",\"sse-kms-key-id\":\"\",\"acl\":\"\",\"access-key\":\"\",\"secret-access-key\":\"\",\"provider\":\"alibaba\",\"force-path-style\":false,\"use-accelerate-endpoint\":false},\"gcs\":{\"endpoint\":\"\",\"storage-class\":\"\",\"predefined-acl\":\"\",\"credentials-file\":\"\"},\"azblob\":{\"endpoint\":\"\",\"account-name\":\"\",\"account-key\":\"\",\"access-tier\":\"\"},\"AllowCleartextPasswords\":false,\"SortByPk\":true,\"NoViews\":true,\"NoHeader\":false,\"NoSchemas\":false,\"NoData\":false,\"CompleteInsert\":false,\"TransactionalConsistency\":true,\"EscapeBackslash\":true,\"DumpEmptyDatabase\":true,\"PosAfterConnect\":false,\"CompressType\":0,\"Host\":\"10.0.1.32\",\"Port\":4000,\"Threads\":8,\"User\":\"root\",\"Security\":{\"CAPath\":\"\",\"CertPath\":\"\",\"KeyPath\":\"\"},\"LogLevel\":\"info\",\"LogFile\":\"\",\"LogFormat\":\"text\",\"OutputDirPath\":\"s3://tidb-test-a/dumpling/20220324135742\",\"StatusAddr\":\":8281\",\"Snapshot\":\"432039909896486919\",\"Consistency\":\"snapshot\",\"CsvNullValue\":\"\\\\N\",\"SQL\":\"\",\"CsvSeparator\":\",\",\"CsvDelimiter\":\"\\\"\",\"Databases\":[],\"Where\":\"\",\"FileType\":\"sql\",\"ServerInfo\":{\"ServerType\":3,\"ServerVersion\":\"5.4.0\",\"HasTiKV\":true},\"Rows\":200000,\"ReadTimeout\":900000000000,\"TiDBMemQuotaQuery\":0,\"FileSize\":268435456,\"StatementSize\":1000000,\"SessionParams\":{\"tidb_snapshot\":\"432039909896486919\"},\"Tables\":null,\"CollationCompatible\":\"loose\"}"]
```

lightning restore script

The parameters in the command can be converted into a toml file as needed.

It is recommended to scale the node that runs lightning up to a 16-core spec, and to make sure the temporary directory sorted-kv-dir has enough capacity to hold a full copy of the data.
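As noted, the command-line flags can equivalently live in a toml file. A minimal sketch for the local backend follows the documented tidb-lightning config layout; all hosts, paths, and credentials here are uppercase placeholders to fill in:

```toml
# tidb-lightning.toml — sketch only; HOSTS, PATHS, and CREDENTIALS are placeholders.
[lightning]
status-addr = ":8239"
file = "tidb-lightning-full.log"

[tikv-importer]
backend = "local"
sorted-kv-dir = "/data/sorted-kv-dir/DIR"

[mydumper]
data-source-dir = "s3://BUCKET/dumpling/DIR?endpoint=http://ENDPOINT&provider=alibaba"

[tidb]
host = "TIDB_IP"
port = 4000
user = "BAKUSER"
password = "BAKPW"
pd-addr = "PD_IP:2379"
```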

```shell
# vi lightning.sh
AccessKey=
SecretKey=
Bucket=
Endpoint=oss-cn-REGION-internal.aliyuncs.com
TIDBIP=
TIDBPORT=4000
BAKUSER=
BAKPW=
PDIP=
dir=
sortedkvdir=/data/sorted-kv-dir/${dir}
export AWS_ACCESS_KEY_ID=$AccessKey
export AWS_SECRET_ACCESS_KEY=$SecretKey
# tidb backend
/root/.tiup/components/tidb-lightning/{version}/tidb-lightning --backend tidb \
  -d "s3://${Bucket}/dumpling/${dir}?endpoint=http://${Endpoint}&provider=alibaba" \
  --tidb-host ${TIDBIP} --tidb-port ${TIDBPORT} --tidb-user ${BAKUSER} --tidb-password ${BAKPW} \
  --pd-urls "${PDIP}:2379" --analyze required --checksum required
# local backend
/root/.tiup/components/tidb-lightning/{version}/tidb-lightning --backend local \
  --sorted-kv-dir "${sortedkvdir}" --log-file tidb-lightning-full.log --status-addr ":8239" \
  -d "s3://${Bucket}/dumpling/${dir}?endpoint=http://${Endpoint}&provider=alibaba" \
  --tidb-host ${TIDBIP} --tidb-user ${BAKUSER} --tidb-password ${BAKPW} \
  --pd-urls "${PDIP}:2379" --analyze required --checksum required
```

Backing up with *** RAM

Storing the AK/SK in an application's configuration files and using them to call *** service APIs has two problems: (1) confidentiality — the keys may be leaked along with snapshots, images, and instances created from those images; (2) maintainability — rotating an AK requires updating and redeploying every object that uses it. On ***, both the leakage risk and the maintenance burden can be avoided by attaching a RAM role to the ECS instance.

Starting from version 6.1, TiDB supports *** RAM for the Dumpling and Lightning tools; the BR tool involves the send-credentials-to-tikv logic and does not support it yet.

The brief steps are as follows:

Attach the RAM permissions for the S3 storage to the ECS host.

Use a backup script that needs no AK/SK configuration.

Summary

TiDB supports two backup storage types, NFS and S3. NFS has drawbacks such as demanding user-permission requirements and difficult kernel-mode maintenance. It is recommended to gradually switch TiDB backups to S3 backup storage and, combined with bucket replication and the data archiving features, achieve better backup management.

Appendix: using s3cmd

Downloading s3cmd

For CentOS 7, it is recommended to download from the URL below; note that s3cmd is a noarch package and should in principle be compatible with arm64.

https://rhel.pkgs.org/7/epel-aarch64/s3cmd-2.0.2-1.el7.noarch.rpm.html

Official s3cmd download page:

https://s3tools.org/download

Common s3cmd commands

The commands are listed below:

```shell
# List all buckets
s3cmd ls
# Create a bucket (the bucket name must be globally unique)
s3cmd mb s3://{BUCKETNAME}
# List the files in a bucket
s3cmd ls s3://{BUCKETNAME}/dumpling/
# Recursively delete a directory
s3cmd rm -r s3://{BUCKETNAME}/dumpling/
# Download backup files in bulk to a local directory
s3cmd get s3://{BUCKETNAME}/dumpling/* ./
# Upload files from a local directory in bulk
s3cmd put ./* s3://{BUCKETNAME}/dumpling/
# Check space usage
s3cmd du -H s3://{BUCKETNAME}/dumpling/
```

Typical .s3cfg configuration

Using *** OSS as an example, a typical configuration produced by s3cmd --configure looks as follows.

```shell
# s3cmd --configure
Access Key: xxxx
Secret Key: xxxx
Default Region: US
S3 Endpoint: oss-cn-hangzhou-internal.aliyuncs.com
DNS-style bucket+hostname:port template for accessing a bucket: %(bucket)s.oss-cn-hangzhou-internal.aliyuncs.com
Encryption password:
Path to GPG program: /usr/bin/gpg
Use HTTPS protocol: True
HTTP Proxy server name:
HTTP Proxy server port: 0
[Y/n] y
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)
Now verifying that encryption works..
[y/N] y
Configuration saved to /root/.s3cfg
```

Installing with the python setup script

Download the s3cmd package and the dependent offline whl packages:

https://sourceforge.net/projects/s3tools/files/s3cmd/
https://pypi.org/project/python-magic/#files
https://pypi.org/project/six/#files
https://pypi.org/project/python-dateutil/#files

```shell
[root@host S3CMD]# yum install python-setuptools
Last metadata expiration check: 2:01:08 ago on Thu 25 May 2023 07:44:15 AM CST.
Package python-setuptools-44.1.1-1.oe1.noarch is already installed.
Dependencies resolved.
Nothing to do.
Complete!
[root@host S3CMD]# pip install python_magic-0.4.27-py2.py3-none-any.whl
DEPRECATION: Python 2.7 reached the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 is no longer maintained.
Processing ./python_magic-0.4.27-py2.py3-none-any.whl
Installing collected packages: python-magic
Successfully installed python-magic-0.4.27
[root@host S3CMD]# pip install six-1.16.0-py2.py3-none-any.whl
Processing ./six-1.16.0-py2.py3-none-any.whl
Installing collected packages: six
Successfully installed six-1.16.0
[root@host S3CMD]# pip install python_dateutil-2.8.2-py2.py3-none-any.whl
Processing ./python_dateutil-2.8.2-py2.py3-none-any.whl
Requirement already satisfied: six>=1.5 in /usr/lib/python2.7/site-packages (from python-dateutil==2.8.2) (1.16.0)
Installing collected packages: python-dateutil
Successfully installed python-dateutil-2.8.2
[root@host S3CMD]# cd s3cmd-2.1.0/
[root@host s3cmd-2.1.0]# python setup.py install
Using xml.etree.ElementTree for XML processing
running install
running bdist_egg
running egg_info
...
Installing s3cmd script to /usr/bin
Installed /usr/lib/python2.7/site-packages/s3cmd-2.1.0-py2.7.egg
Processing dependencies for s3cmd==2.1.0
...
Finished processing dependencies for s3cmd==2.1.0
[root@host s3cmd-2.1.0]#
```

