A Hands-On Guide to Modifying the sysbench Code

Reader contribution · 2023-04-01

Introduction to the original sysbench code

After installing sysbench, you will find a number of .lua scripts under /usr/share/sysbench. These are the Lua scripts that sysbench runs during OLTP benchmarks; editing the scripts in this directory is how you change the behavior of sysbench's OLTP tests.


[tidb@172-16-120-12 sysbench]$ ll /usr/share/sysbench/
total 64
-rwxr-xr-x 1 root root  1452 Mar 16  2019 bulk_insert.lua
-rw-r--r-- 1 root root 14659 Jun 28  2022 oltp_common.lua
-rwxr-xr-x 1 root root  1290 Mar 16  2019 oltp_delete.lua
-rwxr-xr-x 1 root root  2415 Mar 16  2019 oltp_insert.lua
-rwxr-xr-x 1 root root  1265 Mar 16  2019 oltp_point_select.lua
-rwxr-xr-x 1 root root  1649 Mar 16  2019 oltp_read_only.lua
-rwxr-xr-x 1 root root  1824 Mar 16  2019 oltp_read_write.lua
-rwxr-xr-x 1 root root  1118 Mar 16  2019 oltp_update_index.lua
-rwxr-xr-x 1 root root  1127 Mar 16  2019 oltp_update_non_index.lua
-rwxr-xr-x 1 root root  1440 Mar 16  2019 oltp_write_only.lua
-rwxr-xr-x 1 root root  1919 Mar 16  2019 select_random_points.lua
-rwxr-xr-x 1 root root  2118 Mar 16  2019 select_random_ranges.lua
drwxr-xr-x 4 root root  4096 Jun 28  2022 tests

Scripts such as oltp_point_select.lua, oltp_read_write.lua and oltp_update_index.lua are the entry points for the individual sysbench OLTP tests.

oltp_common.lua contains the code shared by the OLTP benchmark scenarios and their data model, including creating the tables, inserting data and querying it.

The logic that sysbench prepare uses to create the tables and generate data lives in `function create_table(drv, con, table_num)`. If you want a custom table schema or concurrent data loading, create_table is the function to modify.

Modification 1: change the table schema to raise the maximum number of rows per table

By default sysbench creates tables with the following schema:

CREATE TABLE `sbtest1` (
  `id` int(11) NOT NULL,
  `k` int(11) NOT NULL DEFAULT 0,
  `c` char(120) NOT NULL DEFAULT '',
  `pad` char(60) NOT NULL DEFAULT '',
  PRIMARY KEY (`id`) /*T![clustered_index] CLUSTERED */,
  KEY `k_1` (`k`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin

Because the maximum value of an INT is 2147483647, a table whose id auto-increments from 1 can hold at most 2147483647 rows. To test how a much larger single table performs, the table definition has to be changed.

The main change is to widen the id column (and, for consistency, the k column) from INT to BIGINT in create_table(), so the primary key is no longer capped at 2147483647.
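
In create_table() this boils down to the following lines, excerpted from the full oltp_common_tidb.lua at the end of this post (the IF NOT EXISTS part belongs to modification 2 below):

-- id: INTEGER -> BIGINT, so the primary key can grow past 2147483647
if sysbench.opt.auto_inc then
   id_def = "BIGINT NOT NULL AUTO_INCREMENT"
else
   id_def = "BIGINT NOT NULL"
end

-- k: INTEGER -> BIGINT in the CREATE TABLE template
query = string.format([[
CREATE TABLE IF NOT EXISTS sbtest%d(
  id %s,
  k BIGINT DEFAULT '0' NOT NULL,
  c CHAR(120) DEFAULT '' NOT NULL,
  pad CHAR(60) DEFAULT '' NOT NULL,
  %s (id)
) %s %s]],
   table_num, id_def, id_index_def, engine_def, extra_table_options)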

Modification 2: change the starting value of a non-AUTO_INCREMENT primary key

After creating the tables, sysbench starts populating them with INSERT statements. If --auto-inc is true (the default), it generates statements of the form INSERT INTO sbtestN (k,c,pad) VALUES (...),(...),(...), and the id column is filled in by AUTO_INCREMENT. If --auto-inc is false, it generates INSERT INTO sbtestN (id,k,c,pad) VALUES (...),(...),(...), and the id values are produced explicitly by the for loop below.

for i = 1, sysbench.opt.table_size do

   c_val = get_c_value()
   pad_val = get_pad_value()

   if (sysbench.opt.auto_inc) then
      query = string.format("(%d, %s, %s)",
                            sb_rand(1, sysbench.opt.table_size),
                            c_val, pad_val)
   else
      query = string.format("(%d, %d, %s, %s)",
                            i, sb_rand(1, sysbench.opt.table_size),
                            c_val, pad_val)
   end

   con:bulk_insert_next(query)
end

With the stock sysbench, if data loading is interrupted by an error there is no way to resume by simply re-running the program: the data already loaded has to be wiped and loading restarted from scratch, which is painful for large tables. So the prepare script needs to be modified so that sysbench can continue loading after an interruption. By default every run generates id values from 1 up to table-size, so directly re-running sysbench fails in two places:

the tables already exist, so re-issuing the CREATE TABLE DDL fails;

the id primary key starts again from 1, which causes duplicate-key errors.

How to fix it:

For the duplicate CREATE TABLE problem, either add IF NOT EXISTS to the table definition, or simply comment out the line that executes the statement and skip table creation altogether. The line con:query(query) is what actually runs the CREATE TABLE, and it can be disabled with a -- comment:

query = string.format([[
CREATE TABLE sbtest%d(
  id %s,
  k INTEGER DEFAULT '0' NOT NULL,
  c CHAR(120) DEFAULT '' NOT NULL,
  pad CHAR(60) DEFAULT '' NOT NULL,
  %s (id)
) %s %s]],
   table_num, id_def, id_index_def, engine_def, extra_table_options)

--con:query(query)

For the id primary-key conflicts, the simplest fix is to change the start and end values of the for loop: query the largest id already in the table at the time of the interruption, and let the loop run from max(id)+1 up to table-size.

This approach mainly targets loading a single table. When several tables are being loaded and sysbench exits on an error, each table may have a different max(id). If the data needs to be exactly the same amount and content as what stock sysbench would generate, you can change the INSERT statements to REPLACE and start the for loop from the smallest of the tables' max(id) values.
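
For the single-table case, a minimal sketch of the change could look like the following; it assumes the con:query_row() helper from the sysbench 1.0 Lua SQL API and is meant as an illustration rather than the exact patch:

-- Resume loading from the largest id already present in the table
-- (IFNULL covers an empty table, so a fresh run still starts at 1).
local max_id = tonumber(con:query_row(string.format(
   "SELECT IFNULL(MAX(id), 0) FROM sbtest%d", table_num))) or 0

for i = max_id + 1, sysbench.opt.table_size do
   c_val = get_c_value()
   pad_val = get_pad_value()
   query = string.format("(%d, %d, %s, %s)",
                         i, sb_rand(1, sysbench.opt.table_size),
                         c_val, pad_val)
   con:bulk_insert_next(query)
end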

Modification 3: increase the data-loading concurrency for a single table

During prepare, sysbench loads each table with a single thread, so loading a very large single table is slow. Let's see how to modify the sysbench scripts so that a single table can be written concurrently during the prepare phase. The proper way would be to implement concurrent writes with Lua coroutines, but I'm not well versed in Lua, and the coroutine demo ChatGPT produced looked fairly involved, so I gave up on making the sysbench script itself concurrent.

Since the earlier change to the for loop's start value lets us control how many rows are generated and which primary-key range they cover, a rather crude way to get concurrency comes to mind:

Use the for loop's start and end values to control the range of data each run generates, and launch several sysbench processes at the same time; together they achieve concurrent writes.

Starting several sysbench processes at once by editing the values directly in the code would be cumbersome, and because Lua is a scripting language, editing the script while another sysbench run is still using it could lead to unexpected behavior. So instead the start and end values are passed in as command-line parameters.

for i = 1, sysbench.opt.table_size do

In the original for loop the start value is hard-coded to 1 and the end value is the --table-size passed on the sysbench command line, so all that is needed is one extra parameter for the start value, as shown below.
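
Concretely, that means two small edits, both visible in the full oltp_common_tidb.lua at the end of this post: register a start_id option (exposed on the command line as --start-id) and use it as the loop's lower bound.

-- added to sysbench.cmdline.options:
start_id = {"Start id", 1}

-- in create_table(), the loading loop now starts from it:
print(string.format("Insert start from %d to %d",
                    sysbench.opt.start_id, sysbench.opt.table_size))
for i = sysbench.opt.start_id, sysbench.opt.table_size do
   -- row generation unchanged
end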

After the modification, sysbench prepare can control the number and range of generated rows through the --start-id and --table-size command-line options:

sysbench --config-file=tidbconfig oltp_read_write_tidb --mysql-db=sbdb --tables=1 --table-size=2200000000 --mysql-ignore-errors=all --auto-inc=false --start-id=2100000001 prepare

Even with parameters controlling what sysbench generates, the batches still have to be split up by hand. To avoid that work, a small shell script can generate the sysbench loading commands.

The script below was generated by ChatGPT. It takes three arguments: the starting id, the ending id, and a batch size; each sysbench process loads batch rows. For example, `sh parall_prepare.sh 1 5000000000 100000000` means ids 1 through 5000000000 need to be written in total, with each sysbench process responsible for 100000000 rows, so 50 sysbench processes are started and write at the same time, giving 50-way parallel loading.

#!/bin/bash

# Check if start, end and batch values are provided
if [ $# -ne 3 ]; then
    echo "Usage: $0 start end batch"
    exit 1
fi

# Parse the arguments
start=$1
end=$2
batch=$3

# Set the initial value of tmp to start
tmps=$start
tmpe=$batch

# Loop until tmp is greater than end
while [ $tmps -le $end ]
do
    echo "Start from $tmps to $tmpe"
    sysbench_cmd="sysbench --config-file=tidbconfig oltp_read_write_tidb --mysql-db=sbdb --tables=1 --start-id=$tmps --table-size=$tmpe --mysql-ignore-errors=all --auto-inc=false prepare"
    #echo "execute $sysbench_cmd"
    nohup sysbench --config-file=tidbconfig oltp_read_write_tidb --mysql-db=sbdb --tables=1 --start-id=$tmps --table-size=$tmpe --mysql-ignore-errors=all --auto-inc=false prepare > nohup_$tmps.log &
    #sleep 3
    tmps=$((tmps + batch))
    tmpe=$((tmpe + batch))
done

echo "Done!"

Modification 4: control the number of rows per INSERT statement

As we saw above, sysbench loads the initial data mainly through three calls, bulk_insert_init(), bulk_insert_next() and bulk_insert_done(), which are implemented in sysbench's C code:

bulk_insert_init() initializes the INSERT INTO ... VALUES statement and opens a transaction;

bulk_insert_next() appends each generated row as (...),(...),(...); from the code, once the accumulated string grows beyond BULK_PACKET_SIZE = 524288 bytes the current SQL statement is executed, which in autocommit mode amounts to committing the transaction;

bulk_insert_done() commits the transaction.

if sysbench.opt.auto_inc then
   query = "INSERT INTO sbtest" .. table_num .. "(k, c, pad) VALUES"
else
   query = "INSERT INTO sbtest" .. table_num .. "(id, k, c, pad) VALUES"
end

con:bulk_insert_init(query)

local c_val
local pad_val

for i = 1, sysbench.opt.table_size do

   c_val = get_c_value()
   pad_val = get_pad_value()

   if (sysbench.opt.auto_inc) then
      query = string.format("(%d, %s, %s)",
                            sb_rand(1, sysbench.opt.table_size),
                            c_val, pad_val)
   else
      query = string.format("(%d, %d, %s, %s)",
                            i, sb_rand(1, sysbench.opt.table_size),
                            c_val, pad_val)
   end

   con:bulk_insert_next(query)
end

con:bulk_insert_done()

Because the INSERT statements built by bulk_insert_next() are flushed based on string length, the number of rows per INSERT is not fixed. To make sysbench commit every 500 rows, the calling logic has to change. The interfaces themselves already exist, so it is enough to count the rows appended inside the for loop and, once the count reaches 500, call bulk_insert_done() to commit the transaction and then re-initialize the statement.
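
In the modified script this becomes the loop below (a slightly simplified excerpt of the full code in the appendix): cursize counts the rows appended so far, and on every 500th row the current statement is committed with bulk_insert_done() and a new one is started with bulk_insert_init().

cursize = 1
for i = sysbench.opt.start_id, sysbench.opt.table_size do
   c_val = get_c_value()
   pad_val = get_pad_value()
   query = string.format("(%d, %d, %s, %s)",
                         i, sb_rand(1, sysbench.opt.table_size),
                         c_val, pad_val)

   if (cursize % 500 ~= 0) then
      -- keep appending rows to the current INSERT statement
      con:bulk_insert_next(query)
   else
      -- 500th row: append it, commit, and start a fresh INSERT
      con:bulk_insert_next(query)
      con:bulk_insert_done()
      con:bulk_insert_init(iquery)
   end
   cursize = cursize + 1
end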

Keeping the modifications in separate test scripts

To keep the changes to oltp_common.lua from affecting sysbench's stock MySQL benchmarks, copy oltp_common.lua to oltp_common_tidb.lua and make the modifications there, and copy oltp_read_write.lua to oltp_read_write_tidb.lua. Changing the require in oltp_read_write_tidb.lua to require("oltp_common_tidb") makes it pick up the code in oltp_common_tidb.lua.
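
A minimal way to set that up from the default installation directory (the sed call is just one way to make the require change, assuming the original line reads require("oltp_common")):

cd /usr/share/sysbench
cp oltp_common.lua oltp_common_tidb.lua
cp oltp_read_write.lua oltp_read_write_tidb.lua
# point the copied entry script at the copied common code
sed -i 's/require("oltp_common")/require("oltp_common_tidb")/' oltp_read_write_tidb.lua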

Invocation:

sysbench --config-file=tidbconfig oltp_read_write_tidb --mysql-db=sbdb --tables=1 --start-id=1 --table-size=1000 --mysql-ignore-errors=all --auto-inc=false prepare

Appendix: the full modified code

oltp_common_tidb.lua

-- Copyright (C) 2006-2018 Alexey Kopytov <akopytov@gmail.com>

-- This program is free software; you can redistribute it and/or modify
-- it under the terms of the GNU General Public License as published by
-- the Free Software Foundation; either version 2 of the License, or
-- (at your option) any later version.

-- This program is distributed in the hope that it will be useful,
-- but WITHOUT ANY WARRANTY; without even the implied warranty of
-- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-- GNU General Public License for more details.

-- You should have received a copy of the GNU General Public License
-- along with this program; if not, write to the Free Software
-- Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

-- -----------------------------------------------------------------------------
-- Common code for OLTP benchmarks.
-- -----------------------------------------------------------------------------

function init()
   assert(event ~= nil,
          "this script is meant to be included by other OLTP scripts and " ..
             "should not be called directly.")
end

if sysbench.cmdline.command == nil then
   error("Command is required. Supported commands: prepare, prewarm, run, " ..
            "cleanup, help")
end

-- Command line options
sysbench.cmdline.options = {
   table_size = {"Number of rows per table", 10000},
   range_size = {"Range size for range SELECT queries", 100},
   tables = {"Number of tables", 1},
   point_selects = {"Number of point SELECT queries per transaction", 10},
   simple_ranges = {"Number of simple range SELECT queries per transaction", 1},
   sum_ranges = {"Number of SELECT SUM() queries per transaction", 1},
   order_ranges = {"Number of SELECT ORDER BY queries per transaction", 1},
   distinct_ranges = {"Number of SELECT DISTINCT queries per transaction", 1},
   index_updates = {"Number of UPDATE index queries per transaction", 1},
   non_index_updates = {"Number of UPDATE non-index queries per transaction", 1},
   delete_inserts = {"Number of DELETE/INSERT combinations per transaction", 1},
   range_selects = {"Enable/disable all range SELECT queries", true},
   auto_inc = {"Use AUTO_INCREMENT column as Primary Key (for MySQL), " ..
      "or its alternatives in other DBMS. When disabled, use " ..
      "client-generated IDs", true},
   skip_trx = {"Don't start explicit transactions and execute all queries " ..
      "in the AUTOCOMMIT mode", false},
   secondary = {"Use a secondary index in place of the PRIMARY KEY", false},
   create_secondary = {"Create a secondary index in addition to the PRIMARY KEY", true},
   mysql_storage_engine = {"Storage engine, if MySQL is used", "innodb"},
   pgsql_variant = {"Use this PostgreSQL variant when running with the " ..
      "PostgreSQL driver. The only currently supported " ..
      "variant is 'redshift'. When enabled, " ..
      "create_secondary is automatically disabled, and " ..
      "delete_inserts is set to 0"},
   start_id = {"Start id", 1}
}

-- Prepare the dataset. This command supports parallel execution, i.e. will
-- benefit from executing with --threads > 1 as long as --tables > 1
function cmd_prepare()
   local drv = sysbench.sql.driver()
   local con = drv:connect()

   for i = sysbench.tid % sysbench.opt.threads + 1, sysbench.opt.tables,
   sysbench.opt.threads do
      create_table(drv, con, i)
   end
end

-- Preload the dataset into the server cache. This command supports parallel
-- execution, i.e. will benefit from executing with --threads > 1 as long as
-- --tables > 1
--
-- PS. Currently, this command is only meaningful for MySQL/InnoDB benchmarks
function cmd_prewarm()
   local drv = sysbench.sql.driver()
   local con = drv:connect()

   assert(drv:name() == "mysql", "prewarm is currently MySQL only")

   -- Do not create on disk tables for subsequent queries
   con:query("SET tmp_table_size=2*1024*1024*1024")
   con:query("SET max_heap_table_size=2*1024*1024*1024")

   for i = sysbench.tid % sysbench.opt.threads + 1, sysbench.opt.tables,
   sysbench.opt.threads do
      local t = "sbtest" .. i
      print("Prewarming table " .. t)
      con:query("ANALYZE TABLE sbtest" .. i)
      con:query(string.format(
                   "SELECT AVG(id) FROM " ..
                      "(SELECT * FROM %s FORCE KEY (PRIMARY) " ..
                      "LIMIT %u) t",
                   t, sysbench.opt.table_size))
      con:query(string.format(
                   "SELECT COUNT(*) FROM " ..
                      "(SELECT * FROM %s WHERE k LIKE '%%0%%' LIMIT %u) t",
                   t, sysbench.opt.table_size))
   end
end

-- Implement parallel prepare and prewarm commands
sysbench.cmdline.commands = {
   prepare = {cmd_prepare, sysbench.cmdline.PARALLEL_COMMAND},
   prewarm = {cmd_prewarm, sysbench.cmdline.PARALLEL_COMMAND}
}

-- Template strings of random digits with 11-digit groups separated by dashes

-- 10 groups, 119 characters
local c_value_template = "###########-###########-###########-" ..
   "###########-###########-###########-" ..
   "###########-###########-###########-" ..
   "###########"

-- 5 groups, 59 characters
local pad_value_template = "###########-###########-###########-" ..
   "###########-###########"

function get_c_value()
   return sysbench.rand.string(c_value_template)
end

function get_pad_value()
   return sysbench.rand.string(pad_value_template)
end

function create_table(drv, con, table_num)
   local id_index_def, id_def
   local engine_def = ""
   local extra_table_options = ""
   local query

   if sysbench.opt.secondary then
      id_index_def = "KEY xid"
   else
      id_index_def = "PRIMARY KEY"
   end

   if drv:name() == "mysql" or drv:name() == "attachsql" or
      drv:name() == "drizzle"
   then
      if sysbench.opt.auto_inc then
         id_def = "BIGINT NOT NULL AUTO_INCREMENT"
      else
         id_def = "BIGINT NOT NULL"
      end
      engine_def = "/*! ENGINE = " .. sysbench.opt.mysql_storage_engine .. " */"
      extra_table_options = mysql_table_options or ""
   elseif drv:name() == "pgsql"
   then
      if not sysbench.opt.auto_inc then
         id_def = "INTEGER NOT NULL"
      elseif pgsql_variant == 'redshift' then
         id_def = "INTEGER IDENTITY(1,1)"
      else
         id_def = "SERIAL"
      end
   else
      error("Unsupported database driver:" .. drv:name())
   end

   print(string.format("Creating table 'sbtest%d'...", table_num))

   query = string.format([[
CREATE TABLE IF NOT EXISTS sbtest%d(
  id %s,
  k BIGINT DEFAULT '0' NOT NULL,
  c CHAR(120) DEFAULT '' NOT NULL,
  pad CHAR(60) DEFAULT '' NOT NULL,
  %s (id)
) %s %s]],
      table_num, id_def, id_index_def, engine_def, extra_table_options)

   con:query(query)

   if (sysbench.opt.table_size > 0) then
      print(string.format("Inserting %d records into 'sbtest%d'",
                          sysbench.opt.table_size, table_num))
   end

   iquery = ""
   if sysbench.opt.auto_inc then
      iquery = "INSERT INTO sbtest" .. table_num .. "(k, c, pad) VALUES"
   else
      iquery = "INSERT INTO sbtest" .. table_num .. "(id, k, c, pad) VALUES"
   end

   con:bulk_insert_init(iquery)

   local c_val
   local pad_val

   print(string.format("Insert start from %d to %d",
                       sysbench.opt.start_id, sysbench.opt.table_size))

   cursize = 1
   for i = sysbench.opt.start_id, sysbench.opt.table_size do

      c_val = get_c_value()
      pad_val = get_pad_value()

      if (sysbench.opt.auto_inc) then
         query = string.format("(%d, %s, %s)",
                               sb_rand(1, sysbench.opt.table_size),
                               c_val, pad_val)
      else
         query = string.format("(%d, %d, %s, %s)",
                               i, sb_rand(1, sysbench.opt.table_size),
                               c_val, pad_val)
      end

      --con:bulk_insert_next(query)
      --print(query)
      if (cursize % 500 ~= 0) then
         con:bulk_insert_next(query)
      elseif (cursize ~= 0) then
         con:bulk_insert_next(query)
         con:bulk_insert_done()
         con:bulk_insert_init(iquery)
      else
         con:bulk_insert_init(iquery)
      end
      cursize = cursize + 1
   end

   --con:bulk_insert_done()

   if sysbench.opt.create_secondary then
      print(string.format("Creating a secondary index on 'sbtest%d'...",
                          table_num))
      --con:query(string.format("CREATE INDEX k_%d ON sbtest%d(k)",
      --                        table_num, table_num))
   end
end

local t = sysbench.sql.type
local stmt_defs = {
   point_selects = {
      "SELECT c FROM sbtest%u WHERE id=?",
      t.INT},
   simple_ranges = {
      "SELECT c FROM sbtest%u WHERE id BETWEEN ? AND ?",
      t.INT, t.INT},
   sum_ranges = {
      "SELECT SUM(k) FROM sbtest%u WHERE id BETWEEN ? AND ?",
      t.INT, t.INT},
   order_ranges = {
      "SELECT c FROM sbtest%u WHERE id BETWEEN ? AND ? ORDER BY c",
      t.INT, t.INT},
   distinct_ranges = {
      "SELECT DISTINCT c FROM sbtest%u WHERE id BETWEEN ? AND ? ORDER BY c",
      t.INT, t.INT},
   index_updates = {
      "UPDATE sbtest%u SET k=k+1 WHERE id=?",
      t.INT},
   non_index_updates = {
      "UPDATE sbtest%u SET c=? WHERE id=?",
      {t.CHAR, 120}, t.INT},
   deletes = {
      "DELETE FROM sbtest%u WHERE id=?",
      t.INT},
   inserts = {
      "INSERT INTO sbtest%u (id, k, c, pad) VALUES (?, ?, ?, ?)",
      t.INT, t.INT, {t.CHAR, 120}, {t.CHAR, 60}},
}

function prepare_begin()
   stmt.begin = con:prepare("BEGIN")
end

function prepare_commit()
   stmt.commit = con:prepare("COMMIT")
end

function prepare_for_each_table(key)
   for t = 1, sysbench.opt.tables do
      stmt[t][key] = con:prepare(string.format(stmt_defs[key][1], t))

      local nparam = #stmt_defs[key] - 1

      if nparam > 0 then
         param[t][key] = {}
      end

      for p = 1, nparam do
         local btype = stmt_defs[key][p+1]
         local len

         if type(btype) == "table" then
            len = btype[2]
            btype = btype[1]
         end
         if btype == sysbench.sql.type.VARCHAR or
            btype == sysbench.sql.type.CHAR then
               param[t][key][p] = stmt[t][key]:bind_create(btype, len)
         else
            param[t][key][p] = stmt[t][key]:bind_create(btype)
         end
      end

      if nparam > 0 then
         stmt[t][key]:bind_param(unpack(param[t][key]))
      end
   end
end

function prepare_point_selects()
   prepare_for_each_table("point_selects")
end

function prepare_simple_ranges()
   prepare_for_each_table("simple_ranges")
end

function prepare_sum_ranges()
   prepare_for_each_table("sum_ranges")
end

function prepare_order_ranges()
   prepare_for_each_table("order_ranges")
end

function prepare_distinct_ranges()
   prepare_for_each_table("distinct_ranges")
end

function prepare_index_updates()
   prepare_for_each_table("index_updates")
end

function prepare_non_index_updates()
   prepare_for_each_table("non_index_updates")
end

function prepare_delete_inserts()
   prepare_for_each_table("deletes")
   prepare_for_each_table("inserts")
end

function thread_init()
   drv = sysbench.sql.driver()
   con = drv:connect()

   -- Create global nested tables for prepared statements and their
   -- parameters. We need a statement and a parameter set for each combination
   -- of connection/table/query
   stmt = {}
   param = {}

   for t = 1, sysbench.opt.tables do
      stmt[t] = {}
      param[t] = {}
   end

   -- This function is a 'callback' defined by individual benchmark scripts
   prepare_statements()
end

-- Close prepared statements
function close_statements()
   for t = 1, sysbench.opt.tables do
      for k, s in pairs(stmt[t]) do
         stmt[t][k]:close()
      end
   end
   if (stmt.begin ~= nil) then
      stmt.begin:close()
   end
   if (stmt.commit ~= nil) then
      stmt.commit:close()
   end
end

function thread_done()
   close_statements()
   con:disconnect()
end

function cleanup()
   local drv = sysbench.sql.driver()
   local con = drv:connect()

   for i = 1, sysbench.opt.tables do
      print(string.format("Dropping table 'sbtest%d'...", i))
      con:query("DROP TABLE IF EXISTS sbtest" .. i )
   end
end

local function get_table_num()
   return sysbench.rand.uniform(1, sysbench.opt.tables)
end

local function get_id()
   return sysbench.rand.default(1, sysbench.opt.table_size)
end

function begin()
   stmt.begin:execute()
end

function commit()
   stmt.commit:execute()
end

function execute_point_selects()
   local tnum = get_table_num()
   local i

   for i = 1, sysbench.opt.point_selects do
      param[tnum].point_selects[1]:set(get_id())

      stmt[tnum].point_selects:execute()
   end
end

local function execute_range(key)
   local tnum = get_table_num()

   for i = 1, sysbench.opt[key] do
      local id = get_id()

      param[tnum][key][1]:set(id)
      param[tnum][key][2]:set(id + sysbench.opt.range_size - 1)

      stmt[tnum][key]:execute()
   end
end

function execute_simple_ranges()
   execute_range("simple_ranges")
end

function execute_sum_ranges()
   execute_range("sum_ranges")
end

function execute_order_ranges()
   execute_range("order_ranges")
end

function execute_distinct_ranges()
   execute_range("distinct_ranges")
end

function execute_index_updates()
   local tnum = get_table_num()

   for i = 1, sysbench.opt.index_updates do
      param[tnum].index_updates[1]:set(get_id())

      stmt[tnum].index_updates:execute()
   end
end

function execute_non_index_updates()
   local tnum = get_table_num()

   for i = 1, sysbench.opt.non_index_updates do
      param[tnum].non_index_updates[1]:set_rand_str(c_value_template)
      param[tnum].non_index_updates[2]:set(get_id())

      stmt[tnum].non_index_updates:execute()
   end
end

function execute_delete_inserts()
   local tnum = get_table_num()

   for i = 1, sysbench.opt.delete_inserts do
      local id = get_id()
      local k = get_id()

      param[tnum].deletes[1]:set(id)

      param[tnum].inserts[1]:set(id)
      param[tnum].inserts[2]:set(k)
      param[tnum].inserts[3]:set_rand_str(c_value_template)
      param[tnum].inserts[4]:set_rand_str(pad_value_template)

      stmt[tnum].deletes:execute()
      stmt[tnum].inserts:execute()
   end
end

-- Re-prepare statements if we have reconnected, which is possible when some of
-- the listed error codes are in the --mysql-ignore-errors list
function sysbench.hooks.before_restart_event(errdesc)
   if errdesc.sql_errno == 2013 or -- CR_SERVER_LOST
      errdesc.sql_errno == 2055 or -- CR_SERVER_LOST_EXTENDED
      errdesc.sql_errno == 2006 or -- CR_SERVER_GONE_ERROR
      errdesc.sql_errno == 2011    -- CR_TCP_CONNECTION
   then
      close_statements()
      prepare_statements()
   end
end

oltp_read_write_tidb.lua

#!/usr/bin/env sysbench
-- Copyright (C) 2006-2017 Alexey Kopytov <akopytov@gmail.com>

-- This program is free software; you can redistribute it and/or modify
-- it under the terms of the GNU General Public License as published by
-- the Free Software Foundation; either version 2 of the License, or
-- (at your option) any later version.

-- This program is distributed in the hope that it will be useful,
-- but WITHOUT ANY WARRANTY; without even the implied warranty of
-- MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-- GNU General Public License for more details.

-- You should have received a copy of the GNU General Public License
-- along with this program; if not, write to the Free Software
-- Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA

-- ----------------------------------------------------------------------
-- Read/Write OLTP benchmark
-- ----------------------------------------------------------------------

require("oltp_common_tidb")

function prepare_statements()
   if not sysbench.opt.skip_trx then
      prepare_begin()
      prepare_commit()
   end

   prepare_point_selects()

   if sysbench.opt.range_selects then
      prepare_simple_ranges()
      prepare_sum_ranges()
      prepare_order_ranges()
      prepare_distinct_ranges()
   end

   prepare_index_updates()
   prepare_non_index_updates()
   prepare_delete_inserts()
end

function event()
   if not sysbench.opt.skip_trx then
      begin()
   end

   execute_point_selects()

   if sysbench.opt.range_selects then
      execute_simple_ranges()
      execute_sum_ranges()
      execute_order_ranges()
      execute_distinct_ranges()
   end

   execute_index_updates()
   execute_non_index_updates()
   execute_delete_inserts()

   if not sysbench.opt.skip_trx then
      commit()
   end
end

Example: loading data with 10 concurrent processes

[tidb@172-16-120-12 gangshen]$ cat tidbconfig
db-ps-mode=auto
rand-type=uniform
mysql-host=127.0.0.1
mysql-port=14005
mysql-user=root
mysql-password=
time=600
threads=50
report-interval=100

[tidb@172-16-120-12 gangshen]$ cat parall_prepare.sh
#!/bin/bash

# Check if start, end and batch values are provided
if [ $# -ne 3 ]; then
    echo "Usage: $0 start end batch"
    exit 1
fi

# Parse the arguments
start=$1
end=$2
batch=$3

# Set the initial value of tmp to start
tmps=$start
tmpe=$batch

# Loop until tmp is greater than end
while [ $tmps -le $end ]
do
    echo "Start from $tmps to $tmpe"
    sysbench_cmd="sysbench --config-file=tidbconfig oltp_read_write_tidb --mysql-db=sbtest --tables=1 --start-id=$tmps --table-size=$tmpe --mysql-ignore-errors=all --auto-inc=false prepare"
    #echo "execute $sysbench_cmd"
    nohup sysbench --config-file=tidbconfig oltp_read_write_tidb --mysql-db=sbtest --tables=1 --start-id=$tmps --table-size=$tmpe --mysql-ignore-errors=all --auto-inc=false prepare > nohup_$tmps.log &
    #sleep 3
    tmps=$((tmps + batch))
    tmpe=$((tmpe + batch))
done

echo "Done!"

[tidb@172-16-120-12 gangshen]$ sh parall_prepare.sh 1 10000 1000
Start from 1 to 1000
Start from 1001 to 2000
Start from 2001 to 3000
Start from 3001 to 4000
Start from 4001 to 5000
Start from 5001 to 6000
Start from 6001 to 7000
Start from 7001 to 8000
nohup: redirecting stderr to stdout
nohup: redirecting stderr to stdout
nohup: redirecting stderr to stdout
Start from 8001 to 9000
nohup: redirecting stderr to stdout
Start from 9001 to 10000
nohup: redirecting stderr to stdout
Done!
nohup: redirecting stderr to stdout
[tidb@172-16-120-12 gangshen]$ nohup: redirecting stderr to stdout
nohup: redirecting stderr to stdout
nohup: redirecting stderr to stdout
nohup: redirecting stderr to stdout

mysql> use sbtest;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
+------------------+
| Tables_in_sbtest |
+------------------+
| sbtest1          |
+------------------+
1 row in set (0.00 sec)

mysql> select count(*) from sbtest1;
+----------+
| count(*) |
+----------+
|    10000 |
+----------+
1 row in set (0.02 sec)
