Locust + InfluxDB + Grafana Performance Testing (Upgraded): Distributed Mode (Windows Edition)
Introduction
The previous article covered displaying Locust load-test data in Grafana. This one focuses on optimizing that setup.
Log Serialization Optimization
As discussed before, there are two main parts: reading and writing. Previously we extracted data from the log file in a rather crude way. Now we rewrite the reader using regular expressions:
```python
import re
import io
import os
import sys
import platform
from db_init.conn_influxdb import ConnectInfluxDB

BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))  # project root
curPath = os.path.abspath(os.path.dirname(__file__))
rootPath = os.path.split(curPath)[0]
sys.path.append(rootPath)
pattern = '/' if platform.system() != 'Windows' else '\\'
influxdb = ConnectInfluxDB()


def pressureData_test():
    """Parse the Locust run log and push the stats into InfluxDB."""
    performance_path = os.path.join(BASE_DIR, 'log' + pattern + "run.log")
    with io.open(performance_path) as f:
        data_list = f.readlines()
    locust_list = []
    for data in data_list:
        # Per-endpoint stats row: METHOD NAME reqs fails | Avg Min Max Median | req/s failures/s
        res = re.match(
            r'^\s+(?P<method>GET|POST)\s+(?P<api>[\/\w\?\=\-&\.]+)\s+(?P<reqs>\d+)'
            r'\s+(?P<fails>[\d\(\.\)\%]+)\s+(\|)\s+(?P<Avg>\d+)\s+(?P<Min>\d+)'
            r'\s+(?P<Max>\d+)\s+(?P<Median>\d+)\s+(\|)\s+(?P<qps>[\d\(\.\)\%]+)'
            r'\s+(?P<failures>[\d\(\.\)\%]+)$', data)
        if res:
            locust_dict = {
                'Method': res.group('method'),
                'Name': res.group('api'),
                'Requests': res.group('reqs'),
                'Fails': res.group('fails'),
                'Failures_s': res.group('failures'),
                'Average_ms': res.group('Avg'),
                'Min_ms': res.group('Min'),
                'Max_ms': res.group('Max'),
                'Median_ms': res.group('Median'),
                'Current_RPS': res.group('qps'),
            }
            locust_list.append(locust_dict)
        # Final "Aggregated" summary row; pipes sit after the fails and Median
        # columns, the same layout as the per-endpoint rows above
        aggregate = re.match(
            r'^\s+(?P<aggregated>Aggregated)\s+(?P<reqs>\d+)\s+(?P<fails>[\d\(\.\)\%]+)'
            r'\s+(\|)\s+(?P<Avg>\d+)\s+(?P<Min>\d+)\s+(?P<Max>\d+)\s+(?P<Median>\d+)'
            r'\s+(\|)\s+(?P<qps>[\d\(\.\)\%]+)\s+(?P<failures>[\d\(\.\)\%]+)$', data)
        if aggregate:
            locust_dict = {
                'Method': "",
                'Name': aggregate.group('aggregated'),
                'Requests': aggregate.group('reqs'),
                'Fails': aggregate.group('fails'),
                'Failures_s': aggregate.group('failures'),
                'Average_ms': aggregate.group('Avg'),
                'Min_ms': aggregate.group('Min'),
                'Max_ms': aggregate.group('Max'),
                'Median_ms': aggregate.group('Median'),
                'Current_RPS': aggregate.group('qps'),
            }
            locust_list.append(locust_dict)
    influxdb.post_dump_data(locust_list, "locust")


pressureData_test()
```
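To sanity-check the per-endpoint pattern, you can run it against a hand-written line in the same shape as a row of Locust's console stats table (the sample values below are illustrative, not real output):

```python
import re

# The per-endpoint row pattern from pressureData_test above
row_pattern = re.compile(
    r'^\s+(?P<method>GET|POST)\s+(?P<api>[\/\w\?\=\-&\.]+)\s+(?P<reqs>\d+)'
    r'\s+(?P<fails>[\d\(\.\)\%]+)\s+(\|)\s+(?P<Avg>\d+)\s+(?P<Min>\d+)'
    r'\s+(?P<Max>\d+)\s+(?P<Median>\d+)\s+(\|)\s+(?P<qps>[\d\(\.\)\%]+)'
    r'\s+(?P<failures>[\d\(\.\)\%]+)$'
)

# An illustrative stats row: name, request count, fails | Avg Min Max Median | req/s failures/s
sample = " GET /api/getJoke?page=1 120 0(0.00%) | 45 12 230 40 | 12.5 0.00"
m = row_pattern.match(sample)
if m:
    print(m.group('api'), m.group('Avg'), m.group('qps'))
```

If a row does not match, check the whitespace: the pattern relies on the columns being separated by runs of spaces exactly as Locust prints them.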
Event Hook Code
The code is as follows:
```python
import logging
from locust import events


@events.quitting.add_listener
def _(environment, **kw):
    # More than 1% of requests failed
    if environment.stats.total.fail_ratio > 0.01:
        logging.error("Test failed due to failure ratio > 1%")
        environment.process_exit_code = 1
    # Average response time above 200 ms
    elif environment.stats.total.avg_response_time > 200:
        logging.error("Test failed due to average response time ratio > 200 ms")
        environment.process_exit_code = 1
    # 95th percentile response time above 800 ms
    elif environment.stats.total.get_response_time_percentile(0.95) > 800:
        logging.error("Test failed due to 95th percentile response time > 800 ms")
        environment.process_exit_code = 1
    else:
        environment.process_exit_code = 0
```
Monitoring successful requests:
```python
@events.request_success.add_listener
def request_success(request_type, name, response_time, response_length, **kwargs):
    """Print a summary line for every successful request.

    :param request_type: HTTP method of the request
    :param name: request name/URL
    :param response_time: response time in ms
    :param response_length: response size in bytes
    """
    result = 'success'
    print("{ " + "'message':'{result}', ".format(result=result) +
          "'request_type':'{}','name':'{}','response_time':'{}','response_length':'{}'".format(
              request_type, name, response_time, response_length) + " }")
    pressureData_test()
```
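Hand-assembling a JSON-style string like the one above is fragile; the standard json module always produces well-formed output. A minimal sketch (my own helper, not part of the original script):

```python
import json

def success_message(request_type, name, response_time, response_length):
    """Serialize a successful-request event as a proper JSON string."""
    return json.dumps({
        'message': 'success',
        'request_type': request_type,
        'name': name,
        'response_time': response_time,
        'response_length': response_length,
    })

# e.g. print(success_message("GET", "/api/getJoke", 45, 1024))
```

Anything downstream that ingests these lines (a log shipper, a Grafana annotation script) can then parse them with an ordinary JSON parser.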
Starting from a Config File
We can move the parameters that change between runs into a config file and then launch with one short command, as shown:
```
# master.conf in current directory
locustfile = locust_files/my_locust_file.py
headless = true
master = true
expect-workers = 5
host = http://target-system
users = 100
spawn-rate = 10
run-time = 10m
```
Then a single short command in the console is enough:
locust --config=master.conf
As shown in the screenshot:
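The config file is just `key = value` pairs mirroring Locust's long option names, so the two launch styles are equivalent. As an illustration (a hypothetical helper, not part of Locust), the following converts such a file's contents into the matching command line:

```python
def conf_to_cli(conf_text):
    """Turn 'key = value' config lines into the equivalent --key CLI flags."""
    parts = ["locust"]
    for line in conf_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        key, _, value = line.partition("=")
        key, value = key.strip(), value.strip()
        if value.lower() == "true":
            parts.append("--%s" % key)       # boolean flags take no value
        else:
            parts.append("--%s %s" % (key, value))
    return " ".join(parts)

# e.g. conf_to_cli("headless = true\nusers = 100")
# builds "locust --headless --users 100"
```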
We can also redirect the load-test log output with the following command:
locust --config=master.conf >C:\Users\Administrator\Desktop\Locust_grafana_demoV2\log\run.log 2>&1
With this, the output no longer appears in the console.
Distributed Mode
The previous article covered single-machine runs; this one focuses on distributed mode.
First, take a look at Locust's parameter documentation:
```
Usage: locust [OPTIONS] [UserClass ...]

Common options:
  -h, --help            show this help message and exit
  -f LOCUSTFILE, --locustfile LOCUSTFILE
                        Python module file to import, e.g. '../other.py'.
                        Default: locustfile
  --config CONFIG       Config file path
  -H HOST, --host HOST  Host to load test in the following format:
                        http://10.21.32.33
  -u NUM_USERS, --users NUM_USERS
                        Number of concurrent Locust users. Primarily used
                        together with --headless
  -r SPAWN_RATE, --spawn-rate SPAWN_RATE
                        The rate per second in which users are spawned.
                        Primarily used together with --headless
  -t RUN_TIME, --run-time RUN_TIME
                        Stop after the specified amount of time, e.g. (300s,
                        20m, 3h, 1h30m, etc.). Only used together with
                        --headless
  -l, --list            Show list of possible User classes and exit

Web UI options:
  --web-host WEB_HOST   Host to bind the web interface to. Defaults to '*'
                        (all interfaces)
  --web-port WEB_PORT, -P WEB_PORT
                        Port on which to run web host
  --headless            Disable the web interface, and instead start the load
                        test immediately. Requires -u and -t to be specified.
  --web-auth WEB_AUTH   Turn on Basic Auth for the web interface. Should be
                        supplied in the following format: username:password
  --tls-cert TLS_CERT   Optional path to TLS certificate to use to serve
                        over HTTPS
  --tls-key TLS_KEY     Optional path to TLS private key to use to serve
                        over HTTPS

Master options:
  Options for running a Locust Master node when running Locust distributed.
  A Master node need Worker nodes that connect to it before it can run load
  tests.

  --master              Set locust to run in distributed mode with this
                        process as master
  --master-bind-host MASTER_BIND_HOST
                        Interfaces (hostname, ip) that locust master should
                        bind to. Only used when running with --master.
                        Defaults to * (all available interfaces).
  --master-bind-port MASTER_BIND_PORT
                        Port that locust master should bind to. Only used
                        when running with --master. Defaults to 5557.
  --expect-workers EXPECT_WORKERS
                        How many workers master should expect to connect
                        before starting the test (only when --headless used).

Worker options:
  Options for running a Locust Worker node when running Locust distributed.
  Only the LOCUSTFILE (-f option) need to be specified when starting a
  Worker, since other options such as -u, -r, -t are specified on the Master
  node.

  --worker              Set locust to run in distributed mode with this
                        process as worker
  --master-host MASTER_NODE_HOST
                        Host or IP address of locust master for distributed
                        load testing. Only used when running with --worker.
                        Defaults to 127.0.0.1.
  --master-port MASTER_NODE_PORT
                        The port to connect to that is used by the locust
                        master for distributed load testing. Only used when
                        running with --worker. Defaults to 5557.

Tag options:
  Locust tasks can be tagged using the @tag decorator. These options let
  specify which tasks to include or exclude during a test.

  -T [TAG [TAG ...]], --tags [TAG [TAG ...]]
                        List of tags to include in the test, so only tasks
                        with any matching tags will be executed
  -E [TAG [TAG ...]], --exclude-tags [TAG [TAG ...]]
                        List of tags to exclude from the test, so only tasks
                        with no matching tags will be executed

Request statistics options:
  --csv CSV_PREFIX      Store current request stats to files in CSV format.
                        Setting this option will generate three files:
                        [CSV_PREFIX]_stats.csv,
                        [CSV_PREFIX]_stats_history.csv and
                        [CSV_PREFIX]_failures.csv
  --csv-full-history    Store each stats entry in CSV format to
                        _stats_history.csv file. You must also specify the
                        '--csv' argument to enable this.
  --print-stats         Print stats in the console
  --only-summary        Only print the summary stats
  --reset-stats         Reset statistics once spawning has been completed.
                        Should be set on both master and workers when running
                        in distributed mode

Logging options:
  --skip-log-setup      Disable Locust's logging setup. Instead, the
                        configuration is provided by the Locust test or
                        Python defaults.
  --loglevel LOGLEVEL, -L LOGLEVEL
                        Choose between DEBUG/INFO/WARNING/ERROR/CRITICAL.
                        Default is INFO.
  --logfile LOGFILE     Path to log file. If not set, log will go to
                        stdout/stderr

Step load options:
  --step-load           Enable Step Load mode to monitor how performance
                        metrics varies when user load increases. Requires
                        --step-users and --step-time to be specified.
  --step-users STEP_USERS
                        User count to increase by step in Step Load mode.
                        Only used together with --step-load
  --step-time STEP_TIME
                        Step duration in Step Load mode, e.g. (300s, 20m, 3h,
                        1h30m, etc.). Only used together with --step-load

Other options:
  --show-task-ratio     Print table of the User classes' task execution ratio
  --show-task-ratio-json
                        Print json data of the User classes' task execution
                        ratio
  --version, -V         Show program's version number and exit
  --exit-code-on-error EXIT_CODE_ON_ERROR
                        Sets the process exit code to use when a test result
                        contain any failure or error
  -s STOP_TIMEOUT, --stop-timeout STOP_TIMEOUT
                        Number of seconds to wait for a simulated user to
                        complete any executing task before exiting. Default
                        is to terminate immediately. This parameter only
                        needs to be specified for the master process when
                        running Locust distributed.

User classes:
  UserClass             Optionally specify which User classes that should be
                        used (available User classes can be listed with -l
                        or --list)
```
The Locust distributed architecture diagram:
- Architecturally, Locust uses a master-slave model and supports both standalone and distributed runs
- The master and slaves (i.e. workers) communicate over the ZeroMQ protocol
- A web UI is provided to manage the master, which in turn controls the slaves, and to display test progress and aggregated results
- An optional headless mode is available (headless is generally used for debugging)
- Being based on Python, it is cross-platform out of the box
First, let's get a few key options straight:
Master (controller) options:
Options for running the Locust master node in a distributed test. The master needs worker nodes to connect to it before it can run a load test.

```
--master              Run Locust in distributed mode with this process as
                      the master
--master-bind-host MASTER_BIND_HOST
                      Interface (hostname, IP) the master should bind to.
                      Only used with --master. Defaults to * (all available
                      interfaces).
--master-bind-port MASTER_BIND_PORT
                      Port the master should bind to. Only used with
                      --master. Defaults to 5557.
--expect-workers EXPECT_WORKERS
                      How many workers the master should expect to connect
                      before starting the test (only with --headless).
```
Slave (worker) options:
Options for running a Locust worker node in a distributed test. Only the LOCUSTFILE (-f option) needs to be specified when starting a worker, since other options such as -u, -r and -t are set on the master node.

```
--worker              Run Locust in distributed mode with this process as
                      a worker
--master-host MASTER_NODE_HOST
                      Host or IP address of the Locust master. Only used
                      with --worker. Defaults to 127.0.0.1.
--master-port MASTER_NODE_PORT
                      Port used by the Locust master. Only used with
                      --worker. Defaults to 5557.
```
Command to run on the master:
locust -f locustfile.py --master --master-bind-port 8089 --headless -u 10 -r 3 --expect-workers 2 -t 5m -H https://api.apiopen.top/ --csv D:\locust_test_20190228\locust_performance_test\csv\ --logfile D:\locust_test_20190228\log\locust.log --loglevel=INFO 1>D:\locust_test_20190228\log\run.log 2>&1
Command to run on the workers:
locust -f locustfile.py --master-host localhost --master-port 8089 --headless --worker
The worker command needs to be run twice here. Locally, open two consoles and run it once in each, which is equivalent to two worker machines, since the master was told to expect 2 workers.
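Opening worker consoles by hand gets tedious. On Windows you can script the launch with the standard subprocess module; the sketch below is my own helper (it assumes `locust` is on PATH, and the argument list mirrors the worker command above):

```python
import subprocess

def worker_cmd(master_host="localhost", master_port=8089, locustfile="locustfile.py"):
    """Build the worker command as an argument list for subprocess."""
    return ["locust", "-f", locustfile,
            "--master-host", master_host,
            "--master-port", str(master_port),
            "--headless", "--worker"]

def start_workers(n, **kw):
    """Spawn n local worker processes; returns the Popen handles."""
    return [subprocess.Popen(worker_cmd(**kw)) for _ in range(n)]

# e.g. procs = start_workers(2)  # two local workers, as the master expects
```

Keeping the handles lets you `p.terminate()` the workers when the run is over.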
Step Load Mode
Locust's distributed mode also offers another mode, Step Load mode, with the following options:
```
--step-load    Enable Step Load mode
--step-users   Number of users to add per step
--step-time    Duration of each step
```
These options go on the master command and can cover more complex performance-test scenarios.
For example:
locust -f locustfile.py --headless -u 1000 -r 100 --run-time 30m --step-load --step-users 300 --step-time 1m

This starts Locust without the web UI: 1000 total users, spawning 100 users per second, running for 30 minutes in Step Load mode, with 300 users per step and each step held for 1 minute. When the user count reaches 300 it holds for one minute, then keeps ramping to 600 and holds another minute, and so on.
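The resulting ramp is easy to compute. A small sketch of the target user count over time (my own illustration of the schedule, not Locust code, ignoring the few seconds the spawn rate needs within each step):

```python
def step_load_users(elapsed_min, step_users=300, step_time_min=1, total_users=1000):
    """Target user count under Step Load after elapsed_min minutes."""
    steps_done = int(elapsed_min // step_time_min) + 1  # first step starts immediately
    return min(steps_done * step_users, total_users)

# Minutes 0-1 target 300 users, 1-2 target 600, 2-3 target 900,
# after which the count is capped at the 1000-user total.
```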
Generating Run Commands from Configuration
In distributed mode, the controller runs one command and every worker runs another. With several machines that adds up to many commands, and filling in the parameters by hand before every run would be painful.
Instead, we generate the commands from configuration in a single step.
Project structure:
Two config files were added, one for standalone mode and one for distributed mode, along with functions that read the config and generate the commands. The code is as follows:
```python
def master_order():
    """Generate the master command."""
    master_kw = "locust -f %s --%s --master-bind-port %s --%s " % (
        locustfile, command_cmd['master'], command_cmd['slaveport'],
        command_cmd['headless'])
    master_args = ("-u %s -r %s --expect-workers %s -t %s -H %s --step-load "
                   "--step-users %s --step-time %s --csv %s --logfile %s "
                   "--loglevel=INFO 1>%s 2>&1") % (
        command_cmd['users'], command_cmd['rate'], command_cmd['expect_workers'],
        command_cmd['run_time'], command_cmd['host'], command_cmd['step_users'],
        command_cmd['step_time'], csvfile, logfile, runfile)
    cmd = master_kw + master_args
    print(cmd)
    return cmd  # return the command string, not the function object


def slave_order():
    """Generate the worker command."""
    slave_cmd = "locust -f %s --master-host %s --master-port %s --headless --worker" % (
        locustfile, command_cmd['master_host'], command_cmd['master_port'])
    print(slave_cmd)
    return slave_cmd
```
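The globals above (`command_cmd`, `locustfile`, and so on) come from the config file. A self-contained sketch of how such a config might be read with the standard configparser module; the section name and keys here are assumptions for illustration, not the project's actual file:

```python
import configparser

SAMPLE_CONF = """
[distributed]
master_host = localhost
master_port = 8089
users = 10
rate = 3
"""

def load_command_cmd(text=SAMPLE_CONF):
    """Parse the distributed-mode section into a command_cmd-style dict."""
    parser = configparser.ConfigParser()
    parser.read_string(text)
    return dict(parser['distributed'])

# command_cmd = load_command_cmd()
# slave_order() can then interpolate command_cmd['master_host'] etc.
```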
Execution result:
Report Display
That's it for the optimizations above; more may follow later.
Now set up a simple scenario: the controller is configured with 10 users, adding 3 users per second, load-testing for 5 minutes in step-load mode. The users ramp to 10 and hold for one minute, then accumulate to 20 and hold for another minute, and so on, with 2 load generators applying pressure to the server at the same time.
Open the Grafana dashboard to view the report:
Summary
That wraps up this Locust load-testing framework series. Anyone interested is welcome to join the QQ test-development group to learn and improve together.