
PostgreSQL Synchronous Streaming Replication Lag Test (Part 2)

Synchronous streaming replication (SR) test with 1 master and 2 standbys

  • Environment

| Server      | Role   |
| ----------- | ------ |
| 10.10.56.16 | master |
| 10.10.56.17 | slave1 |
| 10.10.56.19 | slave2 |

  • Check replication status on 16
pocdb=# SELECT client_addr,application_name,sync_state FROM pg_stat_replication;
 client_addr | application_name | sync_state
-------------+------------------+------------
 10.10.56.17 | slave1           | sync
 10.10.56.19 | slave2           | potential
(2 rows)

pocdb=#

When there is more than one standby but only one synchronous slot, the extra standby reports sync_state = potential, meaning it is a candidate that will be promoted to synchronous standby if the current synchronous standby drops out.
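This sync/potential split is driven by synchronous_standby_names on the master. The exact value used in this test is not shown in the article, so the following is only a sketch of a setting that would produce it (slave1 synchronous, slave2 as the candidate):

#!/bin/bash
# Sketch only: the test's real synchronous_standby_names value is an assumption.
# Listing slave1 first makes it the synchronous standby; slave2 shows "potential".
/opt/pgsql-10/bin/psql pocdb <<EOF
ALTER SYSTEM SET synchronous_standby_names = 'FIRST 1 (slave1, slave2)';
SELECT pg_reload_conf();
EOF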

  • Check roles
pocdb=# \du+
                                          List of roles
 Role name |                         Attributes                         | Member of | Description
-----------+------------------------------------------------------------+-----------+-------------
 postgres  | Superuser, Create role, Create DB, Replication, Bypass RLS | {}        |
 repl      | Replication                                                | {}        |

pocdb=#
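The repl role listed above is the account the standbys use for streaming replication. Below is a sketch of how such a role is typically created (the password and exact options are assumptions, not taken from this test); pg_hba.conf must also allow replication connections for it:

/opt/pgsql-10/bin/psql pocdb <<EOF
-- Sketch: a dedicated replication login like the repl role listed above.
-- The password is a placeholder.
CREATE ROLE repl WITH LOGIN REPLICATION PASSWORD 'change_me';
EOF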
  • Create the sync test table synctest and its sequence on 16
pocdb=# create table synctest (id bigint primary key ,number bigint,date timestamp default now());
CREATE TABLE
pocdb=# create sequence seq_synctest increment by 1 minvalue 1 maxvalue 99999999999999 cache 50 no cycle;
CREATE SEQUENCE
pocdb=#
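Note that the sequence is not attached to the table; the benchmark script below calls nextval() explicitly. An equivalent alternative (a sketch, not what this test used) is to make the sequence the column default:

/opt/pgsql-10/bin/psql pocdb <<EOF
-- Sketch: let plain INSERTs draw id values from seq_synctest automatically.
ALTER TABLE synctest ALTER COLUMN id SET DEFAULT nextval('seq_synctest');
EOF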
  • Check table sizes
pocdb=# \d+
                           List of relations
 Schema |     Name     |   Type   |  Owner   |    Size    | Description
--------+--------------+----------+----------+------------+-------------
 public | seq_synctest | sequence | postgres | 8192 bytes |
 public | synctest     | table    | postgres | 0 bytes    |
(2 rows)

pocdb=#

At this point servers 17 and 19 have replicated the table automatically, and it can be queried on each of them.
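A quick way to confirm is to query each standby directly; this is only a sketch, assuming the standbys accept connections on the default port:

#!/bin/bash
# Sketch: verify the table exists on each standby and that the standby is in recovery.
for host in 10.10.56.17 10.10.56.19; do
  /opt/pgsql-10/bin/psql -h "$host" pocdb -c '\d synctest'
  /opt/pgsql-10/bin/psql -h "$host" pocdb -c 'select pg_is_in_recovery();'
done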

  • Write the insert script on server 16
postgres@clw-db1:/pgdata/10/poc/scripts> vi bench_script_for_insert_20180717.sql
postgres@clw-db1:/pgdata/10/poc/scripts> cat bench_script_for_insert_20180717.sql
\set number random(1, 999999999999999999)
INSERT INTO synctest(id,number) VALUES (nextval('seq_synctest'),:number);
postgres@clw-db1:/pgdata/10/poc/scripts>
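Before the full 20-minute run, the custom script can be smoke-tested with a single client; the flag values below are illustrative, not part of the original test:

#!/bin/bash
# Sketch: run the insert script 10 times with one client, skipping vacuum (-n),
# just to confirm it parses and writes rows.
/opt/pgsql-10/bin/pgbench -n -c 1 -t 10 -f bench_script_for_insert_20180717.sql pocdb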
  • Write the lag-monitoring script on 16, 17 and 19
postgres@clw-db1:/pgdata/10/poc/scripts> cat monior_syncSR_relay.sh
#!/bin/bash

/opt/pgsql-10/bin/psql pocdb<<EOF
select now();
select client_addr, application_name, write_lag, flush_lag, replay_lag \
from pg_stat_replication where usename='repl' and application_name='slave1';
 \q
EOF
postgres@clw-db1:/pgdata/10/poc/scripts>
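An alternative to the heredoc above is to emit one machine-readable line per sample, which is easier to plot later; a sketch (the unaligned, comma-separated output format is an assumption, not part of the original test):

#!/bin/bash
# Sketch: print time, standby name and the three lag columns as one CSV line per standby.
/opt/pgsql-10/bin/psql -At -F',' pocdb -c "select now(), application_name, write_lag, flush_lag, replay_lag from pg_stat_replication;"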
  • Start the pgbench load test
postgres@clw-db1:/pgdata/10/poc/scripts> /opt/pgsql-10/bin/pgbench -T 1200 -j 600 -c 500  -f  bench_script_for_insert_20180717.sql pocdb
  • Write the query scripts for slave1 (the loops below run the lag-monitoring scripts for slave1 and slave2 every 10 seconds)
for i in {1..1000000000}
do
/pgdata/10/poc/scripts/monior_syncSR_relay_slave1.sh >> syncSR_relay_slave1_result
sleep 10
done
for i in {1..1000000000}
do
/pgdata/10/poc/scripts/monior_syncSR_relay_slave2.sh >> syncSR_relay_slave2_result
sleep 10
done
postgres@clw-db1:/pgdata/10/poc/scripts> cat query_count_slave1.sh
#!/bin/bash

/opt/pgsql-10/bin/psql pocdb<<EOF
select now();
select max(id) from synctest;
 \q
EOF
postgres@clw-db1:/pgdata/10/poc/scripts>
for i in {1..1000000000}
do
/pgdata/10/poc/scripts/query_count_result.sh >> query_count_sum
sleep 10
done

Script to query the number of inserted rows

postgres@clw-db1:/pgdata/10/poc/scripts> cat query_count_result.sh
#!/bin/bash

/opt/pgsql-10/bin/psql pocdb<<EOF
select now();
select max(id) from synctest;
 \q
EOF
postgres@clw-db1:/pgdata/10/poc/scripts>
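The result files can later be compared by timestamp to see how far each standby's max(id) trails the master's. A simpler on-the-spot check is to sample both back-to-back; a sketch (binary path and standby address follow this setup, the default port is assumed):

#!/bin/bash
# Sketch: sample max(id) on the master and on slave1 and print the row gap.
M=$(/opt/pgsql-10/bin/psql -At pocdb -c "select max(id) from synctest;")
S=$(/opt/pgsql-10/bin/psql -At -h 10.10.56.17 pocdb -c "select max(id) from synctest;")
echo "$(date '+%F %T') master=$M slave1=$S row_gap=$((M - S))"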
  • Memory, network and I/O monitoring script (nmon: -f write results to a file, -c 150 snapshots, -s 10-second interval)
/home/super/pgsoft/nmon_x86_64_sles11 -f  -c 150  -s 10 
  • Row-count query loop on 17
for i in {1..1000000000}
do
/pgdata/10/poc/scripts/query_count_slave1.sh >> query_count_slave1_sum
sleep 10
done
  • Performance monitoring on 17
/home/pgsoft/nmon_x86_64_sles11 -f  -c 150  -s 10 
  • Row-count query loop on 19
for i in {1..1000000000}
do
/pgdata/10/poc/scripts/query_count_slave2.sh >> query_count_slave2_sum
sleep 10
done
  • Performance monitoring on 19
/home/pgsoft/nmon_x86_64_sles11 -f  -c 150  -s 10 

Test results on 16

could not connect to server: Resource temporarily unavailable
        Is the server running locally and accepting
        connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
transaction type: bench_script_for_insert_20180717.sql
scaling factor: 1
query mode: simple
number of clients: 500
number of threads: 500
duration: 1200 s
number of transactions actually processed: 9723149
latency average = 61.736 ms
tps = 8098.953578 (including connections establishing)
tps = 8099.988751 (excluding connections establishing)
postgres@clw-db1:/pgdata/10/poc/scripts>
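The "could not connect ... Resource temporarily unavailable" messages at the start of this run suggest that some of the 500 concurrent pgbench connections hit an operating-system or max_connections limit; the checks below are only a sketch of where to look (raising the limits is left to the operator):

#!/bin/bash
# Sketch: inspect the limits most likely responsible for failed local connections.
/opt/pgsql-10/bin/psql pocdb -c "show max_connections;"
ulimit -u   # maximum user processes
ulimit -n   # maximum open file descriptors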

pocdb=# \d+
                           List of relations
 Schema |     Name     |   Type   |  Owner   |    Size    | Description
--------+--------------+----------+----------+------------+-------------
 public | seq_synctest | sequence | postgres | 8192 bytes |
 public | synctest     | table    | postgres | 377 MB     |
(2 rows)

pocdb=#

Results of the 1-master / 2-standby synchronous test

Test background: servers 10.10.56.16, 10.10.56.17 and 10.10.56.19; 8 CPU cores; 200 GB RAM; network throughput 3 MB/s.

Test method: write continuously to the master on 16 for 20 minutes using 600 threads and 500 client connections.

Test result: CPU usage 80%, disk I/O 23 MB/s, TPS 7419, about 18.8 million rows written, replication lag roughly 3 ms.

  • Load-test command (-T 1200: run for 1200 seconds, -j 600: worker threads, -c 500: concurrent client connections, -f: custom transaction script file)
/opt/pgsql-10/bin/pgbench -T 1200 -j 600 -c 500  -f  bench_script_for_insert_20180717.sql pocdb
  • First run (1 master + 2 standbys, synchronous SR)
transaction type: bench_script_for_insert_20180717.sql
scaling factor: 1
query mode: simple
number of clients: 500
number of threads: 500
duration: 1200 s
number of transactions actually processed: 8906813
latency average = 67.391 ms
tps = 7419.410395 (including connections establishing)
tps = 7420.536021 (excluding connections establishing)
postgres@clw-db1:/pgdata/10/poc/scripts>
  • Second run (1 master + 2 standbys, synchronous SR)
postgres@clw-db1:/pgdata/10/poc/scripts> /opt/pgsql-10/bin/pgbench -T 1200 -j 600 -c 500  -f bench_script_for_insert_20180717.sql pocdb
query mode: simple
number of clients: 500
number of threads: 500
duration: 1200 s
number of transactions actually processed: 8239738
latency average = 72.848 ms
tps = 6863.613524 (including connections establishing)
tps = 6865.035476 (excluding connections establishing)
postgres@clw-db1:/pgdata/10/poc/scripts>
  • Query the master/standby replication lag; the key columns are write_lag, flush_lag and replay_lag
pocdb=# select * from pg_stat_replication;
-[ RECORD 1 ]----+------------------------------
pid              | 23564
usesysid         | 16393
usename          | repl
application_name | slave2
client_addr      | 10.10.56.19
client_hostname  |
client_port      | 52820
backend_start    | 2018-05-16 17:43:23.726216+08
backend_xmin     |
state            | streaming
sent_lsn         | 1/7D4FF030
write_lsn        | 1/7D4FF030
flush_lsn        | 1/7D4FEF78
replay_lsn       | 1/7D4FEF78
write_lag        | 00:00:00.003057
flush_lag        | 00:00:00.003057
replay_lag       | 00:00:00.003057
sync_priority    | 2
sync_state       | potential
-[ RECORD 2 ]----+------------------------------
pid              | 23562
usesysid         | 16393
usename          | repl
application_name | slave1
client_addr      | 10.10.56.17
client_hostname  |
client_port      | 33647
backend_start    | 2018-05-16 17:43:17.371715+08
backend_xmin     |
state            | streaming
sent_lsn         | 1/7D524110
write_lsn        | 1/7D523F30
flush_lsn        | 1/7D523570
replay_lsn       | 1/7D523570
write_lag        | 00:00:00.000329
flush_lag        | 00:00:00.000329
replay_lag       | 00:00:00.000329
sync_priority    | 1
sync_state       | sync

pocdb=# select * from pg_stat_replication;
-[ RECORD 1 ]----+------------------------------
pid              | 23564
usesysid         | 16393
usename          | repl
application_name | slave2
client_addr      | 10.10.56.19
client_hostname  |
client_port      | 52820
backend_start    | 2018-05-16 17:43:23.726216+08
backend_xmin     |
state            | streaming
sent_lsn         | 1/7EB37850
write_lsn        | 1/7EB36F00
flush_lsn        | 1/7EB34EE0
replay_lsn       | 1/7EB34EE0
write_lag        | 00:00:00.000841
flush_lag        | 00:00:00.000841
replay_lag       | 00:00:00.000841
sync_priority    | 2
sync_state       | potential
-[ RECORD 2 ]----+------------------------------
pid              | 23562
usesysid         | 16393
usename          | repl
application_name | slave1
client_addr      | 10.10.56.17
client_hostname  |
client_port      | 33647
backend_start    | 2018-05-16 17:43:17.371715+08
backend_xmin     |
state            | streaming
sent_lsn         | 1/7EB37F80
write_lsn        | 1/7EB27B08
flush_lsn        | 1/7EB27B08
replay_lsn       | 1/7EB27B08
write_lag        | 00:00:00.001525
flush_lag        | 00:00:00.003983
replay_lag       | 00:00:00.012568
sync_priority    | 1
sync_state       | sync
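The lag columns are intervals; to read them directly in milliseconds, the epoch can be extracted, e.g. (a sketch):

#!/bin/bash
# Sketch: report each standby's replay lag in milliseconds instead of an interval.
/opt/pgsql-10/bin/psql pocdb -c "select application_name, sync_state, round((extract(epoch from replay_lag) * 1000)::numeric, 3) as replay_lag_ms from pg_stat_replication;"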
  • Check the amount of data
pocdb=# \d+
                           List of relations
 Schema |     Name     |   Type   |  Owner   |    Size    | Description
--------+--------------+----------+----------+------------+-------------
 public | seq_synctest | sequence | postgres | 8192 bytes |
 public | synctest     | table    | postgres | 377 MB     |
(2 rows)

pocdb=# select max(id) from synctest;
   max
----------
 18812501
(1 row)

pocdb=#

Single-instance test

  • Create database pocdb and table synctest on server 48
postgres=# create database pocdb;
CREATE DATABASE

pocdb=# create table synctest (id bigint primary key ,number bigint,date timestamp default now());
CREATE TABLE
pocdb=# create sequence seq_synctest increment by 1 minvalue 1 maxvalue 99999999999999 cache 50 no cycle;
CREATE SEQUENCE
pocdb=# \d+
                           List of relations
 Schema |     Name     |   Type   |  Owner   |    Size    | Description
--------+--------------+----------+----------+------------+-------------
 public | seq_synctest | sequence | postgres | 8192 bytes |
 public | synctest     | table    | postgres | 0 bytes    |
(2 rows)

pocdb=#
  • Performance monitoring on 48
/home/postgres/pgsoft/nmon_x86_64_sles11 -c 180 -f -s 10
  • Row-count query loop on 48
for i in {1..100000000}
do
/pgdata/10/scripts/query_count_result.sh >> query_count_slave2_sum
sleep 10
done
  • Load-test command on 48
 /opt/pgsql-10/bin/pgbench -T 1200 -j 800 -c 400  -f bench_script_for_insert_20180717.sql pocdb

Single-instance test results

  • Test background: server 10.10.56.17, 8 CPU cores, 128 GB RAM

  • Test procedure: run a 20-minute insert test against the server with 500 threads and 500 client connections

    • Test result: about 35 million rows written, average latency 17.125 ms, TPS about 29,000, CPU usage 95%, disk write rate 35 MB/s, network throughput 3 MB/s
  • Single-instance load-test result on server 17

    transaction type: bench_script_for_insert_20180717.sql
    scaling factor: 1
    query mode: simple
    number of clients: 500
    number of threads: 500
    duration: 1200 s
    number of transactions actually processed: 35047438
    latency average = 17.125 ms
    tps = 29197.286164 (including connections establishing)
    tps = 29205.049038 (excluding connections establishing)
    [email protected]:/pgdata/estest10/scripts>
  • Query the data
pocdb1=# \d+
                           List of relations
 Schema |     Name     |   Type   |  Owner   |    Size    | Description
--------+--------------+----------+----------+------------+-------------
 public | seq_synctest | sequence | postgres | 8192 bytes |
 public | synctest     | table    | postgres | 1747 MB    |
(2 rows)

pocdb1=# select max(id) from synctest;
   max
----------
 35947438
(1 row)

pocdb1=#

Performance comparison: replication vs single instance

Analysis: the single instance has the higher write throughput — roughly twice the data volume of the 1-master / 2-standby setup and roughly four times its TPS — and a higher disk write rate. The figures above are test data and are for reference only.

Standby failure test

  • Test procedure: kill the standby to simulate an unexpected outage, then write data to the primary

  • Test result: with synchronous streaming replication, a standby outage affects the primary: writes and updates on the primary hang waiting for the standby (see the sketch after the example below).

  • After killing slave1 on 17, an insert on the primary hangs:

pocdb=# insert into synctest(id,number,date) values (nextval('seq_synctest'),28701752,now());
(the statement hangs and the prompt never returns)
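For reference, two common ways to keep the primary moving when its synchronous standby is gone; both give up the synchronous-durability guarantee and are only a sketch, not part of the original test:

#!/bin/bash
# Sketch 1: new sessions can opt out of waiting for the standby.
/opt/pgsql-10/bin/psql pocdb <<EOF
SET synchronous_commit TO local;
-- subsequent writes in this session commit without waiting for a standby
EOF

# Sketch 2: cluster-wide, temporarily drop the synchronous requirement; reloading
# also releases transactions already stuck waiting for the dead standby.
/opt/pgsql-10/bin/psql pocdb <<EOF
ALTER SYSTEM SET synchronous_standby_names = '';
SELECT pg_reload_conf();
EOF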

One master, one standby

  • Test background: server 10.10.56.17, 8 CPU cores, 128 GB RAM
  • Test procedure: run a 20-minute insert test with 500 threads and 500 client connections
  • Test result: about 38.9 million rows written, CPU usage 80%, disk write rate 25 MB/s, lag about 2 ms, TPS 8288

    Test output

transaction type: bench_script_for_insert_20180717.sql
scaling factor: 1
query mode: simple
number of clients: 400
number of threads: 400
duration: 1200 s
number of transactions actually processed: 9946727
latency average = 48.258 ms
tps = 8288.726962 (including connections establishing)
tps = 8289.701749 (excluding connections establishing)
postgres@clw-db1:/pgdata/10/poc/scripts>
  • Check the amount of data inserted
pocdb=# \d+
                           List of relations
 Schema |     Name     |   Type   |  Owner   |    Size    | Description
--------+--------------+----------+----------+------------+-------------
 public | seq_synctest | sequence | postgres | 8192 bytes |
 public | synctest     | table    | postgres | 496 MB     |
(2 rows)

pocdb=# select max(id) from synctest;
   max
----------
 38918501
(1 row)

pocdb=#
  • Query the lag, observing write_lag, flush_lag and replay_lag
pocdb=# select client_addr,usename, application_name, write_lag, flush_lag, replay_lag,sync_state,state from pg_stat_replication;
 client_addr | usename | application_name |    write_lag    |    flush_lag    |   replay_lag    | sync_state |   state
-------------+---------+------------------+-----------------+-----------------+-----------------+------------+-----------
 10.10.56.19 | repl    | slave2           | 00:00:00.002566 | 00:00:00.002566 | 00:00:00.002566 | sync       | streaming
(1 row)

pocdb=# select client_addr,usename, application_name, write_lag, flush_lag, replay_lag,sync_state,state from pg_stat_replication;
 client_addr | usename | application_name |    write_lag    |    flush_lag    |   replay_lag    | sync_state |   state
-------------+---------+------------------+-----------------+-----------------+-----------------+------------+-----------
 10.10.56.19 | repl    | slave2           | 00:00:00.000774 | 00:00:00.000774 | 00:00:00.000774 | sync       | streaming
(1 row)

pocdb=# select client_addr,usename, application_name, write_lag, flush_lag, replay_lag,sync_state,state from pg_stat_replication;
 client_addr | usename | application_name |    write_lag    |    flush_lag    |   replay_lag    | sync_state |   state
-------------+---------+------------------+-----------------+-----------------+-----------------+------------+-----------
 10.10.56.19 | repl    | slave2           | 00:00:00.000648 | 00:00:00.000648 | 00:00:00.000648 | sync       | streaming
(1 row)

pocdb=# select client_addr,usename, application_name, write_lag, flush_lag, replay_lag,sync_state,state from pg_stat_replication;
 client_addr | usename | application_name |    write_lag    |    flush_lag    |   replay_lag    | sync_state |   state
-------------+---------+------------------+-----------------+-----------------+-----------------+------------+-----------
 10.10.56.19 | repl    | slave2           | 00:00:00.000408 | 00:00:00.000408 | 00:00:00.000408 | sync       | streaming
(1 row)

pocdb=# select client_addr,usename, application_name, write_lag, flush_lag, replay_lag,sync_state,state from pg_stat_replication;
 client_addr | usename | application_name |    write_lag    |    flush_lag    |   replay_lag    | sync_state |   state
-------------+---------+------------------+-----------------+-----------------+-----------------+------------+-----------
 10.10.56.19 | repl    | slave2           | 00:00:00.002627 | 00:00:00.002666 | 00:00:00.008609 | sync       | streaming
(1 row)

pocdb=# select client_addr,usename, application_name, write_lag, flush_lag, replay_lag,sync_state,state from pg_stat_replication;
 client_addr | usename | application_name |    write_lag    |    flush_lag    |   replay_lag    | sync_state |   state
-------------+---------+------------------+-----------------+-----------------+-----------------+------------+-----------
 10.10.56.19 | repl    | slave2           | 00:00:00.000593 | 00:00:00.000593 | 00:00:00.000593 | sync       | streaming
(1 row)

pocdb=# select client_addr,usename, application_name, write_lag, flush_lag, replay_lag,sync_state,state from pg_stat_replication;
 client_addr | usename | application_name |    write_lag    |    flush_lag    |   replay_lag    | sync_state |   state
-------------+---------+------------------+-----------------+-----------------+-----------------+------------+-----------
 10.10.56.19 | repl    | slave2           | 00:00:00.003211 | 00:00:00.009313 | 00:00:00.015688 | sync       | streaming
(1 row)

pocdb=# select client_addr,usename, application_name, write_lag, flush_lag, replay_lag,sync_state,state from pg_stat_replication;
 client_addr | usename | application_name |    write_lag    |    flush_lag    |   replay_lag    | sync_state |   state
-------------+---------+------------------+-----------------+-----------------+-----------------+------------+-----------
 10.10.56.19 | repl    | slave2           | 00:00:00.001078 | 00:00:00.001078 | 00:00:00.001078 | sync       | streaming
(1 row)

pocdb=# select client_addr,usename, application_name, write_lag, flush_lag, replay_lag,sync_state,state from pg_stat_replication;
 client_addr | usename | application_name |    write_lag    |    flush_lag    |   replay_lag    | sync_state |   state
-------------+---------+------------------+-----------------+-----------------+-----------------+------------+-----------
 10.10.56.19 | repl    | slave2           | 00:00:00.021852 | 00:00:00.034961 | 00:00:00.035026 | sync       | streaming
(1 row)

pocdb=# select client_addr,usename, application_name, write_lag, flush_lag, replay_lag,sync_state,state from pg_stat_replication;
 client_addr | usename | application_name |    write_lag    |    flush_lag    |   replay_lag   | sync_state |   state
-------------+---------+------------------+-----------------+-----------------+----------------+------------+-----------
 10.10.56.19 | repl    | slave2           | 00:00:00.000829 | 00:00:00.002209 | 00:00:00.00334 | sync       | streaming
(1 row)

pocdb=# select client_addr,usename, application_name, write_lag, flush_lag, replay_lag,sync_state,state from pg_stat_replication;
 client_addr | usename | application_name |    write_lag    |    flush_lag    |   replay_lag    | sync_state |   state
-------------+---------+------------------+-----------------+-----------------+-----------------+------------+-----------
 10.10.56.19 | repl    | slave2           | 00:00:00.054129 | 00:00:00.054134 | 00:00:00.054964 | sync       | streaming
(1 row)

Performance comparison

Comparison: write performance with 1 master + 1 standby is close to that of the single instance — each wrote roughly 37 million rows in 20 minutes — while 1 master + 2 standbys performs noticeably worse, writing only about half as much, with slightly higher lag than the 1 master + 1 standby setup. These are test figures for reference only; the hardware was not driven to its limit.
