PostgreSQL SQL Optimization
How to find TOP SQL (e.g. the most IO-intensive statements), with optimization examples
Background
A database is a large application: a busy database consumes substantial memory, CPU, IO, and network resources.
SQL optimization is one of the main database optimization techniques. Which SQL should you optimize for the best return? Start with the statements that consume the most resources, i.e. the TOP SQL.
Where can you find out which SQL statements are consuming the database's resources?
Resources span several dimensions: CPU, memory, IO, and so on. How can you identify the TOP SQL along each dimension?
The pg_stat_statements extension tracks per-statement resource usage and is the standard tool for finding TOP SQL.
Loading the pg_stat_statements module
vi $PGDATA/postgresql.conf

shared_preload_libraries = 'pg_stat_statements'
To track time spent on IO, also enable:
track_io_timing = on
Optionally, set the maximum length of a single tracked statement; longer statements are shown truncated:
track_activity_query_size = 2048
Configuring pg_stat_statements parameters
vi $PGDATA/postgresql.conf

pg_stat_statements.max = 10000           # maximum number of statements tracked; older entries are evicted (LRU)
pg_stat_statements.track = all           # all - all SQL, including statements nested inside functions; top - only top-level SQL (statements inside functions are not tracked); none - track nothing
pg_stat_statements.track_utility = off   # whether to track non-DML statements (e.g. DDL, DCL); on = track, off = don't
pg_stat_statements.save = on             # whether statistics survive a server restart
Restart the database
pg_ctl restart -m fast
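After the restart, you can confirm from any session that the settings took effect (standard SHOW commands):

```sql
show shared_preload_libraries;   -- should include pg_stat_statements
show track_io_timing;            -- should be on
show pg_stat_statements.track;   -- should be all
```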
Creating the pg_stat_statements extension
Create the extension in each database where you want to query TOP SQL:
create extension pg_stat_statements;
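To confirm the extension was created and the view is queryable, a quick check (the psql meta-command \dx also lists installed extensions):

```sql
-- check that the extension is registered in this database
select extname, extversion from pg_extension where extname = 'pg_stat_statements';

-- the view should now be available
select count(*) from pg_stat_statements;
```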
Analyzing TOP SQL
What pg_stat_statements reports
Query the pg_stat_statements view to get the accumulated statistics.
Constants and filter values in the SQL text are replaced with placeholders in pg_stat_statements, so statements that differ only in their parameters are aggregated into a single entry.
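For example, two calls that differ only in a literal are folded into one normalized entry (illustrative; the placeholder numbering depends on the statement):

```sql
select * from test where id = 1;
select * from test where id = 2;
-- both appear in pg_stat_statements as a single entry:
--   select * from test where id = $1
```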
The pg_stat_statements view contains several important pieces of information, for example:
1. Call count; total time; fastest, slowest, and mean execution time; standard deviation of execution time (which reveals jitter); and the total number of rows scanned, returned, or otherwise processed;
2. shared buffer usage: hits, misses, blocks dirtied, blocks written out;
3. local buffer usage: hits, misses, blocks dirtied, blocks written out;
4. temp buffer usage: blocks read and written;
5. time spent reading and writing data blocks.
Name | Type | References | Description |
---|---|---|---|
userid | oid | pg_authid.oid | OID of user who executed the statement |
dbid | oid | pg_database.oid | OID of database in which the statement was executed |
queryid | bigint | - | Internal hash code, computed from the statement's parse tree |
query | text | - | Text of a representative statement |
calls | bigint | - | Number of times executed |
total_time | double precision | - | Total time spent in the statement, in milliseconds |
min_time | double precision | - | Minimum time spent in the statement, in milliseconds |
max_time | double precision | - | Maximum time spent in the statement, in milliseconds |
mean_time | double precision | - | Mean time spent in the statement, in milliseconds |
stddev_time | double precision | - | Population standard deviation of time spent in the statement, in milliseconds |
rows | bigint | - | Total number of rows retrieved or affected by the statement |
shared_blks_hit | bigint | - | Total number of shared block cache hits by the statement |
shared_blks_read | bigint | - | Total number of shared blocks read by the statement |
shared_blks_dirtied | bigint | - | Total number of shared blocks dirtied by the statement |
shared_blks_written | bigint | - | Total number of shared blocks written by the statement |
local_blks_hit | bigint | - | Total number of local block cache hits by the statement |
local_blks_read | bigint | - | Total number of local blocks read by the statement |
local_blks_dirtied | bigint | - | Total number of local blocks dirtied by the statement |
local_blks_written | bigint | - | Total number of local blocks written by the statement |
temp_blks_read | bigint | - | Total number of temp blocks read by the statement |
temp_blks_written | bigint | - | Total number of temp blocks written by the statement |
blk_read_time | double precision | - | Total time the statement spent reading blocks, in milliseconds (if track_io_timing is enabled, otherwise zero) |
blk_write_time | double precision | - | Total time the statement spent writing blocks, in milliseconds (if track_io_timing is enabled, otherwise zero) |
Most IO-intensive SQL
Top 5 by IO per call
select userid::regrole, dbid, query from pg_stat_statements order by (blk_read_time+blk_write_time)/calls desc limit 5;
Top 5 by total IO
select userid::regrole, dbid, query from pg_stat_statements order by (blk_read_time+blk_write_time) desc limit 5;
Most time-consuming SQL
Top 5 by time per call
select userid::regrole, dbid, query from pg_stat_statements order by mean_time desc limit 5;
Top 5 by total time (usually the most important list)
select userid::regrole, dbid, query from pg_stat_statements order by total_time desc limit 5;
SQL with the most response-time jitter
select userid::regrole, dbid, query from pg_stat_statements order by stddev_time desc limit 5;
SQL consuming the most shared buffers
select userid::regrole, dbid, query from pg_stat_statements order by (shared_blks_hit+shared_blks_dirtied) desc limit 5;
SQL consuming the most temp space
select userid::regrole, dbid, query from pg_stat_statements order by temp_blks_written desc limit 5;
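The queries above return only the statement text. In practice it helps to pull the supporting metrics alongside it; a sketch of a fuller report using the columns documented above, ordered by total time:

```sql
select userid::regrole as role,
       dbid,
       calls,
       round(total_time::numeric, 2)  as total_ms,
       round(mean_time::numeric, 2)   as mean_ms,
       round(stddev_time::numeric, 2) as stddev_ms,
       rows,
       shared_blks_hit,
       shared_blks_read,
       -- shared buffer hit ratio; nullif avoids division by zero
       round(100.0 * shared_blks_hit /
             nullif(shared_blks_hit + shared_blks_read, 0), 2) as hit_pct,
       query
from pg_stat_statements
order by total_time desc
limit 5;
```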
Resetting the statistics
You can periodically discard the accumulated statistics by calling:
select pg_stat_statements_reset();
Where exactly is a slow SQL statement slow?
To find out where a slow statement spends its time, run it through
explain (analyze,verbose,timing,costs,buffers) SQL;
Combined with a few extra switches, this prints even more detail.
Switches (effective for the current session) that print more verbose information:
set client_min_messages=debug5;
set log_checkpoints = on;
set log_error_verbosity = verbose ;
set log_lock_waits = on;
set log_replication_commands = off;
set log_temp_files = 0;
set track_activities = on;
set track_counts = on;
set track_io_timing = on;
set track_functions = 'all';
set trace_sort=on;
set log_statement_stats = off;
set log_parser_stats = on;
set log_planner_stats = on;
set log_executor_stats = on;
set log_autovacuum_min_duration=0;
set deadlock_timeout = '1s';
set debug_print_parse = off;
set debug_print_rewritten = off;
set debug_print_plan = off;
set debug_pretty_print = on;
For example:
explain (analyze,verbose,timing,costs,buffers) select count(*),relkind from pg_class group by relkind order by count(*) desc limit 1;
Optimizing slow SQL and TOP SQL: an example
1. Get the real execution plan
begin;
set local lock_timeout='1s';
set local statement_timeout=0;
explain (analyze,verbose,timing,costs,buffers) SQL; -- replace SQL with the statement to analyze
rollback;
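This begin/rollback pattern matters for DML, because explain (analyze) actually executes the statement; the rollback discards its effects. A concrete instance (the update itself is only illustrative):

```sql
begin;
set local lock_timeout = '1s';
set local statement_timeout = 0;
explain (analyze,verbose,timing,costs,buffers)
  update test set c1 = 0 where id = 1;
rollback;  -- the update is not committed
```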
2. Find the most expensive node in the explain output
postgres=# explain (analyze,verbose,timing,costs,buffers) select count(*),c34 from test where c33<3 group by c34;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------
HashAggregate (cost=18042933.67..18042933.78 rows=11 width=16) (actual time=79898.384..79898.386 rows=11 loops=1)
Output: count(*), c34
Group Key: test.c34
Buffers: shared hit=3296 read=16663371
-> Seq Scan on public.test (cost=0.00..17916667.00 rows=25253334 width=8) (actual time=0.065..74406.748 rows=24997473 loops=1) -- most of the time is spent here
Output: id, c1, c2, c3, c4, c5, c6, c7, c8, c9, c10, c11, c12, c13, c14, c15, c16, c17, c18, c19, c20, c21, c22, c23, c24, c25, c26, c27, c28, c29, c30, c31, c32, c33, c34, c35, c36, c37, c38, c39, c40, c41, c42, c43, c44, c45, c46, c47, c48, c49, c50, c51, c52, c53, c54, c55, c56, c57, c58, c59, c60, c61, c62, c63, c64
Filter: (test.c33 < 3)
Rows Removed by Filter: 75002527 -- many rows are filtered out, but many still have to be scanned
Buffers: shared hit=3296 read=16663371
Planning Time: 0.096 ms
Execution Time: 79898.435 ms
(11 rows)
3. Optimize that node
3.1 In this example the sequential scan dominates, and the rows surviving the filter are a small fraction of those scanned, so an index helps.
postgres=# create index idx on test (c33,c34);
postgres=# explain (analyze,verbose,timing,costs,buffers) select count(*),c34 from test where c33<3 group by c34;
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------
HashAggregate (cost=685855.26..685855.37 rows=11 width=16) (actual time=8056.793..8056.795 rows=11 loops=1)
Output: count(*), c34
Group Key: test.c34
Buffers: shared hit=112642
-> Index Only Scan using idx on public.test (cost=0.57..557588.60 rows=25653333 width=8) (actual time=0.031..3691.071 rows=24997473 loops=1)
Output: c33, c34
Index Cond: (test.c33 < 3)
Heap Fetches: 0
Buffers: shared hit=112642 -- number of index blocks scanned
Planning Time: 0.166 ms
Execution Time: 8056.842 ms
(11 rows)
3.2 After indexing: clustering.
If the plan cannot use an index-only scan, every match requires a heap fetch, which may read many additional data blocks.
When the matching rows are scattered across the heap, clustering the table on the index helps. This example already uses an index-only scan, so clustering is not needed here:
cluster test using idx;
3.3 After clustering: parallelism.
postgres=# set max_parallel_workers_per_gather =32;
SET
postgres=# explain (analyze,verbose,timing,costs,buffers) select count(*),c34 from test where c33<3 group by c34;
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Finalize GroupAggregate (cost=318089.48..318125.45 rows=11 width=16) (actual time=999.918..1000.053 rows=11 loops=1)
Output: count(*), c34
Group Key: test.c34
Buffers: shared hit=9623
-> Gather Merge (cost=318089.48..318124.02 rows=264 width=16) (actual time=999.768..999.973 rows=275 loops=1)
Output: c34, (PARTIAL count(*))
Workers Planned: 24
Workers Launched: 24
Buffers: shared hit=9623
-> Sort (cost=317088.90..317088.93 rows=11 width=16) (actual time=926.196..926.198 rows=11 loops=25)
Output: c34, (PARTIAL count(*))
Sort Key: test.c34
Sort Method: quicksort Memory: 25kB
Worker 0: Sort Method: quicksort Memory: 25kB
Worker 1: Sort Method: quicksort Memory: 25kB
Worker 2: Sort Method: quicksort Memory: 25kB
Worker 3: Sort Method: quicksort Memory: 25kB
Worker 4: Sort Method: quicksort Memory: 25kB
Worker 5: Sort Method: quicksort Memory: 25kB
Worker 6: Sort Method: quicksort Memory: 25kB
Worker 7: Sort Method: quicksort Memory: 25kB
Worker 8: Sort Method: quicksort Memory: 25kB
Worker 9: Sort Method: quicksort Memory: 25kB
Worker 10: Sort Method: quicksort Memory: 25kB
Worker 11: Sort Method: quicksort Memory: 25kB
Worker 12: Sort Method: quicksort Memory: 25kB
Worker 13: Sort Method: quicksort Memory: 25kB
Worker 14: Sort Method: quicksort Memory: 25kB
Worker 15: Sort Method: quicksort Memory: 25kB
Worker 16: Sort Method: quicksort Memory: 25kB
Worker 17: Sort Method: quicksort Memory: 25kB
Worker 18: Sort Method: quicksort Memory: 25kB
Worker 19: Sort Method: quicksort Memory: 25kB
Worker 20: Sort Method: quicksort Memory: 25kB
Worker 21: Sort Method: quicksort Memory: 25kB
Worker 22: Sort Method: quicksort Memory: 25kB
Worker 23: Sort Method: quicksort Memory: 25kB
Buffers: shared hit=207494
Worker 0: actual time=923.125..923.126 rows=11 loops=1
Buffers: shared hit=8571
Worker 1: actual time=922.567..922.568 rows=11 loops=1
Buffers: shared hit=7575
Worker 2: actual time=923.209..923.210 rows=11 loops=1
Buffers: shared hit=8448
Worker 3: actual time=922.613..922.615 rows=11 loops=1
Buffers: shared hit=7958
Worker 4: actual time=923.265..923.266 rows=11 loops=1
Buffers: shared hit=8706
Worker 5: actual time=923.329..923.330 rows=11 loops=1
Buffers: shared hit=8800
Worker 6: actual time=923.298..923.299 rows=11 loops=1
Buffers: shared hit=8637
Worker 7: actual time=922.778..922.780 rows=11 loops=1
Buffers: shared hit=7168
Worker 8: actual time=923.348..923.349 rows=11 loops=1
Buffers: shared hit=8804
Worker 9: actual time=923.303..923.304 rows=11 loops=1
Buffers: shared hit=8576
Worker 10: actual time=923.270..923.272 rows=11 loops=1
Buffers: shared hit=8848
Worker 11: actual time=923.308..923.309 rows=11 loops=1
Buffers: shared hit=8500
Worker 12: actual time=923.415..923.417 rows=11 loops=1
Buffers: shared hit=8606
Worker 13: actual time=922.827..922.828 rows=11 loops=1
Buffers: shared hit=7402
Worker 14: actual time=923.307..923.309 rows=11 loops=1
Buffers: shared hit=8415
Worker 15: actual time=922.994..922.996 rows=11 loops=1
Buffers: shared hit=7467
Worker 16: actual time=923.456..923.457 rows=11 loops=1
Buffers: shared hit=8460
Worker 17: actual time=923.364..923.366 rows=11 loops=1
Buffers: shared hit=8647
Worker 18: actual time=923.287..923.289 rows=11 loops=1
Buffers: shared hit=8549
Worker 19: actual time=922.968..922.969 rows=11 loops=1
Buffers: shared hit=7211
Worker 20: actual time=923.361..923.363 rows=11 loops=1
Buffers: shared hit=8650
Worker 21: actual time=923.178..923.179 rows=11 loops=1
Buffers: shared hit=7691
Worker 22: actual time=923.129..923.131 rows=11 loops=1
Buffers: shared hit=7609
Worker 23: actual time=923.427..923.428 rows=11 loops=1
Buffers: shared hit=8573
-> Partial HashAggregate (cost=317088.60..317088.71 rows=11 width=16) (actual time=926.136..926.138 rows=11 loops=25)
Output: c34, PARTIAL count(*)
Group Key: test.c34
Buffers: shared hit=207326
Worker 0: actual time=923.055..923.058 rows=11 loops=1
Buffers: shared hit=8564
Worker 1: actual time=922.506..922.509 rows=11 loops=1
Buffers: shared hit=7568
Worker 2: actual time=923.159..923.162 rows=11 loops=1
Buffers: shared hit=8441
Worker 3: actual time=922.551..922.553 rows=11 loops=1
Buffers: shared hit=7951
Worker 4: actual time=923.220..923.223 rows=11 loops=1
Buffers: shared hit=8699
Worker 5: actual time=923.285..923.288 rows=11 loops=1
Buffers: shared hit=8793
Worker 6: actual time=923.254..923.257 rows=11 loops=1
Buffers: shared hit=8630
Worker 7: actual time=922.695..922.698 rows=11 loops=1
Buffers: shared hit=7161
Worker 8: actual time=923.290..923.293 rows=11 loops=1
Buffers: shared hit=8797
Worker 9: actual time=923.254..923.256 rows=11 loops=1
Buffers: shared hit=8569
Worker 10: actual time=923.223..923.226 rows=11 loops=1
Buffers: shared hit=8841
Worker 11: actual time=923.224..923.226 rows=11 loops=1
Buffers: shared hit=8493
Worker 12: actual time=923.373..923.376 rows=11 loops=1
Buffers: shared hit=8599
Worker 13: actual time=922.766..922.769 rows=11 loops=1
Buffers: shared hit=7395
Worker 14: actual time=923.232..923.235 rows=11 loops=1
Buffers: shared hit=8408
Worker 15: actual time=922.935..922.938 rows=11 loops=1
Buffers: shared hit=7460
Worker 16: actual time=923.406..923.409 rows=11 loops=1
Buffers: shared hit=8453
Worker 17: actual time=923.317..923.319 rows=11 loops=1
Buffers: shared hit=8640
Worker 18: actual time=923.204..923.206 rows=11 loops=1
Buffers: shared hit=8542
Worker 19: actual time=922.893..922.895 rows=11 loops=1
Buffers: shared hit=7204
Worker 20: actual time=923.283..923.286 rows=11 loops=1
Buffers: shared hit=8643
Worker 21: actual time=923.089..923.092 rows=11 loops=1
Buffers: shared hit=7684
Worker 22: actual time=923.049..923.051 rows=11 loops=1
Buffers: shared hit=7602
Worker 23: actual time=923.379..923.381 rows=11 loops=1
Buffers: shared hit=8566
-> Parallel Index Only Scan using idx on public.test (cost=0.57..311744.15 rows=1068889 width=8) (actual time=0.294..726.243 rows=999899 loops=25)
Output: c33, c34
Index Cond: (test.c33 < 3)
Heap Fetches: 0
Buffers: shared hit=207326
Worker 0: actual time=0.249..739.989 rows=1028079 loops=1
Buffers: shared hit=8564
Worker 1: actual time=0.500..698.527 rows=912456 loops=1
Buffers: shared hit=7568
Worker 2: actual time=0.449..733.146 rows=1010592 loops=1
Buffers: shared hit=8441
Worker 3: actual time=0.554..712.277 rows=953955 loops=1
Buffers: shared hit=7951
Worker 4: actual time=0.088..736.872 rows=1047915 loops=1
Buffers: shared hit=8699
Worker 5: actual time=0.172..734.815 rows=1056267 loops=1
Buffers: shared hit=8793
Worker 6: actual time=0.052..737.294 rows=1040346 loops=1
Buffers: shared hit=8630
Worker 7: actual time=0.086..696.398 rows=862866 loops=1
Buffers: shared hit=7161
Worker 8: actual time=0.051..735.082 rows=1053918 loops=1
Buffers: shared hit=8797
Worker 9: actual time=0.336..740.511 rows=1031994 loops=1
Buffers: shared hit=8569
Worker 10: actual time=0.496..735.275 rows=1063836 loops=1
Buffers: shared hit=8841
Worker 11: actual time=0.238..728.468 rows=1016595 loops=1
Buffers: shared hit=8493
Worker 12: actual time=0.049..737.655 rows=1035648 loops=1
Buffers: shared hit=8599
Worker 13: actual time=0.302..699.745 rows=888966 loops=1
Buffers: shared hit=7395
Worker 14: actual time=0.200..729.542 rows=1011114 loops=1
Buffers: shared hit=8408
Worker 15: actual time=0.296..695.864 rows=898623 loops=1
Buffers: shared hit=7460
Worker 16: actual time=0.070..734.046 rows=1015812 loops=1
Buffers: shared hit=8453
Worker 17: actual time=0.053..737.755 rows=1040868 loops=1
Buffers: shared hit=8640
Worker 18: actual time=0.081..737.488 rows=1030689 loops=1
Buffers: shared hit=8542
Worker 19: actual time=0.092..694.639 rows=870957 loops=1
Buffers: shared hit=7204
Worker 20: actual time=0.523..737.503 rows=1040607 loops=1
Buffers: shared hit=8643
Worker 21: actual time=1.978..709.165 rows=925182 loops=1
Buffers: shared hit=7684
Worker 22: actual time=0.294..699.942 rows=907497 loops=1
Buffers: shared hit=7602
Worker 23: actual time=0.120..739.781 rows=1030689 loops=1
Buffers: shared hit=8566
Planning Time: 0.311 ms
Execution Time: 1007.876 ms
(193 rows)
3.4 After parallelism: column store.
PostgreSQL has no built-in column store; extensions such as VOPS or cstore_fdw can be used.
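A sketch of the typical cstore_fdw workflow, assuming the extension is built and installed (the foreign table's column list is abbreviated for illustration):

```sql
-- load the columnar foreign data wrapper
create extension cstore_fdw;
create server cstore_server foreign data wrapper cstore_fdw;

-- a columnar copy of the example table (columns abbreviated)
create foreign table test_cstore (id int, c33 int, c34 int)
server cstore_server
options (compression 'pglz');

insert into test_cstore select id, c33, c34 from test;
```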
3.5 After column store: JIT compilation and vectorized execution.
set jit=on;
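jit is available from PostgreSQL 11 onward. When the planner estimates a query is expensive enough, the explain (analyze) output gains a JIT section reporting compiled functions and code-generation time; a minimal check:

```sql
set jit = on;
explain (analyze) select count(*), c34 from test where c33 < 3 group by c34;
-- for sufficiently expensive plans, look for a "JIT:" block
-- at the end of the plan output
```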