Notes on an HBase NotServingRegionException incident
1. Problem
On the HBase test cluster, reads against several HBase tables all failed with errors like the one below, indicating the region was not available:
Wed Oct 28 14:00:56 CST 2020, RpcRetryingCaller{globalStartTime=1603864856185, pause=100, maxAttempts=4}, org.apache.hadoop.hbase.NotServingRegionException: org.apache.hadoop.hbase.NotServingRegionException: tbl_ishis_hb_tips_levy_inf,,1578557543052.0ca2591b1209239ee963893abdfa4b53. is not online on vm3928.hadoop.com,16020,1603800273275 at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:3325) ......
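For context, these errors showed up on ordinary reads; for example, a single-row scan of one of the affected tables from the hbase shell (an illustrative command, not copied from the original session; the table name is taken from the log above) is enough to trigger it:
scan 'tbl_ishis_hb_tips_levy_inf', {LIMIT => 1}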
2. Troubleshooting
The first suspicion was that the cluster was in the middle of region splits or region balancing across RegionServers, during which the affected regions are briefly offline. However, splits and rebalancing do not take long, while this problem had already been present since the previous day. Checking the HMaster log showed it had been erroring continuously; the key messages are:
2020-10-28 14:00:17,941 WARN org.apache.hadoop.hbase.coordination.SplitLogManagerCoordination: Error splitting /hbase/splitWAL/WALs%2Fvm3928.hadoop.com%2C16020%2C1598158651562-splitting%2Fvm3928.hadoop.com%252C16020%252C1598158651562.vm3928.hadoop.com%252C16020%252C1598158651562.regiongroup-0.1603692260004
2020-10-28 14:00:17,942 WARN org.apache.hadoop.hbase.master.SplitLogManager: error while splitting logs in [hdfs://nameservice1/hbase/WALs/vm3928.hadoop.com,16020,1598158651562-splitting] installed = 1 but only 0 done
2020-10-28 14:00:17,942 WARN org.apache.hadoop.hbase.master.procedure.ServerCrashProcedure: Failed state=SERVER_CRASH_SPLIT_LOGS, retry pid=1, state=RUNNABLE:SERVER_CRASH_SPLIT_LOGS, locked=true; ServerCrashProcedure server=vm3928.hadoop.com,16020,1598158651562, splitWal=true, meta=true; cycles=3455
java.io.IOException: error or interrupted while splitting logs in [hdfs://nameservice1/hbase/WALs/vm3928.hadoop.com,16020,1598158651562-splitting] Task = installed = 1 done = 0 error = 1
at org.apache.hadoop.hbase.master.SplitLogManager.splitLogDistributed(SplitLogManager.java:271)
at org.apache.hadoop.hbase.master.MasterWalManager.splitLog(MasterWalManager.java:401)
at org.apache.hadoop.hbase.master.MasterWalManager.splitLog(MasterWalManager.java:386)
at org.apache.hadoop.hbase.master.MasterWalManager.splitLog(MasterWalManager.java:283
......
From this we can infer that WAL splitting kept erroring out or being interrupted, and that this was blocking region recovery.
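To confirm from the client side that the master is stuck on a ServerCrashProcedure, the pending master procedures can also be listed from the hbase shell (assuming an HBase 2.x shell, where this command is available); the stuck procedure should appear with the same pid and SERVER_CRASH_SPLIT_LOGS state as in the log above:
list_procedures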
The following background on the WAL and WAL splitting is quoted from the HBase documentation (https://www.w3cschool.cn/hbase_doc/):
Write Ahead Log
The Write Ahead Log (WAL) records all changes to data in HBase to file-based storage. Under normal operation the WAL is not needed, because data changes move from the MemStore to StoreFiles. However, if a RegionServer crashes or becomes unavailable before the MemStore is flushed, the WAL ensures that the changes to the data can be replayed. If writing to the WAL fails, the entire operation to modify the data fails. The WAL resides in HDFS under the /hbase/WALs/ directory, with subdirectories per region.
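As a side note, individual WAL files can be dumped and inspected with the bundled pretty-printer (assuming HBase 2.x, where it lives under org.apache.hadoop.hbase.wal; the placeholder is to be replaced with a real WAL file path such as the ones under /hbase/WALs shown later):
hbase org.apache.hadoop.hbase.wal.WALPrettyPrinter <path-to-wal-file>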
WAL splitting
A RegionServer serves many regions. All of the regions in a region server share the same active WAL file. Each edit in the WAL file includes information about which region it belongs to. When a region is opened, the edits in the WAL file which belong to that region need to be replayed. Therefore, edits in the WAL file must be grouped by region so that particular sets can be replayed to regenerate the data in a particular region. The process of grouping the WAL edits by region is called log splitting. It is a critical process for recovering data if a region server fails.
Log splitting is done by the HMaster during cluster start-up, or by the ServerShutdownHandler as a region server shuts down. To guarantee consistency, affected regions are unavailable until the data is restored. All WAL edits need to be recovered and replayed before a given region can become available again. As a result, regions affected by log splitting are unavailable until the process completes.
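With distributed log splitting, the split tasks are coordinated through ZooKeeper; the /hbase/splitWAL znode that appears in the HMaster log above can be inspected with something like the following (the exact zkcli invocation may vary by version):
hbase zkcli ls /hbase/splitWAL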
Procedure: log splitting, step by step
The /hbase/WALs/<host>,<port>,<startcode> directory is renamed. Renaming the directory is important because a RegionServer may still be up and accepting requests even if the HMaster thinks it is down. If the RegionServer does not respond immediately and does not heartbeat its ZooKeeper session, the HMaster may interpret this as a RegionServer failure. Renaming the logs directory ensures that existing, valid WAL files which are still in use by an active but busy RegionServer are not written to by accident. The new directory is named according to the following pattern:
/hbase/WALs/<host>,<port>,<startcode>-splitting
An example of such a renamed directory might look like the following:
/hbase/WALs/srv.example.com,60020,1254173957298-splitting
Each log file is split, one at a time. The log splitter reads the log file one edit entry at a time and puts each edit entry into the buffer corresponding to the edit's region. At the same time, the splitter starts several writer threads. Writer threads pick up a corresponding buffer and write the edit entries in the buffer to a temporary recovered edit file. The temporary edit file is stored to disk with the following naming pattern:
/hbase/<table_name>/<region_id>/recovered.edits/.temp
This file is used to store all the edits in the WAL log for this region. After log splitting completes, the .temp file is renamed to the sequence ID of the first log written to the file. To determine whether all edits have been written, the sequence ID is compared to the sequence of the last edit that was written to the HFile. If the sequence of the last edit is greater than or equal to the sequence ID included in the file name, all writes from the edit file have been completed.
After log splitting is complete, each affected region is assigned to a RegionServer. When the region is opened, the recovered.edits folder is checked for recovered edits files. If any such files are present, they are replayed by reading the edits and saving them to the MemStore. After all edit files are replayed, the contents of the MemStore are written to disk (HFile) and the edit files are deleted.
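On recent HBase versions the region directories actually live under /hbase/data/<namespace>/<table>/ (this matches the fsck output later in this post), so checking whether a region still has pending recovered edits looks roughly like:
hadoop fs -ls /hbase/data/default/<table_name>/<region_id>/recovered.edits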
Back on our cluster, listing the WAL directory on HDFS shows that a -splitting directory is still present; it would not exist if WAL splitting had finished normally:
hadoop fs -ls /hbase/WALs
Found 4 items
drwxr-xr-x - hbase hbase 0 2020-10-26 18:40 /hbase/WALs/vm3928.hadoop.com,16020,1598158651562-splitting
drwxr-xr-x - hbase hbase 0 2020-10-28 15:50 /hbase/WALs/vm3928.hadoop.com,16020,1603800273275
drwxr-xr-x - hbase hbase 0 2020-10-28 15:49 /hbase/WALs/vm3929.hadoop.com,16020,1603800273410
drwxr-xr-x - hbase hbase 0 2020-10-28 15:49 /hbase/WALs/vm3930.hadoop.com,16020,1603800273039
Given that the HMaster kept logging WAL split failures, the rough picture is this: the HMaster is stuck on the WAL splitting task, so some regions never come back online and queries against them fail; meanwhile the WAL split itself keeps failing for some reason and is endlessly retried, so the regions stay unavailable the whole time. All of the affected regions live on RegionServer 28 (vm3928).
3. Initial fix
Following fixes suggested online, the first priority is to restore service by moving the -splitting directory elsewhere, so the following was executed:
hadoop fs -mv /hbase/WALs/vm3928.hadoop.com,16020,1598158651562-splitting /hbase/bk
The CDH console then showed the number of online regions on server 28 had recovered from 1 to 244.
Querying the first few rows of several HBase tables worked again at this point (although some problems remained; see below).
4. Analyzing why the WAL split failed
To keep digging into why the WAL split failed, dump part of a file under the -splitting directory:
hadoop fs -cat /hbase/WALs/vm3928.hadoop.com,16020,1598158651562-splitting/vm3928.hadoop.com%2C16020%2C1598158651562.vm3928.hadoop.com%2C16020%2C1598158651562.regiongroup-0.1603692260004
20/10/28 11:00:10 WARN hdfs.DFSClient: Found Checksum error for BP-1135703146-10.200.39.28-1574043673848:blk_1084998682_11258280 from DatanodeInfoWithStorage[10.200.39.30:1004,DS-cfda5a02-d48f-4269-896e-083d55fdd33f,DISK] at 512
20/10/28 11:00:10 WARN hdfs.DFSClient: No live nodes contain block BP-1135703146-10.200.39.28-1574043673848:blk_1084998682_11258280 after checking nodes = [DatanodeInfoWithStorage[10.200.39.30:1004,DS-cfda5a02-d48f-4269-896e-083d55fdd33f,DISK], DatanodeInfoWithStorage[10.200.39.29:1004,DS-dffdf8e8-0f2a-4a43-b39d-b70b50263e0c,DISK], DatanodeInfoWithStorage[10.200.39.28:1004,DS-21d3927c-e42d-45ce-8e79-25c14202cbe5,DISK]], ignoredNodes = null
......
This shows that HBase data blocks are missing. HBase stores its data on HDFS, which keeps three replicas per block; a block is only marked corrupt when all three replicas are gone, and the log above shows that none of the three nodes in the cluster has a copy of this block. Next, run a check over the HBase data directory:
hdfs fsck /hbase/data
Connecting to namenode via http://vm3928.hadoop.com:9870/fsck?ugi=hbase&path=%2Fhbase%2Fdata
/hbase/data/default/tbl_glhis_hb_ct_settle_dtl_bas/42a907b4ffafb72205d9e1d1fa5e8301/d/36cafb11c3a6455aaba1605b72bbb7c7: CORRUPT blockpool BP-1135703146-10.200.39.28-1574043673848 block blk_1084998838
/hbase/data/default/tbl_glhis_hb_ct_settle_dtl_bas/42a907b4ffafb72205d9e1d1fa5e8301/d/36cafb11c3a6455aaba1605b72bbb7c7: CORRUPT 1 blocks of total size 2559961 B.
/hbase/data/default/tbl_glhis_hb_ct_settle_dtl_bas/42a907b4ffafb72205d9e1d1fa5e8301/d/756ffcf3efd24971abe0195f8de5af1e: Under replicated BP-1135703146-10.200.39.28-1574043673848:blk_1084998837_11258152. Target Replicas is 3 but found 1 live replica(s), 0 decommissioned replica(s), 0 decommissioning replica(s).
/hbase/data/default/tbl_glhis_hb_ct_settle_dtl_bas_idx_mchnt_cd/72c126f1fa9cef3a0fdd8fcb5e49501e/d/a64f19e4dbfc47a991d919a5595b4e29: Under replicated BP-1135703146-10.200.39.28-1574043673848:blk_1084998835_11258150. Target Replicas is 3 but found 1 live replica(s), 0 decommissioned replica(s), 0 decommissioning replica(s).
Status: CORRUPT
Number of data-nodes: 3
Number of racks: 1
Total dirs: 3016
Total symlinks: 0
Replicated Blocks:
Total size: 114299895515 B
Total files: 2386
Total blocks (validated): 2406 (avg. block size 47506190 B)
********************************
UNDER MIN REPL'D BLOCKS: 1 (0.04156276 %)
dfs.namenode.replication.min: 1
CORRUPT FILES: 1
CORRUPT BLOCKS: 1
CORRUPT SIZE: 2559961 B
********************************
Minimally replicated blocks: 2405 (99.958435 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 2 (0.08312552 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 3
Average block replication: 2.9970906
Missing blocks: 0
Corrupt blocks: 1
Missing replicas: 4 (0.055417012 %)
Blocks queued for replication: 0
The filesystem under path '/hbase/data' is CORRUPT
The key line is:
/hbase/data/default/tbl_glhis_hb_ct_settle_dtl_bas/42a907b4ffafb72205d9e1d1fa5e8301/d/36cafb11c3a6455aaba1605b72bbb7c7: CORRUPT 1 blocks of total size 2559961 B.
So for table tbl_glhis_hb_ct_settle_dtl_bas, region 42a907b4ffafb72205d9e1d1fa5e8301, one data file of column family d (36cafb11c3a6455aaba1605b72bbb7c7) is missing a block. The expectation is therefore that any access touching region 42a907b4ffafb72205d9e1d1fa5e8301 of tbl_glhis_hb_ct_settle_dtl_bas will still fail. Sure enough, a plain count in the hbase shell got to 13,224,000 rows, reached the point where it needed to read 42a907b4ffafb72205d9e1d1fa5e8301, and started throwing NotServingRegionException, exactly as expected:
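The count was issued roughly as follows (the exact options were not recorded; the default progress interval of 1000 rows matches the output below):
count 'tbl_glhis_hb_ct_settle_dtl_bas'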
Current count: 13224000, row: 4995600052377126-79799371-79799880787440-0119212559-269394-03014510-00092900-X-Q1#010
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=8, exceptions:
Wed Oct 28 17:21:50 CST 2020, RpcRetryingCaller{globalStartTime=1603876909641, pause=100, maxAttempts=8}, org.apache.hadoop.hbase.NotServingRegionException: org.apache.hadoop.hbase.NotServingRegionException: tbl_glhis_hb_ct_settle_dtl_bas,5,1578557420156.42a907b4ffafb72205d9e1d1fa5e8301. is not online on vm3928.hadoop.com,16020,1603800273039
At this point the likely cause can be pieced together: missing HDFS blocks under one HBase table left one of its regions unable to open (an unavailability not caused by a split or rebalance), which in turn made the WAL split fail; the split was retried, failed, was retried again, and so on, until every region on that RegionServer ended up offline. Moving the -splitting directory away restored access to the tables that had not lost any blocks, but the table that really did lose blocks still has an unavailable region (why HDFS data blocks went missing in this environment is still not clear).
5. Handling the unhealthy data
Check the current state of the HBase data:
hdfs fsck /hbase/data
Connecting to namenode via http://vm3928.hadoop.com:9870/fsck?ugi=hbase&path=%2Fhbase%2Fdata
/hbase/data/default/tbl_glhis_hb_ct_settle_dtl_bas/42a907b4ffafb72205d9e1d1fa5e8301/d/36cafb11c3a6455aaba1605b72bbb7c7: CORRUPT blockpool BP-1135703146-10.200.39.28-1574043673848 block blk_1084998838
/hbase/data/default/tbl_glhis_hb_ct_settle_dtl_bas/42a907b4ffafb72205d9e1d1fa5e8301/d/36cafb11c3a6455aaba1605b72bbb7c7: CORRUPT 1 blocks of total size 2559961 B.
/hbase/data/default/tbl_glhis_hb_ct_settle_dtl_bas/42a907b4ffafb72205d9e1d1fa5e8301/d/756ffcf3efd24971abe0195f8de5af1e: CORRUPT blockpool BP-1135703146-10.200.39.28-1574043673848 block blk_1084998837
/hbase/data/default/tbl_glhis_hb_ct_settle_dtl_bas/42a907b4ffafb72205d9e1d1fa5e8301/d/756ffcf3efd24971abe0195f8de5af1e: CORRUPT 1 blocks of total size 40464 B.
/hbase/data/default/tbl_glhis_hb_ct_settle_dtl_bas_idx_mchnt_cd/72c126f1fa9cef3a0fdd8fcb5e49501e/d/a64f19e4dbfc47a991d919a5595b4e29: Under replicated BP-1135703146-10.200.39.28-1574043673848:blk_1084998835_11258150. Target Replicas is 3 but found 1 live replica(s), 0 decommissioned replica(s), 0 decommissioning replica(s).
Status: CORRUPT
Number of data-nodes: 3
Number of racks: 1
Total dirs: 2884
Total symlinks: 0
Replicated Blocks:
Total size: 122742037956 B
Total files: 2440
Total blocks (validated): 2519 (avg. block size 48726493 B)
********************************
UNDER MIN REPL'D BLOCKS: 2 (0.07939658 %)
dfs.namenode.replication.min: 1
CORRUPT FILES: 2
CORRUPT BLOCKS: 2
CORRUPT SIZE: 2600425 B
********************************
Minimally replicated blocks: 2517 (99.9206 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 1 (0.03969829 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 3
Average block replication: 2.996824
Missing blocks: 0
Corrupt blocks: 2
Missing replicas: 2 (0.02646553 %)
Blocks queued for replication: 0
The filesystem under path '/hbase/data' is CORRUPT
Now tbl_glhis_hb_ct_settle_dtl_bas has two missing blocks, and the affected files are listed; in addition, a file of tbl_glhis_hb_ct_settle_dtl_bas_idx_mchnt_cd has a block with only one replica instead of the default three, which is also unhealthy. At the same time the HBase web UI kept showing "HBase region in transition" for exactly these unhealthy regions, and reading data from them also raised NotServingRegionException.
What is the RIT state?
As regions are managed by the master and region servers to, for example, balance the load across servers, they go through short phases of transition. This applies to opening, closing, and splitting a region. Before the operation is performed, the region is added to the “Regions in Transition” list on the WEB UI, and once the operation is complete, it is removed.
So RIT is the short-lived state a region enters around operations such as open, close, split, and rebalance. In our environment, however, regions stayed in transition indefinitely, which presumably ties back to the corrupt data blocks.
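Apart from the master web UI, the regions currently in transition can also be checked from the hbase shell; the detailed status output includes the RIT list (output omitted here):
status 'detailed'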
First, delete the files whose blocks are lost (the two fsck -delete commands below). After running them, the regions of tbl_glhis_hb_ct_settle_dtl_bas are no longer in the RIT list, and re-checking /hbase/data shows no more missing blocks; the status is back to HEALTHY:
hdfs fsck /hbase/data/default/tbl_glhis_hb_ct_settle_dtl_bas/42a907b4ffafb72205d9e1d1fa5e8301/d/36cafb11c3a6455aaba1605b72bbb7c7 -delete
hdfs fsck /hbase/data/default/tbl_glhis_hb_ct_settle_dtl_bas/42a907b4ffafb72205d9e1d1fa5e8301/d/756ffcf3efd24971abe0195f8de5af1e -delete
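(For reference, rather than reading the corrupt paths out of the full fsck report, the corrupt files can also be enumerated directly, which is convenient when there are many of them:)
hdfs fsck /hbase/data -list-corruptfileblocks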
hdfs fsck /hbase/data
Connecting to namenode via http://vm3928.hadoop.com:9870/fsck?ugi=hbase&path=%2Fhbase%2Fdata
/hbase/data/default/tbl_glhis_hb_ct_settle_dtl_bas_idx_mchnt_cd/72c126f1fa9cef3a0fdd8fcb5e49501e/d/a64f19e4dbfc47a991d919a5595b4e29: Under replicated BP-1135703146-10.200.39.28-1574043673848:blk_1084998835_11258150. Target Replicas is 3 but found 1 live replica(s), 0 decommissioned replica(s), 0 decommissioning replica(s).
Status: HEALTHY
Number of data-nodes: 3
Number of racks: 1
Total dirs: 2787
Total symlinks: 0
Replicated Blocks:
Total size: 122670925814 B
Total files: 2437
Total blocks (validated): 2512 (avg. block size 48833967 B)
Minimally replicated blocks: 2512 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 1 (0.039808918 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 3
Average block replication: 2.999204
Missing blocks: 0
Corrupt blocks: 0
Missing replicas: 2 (0.026539277 %)
Blocks queued for replication: 0
The filesystem under path '/hbase/data' is HEALTHY
Next, for the file that is under-replicated, run the following repair to set its replication factor back to 3:
hadoop fs -setrep -w 3 /hbase/data/default/tbl_glhis_hb_ct_settle_dtl_bas_idx_mchnt_cd/72c126f1fa9cef3a0fdd8fcb5e49501e/d/a64f19e4dbfc47a991d919a5595b4e29
However, in our environment this command just hung waiting (-w waits for the target replication to be reached). The logs on the NameNode host (vm3928) kept reporting java.io.EOFException, as shown below; it looks like replica synchronization between the DataNodes is broken, and this has not been resolved yet (two diagnostic commands worth trying are sketched after the stack trace).
ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: vm3928.hadoop.com:1004:DataXceiver error processing READ_BLOCK operation src: /10.200.39.29:57166 dst: /10.200.39.28:1004
java.io.EOFException
at java.io.DataInputStream.readShort(DataInputStream.java:315)
at org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:139)
at org.apache.hadoop.hdfs.server.datanode.BlockMetadataHeader.readHeader(BlockMetadataHeader.java:153)
at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.loadLastPartialChunkChecksum(FsVolumeImpl.java:1137)
at org.apache.hadoop.hdfs.server.datanode.FinalizedReplica.loadLastPartialChunkChecksum(FinalizedReplica.java:157)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.getPartialChunkChecksumForFinalized(BlockSender.java:444)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:265)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:598)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:152)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:104)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:291)
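The stack trace suggests the DataNode fails while reading the header of the block's checksum (.meta) file, i.e. the remaining replica's metadata file may be truncated. Two things worth checking next, using standard HDFS tooling (the on-disk block and meta file paths under the DataNode's dfs.datanode.data.dir have to be located manually): which DataNode holds the single live replica, and whether that replica's checksum file is intact.
hdfs fsck /hbase/data/default/tbl_glhis_hb_ct_settle_dtl_bas_idx_mchnt_cd/72c126f1fa9cef3a0fdd8fcb5e49501e/d/a64f19e4dbfc47a991d919a5595b4e29 -files -blocks -locations
hdfs debug verifyMeta -meta <path-to-blk_xxx.meta> -block <path-to-blk_xxx>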