An In-Depth Analysis of the Ethereum Source Code (8) -- The Core BlockChain Module
阿新 · Published 2019-02-15
The previous sections analyzed Ethereum's wire protocol: how blocks are broadcast, how they are synchronized, and how they are downloaded. This section covers Ethereum's core module, BlockChain, i.e. the blockchain itself.
I. Initializing the BlockChain
When the Ethereum service is initialized it calls core.SetupGenesisBlock to load the genesis block. As the name suggests, the genesis block is the very first block of the Ethereum chain, with number 0. Right after that it calls core.NewBlockChain to load the Ethereum blockchain.
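For orientation, here is a sketch of that startup wiring. It is a simplification of what eth/backend.go does, not the exact geth code: the helper name openChain is made up, the default mainnet genesis and the fake ethash engine are arbitrary choices for illustration, and the error handling around SetupGenesisBlock is reduced to a single check.
import (
	"github.com/ethereum/go-ethereum/consensus/ethash"
	"github.com/ethereum/go-ethereum/core"
	"github.com/ethereum/go-ethereum/core/vm"
	"github.com/ethereum/go-ethereum/ethdb"
	"github.com/ethereum/go-ethereum/log"
)

// openChain commits/loads the genesis block and then builds the BlockChain on
// top of the same database. How db is opened (LevelDB on disk, in-memory for
// tests) is left to the caller.
func openChain(db ethdb.Database) (*core.BlockChain, error) {
	chainConfig, genesisHash, err := core.SetupGenesisBlock(db, core.DefaultGenesisBlock())
	if err != nil {
		return nil, err
	}
	log.Info("Initialised chain configuration", "config", chainConfig, "genesis", genesisHash)

	// A nil cacheConfig makes NewBlockChain fall back to its defaults, as the
	// constructor below shows.
	return core.NewBlockChain(db, nil, chainConfig, ethash.NewFaker(), vm.Config{})
}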
The constructor does the following:
func NewBlockChain(db ethdb.Database, cacheConfig *CacheConfig, chainConfig *params.ChainConfig, engine consensus.Engine, vmConfig vm.Config) (*BlockChain, error) {
	if cacheConfig == nil {
		cacheConfig = &CacheConfig{
			TrieNodeLimit: 256 * 1024 * 1024,
			TrieTimeLimit: 5 * time.Minute,
		}
	}
	bodyCache, _ := lru.New(bodyCacheLimit)
	bodyRLPCache, _ := lru.New(bodyCacheLimit)
	blockCache, _ := lru.New(blockCacheLimit)
	futureBlocks, _ := lru.New(maxFutureBlocks)
	badBlocks, _ := lru.New(badBlockLimit)
	bc := &BlockChain{
		chainConfig:  chainConfig,
		cacheConfig:  cacheConfig,
		db:           db,
		triegc:       prque.New(),
		stateCache:   state.NewDatabase(db),
		quit:         make(chan struct{}),
		bodyCache:    bodyCache,
		bodyRLPCache: bodyRLPCache,
		blockCache:   blockCache,
		futureBlocks: futureBlocks,
		engine:       engine,
		vmConfig:     vmConfig,
		badBlocks:    badBlocks,
	}
	bc.SetValidator(NewBlockValidator(chainConfig, bc, engine))
	bc.SetProcessor(NewStateProcessor(chainConfig, bc, engine))
	var err error
	bc.hc, err = NewHeaderChain(db, chainConfig, engine, bc.getProcInterrupt)
	if err != nil {
		return nil, err
	}
	bc.genesisBlock = bc.GetBlockByNumber(0)
	if bc.genesisBlock == nil {
		return nil, ErrNoGenesis
	}
	if err := bc.loadLastState(); err != nil {
		return nil, err
	}
	// Check the current state of the block hashes and make sure that we do not have any of the bad blocks in our chain
	for hash := range BadHashes {
		if header := bc.GetHeaderByHash(hash); header != nil {
			// get the canonical block corresponding to the offending header's number
			headerByNumber := bc.GetHeaderByNumber(header.Number.Uint64())
			// make sure the headerByNumber (if present) is in our current canonical chain
			if headerByNumber != nil && headerByNumber.Hash() == header.Hash() {
				log.Error("Found bad hash, rewinding chain", "number", header.Number, "hash", header.ParentHash)
				bc.SetHead(header.Number.Uint64() - 1)
				log.Error("Chain rewind was successful, resuming normal operation")
			}
		}
	}
	// Take ownership of this particular state
	go bc.update()
	return bc, nil
}
1. Create the various LRU (least-recently-used) caches.
2. Initialize triegc (a priority queue of block numbers used for trie garbage collection), initialize the state database, and create the block/state validator with NewBlockValidator() and the state processor with NewStateProcessor().
3. Initialize the header chain with NewHeaderChain().
4. bc.genesisBlock = bc.GetBlockByNumber(0) fetches block number 0, i.e. the genesis block.
5. bc.loadLastState() loads the latest chain state.
6. Check whether the local chain contains any known bad (hard-fork) blocks; if it does, call bc.SetHead to rewind to the header just before the bad block.
7. go bc.update() periodically processes future blocks (sketched below).
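On the last point: bc.update is a simple ticker loop that periodically drains futureBlocks (blocks whose timestamp was ahead of local time when they arrived), sorts them by number and feeds them back into InsertChain one by one. Roughly, paraphrased from this era of go-ethereum (the exact ticker interval and helper details may differ slightly between versions):
func (bc *BlockChain) update() {
	futureTimer := time.NewTicker(5 * time.Second)
	defer futureTimer.Stop()
	for {
		select {
		case <-futureTimer.C:
			bc.procFutureBlocks() // retry the queued future blocks
		case <-bc.quit:
			return
		}
	}
}

func (bc *BlockChain) procFutureBlocks() {
	// Collect everything currently sitting in the futureBlocks LRU cache.
	blocks := make([]*types.Block, 0, bc.futureBlocks.Len())
	for _, hash := range bc.futureBlocks.Keys() {
		if block, exist := bc.futureBlocks.Peek(hash); exist {
			blocks = append(blocks, block.(*types.Block))
		}
	}
	if len(blocks) > 0 {
		types.BlockBy(types.Number).Sort(blocks)
		// Insert one by one, as chain insertion needs contiguous ancestry between blocks.
		for i := range blocks {
			bc.InsertChain(blocks[i : i+1])
		}
	}
}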
II. A look at the bc.loadLastState() method
1. Fetch the latest head block and its hash (the full function follows).
func (bc *BlockChain) loadLastState() error {
	// Restore the last known head block
	head := GetHeadBlockHash(bc.db)
	if head == (common.Hash{}) {
		// Corrupt or empty database, init from scratch
		log.Warn("Empty database, resetting chain")
		return bc.Reset()
	}
	// Make sure the entire head block is available
	currentBlock := bc.GetBlockByHash(head)
	if currentBlock == nil {
		// Corrupt or empty database, init from scratch
		log.Warn("Head block missing, resetting chain", "hash", head)
		return bc.Reset()
	}
	// Make sure the state associated with the block is available
	if _, err := state.New(currentBlock.Root(), bc.stateCache); err != nil {
		// Dangling block without a state associated, init from scratch
		log.Warn("Head state missing, repairing chain", "number", currentBlock.Number(), "hash", currentBlock.Hash())
		if err := bc.repair(&currentBlock); err != nil {
			return err
		}
	}
	// Everything seems to be fine, set as the head block
	bc.currentBlock.Store(currentBlock)
	// Restore the last known head header
	currentHeader := currentBlock.Header()
	if head := GetHeadHeaderHash(bc.db); head != (common.Hash{}) {
		if header := bc.GetHeaderByHash(head); header != nil {
			currentHeader = header
		}
	}
	bc.hc.SetCurrentHeader(currentHeader)
	// Restore the last known head fast block
	bc.currentFastBlock.Store(currentBlock)
	if head := GetHeadFastBlockHash(bc.db); head != (common.Hash{}) {
		if block := bc.GetBlockByHash(head); block != nil {
			bc.currentFastBlock.Store(block)
		}
	}
	// Issue a status log for the user
	currentFastBlock := bc.CurrentFastBlock()
	headerTd := bc.GetTd(currentHeader.Hash(), currentHeader.Number.Uint64())
	blockTd := bc.GetTd(currentBlock.Hash(), currentBlock.NumberU64())
	fastTd := bc.GetTd(currentFastBlock.Hash(), currentFastBlock.NumberU64())
	log.Info("Loaded most recent local header", "number", currentHeader.Number, "hash", currentHeader.Hash(), "td", headerTd)
	log.Info("Loaded most recent local full block", "number", currentBlock.Number(), "hash", currentBlock.Hash(), "td", blockTd)
	log.Info("Loaded most recent local fast block", "number", currentFastBlock.Number(), "hash", currentFastBlock.Hash(), "td", fastTd)
	return nil
}
2. Open the latest block's state trie from the state database; if that fails, call bc.repair(&currentBlock) to repair the chain. The repair simply walks backwards from the current block, one block at a time, until it finds a block whose state is intact, and assigns that block to currentBlock (see the sketch after this list).
3. Fetch the latest block header.
4. Find the latest fast-sync block and set bc.currentFastBlock.
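For reference, bc.repair itself is tiny. The version below is paraphrased from this era of go-ethereum; the double pointer is the reason the caller passes &currentBlock, so the repaired head can be written back in place:
func (bc *BlockChain) repair(head **types.Block) error {
	for {
		// Abort if we've rewound to a head block that does have associated state.
		if _, err := state.New((*head).Root(), bc.stateCache); err == nil {
			log.Info("Rewound blockchain to past state", "number", (*head).Number(), "hash", (*head).Hash())
			return nil
		}
		// Otherwise rewind one block and recheck state availability there.
		(*head) = bc.GetBlock((*head).ParentHash(), (*head).NumberU64()-1)
	}
}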
III. Next, bc.SetHead(), the method used to roll the chain back
func (bc *BlockChain) SetHead(head uint64) error {
log.Warn("Rewinding blockchain", "target", head)
bc.mu.Lock()
defer bc.mu.Unlock()
// Rewind the header chain, deleting all block bodies until then
delFn := func(hash common.Hash, num uint64) {
DeleteBody(bc.db, hash, num)
}
bc.hc.SetHead(head, delFn)
currentHeader := bc.hc.CurrentHeader()
// Clear out any stale content from the caches
bc.bodyCache.Purge()
bc.bodyRLPCache.Purge()
bc.blockCache.Purge()
bc.futureBlocks.Purge()
// Rewind the block chain, ensuring we don't end up with a stateless head block
if currentBlock := bc.CurrentBlock(); currentBlock != nil && currentHeader.Number.Uint64() < currentBlock.NumberU64() {
bc.currentBlock.Store(bc.GetBlock(currentHeader.Hash(), currentHeader.Number.Uint64()))
}
if currentBlock := bc.CurrentBlock(); currentBlock != nil {
if _, err := state.New(currentBlock.Root(), bc.stateCache); err != nil {
// Rewound state missing, rolled back to before pivot, reset to genesis
bc.currentBlock.Store(bc.genesisBlock)
}
}
// Rewind the fast block in a simpleton way to the target head
if currentFastBlock := bc.CurrentFastBlock(); currentFastBlock != nil && currentHeader.Number.Uint64() < currentFastBlock.NumberU64() {
bc.currentFastBlock.Store(bc.GetBlock(currentHeader.Hash(), currentHeader.Number.Uint64()))
}
// If either blocks reached nil, reset to the genesis state
if currentBlock := bc.CurrentBlock(); currentBlock == nil {
bc.currentBlock.Store(bc.genesisBlock)
}
if currentFastBlock := bc.CurrentFastBlock(); currentFastBlock == nil {
bc.currentFastBlock.Store(bc.genesisBlock)
}
currentBlock := bc.CurrentBlock()
currentFastBlock := bc.CurrentFastBlock()
if err := WriteHeadBlockHash(bc.db, currentBlock.Hash()); err != nil {
log.Crit("Failed to reset head full block", "err", err)
}
if err := WriteHeadFastBlockHash(bc.db, currentFastBlock.Hash()); err != nil {
log.Crit("Failed to reset head fast block", "err", err)
}
return bc.loadLastState()
}
1. First call bc.hc.SetHead(head, delFn) to rewind the header chain to head, deleting the data and caches of every header above it, and make head the new currentHeader (a toy illustration of this rewind pattern follows the list).
2. Reset bc.currentBlock and bc.currentFastBlock.
3. Call bc.loadLastState() to reload the chain state.
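To make the rewind pattern concrete, here is a small self-contained toy (the toyHeaderChain type is invented for illustration and is not a go-ethereum type): the head is walked back one block at a time, and a delete callback is invoked for every block that falls off, which is exactly the role the delFn closure above plays for DeleteBody.
package main

import "fmt"

// deleteCallback plays the role of delFn in bc.hc.SetHead(head, delFn): it lets
// the caller remove per-block data (bodies, receipts, ...) as the chain unwinds.
type deleteCallback func(num uint64)

// toyHeaderChain is a hypothetical stand-in for the header chain.
type toyHeaderChain struct {
	head uint64
}

// setHead walks the head back to the target height, calling delFn for every
// block that is discarded along the way (the real code also deletes the header
// itself and its stored total difficulty).
func (hc *toyHeaderChain) setHead(target uint64, delFn deleteCallback) {
	for hc.head > target {
		delFn(hc.head)
		hc.head--
	}
}

func main() {
	hc := &toyHeaderChain{head: 10}
	hc.setHead(7, func(num uint64) { fmt.Println("dropping block data for", num) })
	fmt.Println("new head:", hc.head) // prints 7
}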
IV. When we analyzed the Downloader and the Fetcher, we saw that after blocks are synchronized, InsertChain is called to insert them into the local BlockChain. Let's look at how InsertChain works.
func (bc *BlockChain) InsertChain(chain types.Blocks) (int, error) {
n, events, logs, err := bc.insertChain(chain)
bc.PostChainEvents(events, logs)
return n, err
}
It calls bc.insertChain(chain) and broadcasts the results of the insertion to the objects subscribed to the blockchain events.
func (bc *BlockChain) insertChain(chain types.Blocks) (int, []interface{}, []*types.Log, error) {
// Do a sanity check that the provided chain is actually ordered and linked
for i := 1; i < len(chain); i++ {
if chain[i].NumberU64() != chain[i-1].NumberU64()+1 || chain[i].ParentHash() != chain[i-1].Hash() {
// Chain broke ancestry, log a messge (programming error) and skip insertion
log.Error("Non contiguous block insert", "number", chain[i].Number(), "hash", chain[i].Hash(),
"parent", chain[i].ParentHash(), "prevnumber", chain[i-1].Number(), "prevhash", chain[i-1].Hash())
return 0, nil, nil, fmt.Errorf("non contiguous insert: item %d is #%d [%x…], item %d is #%d [%x…] (parent [%x…])", i-1, chain[i-1].NumberU64(),
chain[i-1].Hash().Bytes()[:4], i, chain[i].NumberU64(), chain[i].Hash().Bytes()[:4], chain[i].ParentHash().Bytes()[:4])
}
}
// Pre-checks passed, start the full block imports
bc.wg.Add(1)
defer bc.wg.Done()
bc.chainmu.Lock()
defer bc.chainmu.Unlock()
// A queued approach to delivering events. This is generally
// faster than direct delivery and requires much less mutex
// acquiring.
var (
stats = insertStats{startTime: mclock.Now()}
events = make([]interface{}, 0, len(chain))
lastCanon *types.Block
coalescedLogs []*types.Log
)
// Start the parallel header verifier
headers := make([]*types.Header, len(chain))
seals := make([]bool, len(chain))
for i, block := range chain {
headers[i] = block.Header()
seals[i] = true
}
abort, results := bc.engine.VerifyHeaders(bc, headers, seals)
defer close(abort)
// Iterate over the blocks and insert when the verifier permits
for i, block := range chain {
// If the chain is terminating, stop processing blocks
if atomic.LoadInt32(&bc.procInterrupt) == 1 {
log.Debug("Premature abort during blocks processing")
break
}
// If the header is a banned one, straight out abort
if BadHashes[block.Hash()] {
bc.reportBlock(block, nil, ErrBlacklistedHash)
return i, events, coalescedLogs, ErrBlacklistedHash
}
// Wait for the block's verification to complete
bstart := time.Now()
err := <-results
if err == nil {
err = bc.Validator().ValidateBody(block)
}
switch {
case err == ErrKnownBlock:
// Block and state both already known. However if the current block is below
// this number we did a rollback and we should reimport it nonetheless.
if bc.CurrentBlock().NumberU64() >= block.NumberU64() {
stats.ignored++
continue
}
case err == consensus.ErrFutureBlock:
// Allow up to MaxFuture second in the future blocks. If this limit is exceeded
// the chain is discarded and processed at a later time if given.
max := big.NewInt(time.Now().Unix() + maxTimeFutureBlocks)
if block.Time().Cmp(max) > 0 {
return i, events, coalescedLogs, fmt.Errorf("future block: %v > %v", block.Time(), max)
}
bc.futureBlocks.Add(block.Hash(), block)
stats.queued++
continue
case err == consensus.ErrUnknownAncestor && bc.futureBlocks.Contains(block.ParentHash()):
bc.futureBlocks.Add(block.Hash(), block)
stats.queued++
continue
case err == consensus.ErrPrunedAncestor:
// Block competing with the canonical chain, store in the db, but don't process
// until the competitor TD goes above the canonical TD
currentBlock := bc.CurrentBlock()
localTd := bc.GetTd(currentBlock.Hash(), currentBlock.NumberU64())
externTd := new(big.Int).Add(bc.GetTd(block.ParentHash(), block.NumberU64()-1), block.Difficulty())
if localTd.Cmp(externTd) > 0 {
if err = bc.WriteBlockWithoutState(block, externTd); err != nil {
return i, events, coalescedLogs, err
}
continue
}
// Competitor chain beat canonical, gather all blocks from the common ancestor
var winner []*types.Block
parent := bc.GetBlock(block.ParentHash(), block.NumberU64()-1)
for !bc.HasState(parent.Root()) {
winner = append(winner, parent)
parent = bc.GetBlock(parent.ParentHash(), parent.NumberU64()-1)
}
for j := 0; j < len(winner)/2; j++ {
winner[j], winner[len(winner)-1-j] = winner[len(winner)-1-j], winner[j]
}
// Import all the pruned blocks to make the state available
bc.chainmu.Unlock()
_, evs, logs, err := bc.insertChain(winner)
bc.chainmu.Lock()
events, coalescedLogs = evs, logs
if err != nil {
return i, events, coalescedLogs, err
}
case err != nil:
bc.reportBlock(block, nil, err)
return i, events, coalescedLogs, err
}
// Create a new statedb using the parent block and report an
// error if it fails.
var parent *types.Block
if i == 0 {
parent = bc.GetBlock(block.ParentHash(), block.NumberU64()-1)
} else {
parent = chain[i-1]
}
state, err := state.New(parent.Root(), bc.stateCache)
if err != nil {
return i, events, coalescedLogs, err
}
// Process block using the parent state as reference point.
receipts, logs, usedGas, err := bc.processor.Process(block, state, bc.vmConfig)
if err != nil {
bc.reportBlock(block, receipts, err)
return i, events, coalescedLogs, err
}
// Validate the state using the default validator
err = bc.Validator().ValidateState(block, parent, state, receipts, usedGas)
if err != nil {
bc.reportBlock(block, receipts, err)
return i, events, coalescedLogs, err
}
proctime := time.Since(bstart)
// Write the block to the chain and get the status.
status, err := bc.WriteBlockWithState(block, receipts, state)
if err != nil {
return i, events, coalescedLogs, err
}
switch status {
case CanonStatTy:
log.Debug("Inserted new block", "number", block.Number(), "hash", block.Hash(), "uncles", len(block.Uncles()),
"txs", len(block.Transactions()), "gas", block.GasUsed(), "elapsed", common.PrettyDuration(time.Since(bstart)))
coalescedLogs = append(coalescedLogs, logs...)
blockInsertTimer.UpdateSince(bstart)
events = append(events, ChainEvent{block, block.Hash(), logs})
lastCanon = block
// Only count canonical blocks for GC processing time
bc.gcproc += proctime
case SideStatTy:
log.Debug("Inserted forked block", "number", block.Number(), "hash", block.Hash(), "diff", block.Difficulty(), "elapsed",
common.PrettyDuration(time.Since(bstart)), "txs", len(block.Transactions()), "gas", block.GasUsed(), "uncles", len(block.Uncles()))
blockInsertTimer.UpdateSince(bstart)
events = append(events, ChainSideEvent{block})
}
stats.processed++
stats.usedGas += usedGas
stats.report(chain, i, bc.stateCache.TrieDB().Size())
}
// Append a single chain head event if we've progressed the chain
if lastCanon != nil && bc.CurrentBlock().Hash() == lastCanon.Hash() {
events = append(events, ChainHeadEvent{lastCanon})
}
return 0, events, coalescedLogs, nil
}
1. First make sure the blocks being inserted are properly ordered and linked.
2. Call the consensus engine's bc.engine.VerifyHeaders to verify the headers. (Consensus verification is a complex topic; it will be analyzed when we cover the consensus mechanisms.)
3. If consensus verification passes, call bc.Validator().ValidateBody(block) to validate the block body; this only checks the uncle hash and the transaction-list hash against the header.
4. Depending on the ValidateBody result: if the block is not yet in the local chain but its parent sits in bc.futureBlocks, queue it in bc.futureBlocks as well. If the parent is a local block whose state has been pruned, recursively call bc.insertChain(winner) to re-import the pruned ancestors, and only insert this block once their state is available.
5. Obtain the parent block's state and call processor.Process() to execute the block's transactions, producing the receipts, logs and the block's resulting state. Process() executes every transaction contained in the block and generates receipts and logs from each transaction's execution and result. (In fast mode the receipt data is downloaded; in full mode the transactions are replayed locally to produce it.)
6. Call bc.Validator().ValidateState to check the locally generated receipts and consumed gas against the data carried by the received block: the gas used must match, the bloom filters must match, the hash derived from the receipts must match, and header.Root must match the root hash of the local state trie (see the sketch after this list).
7. Call bc.WriteBlockWithState(block, receipts, state) to write the block into the local chain and return a status. The status tells whether the block landed on the canonical chain or a side chain; if canonical, the processing time is added to bc.gcproc.
8. Return the resulting events and logs so they can be delivered to the subscribers of block-insertion events.
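The four comparisons in step 6 map one-to-one onto core.BlockValidator.ValidateState. The version below is quoted from memory of this era of go-ethereum (field names can differ slightly between releases), with descriptive comments added:
func (v *BlockValidator) ValidateState(block, parent *types.Block, statedb *state.StateDB, receipts types.Receipts, usedGas uint64) error {
	header := block.Header()
	// 1. The gas consumed while replaying the transactions must match the header.
	if block.GasUsed() != usedGas {
		return fmt.Errorf("invalid gas used (remote: %d local: %d)", block.GasUsed(), usedGas)
	}
	// 2. The bloom filter recomputed from the local receipts must match header.Bloom.
	rbloom := types.CreateBloom(receipts)
	if rbloom != header.Bloom {
		return fmt.Errorf("invalid bloom (remote: %x  local: %x)", header.Bloom, rbloom)
	}
	// 3. The trie root derived from the local receipts must match header.ReceiptHash.
	receiptSha := types.DeriveSha(receipts)
	if receiptSha != header.ReceiptHash {
		return fmt.Errorf("invalid receipt root hash (remote: %x local: %x)", header.ReceiptHash, receiptSha)
	}
	// 4. The root of the locally produced state trie must match header.Root.
	if root := statedb.IntermediateRoot(v.config.IsEIP158(header.Number)); header.Root != root {
		return fmt.Errorf("invalid merkle root (remote: %x local: %x)", header.Root, root)
	}
	return nil
}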
V. Next, let's analyze how bc.WriteBlockWithState works:
func (bc *BlockChain) WriteBlockWithState(block *types.Block, receipts []*types.Receipt, state *state.StateDB) (status WriteStatus, err error) {
bc.wg.Add(1)
defer bc.wg.Done()
// Calculate the total difficulty of the block
ptd := bc.GetTd(block.ParentHash(), block.NumberU64()-1)
if ptd == nil {
return NonStatTy, consensus.ErrUnknownAncestor
}
// Make sure no inconsistent state is leaked during insertion
bc.mu.Lock()
defer bc.mu.Unlock()
currentBlock := bc.CurrentBlock()
localTd := bc.GetTd(currentBlock.Hash(), currentBlock.NumberU64())
externTd := new(big.Int).Add(block.Difficulty(), ptd)
// Irrelevant of the canonical status, write the block itself to the database
if err := bc.hc.WriteTd(block.Hash(), block.NumberU64(), externTd); err != nil {
return NonStatTy, err
}
// Write other block data using a batch.
batch := bc.db.NewBatch()
if err := WriteBlock(batch, block); err != nil {
return NonStatTy, err
}
root, err := state.Commit(bc.chainConfig.IsEIP158(block.Number()))
if err != nil {
return NonStatTy, err
}
triedb := bc.stateCache.TrieDB()
// If we're running an archive node, always flush
if bc.cacheConfig.Disabled {
if err := triedb.Commit(root, false); err != nil {
return NonStatTy, err
}
} else {
// Full but not archive node, do proper garbage collection
triedb.Reference(root, common.Hash{}) // metadata reference to keep trie alive
bc.triegc.Push(root, -float32(block.NumberU64()))
if current := block.NumberU64(); current > triesInMemory {
// Find the next state trie we need to commit
header := bc.GetHeaderByNumber(current - triesInMemory)
chosen := header.Number.Uint64()
// Only write to disk if we exceeded our memory allowance *and* also have at
// least a given number of tries gapped.
var (
size = triedb.Size()
limit = common.StorageSize(bc.cacheConfig.TrieNodeLimit) * 1024 * 1024
)
if size > limit || bc.gcproc > bc.cacheConfig.TrieTimeLimit {
// If we're exceeding limits but haven't reached a large enough memory gap,
// warn the user that the system is becoming unstable.
if chosen < lastWrite+triesInMemory {
switch {
case size >= 2*limit:
log.Warn("State memory usage too high, committing", "size", size, "limit", limit, "optimum", float64(chosen-lastWrite)/triesInMemory)
case bc.gcproc >= 2*bc.cacheConfig.TrieTimeLimit:
log.Info("State in memory for too long, committing", "time", bc.gcproc, "allowance", bc.cacheConfig.TrieTimeLimit, "optimum", float64(chosen-lastWrite)/triesInMemory)
}
}
// If optimum or critical limits reached, write to disk
if chosen >= lastWrite+triesInMemory || size >= 2*limit || bc.gcproc >= 2*bc.cacheConfig.TrieTimeLimit {
triedb.Commit(header.Root, true)
lastWrite = chosen
bc.gcproc = 0
}
}
// Garbage collect anything below our required write retention
for !bc.triegc.Empty() {
root, number := bc.triegc.Pop()
if uint64(-number) > chosen {
bc.triegc.Push(root, number)
break
}
triedb.Dereference(root.(common.Hash), common.Hash{})
}
}
}
if err := WriteBlockReceipts(batch, block.Hash(), block.NumberU64(), receipts); err != nil {
return NonStatTy, err
}
// If the total difficulty is higher than our known, add it to the canonical chain
// Second clause in the if statement reduces the vulnerability to selfish mining.
// Please refer to http://www.cs.cornell.edu/~ie53/publications/btcProcFC.pdf
reorg := externTd.Cmp(localTd) > 0
currentBlock = bc.CurrentBlock()
if !reorg && externTd.Cmp(localTd) == 0 {
// Split same-difficulty blocks by number, then at random
reorg = block.NumberU64() < currentBlock.NumberU64() || (block.NumberU64() == currentBlock.NumberU64() && mrand.Float64() < 0.5)
}
if reorg {
// Reorganise the chain if the parent is not the head block
if block.ParentHash() != currentBlock.Hash() {
if err := bc.reorg(currentBlock, block); err != nil {
return NonStatTy, err
}
}
// Write the positional metadata for transaction and receipt lookups
if err := WriteTxLookupEntries(batch, block); err != nil {
return NonStatTy, err
}
// Write hash preimages
if err := WritePreimages(bc.db, block.NumberU64(), state.Preimages()); err != nil {
return NonStatTy, err
}
status = CanonStatTy
} else {
status = SideStatTy
}
if err := batch.Write(); err != nil {
return NonStatTy, err
}
// Set new head.
if status == CanonStatTy {
bc.insert(block)
}
bc.futureBlocks.Remove(block.Hash())
return status, nil
}
1. Fetch the parent's total difficulty from the database, add the block's difficulty to compute the new total difficulty, and write it to the database.
2. Call WriteBlock(batch, block) to write the block's header and body to the database.
3. Call state.Commit(bc.chainConfig.IsEIP158(block.Number())) to commit the state to the database and obtain the state root.
4. Maintain the bc.stateCache trie cache according to the configured limits and run the trie garbage collector.
5. Call WriteBlockReceipts to write the receipts to the database.
6. If the block's parent is not the current local head, call bc.reorg(currentBlock, block); if the new branch has a higher total difficulty than the old head, its blocks are inserted into the chain one by one.
7. Call WriteTxLookupEntries to build database lookup entries keyed by transaction hash, and WritePreimages to store the preimages, which the EVM produces when it executes the SHA3 instruction.
8. If the new total difficulty is greater than the local one (or tied, with the tie broken as shown in the sketch below), the block is a canonical-chain block: bc.insert(block) updates the blockchain's currentBlock, currentHeader and currentFastBlock. A side-chain block does not update them.
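The fork-choice rule in step 8 can be isolated into a tiny standalone program. This is an illustration of the rule the code above applies, not a geth function:
package main

import (
	"fmt"
	"math/big"
	mrand "math/rand"
)

// canonicalPreferred mirrors the decision in WriteBlockWithState: a block becomes
// the new canonical head if its total difficulty beats the local head's; an exact
// tie is broken in favour of the lower block number, and otherwise at random
// (which reduces the advantage of selfish mining).
func canonicalPreferred(externTd, localTd *big.Int, newNumber, currentNumber uint64) bool {
	if externTd.Cmp(localTd) > 0 {
		return true
	}
	if externTd.Cmp(localTd) == 0 {
		return newNumber < currentNumber || (newNumber == currentNumber && mrand.Float64() < 0.5)
	}
	return false
}

func main() {
	fmt.Println(canonicalPreferred(big.NewInt(1001), big.NewInt(1000), 42, 42)) // true: higher TD always wins
	fmt.Println(canonicalPreferred(big.NewInt(1000), big.NewInt(1000), 41, 42)) // true: tie broken by lower number
	fmt.Println(canonicalPreferred(big.NewInt(999), big.NewInt(1000), 41, 42))  // false: lower TD stays a side chain
}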
VI. When analyzing the Downloader we saw that only full mode calls InsertChain(); fast mode calls InsertReceiptChain() instead. Let's see what InsertReceiptChain() does and how it differs from InsertChain().
func (bc *BlockChain) InsertReceiptChain(blockChain types.Blocks, receiptChain []types.Receipts) (int, error) {
bc.wg.Add(1)
defer bc.wg.Done()
// Do a sanity check that the provided chain is actually ordered and linked
for i := 1; i < len(blockChain); i++ {
if blockChain[i].NumberU64() != blockChain[i-1].NumberU64()+1 || blockChain[i].ParentHash() != blockChain[i-1].Hash() {
log.Error("Non contiguous receipt insert", "number", blockChain[i].Number(), "hash", blockChain[i].Hash(), "parent", blockChain[i].ParentHash(),
"prevnumber", blockChain[i-1].Number(), "prevhash", blockChain[i-1].Hash())
return 0, fmt.Errorf("non contiguous insert: item %d is #%d [%x…], item %d is #%d [%x…] (parent [%x…])", i-1, blockChain[i-1].NumberU64(),
blockChain[i-1].Hash().Bytes()[:4], i, blockChain[i].NumberU64(), blockChain[i].Hash().Bytes()[:4], blockChain[i].ParentHash().Bytes()[:4])
}
}
var (
stats = struct{ processed, ignored int32 }{}
start = time.Now()
bytes = 0
batch = bc.db.NewBatch()
)
for i, block := range blockChain {
receipts := receiptChain[i]
// Short circuit insertion if shutting down or processing failed
if atomic.LoadInt32(&bc.procInterrupt) == 1 {
return 0, nil
}
// Short circuit if the owner header is unknown
if !bc.HasHeader(block.Hash(), block.NumberU64()) {
return i, fmt.Errorf("containing header #%d [%x…] unknown", block.Number(), block.Hash().Bytes()[:4])
}
// Skip if the entire data is already known
if bc.HasBlock(block.Hash(), block.NumberU64()) {
stats.ignored++
continue
}
// Compute all the non-consensus fields of the receipts
SetReceiptsData(bc.chainConfig, block, receipts)
// Write all the data out into the database
if err := WriteBody(batch, block.Hash(), block.NumberU64(), block.Body()); err != nil {
return i, fmt.Errorf("failed to write block body: %v", err)
}
if err := WriteBlockReceipts(batch, block.Hash(), block.NumberU64(), receipts); err != nil {
return i, fmt.Errorf("failed to write block receipts: %v", err)
}
if err := WriteTxLookupEntries(batch, block); err != nil {
return i, fmt.Errorf("failed to write lookup metadata: %v", err)
}
stats.processed++
if batch.ValueSize() >= ethdb.IdealBatchSize {
if err := batch.Write(); err != nil {
return 0, err
}
bytes += batch.ValueSize()
batch.Reset()
}
}
if batch.ValueSize() > 0 {
bytes += batch.ValueSize()
if err := batch.Write(); err != nil {
return 0, err
}
}
// Update the head fast sync block if better
bc.mu.Lock()
head := blockChain[len(blockChain)-1]
if td := bc.GetTd(head.Hash(), head.NumberU64()); td != nil { // Rewind may have occurred, skip in that case
currentFastBlock := bc.CurrentFastBlock()
if bc.GetTd(currentFastBlock.Hash(), currentFastBlock.NumberU64()).Cmp(td) < 0 {
if err := WriteHeadFastBlockHash(bc.db, head.Hash()); err != nil {
log.Crit("Failed to update head fast block hash", "err", err)
}
bc.currentFastBlock.Store(head)
}
}
bc.mu.Unlock()
log.Info("Imported new block receipts",
"count", stats.processed,
"elapsed", common.PrettyDuration(time.Since(start)),
"number", head.Number(),
"hash", head.Hash(),
"size", common.StorageSize(bytes),
"ignored", stats.ignored)
return 0, nil
}
1. First make sure the blocks being inserted are properly ordered and linked.
2. Call SetReceiptsData(bc.chainConfig, block, receipts) to fill in the derived, non-consensus fields of the received receipts from the block data (see the sketch after this list).
3. Call WriteBody() to write the block body to the database.
4. Call WriteBlockReceipts() to write the receipts to the database.
5. Call WriteTxLookupEntries to build database lookup entries keyed by transaction hash.
6. Update the blockchain's currentFastBlock.
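Step 2 is worth a closer look, because it explains why fast sync can store receipts it never produced itself: only the consensus-critical parts of a receipt are in the downloaded data, and everything else can be recomputed from the block. Below is a paraphrased sketch of what SetReceiptsData fills in, based on this era of go-ethereum; the exact signature and error handling vary between versions:
func SetReceiptsData(config *params.ChainConfig, block *types.Block, receipts types.Receipts) {
	signer := types.MakeSigner(config, block.Number())
	transactions, logIndex := block.Transactions(), uint(0)

	for j := 0; j < len(receipts); j++ {
		// The transaction hash comes straight from the matching transaction.
		receipts[j].TxHash = transactions[j].Hash()
		// For contract creations, recover the sender and derive the contract address.
		if transactions[j].To() == nil {
			from, _ := types.Sender(signer, transactions[j])
			receipts[j].ContractAddress = crypto.CreateAddress(from, transactions[j].Nonce())
		}
		// Per-transaction gas is the difference of consecutive cumulative gas values.
		if j == 0 {
			receipts[j].GasUsed = receipts[j].CumulativeGasUsed
		} else {
			receipts[j].GasUsed = receipts[j].CumulativeGasUsed - receipts[j-1].CumulativeGasUsed
		}
		// Position every log inside its block and transaction.
		for k := 0; k < len(receipts[j].Logs); k++ {
			receipts[j].Logs[k].BlockNumber = block.NumberU64()
			receipts[j].Logs[k].BlockHash = block.Hash()
			receipts[j].Logs[k].TxHash = receipts[j].TxHash
			receipts[j].Logs[k].TxIndex = uint(j)
			receipts[j].Logs[k].Index = logIndex
			logIndex++
		}
	}
}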