BEP (Idea v0.1): Segmented History Data Maintenance

1.Summary

This BEP proposes a practical solution to the problem of ever-growing history data storage on BSC (BNB Smart Chain), so that full nodes only need to maintain a limited range of blocks.

2.Motivation

History data includes block headers, bodies and receipts; none of it is needed to execute the latest blocks. A storage profile taken on Jul-02-2023 shows that the history data has reached ~1288 GB, which makes running a BSC full node a big burden. As BSC keeps producing new blocks, the history data also keeps growing.

Actually, the history data can be maintained by archive nodes or by a DA layer; a full node does not need to keep a full copy of it. But simply deleting the history data is not acceptable either: it would throw the P2P network into chaos and make syncing hard.

We need a rule so that full nodes only have to keep a recent period of blocks, e.g. several months, ideally keeping the bounded history data within 200GB. This would reduce both the nodes' storage pressure and the network traffic cost.

3.Specification

Parameter            | Value      | Description
---------------------|------------|------------------------------------------
BoundStartBlock      | 31,268,530 | The starting block height of the first segment. Block 31,268,530 is expected to be produced on Aug-29-2023, 3 years after the BSC genesis block.
HistorySegmentLength | 2,592,000  | Assuming 1 block every 3 seconds, 2,592,000 blocks are produced in ~90 days.

3.1.General Workflow

The history data is divided into segments. The 1st segment is segment_0, which runs from genesis to BoundStartBlock-1; all following segments have the same length: HistorySegmentLength.

HistorySegment_0 = [0, BoundStartBlock)
HistorySegment_1 = [BoundStartBlock, BoundStartBlock+HistorySegmentLength)
...
HistorySegment_N = [BoundStartBlock+(N-1)*HistorySegmentLength, BoundStartBlock+N*HistorySegmentLength)
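
For concreteness, with BoundStartBlock = 31,268,530 and HistorySegmentLength = 2,592,000 as proposed above, the first few segments would be (illustrative arithmetic only):

HistorySegment_0 = [0, 31,268,530)
HistorySegment_1 = [31,268,530, 33,860,530)
HistorySegment_2 = [33,860,530, 36,452,530)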

A BSC node only needs to maintain the latest 2 segments. To be robust against block reorgs, the current segment index is calculated from the "finalized" block.

ImmutabilityThreshold = 90000 // tentative value, to be decided

// GetFinalizedBlock returns the block that is safe to treat as final: the
// block finalized by attestation (FastFinality) or the block that is
// ImmutabilityThreshold blocks behind the current head, whichever is higher.
func GetFinalizedBlock() uint64 {
  blockByAttestation := GetFinalizedByAttestation()
  blockByThreshold := GetCurrentBlockNumber() - ImmutabilityThreshold
  return max(blockByAttestation, blockByThreshold)
}

// GetCurrentSegmentIndex maps a block number to its history segment index.
func GetCurrentSegmentIndex(blockNum uint64) uint64 {
  if blockNum < BoundStartBlock {
    return 0
  }
  boundBlocks := blockNum - BoundStartBlock
  return boundBlocks/HistorySegmentLength + 1
}

finalizedBlock := GetFinalizedBlock()
segIndex := GetCurrentSegmentIndex(finalizedBlock)
if segIndex >= 2 {
  // segments with index <= segIndex-2 (i.e. all but the latest 2) can be pruned
}
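
As a worked example (the finalized block number 35,000,000 below is just an illustration, not a real milestone):

finalizedBlock = 35,000,000
segIndex = (35,000,000 - 31,268,530)/2,592,000 + 1 = 1 + 1 = 2
// keep HistorySegment_1 and HistorySegment_2, prune HistorySegment_0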

3.2.Prune

The offline block prune needs to be boundary aligned and must leave the most recent 2 segments. The most recent segment is determined by the finalized block: since FastFinality is about to be enabled, we can make use of that feature to determine the finalized block and then derive the current segment index.
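
A minimal sketch of what a boundary-aligned offline prune could look like, reusing the helpers above; the AncientStore interface and its TruncateBelow method are hypothetical names for illustration, not the actual go-ethereum freezer API:

// AncientStore is a hypothetical view of the on-disk history store.
type AncientStore interface {
  // TruncateBelow drops headers, bodies and receipts of all blocks < bound.
  TruncateBelow(bound uint64) error
}

// PruneHistory removes every segment older than the latest two, aligned to
// segment boundaries. It is meant to run offline, while the node is stopped.
func PruneHistory(store AncientStore, finalizedBlock uint64) error {
  segIndex := GetCurrentSegmentIndex(finalizedBlock)
  if segIndex < 2 {
    return nil // fewer than three segments exist, nothing to prune yet
  }
  // Keep segments segIndex-1 and segIndex; everything before the start of
  // segment segIndex-1 can be dropped.
  bound := BoundStartBlock + (segIndex-2)*HistorySegmentLength
  return store.TruncateBelow(bound)
}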

3.3.Node Sync

It would be difficult to sync from genesis, since most nodes may choose not to preserve these old blocks, but it is still possible as long as some nodes keep the whole history data.

There could be two approaches to syncing after this proposal:

  • directly download a snapshot of a recent state, from a snapshot service provider or a DA layer like Greenfield
  • segment based snap sync: the user takes the boundary block hash as the new GenesisHash and starts from it directly (see the sketch below)
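
A rough sketch of the second approach; the SegmentBoundary type, the PickSyncAnchor helper and the idea of shipping known boundary hashes with the client are illustrative assumptions, not part of the spec:

// SegmentBoundary describes a known, trusted segment start that can act as a
// "new genesis" for segment based snap sync.
type SegmentBoundary struct {
  Index      uint64   // history segment index
  StartBlock uint64   // first block of the segment
  StartHash  [32]byte // canonical hash of StartBlock
}

// PickSyncAnchor returns the boundary of the previous segment relative to the
// network's finalized head, so a fresh node downloads at most the latest two
// segments instead of the whole chain.
func PickSyncAnchor(finalized uint64, boundaries []SegmentBoundary) *SegmentBoundary {
  segIndex := GetCurrentSegmentIndex(finalized)
  if segIndex < 1 {
    return nil // still inside segment 0, sync from genesis as before
  }
  for i := range boundaries {
    if boundaries[i].Index == segIndex-1 {
      return &boundaries[i]
    }
  }
  return nil // no known boundary, fall back to full sync
}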

3.4.P2P Protocol

There would also be some changes to the current P2P protocol, since nodes no longer maintain the whole history data. There would be an additional negotiation step to check whether the remote peer has the desired blocks before the connection is established.

[Diagram: a simple illustration of the peer negotiation procedure]
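
As a rough textual sketch of this negotiation step, the snippet below shows what the extra handshake data could contain; the ChainSummary fields and the shouldKeepPeer helper are illustrative assumptions, not an existing eth protocol message:

// ChainSummary is a hypothetical handshake extension: each peer advertises
// which part of the history it still keeps.
type ChainSummary struct {
  OldestBlock   uint64 // first block this peer can still serve
  LatestBlock   uint64 // current head of this peer
  OldestSegment uint64 // index of the oldest retained history segment
}

// shouldKeepPeer decides whether a remote peer is useful: it must be able to
// serve blocks starting from the lowest block we still need to download.
func shouldKeepPeer(remote ChainSummary, needFrom uint64) bool {
  return remote.OldestBlock <= needFrom
}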

3.5.Data Availability

Some nodes, like archive nodes, would keep maintaining the whole history data.

Meanwhile, a DA layer like Greenfield could be used to make sure the whole block data stays available.

4.Rationale

4.1.BoundStartBlock & HistorySegmentLength

As the current history data is already very large, we prefer to enable this proposal sooner rather than later, so BoundStartBlock is set to 31,268,530. This is still more than one month ahead, which should be an acceptable date.

HistorySegmentLength is set to 2,592,000 (~90 days on BSC). We did a profile: over the past 6 months (Jan-2023 to Jul-2023), ~1.2GB of history data was generated per day on average. Traffic was somewhat low during this period, though; it could triple if a bull market starts, i.e. ~3.6GB per day. To keep the historical data size within 500GB in a bull market and 200GB in a bear market, 90 days could be a reasonable value.
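
A back-of-the-envelope check of these targets; the per-day figures come from the profile above, while the "~1.5 segments retained on average" figure is an assumption (a node keeps the previous full segment plus the partially filled current one, i.e. between 1 and 2 segments):

package main

import "fmt"

func main() {
  const segmentDays = 90.0
  // Between 1 and 2 segments are retained at any time, ~1.5 on average.
  avgRetainedDays := 1.5 * segmentDays

  bearGBPerDay := 1.2 // observed Jan-2023 to Jul-2023 average
  bullGBPerDay := 3.6 // assumed ~3x bear-market traffic

  fmt.Printf("bear market: ~%.0f GB retained\n", avgRetainedDays*bearGBPerDay) // ~162 GB
  fmt.Printf("bull market: ~%.0f GB retained\n", avgRetainedDays*bullGBPerDay) // ~486 GB
}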

5.Forward Compatibility

5.1.Portal Network

The Portal Network is a hot topic for relieving storage pressure. Once it is ready, it could replace this proposal if it offers a more applicable solution.

6.Backward Compatibility

6.1.Archive Node

If you run an archive node, you can simply keep all the history data; there is no impact on its operation.

6.2.RPC API

If users query history data that has been pruned, the node could return a new error code indicating that the data has expired and been removed.
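
A minimal sketch of such an error, assuming a dedicated error type on the node side; the type name, its fields and the -38020 code are placeholders, not values defined by this proposal:

import "fmt"

// HistoryPrunedError signals that the requested block, body or receipt falls
// into a history segment that this node has already pruned.
type HistoryPrunedError struct {
  Requested  uint64 // block number the caller asked for
  OldestKept uint64 // first block still available on this node
}

func (e *HistoryPrunedError) Error() string {
  return fmt.Sprintf("block %d has been pruned, oldest available block is %d",
    e.Requested, e.OldestKept)
}

// ErrorCode returns a dedicated JSON-RPC error code (placeholder value).
func (e *HistoryPrunedError) ErrorCode() int { return -38020 }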

7.License

The content is licensed under CC0.


Does "offline" mean the prune task only works after the node is shut down?
Because if you execute the prune task on a running node, the chain summary for the P2P connection may change, which may result in P2P peers being disconnected.

Another concern: in the long run most people may choose the prune mode to run a full node. Is it necessary to design an incentive model to make sure there are always enough full nodes or decentralized servers providing the whole chain data?

1. Yes, offline means the node is shut down. As you mentioned, supporting in-line (online) prune could be complicated. IMO offline prune is acceptable, since the prune should be fast and users do not need to prune frequently, maybe once per year?

2. Yes, very good advice. But it is also a little complicated; as a first step, there are a few kinds of data availability:

  • some archive nodes keep the full history
  • a centralized download service, like the bsc-snapshot download (GitHub - bnb-chain/bsc-snapshots)
  • decentralized storage: Greenfield, Filecoin, Arweave…

As a long-term solution, we may need a new incentive model together with a decentralized network protocol (like the Portal Network?).

All in all, this proposal aims to provide a simple way of maintaining history data, even though it may not be a perfect one.


Got it! There is no perfect solution, only better attempts. Good luck.

FAQ:
Q1.How To Sync?
Q2.What Will The Ancient Folder Look Like?
Q3.Prune And Recover?
Q4.PBSS Support?