allada / bsc-archive-snapshot

Free public Binance Smart Chain (BSC) Archive Snapshot

Procedure to create new latest

Denis-score opened this issue · comments

Hello,

Can you please share with us the procedure you follow to stop the current latest, make it a "readonly" node, and start the new latest?

Many thanks

Here's the script I use:

#!/bin/bash
set -ex
sudo service bsc-geth-archive-latest stop
sudo rm -rf /latest/geth/bsc.log*
sudo zfs destroy tank/latest@snap || true
sudo zfs snap tank/latest@snap

cd /latest
/geth/readonly.sh 6340 &
READ_ONLY_PID=$!

# Wait for port to start listening
while ! nc -z localhost 6340; do   
  sleep 0.5
done
sleep 1

rm -rf /tmp/get_range || true
mkdir /tmp/get_range
sh -c 'cd /tmp/get_range && yarn add ethers'
cat <<'EOF' > /tmp/get_range/get_range.js
const ethers = require('ethers');
async function main() {
  const provider = new ethers.providers.WebSocketProvider('ws://127.0.0.1:6340');
  const KNOWN_ADDRESS = '0x0000000000000000000000000000000000001004';
  const lastBlock = await provider.getBlockNumber();
  let left = 0;
  let right = lastBlock;
  while (left <= right) {
    const mid = Math.floor((right + left) / 2);
    try {
      await provider.getBalance(KNOWN_ADDRESS, mid);
      right = mid - 1;
    } catch (e) {
      left = mid + 1;
    }
  }
  console.log(`${left}-${lastBlock}`);
  await provider.destroy();
}
main();
EOF
RANGE=$(node /tmp/get_range/get_range.js)
kill $READ_ONLY_PID || true
wait $READ_ONLY_PID

sudo zfs rollback tank/latest@snap
sleep 3
sudo zfs destroy tank/latest@snap || true

/geth/geth --config /geth/config/latest.toml --datadir $PWD/geth db compact

tar cf - geth | /zstd/zstd -6 -T12 -v | aws s3 cp - s3://public-blockchain-snapshots/bsc/$RANGE.tar.zstd --expected-size $(( $(zfs list tank/latest -o refer -H | numfmt --from=iec) * 10 / 8 ))
sudo zfs snap tank/latest@snap || true
cd /latest
/geth/prune.sh
sudo service bsc-geth-archive-latest start
echo "DONE"
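The get_range.js helper binary-searches for the earliest block whose state the node can still serve: getBalance succeeds for blocks whose state was retained and throws for pruned ones, so the loop converges on the boundary. Here's a self-contained sketch of that same search, with the provider replaced by a stub (the cutoff block number is invented purely for illustration):

```javascript
// Stub standing in for provider.getBalance(addr, block): it throws for
// blocks below a pruned cutoff and succeeds at or above it.
// PRUNED_BELOW is a made-up value, not a real chain height.
const PRUNED_BELOW = 30700000;
async function getBalanceStub(block) {
  if (block < PRUNED_BELOW) throw new Error('missing trie node');
  return 0n;
}

// Same binary search as get_range.js: find the first block with state.
async function findFirstBlockWithState(lastBlock) {
  let left = 0;
  let right = lastBlock;
  while (left <= right) {
    const mid = Math.floor((left + right) / 2);
    try {
      await getBalanceStub(mid);
      right = mid - 1; // state exists: boundary is at mid or earlier
    } catch (e) {
      left = mid + 1;  // state pruned: boundary is after mid
    }
  }
  return left;         // first block whose state is available
}

findFirstBlockWithState(31000000).then(b => console.log(`${b}-31000000`));
```

The returned value is the first block with available state, which is why the snapshot ends up named `<first>-<last>.tar.zstd`.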

Notes

  • You don't need to run the db compact command; it just makes access faster
  • This setup does require ZFS-formatted drives
  • If your machine is powerful enough, some ZFS magic can allow you to prune, compact, upload, and run a node all at the same time. This would dramatically reduce downtime from hours/days to seconds, but it requires a significant amount of compute resources and disk space.
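One possible shape of that "ZFS magic" (an untested sketch, not the author's actual setup; the tank/upload dataset name and @upload snapshot name are assumptions) is to clone a snapshot and compact/upload from the clone while the live node keeps running:

```shell
#!/bin/bash
set -ex
# Snapshot the live dataset without stopping the node.
sudo zfs snap tank/latest@upload
# Make a writable clone of the snapshot; the running node is untouched.
# (Assumes the pool mounts at /tank, so the clone appears at /tank/upload.)
sudo zfs clone tank/latest@upload tank/upload
cd /tank/upload
# Compact and upload from the clone while the node serves traffic.
/geth/geth --config /geth/config/latest.toml --datadir $PWD/geth db compact
tar cf - geth | /zstd/zstd -6 -T12 -v | \
  aws s3 cp - s3://public-blockchain-snapshots/bsc/$RANGE.tar.zstd
# Discard the clone and snapshot when done.
cd /
sudo zfs destroy tank/upload
sudo zfs destroy tank/latest@upload
```

Computing $RANGE would still need the readonly-node trick from the main script, pointed at the clone rather than the live dataset.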