openethereum / parity-ethereum

The fast, light, and robust client for Ethereum-like networks.

Ropsten network split on Parity v2.5.13 / 2.7.2 / 3.0.0+

gituser opened this issue

  • OpenEthereum version: parity-v2.5.13-stable-253ff3f-20191231
  • Operating system: Linux
  • Installation: built from source
  • Fully synchronized: yes
  • Network: ropsten
  • Restarted: yes

It seems the Ropsten network had a chain split at block https://ropsten.etherscan.io/block/8563600

Parity v2.5.13 is stuck at block #8579999, while according to https://ropsten.etherscan.io the latest block is #8578042.

Parity somehow followed a forked chain at block #8563600 (the hash of this block differs from the one on ropsten.etherscan.io, while the previous block #8563599 has a matching hash).

> eth.getBlock(8563600);
{
  author: "0x1b3de7fce5b35eabda39ddfafc8a9dfc03cb85be",
  difficulty: 17144366,
  extraData: "0x746573742e756c6579706f6f6c2e636f6d",
  gasLimit: 8000000,
  gasUsed: 698339,
  hash: "0x4fc898095ce76eb03e1aef45dab8163fcb5b8e4333bfdf2228b56b0e3f92e7ea",
  logsBloom: "0x0300000000000200000000000020000000000000000000000000000000000020000000000000101000000000800002000000000000000000000042000020000000001008000040000000000c000001000000000000000020000000000000002000000000000000000002000000000000022000000002004000040010000040400000000000000000000002010000000000000100000000000000010000000000020000000080000810000002110000000000200400000020000020001100200200000002000080010000000080000000000400000000000000000000004002000010000000000000200000040000000000000000002000000000000000800000",
  miner: "0x1b3de7fce5b35eabda39ddfafc8a9dfc03cb85be",
  mixHash: "0xd78b88a7c8acc98096e829af495f07207e4c0f60a40150689caf9346f47a20a7",
  nonce: "0xdf9500000051851c",
  number: 8563600,
  parentHash: "0x3ce09d8b044489cbd67e54575cbf0a6a5ff60cad4355177c46ec4c4665634191",
  receiptsRoot: "0x56987c0b5bade11f79e42a32d641fede105fc5819d371e44163d79e6dccdfdad",
  sealFields: ["0xa0d78b88a7c8acc98096e829af495f07207e4c0f60a40150689caf9346f47a20a7", "0x88df9500000051851c"],
  sha3Uncles: "0x1dcc4de8dec75d7aab85b567b6ccd41ad312451b948a7413f0a142fd40d49347",
  size: 2823,
  stateRoot: "0x816649cdad360503290da1d3feea6c16108b87f9c9543199c61e514436aaa2c8",
  timestamp: 1598372610,
  totalDifficulty: 31408038823054793,
  transactions: ["0xb68006f0a38f5181bafdeb546d00f86f3adc1049410ecc2d52abeef331507e63", "0x27638196d23667a96e96b84f9f7f04b0c5e7c5a379fcd1677fa4c78a85d29824", "0x92a43efdb54354f482a1449e0784260cc97da5ba567d3ad1d454f1fd49f2316e", "0x28c8294662a80e0d832fe4662e624be27901fc133ad5675a53d88105619c4695", "0x25f3c6ca0cc29a110f21280ceb387fffef7c8726ff726c829abfed50bf289e47", "0x6548e20c2e95fa593cb5cbcc63fb493530f9cd5ec85c08b4e7ea14411505d524", "0x74c9327c4d48bc738d7f760d9824174e1024a2e42c2b11d0f9ccdc02b2c49b37", "0x67a890bb03c4e6637f44855ee3e3b104798b3e814b7d125e05a7c14606b7120f", "0x2c38ae2c83f138e048968c7dbaeb73d9594529cc11b30b0a0b6b6d77dc23db6f", "0xbf1b9d8dc778ecfd375549d21e0af4db0fa937e69e27537c1b1fb28daa6af874", "0x91b2baa96f2e5112137b026132c9846709a3b58150e9856b09f224b4983a9c85", "0xe42371c2e8a80ae8993cdb361d3eccf31eff5c060c196d556130fe2c1fd0f7c9", "0x28f9b6c7e09ef2ac29a0fdf6785758f7b2a579b9580d902afda0da4b8bb8218a", "0x69448b6be0e02dd6243093e4ab5069cf05a0b4a7c6fb6f51ad7a299c3f8dfd16", "0xe9b7345ad3b75f26955643863b778bae719572e71d088335a2d667620d92a207", "0x5e8650ddb5c96c1aa52f2074a464a6686368badce1dfc205ba9a43ed57d9659c"],
  transactionsRoot: "0x3a5cf086fb275e03d3fc67ae95353a0cc528c11d33f73ed569895477e431f4c6",
  uncles: []
}
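
For anyone who wants to cross-check their own node, the same block can be queried over JSON-RPC and its hash compared with https://ropsten.etherscan.io/block/8563600 (a sketch, assuming the default HTTP-RPC endpoint at 127.0.0.1:8545; 0x82ab90 is 8563600 in hex):

# Ask the local node which hash it has for block 8563600
curl -s -X POST -H 'Content-Type: application/json' \
  --data '{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["0x82ab90",false],"id":1}' \
  http://127.0.0.1:8545 | grep -o '"hash":"0x[0-9a-f]*"'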

From the parity logs:

  1. Initially it imports the correct block, with hash 0xeda7…6a26, which matches https://ropsten.etherscan.io/block/8563600:
2020-08-25 19:23:30  Verifier #13 INFO import  Imported #8563599 0x3ce0…4191 (29 txs, 1.05 Mgas, 8 ms, 4.62 KiB)
2020-08-25 19:23:37  Verifier #2 INFO import  Imported #8563600 0xeda7…6a26 (16 txs, 0.70 Mgas, 6 ms, 2.76 KiB)

  2. Then a reorg happens:
2020-08-25 19:23:53  Verifier #8 INFO reorg  Reorg to #8563601 0x1eac…0714 (0xeda7…6a26 #8563599 0x3ce0…4191 0x4fc8…e7ea)
2020-08-25 19:23:53  Verifier #8 INFO import  Imported #8563601 0x1eac…0714 (12 txs, 1.61 Mgas, 63 ms, 5.35 KiB) + another 1 block(s) containing 16 tx(s)
2020-08-25 19:23:54  Verifier #10 INFO import  Imported #8563602 0x9d64…2639 (2 txs, 0.04 Mgas, 2 ms, 0.75 KiB)
2020-08-25 19:23:57  Verifier #0 INFO import  Imported #8563603 0x78f3…6c9d (5 txs, 0.27 Mgas, 3 ms, 1.41 KiB)
2020-08-25 19:24:13  Verifier #1 INFO import  Imported #8563604 0x3f8f…3a52 (16 txs, 1.12 Mgas, 16 ms, 3.58 KiB)
2020-08-25 19:24:13  IO Worker #3 INFO import    26/50 peers     84 MiB chain  122 MiB db  0 bytes queue  250 KiB sync  RPC:  0 conn,    0 req/s,   29 µs
2020-08-25 19:24:26  Verifier #13 INFO import  Imported #8563605 0x4cf2…5e9a (21 txs, 0.92 Mgas, 6 ms, 4.70 KiB)
2020-08-25 19:24:43  IO Worker #2 INFO import    26/50 peers     84 MiB chain  122 MiB db  0 bytes queue  250 KiB sync  RPC:  0 conn,    6 req/s,   24 µs
2020-08-25 19:24:48  Verifier #3 INFO import  Imported #8563606 0x5313…843e (31 txs, 1.62 Mgas, 18 ms, 6.43 KiB)
2020-08-25 19:24:53  Verifier #7 INFO import  Imported #8563607 0x5e31…82d5 (3 txs, 0.13 Mgas, 2 ms, 1.01 KiB)
2020-08-25 19:24:56  Verifier #9 INFO import  Imported #8563608 0x2f7d…2278 (0 txs, 0.00 Mgas, 0 ms, 0.53 KiB)
2020-08-25 19:25:02  http.worker20 WARN rpc  eth_accounts is deprecated and will be removed in future versions: Account management is being phased out see #9997 for alternatives.
2020-08-25 19:25:02  Verifier #5 INFO import  Imported #8563609 0xa0ee…3fbc (10 txs, 1.73 Mgas, 59 ms, 2.97 KiB)
2020-08-25 19:25:10  Verifier #11 INFO import  Imported #8563610 0x6bec…9c9c (13 txs, 0.59 Mgas, 15 ms, 3.06 KiB)
2020-08-25 19:25:13  IO Worker #3 INFO import    26/50 peers     85 MiB chain  122 MiB db  0 bytes queue  250 KiB sync  RPC:  0 conn,    0 req/s,    7 µs
2020-08-25 19:25:20  Verifier #12 INFO import  Imported #8563612 0x8c12…46b3 (13 txs, 1.64 Mgas, 11 ms, 4.55 KiB) + another 1 block(s) containing 1 tx(s)
2020-08-25 19:25:22  Verifier #1 INFO import  Imported #8563613 0xa55d…dd76 (1 txs, 0.02 Mgas, 0 ms, 0.64 KiB)
2020-08-25 19:25:25  Verifier #6 INFO import  Imported #8563614 0xb36a…cf67 (4 txs, 0.27 Mgas, 6 ms, 1.33 KiB)

It might be a similar issue to what happened on Ethereum Classic some time ago: https://github.com/openethereum/openethereum/issues/11843

I'm running parity with --pruning-history 1024, so I can't even roll back since the chain head is already far more than 1024 blocks past the fork point, and I'm not sure a full re-sync would help in this case.

Any ideas how to fix this issue?

It seems this issue affects v2.7.2 as well (from Gitter):

MinwooJ @MinwooJ Aug 28 09:00
Hello~

I'm running a Ropsten testnet node through Parity (v2.7.x), and it's stuck syncing at block #8579999.
But when you look at Etherscan, it's currently syncing from #8569345, and some other explorers are stuck at #8579999 like mine.
Ropsten's chain seems to have split, so I'd like to know which is right.

This affected all my parity nodes. sudo parity --chain ropsten db kill and subsequent default resync did not fix the issue. Stalling again at block 8579999.

Currently attempting a default resync with openethereum v3.0.0-stable

Both of my attempts with v3.0.0-stable resynced to #8579999:

  1. killing the db and warp syncing via the default settings
  2. killing the db and warp syncing using --warp-barrier 8563500

I can confirm the issue: no new blocks after #8579999 on the Docker image parity/parity:v2.5.13-stable.

Any reaction from the OpenEthereum developers?

Ping @adria0 @rakita @vorot93 @seunlanlege @denisgranha @sorpaas

Hello, sorry for not responding, to be honest, I don't have anything to add.

There were multiple reorgs; the first one was 194 blocks deep at #8568394, and the biggest one was 11606 blocks deep, also at #8568394.

I tried to sync it with a pruning DB and, as others said, it gets stuck; an archive node should be able to handle such a big reorg. Maybe it is connected with warping: I got warped from #8575000, and that snapshot was probably made on the abandoned fork.

I am currently syncing without warping to see what happens (I am at #5780341 right now); I will update when it finishes.

I hope it will be resolved quickly... ETH mainnet gas prices are too crazy for testing T.T

https://ropsten.etherscan.io/blocks_forked?p=8

[Screenshot: Ropsten forked blocks list on Etherscan, 2020-08-31]

One reorg at #8568394 was 11606 blocks deep. 8568394 + 11606 == 8580000, which is exactly where the stuck nodes stop (#8579999 is the last block before it).

@holiman so what should we do to get a parity node working on Ropsten again?

any news guys?

Just in time.
The first takeaway, as you already know: this was a 51% attack on the Ropsten test network, and the attackers succeeded in reverting two days' worth of blocks, so the attacker chain became the valid one. Just to say it, this is a test network, and these things can happen there.

Okay now, for things related to OE:

  1. Because the reorg was very big, OE in pruning mode is by its nature not capable of reorganizing across the 11k-block reorg that happened, so it stayed on the original chain. KaiRo said on Discord that his archive node reorganized successfully, which tells me that OE works as expected.
  2. In those two days a snapshot was made at block #8575000, and every resync with warping that was tried gets warped onto the invalid chain (or at least this is what I got; see the attached log).
  3. Because two days' worth of blocks were reverted, if you tried to resync without warping on Sunday/Monday I am pretty sure you would have synced to the original chain, because in those two/three days the original chain had the highest block.

What you can do:

  1. Sync with --no-warp; this worked for me without a problem. Here is the log.
  2. Try syncing with --warp-barrier 8690000; this force-skips the problematic snapshot (though for the next few days I am not sure how many peers will have a valid new snapshot). In the future, as new snapshots are generated, this force skip will not be needed. (Example invocations below.)
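
For clarity, here are the two options as command-line invocations (a sketch; the binary is named openethereum in 3.x releases and parity in older ones):

# Option 1: full block-by-block sync, skipping warp entirely
openethereum --chain ropsten --no-warp

# Option 2: warp sync, but reject any snapshot taken below block 8690000
openethereum --chain ropsten --warp-barrier 8690000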

@rakita Thank you for your suggestions. I completed a --warp-barrier 8563500 resync on Saturday, but alas, no success. Will try my luck with --no-warp today.

If you know of any nodes that have a post-attack snapshot, it would be great to use them as --reserved-peers to get a successful warp sync.

@kepikoi --warp-barrier at that block number will probably still take the #8575000 snapshot. What the barrier does is enforce that the snapshot, if any, must be above the warp barrier.

@rakita
I had a backup of parity's Ropsten database from block #7857894, and resyncing from it didn't work for me with warp disabled and tracing = on.

I have in my parity_testnet.toml:

[network]
# Parity will listen for connections on port 30303.
port = 30303
# Parity will connect only to public network IP addresses.
allow_ips = "public"
# Min Peers
min_peers = 25
# Max Peers
max_peers = 50

# Disable serving light peers.
no_serve_light = true
# Do not warp sync; download and verify every block instead of fetching a state snapshot first.
warp = false

[footprint]
# Compute and Store tracing data. (Enables trace_* APIs).
tracing = "on"
# Prune old state data. Maintains journal overlay - fast but extra 50MB of memory used.
pruning = "fast"
# Will keep up to 12000 old state entries.
pruning_history = 12000

# Will keep up to 128 MB old state entries.
pruning_memory = 128

# Will keep up to 4096MB data in Database cache.
cache_size_db = 4096

I've now set the bootnodes from geth (https://github.com/ethereum/go-ethereum/blob/master/params/bootnodes.go#L37)
and pruning_history = 32000; will see how it goes.

I've noticed you have in your logs:
UTC Starting Parity-Ethereum/v2.5.13-stable-b5695c1d7-20200824/x86_64-linux-gnu/rustc1.45.2

whereas I have parity-v2.5.13-stable-253ff3f-20191231 built from source. Did you apply additional commits to your build?

NVM, found: openethereum/openethereum@b5695c1

I've managed to get in sync.

Here is what I've used:

  • First, you need your communication port to be reachable from outside; the default port is 30303. If you're running behind NAT, you need to add the following to your config, where 1.2.3.4 is your external IP:
[network]
port=30303
nat = "extip:1.2.3.4"

and also forward this port 30303 on your router / iptables to your machine

  • add bootnodes (these are from geth):
# Override the bootnodes from selected chain file.
[network]
bootnodes = ["enode://30b7ab30a01c124a6cceca36863ece12c4f5fa68e3ba9b0b51407ccc002eeed3b3102d20a88f1c1d3c3154e2449317b8ef95090e77b312d5cc39354f86d5d606@52.176.7.10:30303", "enode://865a63255b3bb68023b6bffd5095118fcc13e79dcf014fe4e47e065c350c7cc72af2e53eff895f11ba1bbb6a2b33271c1116ee870f266618eadfc2e78aa7349c@52.176.100.77:30303"]
  • add warp=false
[network]
# Do not warp sync; download and verify every block instead of fetching a state snapshot first.
warp = false
  • also increase pruning_history to 20000; it might work with a lower value, but just to be sure:
[footprint]
# Compute and Store tracing data. (Enables trace_* APIs).
tracing = "on"
# Prune old state data. Maintains journal overlay - fast but extra 50MB of memory used.
pruning = "fast"
# Will keep up to 20000 old state entries.
pruning_history = 20000

# Will keep up to 128 MB old state entries.
pruning_memory = 128

# Will keep up to 4096MB data in Database cache.
cache_size_db = 4096

NOTE: parity will require a LOT of memory in this configuration, about ~30 GB or more in my case with v2.5.13.

You can probably change some of these values back to the defaults, or lower them, once the problem is mitigated.
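
For reference, the same settings can also be passed as a single command line instead of a config file (a sketch using the standard CLI flag names; substitute your own external IP and the full enode URIs listed above):

parity --chain ropsten \
  --port 30303 --nat extip:1.2.3.4 \
  --bootnodes "enode://<geth-bootnode-1>,enode://<geth-bootnode-2>" \
  --no-warp \
  --tracing on --pruning fast --pruning-history 20000 --pruning-memory 128 \
  --cache-size-db 4096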

@gituser not sure what happened there. I think it helped that you used different bootnodes

Sorry to bother you @gituser @rakita, but how did you manage to sync so fast? I'm on an 8-core / 32 GB / premium-SSD VM, and judging by this graph (diff to a reference node) it could take a week to --no-warp sync Ropsten.

Some relevant parts from my config:

[parity]
mode = "active"
chain = "ropsten"

[network]
min_peers = 25
max_peers = 50
max_pending_peers = 50

[footprint]
tracing = "off"
pruning = "fast"
db_compaction = "ssd"
cache_size = 4096

@rakita I've been trying to warp sync on another machine for two days with --warp-barrier 8690000, but just as you assumed, it stalled for lack of a complete snapshot.

2020-09-04 07:49:42 UTC IO Worker #0 INFO import  Syncing snapshot 2870/3834        #0   23/25 peers   920 bytes chain 100 KiB db 0 bytes queue 172 KiB sync  RPC:  0 conn,    0 req/s,   16 µs
2020-09-04 07:49:47 UTC IO Worker #3 INFO import  Syncing snapshot 2870/3834        #0   23/25 peers   920 bytes chain 100 KiB db 0 bytes queue 172 KiB sync  RPC:  0 conn,    0 req/s,   16 µs
2020-09-04 07:49:52 UTC IO Worker #0 INFO import  Syncing snapshot 2870/3834        #0   23/25 peers   920 bytes chain 100 KiB db 0 bytes queue 172 KiB sync  RPC:  0 conn,    0 req/s,   16 µs
2020-09-04 07:49:57 UTC IO Worker #2 INFO import  Syncing snapshot 2870/3834        #0   23/25 peers   920 bytes chain 100 KiB db 0 bytes queue 172 KiB sync  RPC:  0 conn,    0 req/s,   16 µs

@kepikoi
I had a backup from block #7857894 from before; that's why it was much faster.

I also exported Ropsten's recent blocks and state into .rlp and .snap files; I can probably share them somewhere for everyone.

$ du -sh * --total
58G	ropsten_blocks_20200903.rlp
12G	ropsten_snapshot_20200903.snap
70G	total
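
For anyone who wants to produce similar files from their own node, the standard export and snapshot subcommands should do it (a sketch; stop the node first so the database is not locked, and the reverse operations are import and restore):

# Export the chain to an RLP file and take a state snapshot
parity --chain ropsten export blocks ropsten_blocks_20200903.rlp
parity --chain ropsten snapshot ropsten_snapshot_20200903.snap

# On the receiving side, feed them back in
parity --chain ropsten import ropsten_blocks_20200903.rlp
parity --chain ropsten restore ropsten_snapshot_20200903.snap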

@kepikoi also try adding bootnodes to your config, and you might try increasing the buffers a little:

# Override the bootnodes from selected chain file.
[network]
bootnodes = ["enode://30b7ab30a01c124a6cceca36863ece12c4f5fa68e3ba9b0b51407ccc002eeed3b3102d20a88f1c1d3c3154e2449317b8ef95090e77b312d5cc39354f86d5d606@52.176.7.10:30303", "enode://865a63255b3bb68023b6bffd5095118fcc13e79dcf014fe4e47e065c350c7cc72af2e53eff895f11ba1bbb6a2b33271c1116ee870f266618eadfc2e78aa7349c@52.176.100.77:30303"]

[footprint]
# Will keep up to 20000 old state entries.
pruning_history = 20000

# Will keep up to 128 MB old state entries.
pruning_memory = 128
# Will keep up to 1024MB data in Blockchain cache.
cache_size_blocks = 1024
# Will keep up to 512MB of blocks in block import queue.
cache_size_queue = 512
# Will keep up to 256MB data in State cache.
cache_size_state = 256

Sorry to clutter the thread, but will this issue eventually be "fixed" in a way that makes it easy for me to get my node going again or am I going to have to roll up my sleeves?

Eventually there will be nodes that have post-attack snapshots that you will be able to use for warp sync, and then things should get back to normal.
If anyone knows the enode URI of such a node, please let me know; I'd love to use such a snapshot to get my warp/fast-sync node back up.

@noconsulate no, you can't roll back.

It's by design: if you have a fast (pruned) node, which keeps only a limited amount of state history (the last 64 states by default), you can't roll back easily.

If you run an archive node, you can roll back to any block and start syncing from it again.
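
As an illustration of the archive-node case (a sketch; db reset removes the latest N blocks if your build provides that subcommand, so check parity db --help first, and 20000 is just an example depth that reaches back past the fork point):

# On an archive node (pruning = "archive"): drop the latest 20000 blocks,
# then restart the node so it re-imports the now-canonical chain
parity --chain ropsten db reset 20000
parity --chain ropsten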

Fortunately it's no big deal to wipe the database and sync all over again because Ropsten is relatively small. How will I know when there are post-attack snapshots? Since this happened a week ago is it safe to assume now is a good time to sync?

I resynced from genesis with warp=false over 5 days, but it eventually failed and got stuck on #8579999, so I don't think now is a good time to sync.

But I had made a DB snapshot at #8556199, so I removed the stuck DB and retried the sync.

My node restarted the sync with pruning_history = 20000 this time, and eventually got past #8579999.

I'm happy. thank you @gituser

  • My node version is a customized 2.6.8-beta
  • The node ran on an AWS c5.xlarge during the resync

Tried the recommendations from above over the weekend. Still no luck :(

  • warp synced my v2.6.8-beta node using @gituser's bootnodes config again and stalled at block #8579999
  • fully no-warp resynced my v3.0.0-stable node all over again and stalled at block #8579999


Will try the backup route next...

I deleted my database and re-synced without any special parameters. I set it and forgot about it, and now it's stuck on #8579999 once again.

version OpenEthereum/v3.0.1-stable-8ca8089-20200601/x86_64-unknown-linux-gnu/rustc1.43.1

This should work:

Add these options to the [network] section in the config.toml file:

[network]
reserved_peers = "<path to peers file .e.g /config/peers >"
reserved_only = true
min_peers = 1
max_peers = 25
warp = false

This should force the sync to be only from the nodes specified in the peers file.

Create a corresponding peers file with the following nodes:

enode://8a4bc82041f560873d48f9f0d6754677096880195fc2eab3d57783370483efca8a36009d3fd87e4ee497ee7286425b886a16e6047f876b863666e57276ef8aad@35.246.127.164:30303
enode://74b86aed3ff9b23c8e423b673293044ec5b5525b43c237ac35a8634ea051dfd3cf4365c15133c8064c713684cd6a9911c15b32a7323f9db1d70f01706a682501@35.246.26.6:30303
enode://8c5131f577ee602ccaad5e5f600011c024d43d33e6af6f8a89ed26cbfadd7efe903a8e426dde4071b857e0838a2efa29ae3b268f8b55acd59e72d4a919673cbc@13.251.47.174:30303

The nodes above have the correct state (the same as ropsten.etherscan.io), so your node should start syncing the correct chain from those peers.

After the node is fully synced - you can remove the reserved_peers and reserved_only options from the configuration file and delete the peers file.
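
The same setup can also be expressed with plain CLI flags instead of editing config.toml (a sketch; /config/peers is an example path containing the three enode URIs above):

parity --chain ropsten \
  --reserved-peers /config/peers --reserved-only \
  --min-peers 1 --max-peers 25 --no-warp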

I finally got a snapshot at block #8645000 today, so my sync with --warp-barrier=8580000 succeeded and I should have a useful node again. I do miss the event logs that were helpful in testing, but it's more important to actually have a node that works :)

Here I added a commit that marks block 8,563,601 from the original chain as invalid and forces the client to take the current chain. Now when you sync without warp it will follow the current chain, or syncing will stall if you don't have peers on the current chain.

Thank you @nostdm , your solution helped!

Any news? It still does not work.

I tried many times. This is the log:

2020-10-12 03:54:56 UTC IO Worker #1 INFO import     0/ 1 peers     74 KiB chain   54 MiB db  0 bytes queue    5 MiB sync  RPC:  0 conn,    0 req/s,   32 µs
2020-10-12 03:55:21 UTC IO Worker #3 INFO import  Syncing #8579999 0x0b22…26bc     0.00 blk/s    0.0 tx/s    0.0 Mgas/s      0+    0 Qed  #8596041    1/ 1 peers     74 KiB chain   54 MiB db  0 bytes queue    5 MiB sync  RPC:  0 conn,    1 req/s,   32 µs
2020-10-12 03:55:26 UTC IO Worker #1 INFO import  Syncing #8579999 0x0b22…26bc     0.00 blk/s    0.0 tx/s    0.0 Mgas/s      0+    0 Qed  #8596041    1/ 1 peers     74 KiB chain   54 MiB db  0 bytes queue    5 MiB sync  RPC:  0 conn,    0 req/s,   32 µs
2020-10-12 03:55:46 UTC IO Worker #3 INFO import  Syncing #8579999 0x0b22…26bc     0.00 blk/s    0.0 tx/s    0.0 Mgas/s      0+    0 Qed  #8596041    1/ 1 peers     74 KiB chain   54 MiB db  0 bytes queue    5 MiB sync  RPC:  0 conn,    0 req/s,   32 µs
2020-10-12 03:56:16 UTC IO Worker #2 INFO import     0/ 1 peers     74 KiB chain   54 MiB db  0 bytes queue    5 MiB sync  RPC:  0 conn,    0 req/s,   32 µs
2020-10-12 03:56:46 UTC IO Worker #1 INFO import     0/ 1 peers     74 KiB chain   54 MiB db  0 bytes queue    5 MiB sync  RPC:  0 conn,    0 req/s,   54 µs
2020-10-12 03:56:56 UTC IO Worker #0 INFO import  Syncing #8579999 0x0b22…26bc     0.00 blk/s    0.0 tx/s    0.0 Mgas/s      0+    0 Qed  #8596041    1/ 1 peers     74 KiB chain   54 MiB db  0 bytes queue    5 MiB sync  RPC:  0 conn,    0 req/s,   54 µs
2020-10-12 03:57:26 UTC IO Worker #1 INFO import  Syncing #8579999 0x0b22…26bc     0.00 blk/s    0.0 tx/s    0.0 Mgas/s      0+    0 Qed  #8596041    1/ 1 peers     74 KiB chain   54 MiB db  0 bytes queue    5 MiB sync  RPC:  0 conn,    0 req/s,   54 µs
2020-10-12 03:57:56 UTC IO Worker #2 INFO import  Syncing #8579999 0x0b22…26bc     0.00 blk/s    0.0 tx/s    0.0 Mgas/s      0+    0 Qed  #8596041    1/ 1 peers     74 KiB chain   54 MiB db  0 bytes queue    5 MiB sync  RPC:  0 conn,    0 req/s,   54 µs
2020-10-12 03:58:31 UTC IO Worker #2 INFO import     0/ 1 peers     74 KiB chain   54 MiB db  0 bytes queue    5 MiB sync  RPC:  0 conn,    0 req/s,   54 µs

This is my config:

[parity]
chain = "ropsten"

[network]
# Parity will listen for connections on port 30304.
reserved_peers = "/data/peers.conf"
reserved_only = true
min_peers = 1
max_peers = 25
warp = false
[rpc]
disable = false
port = 8545
interface = "10.xx.0.xx"
cors = ["test-parity"]
apis = ["web3", "eth", "pubsub", "net", "parity", "parity_pubsub", "traces", "rpc", "shh", "shh_pubsub", "personal", "parity_accounts", "parity_set", "signer"]
hosts = ["all"]

[footprint]
tracing = "on"
# Prune old state data. Maintains journal overlay - fast but extra 50MB of memory used.
pruning = "fast"
# Will keep up to 50000 old state entries.
pruning_history = 50000

# Will keep up to 128 MB old state entries.
pruning_memory = 128

# Will keep up to 4096MB data in Database cache.
cache_size_db = 4096
[misc]
logging = "own_tx=trace, rpc=trace, txqueue=trace"
log_file = "/var/log/parity.log"
color = true

Can you help me check whether everything is OK?

Do we need to clear the database and resync?

@trongcauhcmus Yes, you need to clear your database, add the config lines from this comment https://github.com/openethereum/openethereum/issues/11862#issuecomment-688521370 and resync.

@trongcauhcmus yes, you need to clear the DB and start again. Another option, besides what gituser mentioned, is (if you can build from source) to use this commit that invalidates the first block of the bad chain: https://github.com/openethereum/openethereum/issues/11862#issuecomment-690948102
Either way, use the --warp-barrier=8850000 option to skip to the newest snapshot.
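
Putting that advice together, a full wipe-and-resync would look roughly like this (a sketch; the binary is named openethereum in 3.x and parity in 2.x):

# Drop the stuck Ropsten database, then resync, rejecting any snapshot below block 8850000
openethereum --chain ropsten db kill
openethereum --chain ropsten --warp-barrier 8850000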

There is nothing more to be done here, hence I am closing this.

thank you @gituser @rakita