node-real / bsc-erigon

Ethereum implementation on the efficiency frontier


bad tx data from archive node.

xjhweb opened this issue

Command:

curl -H "Content-Type: application/json" -X POST --data \
'{"jsonrpc":"2.0","method":"eth_getTransactionReceipt","params":["0x1bb7efae7a4da81dabb851baa053467877d10bd501a8c54895ce9f62729a9996"],"id":1}'  \
<RPCURL>
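
To make the comparison repeatable, the same receipt can be fetched from both nodes and diffed (a minimal sketch, assuming jq is installed; <ERIGON_RPC> and <GETH_RPC> are placeholders for the two endpoints, like <RPCURL> above):

TX=0x1bb7efae7a4da81dabb851baa053467877d10bd501a8c54895ce9f62729a9996
BODY='{"jsonrpc":"2.0","method":"eth_getTransactionReceipt","params":["'$TX'"],"id":1}'
# fetch the receipt from each node, normalize key order, and show any difference
diff \
  <(curl -s -H "Content-Type: application/json" -X POST --data "$BODY" <ERIGON_RPC> | jq -S .result) \
  <(curl -s -H "Content-Type: application/json" -X POST --data "$BODY" <GETH_RPC> | jq -S .result)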

From Erigon Response:


{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "blockHash": "0xe1b462e520535ab502b6326b66d42c15070ba3bf932c688c90979758d76f4eb8",
    "blockNumber": "0x1f3d7f6",
    "contractAddress": null,
    "cumulativeGasUsed": "0xb55159",
    "effectiveGasPrice": "0xb2d05e00",
    "from": "0xc77e536db155332cd4f06cc92f6a86b9b5482ce6",
    "gasUsed": "0xca68",
    "logs": [
      {
        "address": "0xc8a11f433512c16ed895245f34bcc2ca44eb06bd",
        "topics": [
          "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef",
          "0x000000000000000000000000c77e536db155332cd4f06cc92f6a86b9b5482ce6",
          "0x000000000000000000000000375728ddd5d6388348f137f3855ca9e23231fc3e"
        ],
        "data": "0x0000000000000000000000000000000000000000000000000027147114877fff",
        "blockNumber": "0x1f3d7f6",
        "transactionHash": "0x1bb7efae7a4da81dabb851baa053467877d10bd501a8c54895ce9f62729a9996",
        "transactionIndex": "0x4e",
        "blockHash": "0xe1b462e520535ab502b6326b66d42c15070ba3bf932c688c90979758d76f4eb8",
        "logIndex": "0xd5",
        "removed": false
      }
    ],
    "logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000002800000000000000000000000000000020000000000000000000000000000000000000000000000000000008000000000000000000000000000000000008000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000400000000000000000000000000000000000000000000040008000000000040000000000000000000000000000000000002000000000000000008000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
    "status": "0x1",
    "to": "0x772c4c8bbba09d9d896ffeecc76e4b006b7bf873",
    "transactionHash": "0x1bb7efae7a4da81dabb851baa053467877d10bd501a8c54895ce9f62729a9996",
    "transactionIndex": "0x4e",
    "type": "0x2"
  }
}

From Geth Response:

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "blockHash": "0xe1b462e520535ab502b6326b66d42c15070ba3bf932c688c90979758d76f4eb8",
    "blockNumber": "0x1f3d7f6",
    "contractAddress": null,
    "cumulativeGasUsed": "0xb5a361",
    "effectiveGasPrice": "0xb2d05e00",
    "from": "0x67aee92c7b71002826491e9dccb270eeba4def03",
    "gasUsed": "0x5208",
    "logs": [],
    "logsBloom": "0x00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000",
    "status": "0x1",
    "to": "0x772c4c8bbba09d9d896ffeecc76e4b006b7bf873",
    "transactionHash": "0x1bb7efae7a4da81dabb851baa053467877d10bd501a8c54895ce9f62729a9996",
    "transactionIndex": "0x4f",
    "type": "0x2"
  }
}

This tx 0x1bb7efae7a4da81dabb851baa053467877d10bd501a8c54895ce9f62729a9996 is a plain transfer with no logs (gasUsed is 0x5208 = 21000, the intrinsic cost of a simple value transfer), but Erigon returned bogus logs and a wrong from address. Note that the transactionIndex also differs (0x4e from Erigon vs 0x4f from Geth), as if Erigon returned a neighboring transaction's receipt.

This happens on two different archive nodes.

Erigon version: the latest dev branch.
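
One way to independently confirm the transaction type is to fetch the transaction itself (a sketch, using the same <RPCURL> placeholder). For a plain value transfer, input is "0x", matching Geth's gasUsed of 0x5208 (21000) and empty logs:

curl -H "Content-Type: application/json" -X POST --data \
'{"jsonrpc":"2.0","method":"eth_getTransactionByHash","params":["0x1bb7efae7a4da81dabb851baa053467877d10bd501a8c54895ce9f62729a9996"],"id":1}' \
<RPCURL>
# "input": "0x" in the response means no contract code runs, so no logs can be emitted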

Have you tried #234? On my local node, it returns the same result as Geth.

Thanks for the reply.

In fact, this happened after #234.

I will try again with a new build.

Is the data just corrupted after block 30,000,000?

Maybe not; it depends on when you switched to the broken version. If you have enough time, you can delete all the snapshots.

Were these snapshots created from the local MDB instead of being downloaded via P2P?

I haven't found the fastest way to rebuild the data, although I've rebuilt it several times over the past year.

I'm not sure whether historical blocks can be restored directly from verifiable results without being replayed, but that doesn't seem to be the case?

What does the "integration stage_headers --reset" command do to the data? What is the logical sequence?
If I want to roll back to a specific height and resync from there, how should I do it?
How can I start from block 1 and quickly verify that each block's data is intact and correct?

Sorry for so many questions.

  1. Both: it downloads snapshots from P2P according to bsc-erigon-snapshots, and later ones are created from the local MDB.
  2. integration stage_headers --reset re-downloads all the header and body data and verifies it.
  3. The quickest way is to delete all the snapshots, as in #234; re-downloading them takes hours, and downloading all the snapshot, header, and body data takes about a day in total. The good news is that it doesn't need to re-execute all the data (see the sketch below).
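
For concreteness, the snapshot-deletion route could look like this (a hedged sketch: it assumes the default datadir layout where snapshot files live under <datadir>/snapshots; see #234 for the authoritative steps):

# stop erigon first, then remove the snapshot files
rm -rf <datadir>/snapshots
# on the next start, erigon re-downloads snapshots from P2P and
# re-fetches headers/bodies without re-executing every block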

@blxdyx
Unfortunately, it failed with a runtime error: index out of range (full error and trace in the log below).

INFO[11-22|21:18:53.282] [2/15 Headers] No block headers to write in this log period block number=29499999
INFO[11-22|21:18:53.282] Req/resp stats                           req=768 reqMin=29500382 reqMax=33716162 skel=16 skelMin=29499998 skelMax=29795102 resp=55 respMin=29499998 respMax=33716176 dups=1918
INFO[11-22|21:19:13.281] [2/15 Headers] No block headers to write in this log period block number=29499999
INFO[11-22|21:19:13.281] Req/resp stats                           req=612 reqMin=29500190 reqMax=33716162 skel=16 skelMin=29499998 skelMax=29795102 resp=66 respMin=29499998 respMax=33716182 dups=3142
INFO[11-22|21:19:33.282] [2/15 Headers] No block headers to write in this log period block number=29499999
INFO[11-22|21:19:33.282] Req/resp stats                           req=648 reqMin=29500190 reqMax=33716182 skel=14 skelMin=29499998 skelMax=29795102 resp=46 respMin=29499998 respMax=33716189 dups=1934
INFO[11-22|21:19:53.281] [2/15 Headers] No block headers to write in this log period block number=29499999
INFO[11-22|21:19:53.281] Req/resp stats                           req=714 reqMin=29500190 reqMax=33716181 skel=15 skelMin=29499998 skelMax=29795102 resp=55 respMin=29502304 respMax=33716196 dups=1942
EROR[11-22|21:19:55.503] Staged Sync                              err="runtime error: index out of range [1] with length 1, trace: [stageloop.go:128 panic.go:890 panic.go:113 btree_generic.go:522 btree_generic.go:779 header_algos.go:393 stage_headers.go:844 stage_headers.go:154 default_stages.go:36 sync.go:358 sync.go:260 stageloop.go:177 stageloop.go:92 asm_amd64.s:1598]"
INFO[11-22|21:19:56.598] [2/15 Headers] Waiting for headers...    from=29499999
INFO[11-22|21:20:16.598] [2/15 Headers] No block headers to write in this log period block number=29499999
INFO[11-22|21:20:20.151] [p2p] GoodPeers                          eth66=702 eth67=17
INFO[11-22|21:20:36.598] [2/15 Headers] No block headers to write in this log period block number=29499999
INFO[11-22|21:20:56.599] [2/15 Headers] No block headers to write in this log period block number=29499999
INFO[11-22|21:21:16.598] [2/15 Headers] No block headers to write in this log period block number=29499999
INFO[11-22|21:21:36.599] [2/15 Headers] No block headers to write in this log period block number=29499999
INFO[11-22|21:21:36.599] Req/resp stats                           req=790 reqMin=29500958 reqMax=33716217 skel=15 skelMin=29499998 skelMax=29795102 resp=81 respMin=29499998 respMax=33716230 dups=3294
INFO[11-22|21:21:47.024] [parlia] snapshots build, gather headers block=29500000
INFO[11-22|21:21:47.024] [parlia] snapshots build, recover from headers block=29500000

Try: integration stage_headers --unwind=100
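
A fuller invocation might look like the following (a sketch; the --datadir flag is assumed to point at your node's data directory):

# unwind the headers stage by 100 blocks so staged sync re-fetches
# and re-verifies them on the next run
integration stage_headers --unwind=100 --datadir=<datadir>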

We've encountered a similar issue involving two servers. The first server was rsync'ed with the second server about a month ago. The second server, running the latest official release, is producing incorrect results with bogus logs, while the first server, recently rebuilt (about 3-4 days ago) from the master branch, is producing correct results.

This situation is quite alarming because it's difficult to determine when we are receiving corrupted data and which transactions are affected. Is there a method, even a potentially slower one, that can guarantee the accuracy of the data?

Earlier this year, we experienced issues with transactions marked as failed when they were in fact valid. That error cost us a significant amount of time on data correction.
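
As a partial answer to the question above, one slower but more trustworthy safeguard is to continuously spot-check receipts against an independent node with the same diff technique shown earlier (a sketch; assumes a file txs.txt with one transaction hash per line, plus the two endpoint placeholders):

while read -r TX; do
  BODY='{"jsonrpc":"2.0","method":"eth_getTransactionReceipt","params":["'"$TX"'"],"id":1}'
  # any surviving diff output marks a divergent, possibly corrupted receipt
  diff \
    <(curl -s -H "Content-Type: application/json" --data "$BODY" <ERIGON_RPC> | jq -S .result) \
    <(curl -s -H "Content-Type: application/json" --data "$BODY" <REFERENCE_RPC> | jq -S .result) \
    >/dev/null || echo "mismatch: $TX"
done < txs.txt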

Reopen if the problem persists.