confluentinc / kafka-connect-hdfs

Kafka Connect HDFS connector

How does the connector recover from WAL errors by itself?

kimnami opened this issue

Hello there!

I'm testing some failure scenarios that are causing problems, e.g. killing one of the nodes in the Kafka Connect cluster, HDFS failover, etc. (CP 5.2.0).
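
For reference, a minimal sketch of the sink configuration for this kind of test (not my exact config: the HDFS URL and directories are inferred from the log paths below, and the sizes and intervals are placeholders):

name=hdfs-sink-test
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
tasks.max=3
topics=test6
hdfs.url=hdfs://npay
logs.dir=/user/test/warehouse/logs
topics.dir=/user/test/warehouse/topics
format.class=io.confluent.connect.hdfs.avro.AvroFormat
flush.size=1000
rotate.interval.ms=60000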

When one of the nodes in the cluster dies, I get a bunch of errors like the ones below:

[2021-05-03 16:30:12,045] ERROR Exception on topic partition test6-0:  (io.confluent.connect.hdfs.TopicPartitionWriter)
org.apache.kafka.connect.errors.DataException: java.nio.channels.ClosedChannelException
        at io.confluent.connect.hdfs.wal.FSWAL.append(FSWAL.java:65)
        at io.confluent.connect.hdfs.TopicPartitionWriter.beginAppend(TopicPartitionWriter.java:735)
        at io.confluent.connect.hdfs.TopicPartitionWriter.appendToWAL(TopicPartitionWriter.java:726)
        at io.confluent.connect.hdfs.TopicPartitionWriter.write(TopicPartitionWriter.java:392)
        at io.confluent.connect.hdfs.DataWriter.write(DataWriter.java:375)
        at io.confluent.connect.hdfs.HdfsSinkTask.put(HdfsSinkTask.java:114)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:538)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:321)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.nio.channels.ClosedChannelException
        at org.apache.hadoop.hdfs.DFSOutputStream.checkClosed(DFSOutputStream.java:1546)
        at org.apache.hadoop.hdfs.DFSOutputStream.flushOrSync(DFSOutputStream.java:2054)
        at org.apache.hadoop.hdfs.DFSOutputStream.hsync(DFSOutputStream.java:1923)
        at org.apache.hadoop.fs.FSDataOutputStream.hsync(FSDataOutputStream.java:139)
        at io.confluent.connect.hdfs.wal.WALFile$Writer.hsync(WALFile.java:346)
        at io.confluent.connect.hdfs.wal.FSWAL.append(FSWAL.java:61)
        ... 16 more
[2021-05-03 16:30:15,098] ERROR Failed creating a WAL Writer: Failed to APPEND_FILE /user/test/warehouse/logs/test6/0/log for DFSClient_NONMAPREDUCE_864164529_853 on xx.xxx.xxx.xx1 because this file lease is currently owned by DFSClient_NONMAPREDUCE_-802410335_876 on xx.xxx.xxx.xx2
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2600)
        at org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp.appendFile(FSDirAppendOp.java:124)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2635)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:805)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:487)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
 (io.confluent.connect.hdfs.wal.WALFile)
[2021-05-03 16:30:15,098] INFO Cannot acquire lease on WAL hdfs://npay/user/test/warehouse/logs/test6/0/log (io.confluent.connect.hdfs.wal.FSWAL)
[2021-05-03 16:30:56,501] ERROR Failed creating a WAL Writer: Failed to APPEND_FILE /user/test/warehouse/logs/test6/0/log for DFSClient_NONMAPREDUCE_2141187264_853 on xx.xxx.xxx.xx1 because lease recovery is in progress. Try again later.
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2587)
        at org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp.appendFile(FSDirAppendOp.java:124)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2635)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:805)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:487)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
 (io.confluent.connect.hdfs.wal.WALFile)
[2021-05-03 16:30:56,501] ERROR Recovery failed at state RECOVERY_PARTITION_PAUSED (io.confluent.connect.hdfs.TopicPartitionWriter)
org.apache.kafka.connect.errors.ConnectException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.RecoveryInProgressException): Failed to APPEND_FILE /user/test/warehouse/logs/test6/0/log for DFSClient_NONMAPREDUCE_2141187264_853 on xx.xxx.xxx.xx1 because lease recovery is in progress. Try again later.
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:2587)
        at org.apache.hadoop.hdfs.server.namenode.FSDirAppendOp.appendFile(FSDirAppendOp.java:124)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:2635)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.append(NameNodeRpcServer.java:805)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.append(ClientNamenodeProtocolServerSideTranslatorPB.java:487)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:523)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:991)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:872)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:818)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2678)
        at io.confluent.connect.hdfs.wal.FSWAL.acquireLease(FSWAL.java:89)
        at io.confluent.connect.hdfs.wal.FSWAL.apply(FSWAL.java:106)
        at io.confluent.connect.hdfs.TopicPartitionWriter.applyWAL(TopicPartitionWriter.java:635)
        at io.confluent.connect.hdfs.TopicPartitionWriter.recover(TopicPartitionWriter.java:259)
        at io.confluent.connect.hdfs.TopicPartitionWriter.write(TopicPartitionWriter.java:324)
        at io.confluent.connect.hdfs.DataWriter.write(DataWriter.java:375)
        at io.confluent.connect.hdfs.HdfsSinkTask.put(HdfsSinkTask.java:114)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:538)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:321)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
...
[2021-05-03 16:31:01,508] INFO Successfully acquired lease for hdfs://npay/user/test/warehouse/logs/test6/0/log (io.confluent.connect.hdfs.wal.FSWAL)
[2021-05-03 16:31:01,592] INFO Finished recovery for topic partition test6-0 (io.confluent.connect.hdfs.TopicPartitionWriter)
[2021-05-03 16:31:04,615] INFO committing files after waiting for rotateIntervalMs time but less than flush.size records available. (io.confluent.connect.hdfs.TopicPartitionWriter)

The errors are about failing to acquire the lease on, create, or append to the WAL file. But then, as you can see in the logs above, the connector recovers on its own. How does this work?
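
Judging from the stack traces (FSWAL.acquireLease / FSWAL.apply), recovery works by retrying lease acquisition with backoff until the NameNode finishes recovering the dead worker's lease. Below is a simplified sketch of what that loop presumably looks like; it is reconstructed from the log messages, not copied from the source, so the constants and helper names are approximate:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.ipc.RemoteException;
import org.apache.kafka.connect.errors.ConnectException;
import org.apache.kafka.connect.errors.DataException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Simplified sketch of io.confluent.connect.hdfs.wal.FSWAL.acquireLease(),
// reconstructed from the log messages above; names and constants are approximate.
class FSWALSketch {
  private static final Logger log = LoggerFactory.getLogger(FSWALSketch.class);
  private final Configuration conf;
  private final String logFile;  // e.g. hdfs://npay/user/test/warehouse/logs/test6/0/log
  private WALFile.Writer writer; // the connector's WAL writer

  FSWALSketch(Configuration conf, String logFile) {
    this.conf = conf;
    this.logFile = logFile;
  }

  void acquireLease() {
    long sleepIntervalMs = 1_000L;           // initial backoff (placeholder)
    final long maxSleepIntervalMs = 16_000L; // retry budget (placeholder)
    while (sleepIntervalMs < maxSleepIntervalMs) {
      try {
        if (writer == null) {
          // Opening the WAL file in append mode implicitly requests the HDFS
          // file lease -- this is the APPEND_FILE call that fails in the logs
          // while the dead worker still holds the lease.
          writer = WALFile.createWriter(conf,
              WALFile.Writer.file(new Path(logFile)),
              WALFile.Writer.appendIfExists(true));
          log.info("Successfully acquired lease for {}", logFile);
        }
        return;
      } catch (RemoteException e) {
        // Lease still owned by the dead worker, or lease recovery in progress:
        // log, back off exponentially, and try again instead of failing hard.
        log.info("Cannot acquire lease on WAL {}", logFile);
        try {
          Thread.sleep(sleepIntervalMs);
        } catch (InterruptedException ie) {
          throw new ConnectException(ie);
        }
        sleepIntervalMs *= 2;
      } catch (IOException e) {
        throw new DataException("Error creating writer for log file " + logFile, e);
      }
    }
    // Backoff exhausted; the task fails and Connect retries recovery later,
    // which matches the repeated "Recovery failed at state ..." errors above.
    throw new ConnectException("Cannot acquire lease after timeout.");
  }
}

HDFS's soft lease limit is 60 seconds, after which another client's append attempt triggers lease recovery on the NameNode; that matches the roughly one-minute gap between the first ClosedChannelException (16:30:12) and the successful acquisition (16:31:01) in the logs.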

If there were no side effects, I wouldn't question it. But unfortunately, after recovery there is sometimes data loss, and sometimes no loss at all. Do you know why?
Even worse, sometimes this self-recovery does not happen at all.

So I'm wondering how the connector is supposed to recover from these WAL errors by itself.
Or is there a known solution for these errors?

Solved this by upgrading to a newer version, CP 5.5.4.
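
For anyone who cannot upgrade right away: the lease errors above are ordinary HDFS file-lease contention, so a generic HDFS-level workaround (standard Hadoop tooling, not something specific to this connector) is to force lease recovery on the stuck WAL file and then restart the failed task, e.g.:

hdfs debug recoverLease -path /user/test/warehouse/logs/test6/0/log -retries 10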