Error deleting temp file during shutdown
c-hui opened this issue
c-hui commented
When I shut down Kafka Connect, I see these errors:
[2021-07-22 18:14:00,035] DEBUG Closing TopicPartitionWriter testtopic-37 (io.confluent.connect.hdfs.TopicPartitionWriter:464)
[2021-07-22 18:14:00,035] DEBUG Discarding in progress tempfile hdfs://10.10.106.70:8020//data/test//+tmp/testtopic/202107162050/9a603280-828d-42c8-95f4-87692589604f_tmp.txt for testtopic-37 202107162050 (io.confluent.connect.hdfs.TopicPartitionWriter:467)
[2021-07-22 18:14:00,035] ERROR Error deleting temp file hdfs://10.10.106.70:8020//data/test//+tmp/testtopic/202107162050/9a603280-828d-42c8-95f4-87692589604f_tmp.txt for testtopic-37 202107162050 when closing TopicPartitionWriter: (io.confluent.connect.hdfs.TopicPartitionWriter:489)
org.apache.kafka.connect.errors.ConnectException: java.io.IOException: Filesystem closed
at io.confluent.connect.hdfs.storage.HdfsStorage.delete(HdfsStorage.java:164)
at io.confluent.connect.hdfs.TopicPartitionWriter.deleteTempFile(TopicPartitionWriter.java:900)
at io.confluent.connect.hdfs.TopicPartitionWriter.close(TopicPartitionWriter.java:487)
at io.confluent.connect.hdfs.DataWriter.close(DataWriter.java:469)
at io.confluent.connect.hdfs.HdfsSinkTask.close(HdfsSinkTask.java:169)
at org.apache.kafka.connect.runtime.WorkerSinkTask.commitOffsets(WorkerSinkTask.java:397)
at org.apache.kafka.connect.runtime.WorkerSinkTask.closePartitions(WorkerSinkTask.java:591)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:196)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: Filesystem closed
at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:484)
at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1612)
at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:882)
at org.apache.hadoop.hdfs.DistributedFileSystem$19.doCall(DistributedFileSystem.java:879)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:879)
at io.confluent.connect.hdfs.storage.HdfsStorage.delete(HdfsStorage.java:162)
... 14 more
After Kafka Connect exited, I found only empty temporary directories; the temporary files themselves had already been deleted. How can I get rid of these errors?
Kafka version: 2.4.0
kafka-connect-hdfs: confluentinc-kafka-connect-hdfs-10.1.0
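For anyone hitting the same trace: the likely mechanism is a shutdown race. Hadoop caches `FileSystem` instances and closes them via its own shutdown hook (or another task sharing the cached instance closes it), so by the time `TopicPartitionWriter.close()` tries to delete the temp file, `DFSClient.checkOpen()` throws `IOException: Filesystem closed`. The files were already gone, which matches what you observed. Below is a minimal plain-Java sketch of that race (no Hadoop involved; `SharedClient` is a hypothetical stand-in for the cached `DFSClient`, not real connector code):

```java
import java.io.IOException;

public class FilesystemClosedSketch {
    // Stand-in for the shared, cached DFSClient: once any component closes
    // it, every later call fails fast, mirroring DFSClient.checkOpen().
    static class SharedClient {
        private boolean open = true;

        void close() {
            open = false;
        }

        void delete(String path) throws IOException {
            if (!open) {
                throw new IOException("Filesystem closed");
            }
            // ...a real client would issue the delete RPC here...
        }
    }

    public static void main(String[] args) {
        SharedClient fs = new SharedClient();

        // Shutdown hook (or another task) closes the shared instance first...
        fs.close();

        // ...then the writer's cleanup runs second and can only fail.
        try {
            fs.delete("/tmp/example_tmp.txt");
        } catch (IOException e) {
            System.out.println("Caught: " + e.getMessage());
        }
    }
}
```

If this is what is happening in your setup, a commonly suggested workaround is setting `fs.hdfs.impl.disable.cache=true` in the Hadoop configuration so each connector task gets its own `FileSystem` instance instead of the shared cached one, though that trades away connection reuse.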