apache / hudi

Upserts, Deletes And Incremental Processing on Big Data.

Home Page: https://hudi.apache.org/

hoodie.properties.backup file does not exist

donghaihu opened this issue

Env:
Hudi: 0.14
Flink: 1.16
CDH: 6.3.2
HDFS: 3.0.0
Hive: 2.1.1
Action: querying a Hudi table
We are currently on Hudi 0.14, upgraded from 0.13, and we did not encounter this issue on 0.13.
Specific issue: a streaming task runs against a production table, and without any changes to the table or the task, the following exception is reported:

2024-06-25 01:28:10,582 WARN org.apache.hudi.common.table.HoodieTableConfig [] - Invalid properties file hdfs://10.0.5.131:8020/user/ods/ods_pom_in_transit_time_config/.hoodie/hoodie.properties: {}

2024-06-25 01:28:10,586 WARN org.apache.hudi.common.table.HoodieTableConfig [] - Could not read properties from hdfs://10.0.5.131:8020/user/ods/ods_pom_in_transit_time_config/.hoodie/hoodie.properties.backup: java.io.FileNotFoundException: File does not exist: /user/ods/ods_pom_in_transit_time_config/.hoodie/hoodie.properties.backup

The streaming tasks are deployed in session mode, and this issue occurs intermittently.

I have two initial doubts:
1. Without any change to the table schema, index, partitioning, etc., why was the hoodie.properties file cleared?
2. Why was the corresponding hoodie.properties.backup file not present before the hoodie.properties update?
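
For context on the second question, here is a simplified sketch of the backup-then-rewrite protocol that hoodie.properties updates appear to follow (an assumption about the design, not the actual Hudi source; method and variable names are illustrative). The .backup file only exists in the window between the snapshot and the cleanup, so finding it absent is expected whenever the last update completed cleanly.

```java
import java.io.IOException;
import java.util.Properties;

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

// Simplified sketch of the assumed backup-then-rewrite protocol;
// this is illustrative, not Hudi's actual code.
class PropertiesUpdateSketch {
  static void updateTableProperties(FileSystem fs, Path metaDir, Properties merged)
      throws IOException {
    Path props = new Path(metaDir, "hoodie.properties");
    Path backup = new Path(metaDir, "hoodie.properties.backup");
    // 1. Snapshot the current file so a reader (or a restart) can recover
    //    if the rewrite below is interrupted mid-write.
    FileUtil.copy(fs, props, fs, backup, false /* deleteSource */, fs.getConf());
    // 2. Rewrite hoodie.properties in place with the merged properties.
    try (FSDataOutputStream out = fs.create(props, true /* overwrite */)) {
      merged.store(out, "updated");
    }
    // 3. Remove the backup once the rewrite is durable. The backup therefore
    //    only exists between steps 1 and 3, which is why a reader that finds
    //    an empty hoodie.properties may also find no .backup to fall back on.
    fs.delete(backup, false);
  }
}
```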

It's not just a single table experiencing this issue; currently, we have nearly 30 tables in production with similar problems. This issue frequently occurs in our development environment as well. We have not yet identified the specific cause of this problem.

Thanks!

Maybe this is what you need: #8609. But you are right, it looks like hoodie.properties has been updated frequently after the upgrade. Can you add some logging in HoodieTableConfig so we can see why the updates are triggered?
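
A minimal sketch of where such logging could go, assuming a static update method on HoodieTableConfig; the body below is illustrative and does not reproduce the actual 0.14 source.

```java
import java.util.Properties;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative patch sketch for org.apache.hudi.common.table.HoodieTableConfig;
// only the class name comes from the log lines above, the body is assumed.
public class HoodieTableConfig {
  private static final Logger LOG = LoggerFactory.getLogger(HoodieTableConfig.class);

  public static void update(FileSystem fs, Path metadataFolder, Properties updatedProps) {
    // Log which keys are being rewritten plus the caller's stack trace, so the
    // job logs show exactly which code path keeps touching hoodie.properties.
    LOG.warn("hoodie.properties update under {} with keys {}",
        metadataFolder, updatedProps.stringPropertyNames(),
        new Exception("call site of hoodie.properties update"));
    // ... existing backup-and-rewrite logic unchanged ...
  }
}
```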

Do these failing jobs use a separate Spark compaction job, or do they have concurrent writes from Spark writers?

Spark enables the metadata table (MDT) by default while Flink does not; maybe that is why the table properties are updated frequently.

We use Flink for writing.

How can we configure it to avoid this issue?
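
If the MDT mismatch above is really the trigger, one option is to keep the metadata-table setting consistent across every engine that touches the table. A hedged sketch follows: hoodie.metadata.enable (Spark writer) and metadata.enabled (Flink writer) are real Hudi config keys, but whether aligning them resolves this issue is exactly the open question in this thread, and the table name and schema below are placeholders.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

// Hedged sketch: declare the Flink Hudi sink with the metadata table turned
// on so it matches Spark's default (hoodie.metadata.enable=true). Verify the
// option defaults against your own Hudi 0.14 build before relying on this.
public class AlignMetadataTableConfig {
  public static void main(String[] args) {
    TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());
    tEnv.executeSql(
        "CREATE TABLE hudi_sink (\n"
            + "  id STRING PRIMARY KEY NOT ENFORCED,\n"
            + "  val STRING\n"
            + ") WITH (\n"
            + "  'connector' = 'hudi',\n"
            + "  'path' = 'hdfs://10.0.5.131:8020/user/ods/ods_pom_in_transit_time_config',\n"
            + "  'metadata.enabled' = 'true'\n" // match the Spark side's MDT setting
            + ")");
  }
}
```

The opposite alignment, disabling the MDT on the Spark side via hoodie.metadata.enable=false, would equally stop the two engines from flipping the table properties back and forth.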