pgpartman / pg_partman

Partition management extension for PostgreSQL

run_maintenance_proc() not creating new partitions

Shanky-21 opened this issue · comments

On Dec 18, 2023, we faced an issue where new partitions were not being created because the default table had conflicting rows. We saw the error below:

FATAL Error while running TasksPartitionMaintenanceCron :::: errmsg: ERROR: updated partition constraint for default partition "tasks_others_default" would be violated by some row CONTEXT: SQL statement "ALTER TABLE public.tasks_others ATTACH PARTITION public.tasks_others_p2024_03_18 FOR VALUES FROM ('2024-03-18 00:00:00+00') TO ('2024-03-19 00:00:00+00')"
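For context, this error means rows are sitting in the default partition that fall inside the range of the child partition pg_partman is trying to attach. A quick check like the following can confirm that (this assumes the control column is named `created_at`; substitute the actual partition key column for your table):

```sql
-- Count rows stranded in the default partition that overlap the
-- range of the child partition being attached.
SELECT count(*)
FROM public.tasks_others_default
WHERE created_at >= '2024-03-18 00:00:00+00'
  AND created_at <  '2024-03-19 00:00:00+00';
```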

We sorted this out by calling this procedure before calling run_maintenance_proc():

CALL partman.partition_data_proc(${parent_table})
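With the template placeholder filled in, the call looks like this (a minimal sketch; partition_data_proc() accepts further batching parameters, so check the signature for your pg_partman version):

```sql
-- Move rows out of the default partition into their proper child
-- partitions, creating those children as needed.
CALL partman.partition_data_proc('public.tasks_others');
```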

But now we are facing an issue with premake. We had set premake = 30, infinite_time_partitions = true, and partition_interval = 'daily', but after the constraint issue was resolved, new partitions are not being created on their own even though premake is set to 30; we only have a partition for Jan 18, 2024.

Our pg_partman version is 4.5.1 and our PostgreSQL version is 13.

@keithf4 need your help here.

Can you please share the complete schema of the table by showing the output of `\d+` on the parent table? Also, please share the complete contents of the `part_config` table for this partition set.
Example:

github599=# \d+ public.orders_new
                                    Partitioned table "public.orders_new"
    Column     |  Type  | Collation | Nullable | Default | Storage | Compression | Stats target | Description 
---------------+--------+-----------+----------+---------+---------+-------------+--------------+-------------
 id            | bigint |           | not null |         | plain   |             |              | 
 zdate         | date   |           |          |         | plain   |             |              | 
 id_brand      | bigint |           |          |         | plain   |             |              | 
 id_restaurant | bigint |           |          |         | plain   |             |              | 
Partition key: RANGE (zdate)
Indexes:
    "orders_new_brand_restaurant_zdate" btree (id_brand, id_restaurant, zdate)
    "unique_id_by_part" UNIQUE CONSTRAINT, btree (id, id_brand, zdate)
Partitions: orders_new_p2018 FOR VALUES FROM ('2018-01-01') TO ('2019-01-01'), PARTITIONED,
            orders_new_p2019 FOR VALUES FROM ('2019-01-01') TO ('2020-01-01'), PARTITIONED,
            orders_new_p2020 FOR VALUES FROM ('2020-01-01') TO ('2021-01-01'), PARTITIONED,
            orders_new_p2021 FOR VALUES FROM ('2021-01-01') TO ('2022-01-01'), PARTITIONED,
            orders_new_p2022 FOR VALUES FROM ('2022-01-01') TO ('2023-01-01'), PARTITIONED,
            orders_new_p2023 FOR VALUES FROM ('2023-01-01') TO ('2024-01-01'), PARTITIONED,
            orders_new_p2024 FOR VALUES FROM ('2024-01-01') TO ('2025-01-01'), PARTITIONED,
            orders_new_p2025 FOR VALUES FROM ('2025-01-01') TO ('2026-01-01'), PARTITIONED,
            orders_new_p2026 FOR VALUES FROM ('2026-01-01') TO ('2027-01-01'), PARTITIONED,
            orders_new_p2027 FOR VALUES FROM ('2027-01-01') TO ('2028-01-01'), PARTITIONED,
            orders_new_default DEFAULT

github599=# \x
Expanded display is on.
github599=# select * from partman.part_config where parent_table = 'public.orders_new';
-[ RECORD 1 ]--------------+-----------------------
parent_table               | public.orders_new
control                    | zdate
partition_type             | native
partition_interval         | 1 year
constraint_cols            | 
premake                    | 4
optimize_trigger           | 4
optimize_constraint        | 30
epoch                      | none
inherit_fk                 | t
retention                  | 
retention_schema           | 
retention_keep_table       | t
retention_keep_index       | t
infinite_time_partitions   | f
datetime_string            | YYYY
automatic_maintenance      | on
jobmon                     | t
sub_partition_set_full     | f
undo_in_progress           | f
trigger_exception_handling | f
upsert                     | 
trigger_return_null        | t
template_table             | public.orders_template
publications               | 
inherit_privileges         | f
constraint_valid           | t
subscription_refresh       | 
drop_cascade_fk            | f
ignore_default_data        | f

Are you sure there are no further errors showing up in the logs around maintenance time? If you manually call `run_maintenance()`, do you get any errors back?
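A manual run can target just this partition set, so any error surfaces directly in the session rather than in the cron's logs. A sketch, using this thread's table name:

```sql
-- Run maintenance for a single partition set; errors are raised
-- in this session instead of being swallowed by the cron job.
SELECT partman.run_maintenance('public.tasks_others');
```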

Hi, thank you for your quick response @keithf4. I'm attaching the schema details of the parent table:

partition_table_tasks_others_description_8-jan-2023_1858.txt

and here are the part_config details for the table:

part_config_details.txt

We are not seeing any error logs after fixing the constraint violation issue caused by data being present in the default table.

There is a future partition created for March 18th

            tasks_others_p2024_01_18 FOR VALUES FROM ('2024-01-18 00:00:00+00') TO ('2024-01-19 00:00:00+00'),
            tasks_others_p2024_03_18 FOR VALUES FROM ('2024-03-18 00:00:00+00') TO ('2024-03-19 00:00:00+00'),
            tasks_others_default DEFAULT

This is why no new partitions are being created based on "now" in January. If there is no data that you need in that child table, you are free to drop it. Otherwise, if you need to keep that child table, you could try running the partition_gap_fill() function to see if it fills in the missing partitions. Honestly, I'm not quite sure how that will work with a child table that far in the future.
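For reference, a gap-fill call takes just the parent table and reports how many child partitions it created. A sketch, again using this thread's table name:

```sql
-- Create any child partitions missing between the oldest and newest
-- existing children; returns the number of partitions created.
SELECT partman.partition_gap_fill('public.tasks_others');
```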

Looking back, that table was created because that appears to have been the data that was in your default, so when you ran partition_data_proc() it moved the data out of the default and created that child table.

Hey, thank you very much. partition_gap_fill() has filled the gap between partitions in our testing environment.

One thing that comes to mind: now that the gap is filled, suppose I have partition tables ready up to March 18, 2024.
My premake is 30 for daily partitions, so when today's date reaches around Feb 19, 2024, will the March 19, 2024 partition be created automatically via run_maintenance_proc()?

That should be how it works, yes. As soon as there are fewer than 30 days premade, it should start making new partitions again.
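One way to keep an eye on this is to count how many future children currently exist for the set. The catalog query below is a sketch that relies on pg_partman's default daily suffix naming (`_pYYYY_MM_DD`), so the lexicographic comparison on `relname` lines up with date order; adjust the prefix if your naming differs:

```sql
-- Count child partitions dated today or later, by name suffix.
-- Assumes the default pg_partman daily suffix format _pYYYY_MM_DD.
SELECT count(*) AS future_partitions
FROM pg_inherits i
JOIN pg_class c ON c.oid = i.inhrelid
WHERE i.inhparent = 'public.tasks_others'::regclass
  AND c.relname >= 'tasks_others_p' || to_char(current_date, 'YYYY_MM_DD');
```

If that count dips below your premake of 30 and maintenance still isn't creating partitions, that's the point to look for errors again.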