mkabilov / pg2ch

Data streaming from PostgreSQL to ClickHouse via the logical replication mechanism


Please support 'block_size' in the ClickHouse connection configuration

yjhatfdu opened this issue

The initial sync of a very large and wide (500-column) table fails with:
DB::Exception: Memory limit (for query) exceeded: would use 9.31 GiB (attempt to allocate chunk of 8388608 bytes)
After investigating, this turns out to be caused by the ClickHouse driver's default block_size of 1000000. When the table is this wide, one million rows exceed the 10 GiB memory limit. Adding a 'block_size' setting to the connection configuration would solve this.
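
For illustration, here is a minimal sketch of capping the block size at the driver level. It assumes the clickhouse-go v1 native-protocol driver, which accepts a block_size DSN parameter; the address and the value 100000 are placeholder assumptions, not pg2ch configuration:

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/ClickHouse/clickhouse-go" // clickhouse-go v1 native-protocol driver
)

func main() {
	// block_size caps how many rows the driver buffers per insert block
	// (the driver default is 1000000). The address and the value 100000
	// are assumptions for illustration only.
	dsn := "tcp://127.0.0.1:9000?block_size=100000"
	db, err := sql.Open("clickhouse", dsn)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := db.Ping(); err != nil {
		log.Fatal(err)
	}
	log.Println("connected with a reduced block_size")
}
```

A smaller block size trades some insert throughput for a bounded memory footprint, which matters for very wide tables where each buffered row is large.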

You can try the just-released prestable version, which uses the HTTP protocol: https://github.com/mkabilov/pg2ch/releases/tag/v1.0.0

@mkabilov could you please guide me on how to build the project from source?

Also, what's your plan for a production release?
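
For reference, a plausible build sequence for a Go project with the standard toolchain; the commands below are assumptions, not documented build steps from this repository:

```sh
# Assumed build steps using the standard Go toolchain (not from the repo docs)
git clone https://github.com/mkabilov/pg2ch.git
cd pg2ch
go build ./...
```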