ankane / pghero

A performance dashboard for Postgres

TypeError - best_index_structure

petrprikryl opened this issue · comments

Hi, I get this error on the homepage (/) and on the /queries page. Other pages work fine.
I also run pghero on a staging environment against a database with the same structure, and there is no problem there.

Processing by PgHero::HomeController#index as HTML
Completed 500 Internal Server Error in 3467ms (Allocations: 63228)

TypeError (wrong argument type nil (expected String)):

pg_query (2.2.1) lib/pg_query/parse.rb:3:in `parse_protobuf'
pg_query (2.2.1) lib/pg_query/parse.rb:3:in `parse'
pghero (3.3.1) lib/pghero/methods/suggested_indexes.rb:200:in `best_index_structure'
pghero (3.3.1) lib/pghero/methods/suggested_indexes.rb:99:in `block in best_index_helper'
pghero (3.3.1) lib/pghero/methods/suggested_indexes.rb:98:in `each'
pghero (3.3.1) lib/pghero/methods/suggested_indexes.rb:98:in `best_index_helper'
pghero (3.3.1) lib/pghero/methods/suggested_indexes.rb:17:in `suggested_indexes_by_query'
pghero (3.3.1) app/controllers/pg_hero/home_controller.rb:459:in `set_suggested_indexes'
pghero (3.3.1) app/controllers/pg_hero/home_controller.rb:68:in `index'
actionpack (7.0.4.3) lib/action_controller/metal/basic_implicit_render.rb:6:in `send_action'

Hey @petrprikryl, thanks for reporting. Are there any NULL queries in pghero_query_stats?

SELECT * FROM pghero_query_stats WHERE query IS NULL;

Yes, around 1% of the queries are NULL. I checked pg_stat_statements and there are NULLs there too.

I have this config:

shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.max = 50000
pg_stat_statements.track = all
track_activity_query_size = 2048

PostgreSQL version is 13.2.

I assume that increasing max could mitigate the problem, but I think it would be better to filter these queries on the pghero side so they don't cause a crash.
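A minimal sketch of that filtering idea (the helper name and row shape here are hypothetical, not pghero's actual internals): skip rows whose query text is NULL before handing them to the parser, since pg_stat_statements can discard query texts while keeping the statistics.

```ruby
# Hypothetical sketch: drop rows with a NULL query text before parsing,
# so discarded pg_stat_statements texts can't crash index suggestions.
def parseable_queries(rows)
  rows.reject { |row| row[:query].nil? }
end

rows = [
  { query: "SELECT * FROM users WHERE id = $1", calls: 10 },
  { query: nil, calls: 3 } # text discarded by pg_stat_statements
]

puts parseable_queries(rows).size # prints 1
```

The statistics rows with discarded texts are still dropped here; nothing useful can be parsed from them anyway.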

Thanks for the help anyway.

Pushed a fix in the commit above, but the underlying problem is that pg_stat_statements.max is too large. From the Postgres docs:

The representative query texts are kept in an external disk file, and do not consume shared memory. Therefore, even very lengthy query texts can be stored successfully. However, if many long query texts are accumulated, the external file might grow unmanageably large. As a recovery method if that happens, pg_stat_statements may choose to discard the query texts, whereupon all existing entries in the pg_stat_statements view will show null query fields, though the statistics associated with each queryid are preserved. If this happens, consider reducing pg_stat_statements.max to prevent recurrences.
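Per the docs above, the remedy is to lower the cap in postgresql.conf and restart Postgres (the value below is illustrative; tune it for your workload):

```
# postgresql.conf — illustrative value, not a recommendation
pg_stat_statements.max = 10000
```

Note that pg_stat_statements.max can only be changed at server start, so a reload is not enough.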