Using auto_explain with pgMustard



  • If you’re new to auto_explain, the Postgres docs (as usual) have a great introduction to it.

  • If you are using a hosting provider, please check that they support auto_explain. It’s a popular and established extension, so there’s a decent chance they do. For example, RDS Postgres does, on versions 9.6.3 and higher.

  • It’s also worth noting that pgMustard supports Postgres 9.6 and higher, but we welcome issue reports on earlier versions as well.


Here are the parameters, with a setting that will work with pgMustard. Below we go through each one, with some context on the options:

auto_explain.log_min_duration = 200
auto_explain.sample_rate = 1
auto_explain.log_format = JSON
auto_explain.log_analyze = true
auto_explain.log_timing = true
auto_explain.log_buffers = true
auto_explain.log_verbose = true
auto_explain.log_triggers = true
auto_explain.log_nested_statements = true
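
For these settings to take effect, the auto_explain module itself also needs to be loaded. If you manage your own postgresql.conf, a minimal sketch (adjust to your setup; `shared_preload_libraries` requires a server restart) looks like:

```
# postgresql.conf — load auto_explain for all sessions (restart required)
shared_preload_libraries = 'auto_explain'

# the auto_explain.* settings above can then live here too, e.g.
auto_explain.log_min_duration = 200
auto_explain.log_format = JSON
```

Alternatively, `session_preload_libraries = 'auto_explain'` avoids a restart (a reload suffices, applying to new sessions), and a superuser can run `LOAD 'auto_explain';` to experiment in a single session.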

Min duration

Measured in milliseconds, this determines which query plans to log. The default (-1) logs no query plans, and setting it to 0 would log every query plan. Something between 50 and 500 is likely sensible, depending on your use case. Another popular extension, pg_stat_statements, could help you make a well-informed decision here. It’s also worth considering using the Sample rate parameter (discussed below).

This needs to be set to log any plans.
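If you have pg_stat_statements installed, a quick query can show where a candidate threshold would fall. This is just a sketch: the column is `mean_exec_time` on Postgres 13 and newer, but `mean_time` on older versions, and 200 here is an arbitrary example value.

```sql
-- How many distinct statements would exceed a 200ms average threshold?
SELECT count(*) FILTER (WHERE mean_exec_time > 200) AS over_threshold,
       count(*) AS total
FROM pg_stat_statements;
```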

Sample rate

This allows you to log only a sample of the statements in each session. For example, a setting of 0.1 would log 10% of the query plans that exceed the Min duration. In the case of nested statements, either all of them are explained or none are.

While this parameter is 1 by default, it is worth including it for clarity.
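To experiment with sampling before committing to a server-wide setting, a superuser can try it in a single session. A sketch, with hypothetical values:

```sql
-- per-session experiment (superuser only); values are illustrative
LOAD 'auto_explain';
SET auto_explain.sample_rate = 0.1;       -- log roughly 10% of qualifying plans
SET auto_explain.log_min_duration = 200;  -- ...that take longer than 200ms
```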

Log format

The default format is TEXT, for human readability. There are also XML, YAML, and JSON options, for machine readability. Neither EXPLAIN nor auto_explain can output to more than one format at a time.

This needs to be set to JSON for use with pgMustard. We’re considering support for TEXT format plans; please get in touch if this is important to you.

Analyze

This gives us EXPLAIN ANALYZE output, rather than EXPLAIN output.

This needs to be on for use with pgMustard.

Timing

This gives us per-plan-node timing. This can have a pretty big overhead: we’ve heard anecdotal reports of between 5% and 20%, but we recommend doing your own testing. Note that this overhead applies to all statements executed, regardless of Min duration. This parameter is on by default once Analyze is on, but we suggest including it explicitly for the benefit of others (and future you).

This needs to be on for use with pgMustard.

Buffers

This adds buffer usage statistics to the plans logged, with negligible cost. It is so useful that there have been suggestions on the mailing lists to make it on by default.

pgMustard can give better advice with this on.

Verbose

This adds more detail (including the columns returned) to the plans logged, with negligible overhead.

pgMustard can give better advice with this on.

Triggers

This adds trigger execution statistics to the plans logged, with negligible cost.

pgMustard doesn’t currently use this information, but may in the future.

Nested statements

This adds statements executed inside a function. Interestingly, it is a parameter you cannot* request when manually using EXPLAIN.

pgMustard can give better advice with this on.

* If we’re wrong, or you know why, we’d love to hear from you!
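
To see the effect, a throwaway function makes it easy to check your logs. All names here are hypothetical:

```sql
-- slow_lookup and big_table are illustrative names; the plan for the inner
-- SELECT only appears in the logs when log_nested_statements is on (and the
-- statement exceeds Min duration)
CREATE FUNCTION slow_lookup(p_id int) RETURNS bigint LANGUAGE sql AS $$
  SELECT count(*) FROM big_table WHERE group_id = p_id;
$$;

SELECT slow_lookup(42);
```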

Next steps

If you wish to group and order plans before analysing in pgMustard, here are some options we’ve come across:

  • pgBadger - a free and open-source log analyzer (Perl).

  • pgFouine - a free and open-source log analyzer (PHP).

  • PgHero - a performance monitoring tool with free and open-source, or commercial, options.

  • pganalyze - a commercial performance monitoring tool.

  • Perfalytics - a commercial performance monitoring tool.

We’re exploring whether “Open in pgMustard” might be possible from some of the options above. We're also considering allowing people to send plans directly to pgMustard via an API.

If you have any questions, corrections, or suggestions, we’d love to hear from you. The best ways are by email or via our issue tracker.