Per the PostgreSQL docs and our Omnibus setup, this won't require a restart. However, it may require running `gitlab-ctl reconfigure`, which will mess with the manual pgbouncer tweaks still running on our secondaries.
@jtevnan What's the best way to work around this? I'm thinking of these steps:
1. Commit the changes to Chef.
2. Run `chef-client`.
3. Manually adjust `runtime.conf` to match the new changes.
4. Reload PostgreSQL.
This way the next time a reconfigure runs it won't change anything, and we still get our changes.
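Roughly, the steps above would look like this on a node. This is a sketch, not a tested procedure: the `runtime.conf` path assumes the standard Omnibus data directory, which I haven't verified on our secondaries.

```shell
# 1–2. After committing the change to Chef, converge the node:
sudo chef-client

# 3. Manually set the value in runtime.conf so it already matches what
#    Chef/reconfigure will render (path assumed from the Omnibus default):
#    /var/opt/gitlab/postgresql/data/runtime.conf
#      checkpoint_completion_target = 0.9

# 4. Reload PostgreSQL without a restart:
sudo gitlab-psql -c 'SELECT pg_reload_conf();'
```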
At 12:55 UTC I reverted `checkpoint_completion_target` to 0.7 to see whether this changed the SQL timings. So far I'm not seeing a change in timings, suggesting it was just a coincidence. I'll change the setting back to 0.9 so we can monitor for a bit longer.
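For context on why 0.7 vs 0.9 could plausibly affect I/O: the setting controls what fraction of the checkpoint interval PostgreSQL spreads checkpoint writes over. A quick back-of-the-envelope, assuming the default 300 s `checkpoint_timeout` (which I haven't checked on our nodes):

```shell
# Spread window for checkpoint writes = checkpoint_timeout * completion_target.
# checkpoint_timeout of 300 s is an assumption (the PostgreSQL default).
checkpoint_timeout=300
for target in 0.7 0.9; do
  window=$(awk -v c="$checkpoint_timeout" -v t="$target" 'BEGIN { print c * t }')
  echo "target=${target}: writes spread over ${window}s of each ${checkpoint_timeout}s interval"
done
```

So moving from 0.7 to 0.9 stretches the same checkpoint writes over a longer window, which should smooth out I/O spikes rather than reduce total I/O.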
Looking at the last 24 hours of data, I'm not seeing any conclusive changes yet. I'll keep this setting active for the rest of this week, then revert it next Monday. That way we can compare two entire work weeks and see whether there's any difference.
Poking around the various statistics we have, including per-disk IOPS, I've been unable to find any change in disk-related patterns from setting this value to 0.9. As such, I'll revert it back to 0.7 and close this issue, since it appears we don't need a value of 0.9.