@regisF Yes, the link would still be bound by the 60-second timeout. Basically, we have to precompute this data at some point. Perhaps we just need to store the last computed value, since even recomputing it once per day could bog down the database.
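To make "store the last computed value" concrete, here is a minimal SQL sketch. The `usage_data_cache` table is purely hypothetical (in practice this would presumably live behind a background job), and it assumes PostgreSQL 9.5+ for `ON CONFLICT`:

```sql
-- Hypothetical cache table; name and schema are illustrative only.
CREATE TABLE IF NOT EXISTS usage_data_cache (
  key         varchar   PRIMARY KEY,  -- e.g. 'pushes'
  value       bigint    NOT NULL,
  computed_at timestamp NOT NULL
);

-- A daily job recomputes the expensive counter once and upserts it.
INSERT INTO usage_data_cache (key, value, computed_at)
SELECT 'pushes', COUNT(*), now()
FROM events
WHERE author_id IS NOT NULL AND action = 5
ON CONFLICT (key) DO UPDATE
  SET value = EXCLUDED.value, computed_at = EXCLUDED.computed_at;

-- The usage ping then reads the cached value instead of recomputing it.
SELECT value, computed_at FROM usage_data_cache WHERE key = 'pushes';
```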
I just realized that !779 (merged) will fix the case where the usage data takes longer than 60 seconds to generate, but it will not fix the GitLab.com case where one of the queries (the one that counts pushes in the events table) times out due to our 5-minute statement timeout. There is a Sentry log for this: https://sentry.gitlap.com/gitlab/gitlabcom/issues/12327/
We COULD disable (or lengthen) the statement timeout for these queries, but somehow that doesn't quite feel right. What do you think, @pcarranza?
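For context, lengthening the timeout for a single query is straightforward in Postgres: `SET LOCAL` scopes the override to one transaction, so the global 5-minute limit stays in place for everything else. A sketch of the mechanism, not an endorsement:

```sql
BEGIN;
-- Raise the statement timeout for this transaction only;
-- the session/global default is untouched.
SET LOCAL statement_timeout = '30min';
SELECT COUNT(*) FROM events WHERE author_id IS NOT NULL AND action = 5;
COMMIT;
```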
For reference, here is the plan for the pushes count:

```
# explain SELECT COUNT(*) FROM "events" WHERE ("events"."author_id" IS NOT NULL) AND "events"."action" = 5;
                                QUERY PLAN
--------------------------------------------------------------------------
 Aggregate  (cost=4353389.58..4353389.59 rows=1 width=0)
   ->  Seq Scan on events  (cost=0.00..4300864.40 rows=21010072 width=0)
         Filter: ((author_id IS NOT NULL) AND (action = 5))
(3 rows)
```
Whereas without the `author_id IS NOT NULL` filter, the planner can use an index-only scan on `index_events_on_action`:
```
# explain SELECT COUNT(*) FROM "events" WHERE "events"."action" = 5;
                                                  QUERY PLAN
-------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=2639146.76..2639146.77 rows=1 width=0)
   ->  Index Only Scan using index_events_on_action on events  (cost=0.00..2586621.58 rows=21010072 width=0)
         Index Cond: (action = 5)
(3 rows)
```
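If we do want to keep the `author_id IS NOT NULL` filter, a partial index along these lines could make the first query an index-only scan as well. The index name is hypothetical, and on a table this size it would have to be built with `CONCURRENTLY` via a proper migration:

```sql
-- Hypothetical partial index: its WHERE predicate absorbs the
-- author_id IS NOT NULL filter, so COUNT(*) can be answered from
-- the index alone, as in the second plan above.
CREATE INDEX CONCURRENTLY index_events_on_action_not_null_author
  ON events (action)
  WHERE author_id IS NOT NULL;
```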