WIP: Log running queries when a certain CPU load is reached
I still want to write some specs, move commands out of cli.rb,
and do some clean-up, but let's see how we're doing thus far.
```ruby
      end
    end

    class DatabaseHeavyLoad
      COMMAND_NAME = "db-heavy-load".freeze

      def initialize(args)
        @options = options(args)
        @options.parse!

        @collector = ::GitLab::Monitor::DatabaseActivityCollector.new(connection_string: @db_connection_string)
        @warning_queries = File.open("/tmp/warning-queries", "a")
        @critical_queries = File.open("/tmp/critical-queries", "a")
```

Resolved by username-removed-274314
```ruby
        elsif load_level >= @cpu_warning_level && load_level < @cpu_critical_level
          write_snapshot_to_file(@warning_queries)
        else
          write_snapshot_to_file(@critical_queries)
        end

        sleep 60
      end
    end

    def write_snapshot_to_file(file)
      @collector.run
      file.write(Time.now.utc.to_s + "\n")
      @collector.store(file)
      file.write("\n")
    end
```

Hmm... no, these files and outputs should be scraped. What I mean is that if you are using the Prometheus object to gather the metrics, it will provide that timestamp already.
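For reference: the Prometheus text exposition format allows an explicit per-sample timestamp, so the collector would not need to write `Time.now.utc` into the file itself. A minimal, self-contained sketch of that idea (the `QuerySnapshotMetrics` class, its methods, and the metric name are hypothetical, not this project's actual API):

```ruby
# Hypothetical illustration only: rendering query snapshots as Prometheus text
# exposition samples (which can carry their own timestamp) instead of appending
# Time.now lines to files under /tmp. Class, method, and metric names are made up.
class QuerySnapshotMetrics
  Sample = Struct.new(:name, :labels, :value, :timestamp_ms)

  def initialize
    @samples = []
  end

  def add(name, value, labels = {})
    # A Prometheus sample may include an explicit timestamp in milliseconds,
    # so no separate timestamp line has to be written anywhere.
    @samples << Sample.new(name, labels, value, (Time.now.utc.to_f * 1000).round)
  end

  def to_s
    @samples.map do |s|
      label_str = s.labels.map { |k, v| %(#{k}="#{v}") }.join(",")
      "#{s.name}{#{label_str}} #{s.value} #{s.timestamp_ms}"
    end.join("\n")
  end
end

metrics = QuerySnapshotMetrics.new
metrics.add("db_running_queries", 12, level: "warning")
puts metrics.to_s
# e.g. db_running_queries{level="warning"} 12 1520000000000
```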
Also, consider the conversation about the other scraper, where we made it work like the first one so they can both be used from a file or from a web page.
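To illustrate the pattern being referenced: the probe writes to any IO, so the same code can back a textfile sink or a web endpoint. Again a hedged sketch with made-up names (`RunningQueriesProbe`, `write_to`), not the existing scraper's interface:

```ruby
require "stringio"

# Hypothetical sketch of an IO-agnostic probe: the caller decides whether the
# target is a file on disk or the body of a web response, so the same collector
# works for both a textfile scrape and an HTTP scrape. Names are made up.
class RunningQueriesProbe
  def initialize(metrics_text)
    @metrics_text = metrics_text
  end

  # Write the rendered metrics to any IO-like object (File, StringIO, socket, ...).
  def write_to(io)
    io.write(@metrics_text)
    io.write("\n")
  end
end

probe = RunningQueriesProbe.new(%(db_running_queries{level="warning"} 12))

File.open("/tmp/running-queries.prom", "w") { |f| probe.write_to(f) } # scraped from a file

web_body = StringIO.new
probe.write_to(web_body) # returned from a web endpoint, e.g. a Sinatra route's response body
```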
```ruby
      return unless result

      rows_count = @result.ntuples

      table = Terminal::Table.new do |t|
        @result.each.with_index do |row, index|
          row.each do |field|
            t.add_row(field)
          end

          t.add_separator unless rows_count == index + 1
        end
      end

      io.write(table.to_s)
    end
```

```ruby
#!/usr/bin/env ruby

$LOAD_PATH.unshift File.expand_path("../../lib", __FILE__)
```

Probably it doesn't even belong here, @ahmadsherif.
You know what, let's forget about this one for now.
Oh my, I think we dropped the ball completely here, eh, @ahmadsherif?
Shall we close or pick it up again?