Push each change separately to Redis in PostReceive hook

username-removed-57314 requested to merge sufflope/gitlab-shell:master into master

Fix proposal for #10 (closed).

It fixes my problem pushing 1000+ tags: all changes get processed and taken into account (counts, tags, activities…), and spawning 1000+ sub-processes instead of one does not seem to add much time/load overhead (although I only measured with the famous "wet finger" method).
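
For context, here is a minimal sketch of the idea (not the actual gitlab-shell code: the queue name, the JSON payload shape, and the way the repository path and key id reach the hook are all assumptions on my part):

```ruby
#!/usr/bin/env ruby
# Sketch only. A post-receive hook gets one "oldrev newrev refname" line
# per change on stdin. Instead of concatenating every change into a single
# spawned command (which overflows the argument-size limit for 1000+ tags),
# push each change to Redis on its own.

require 'json'

REDIS_COMMAND = %w[redis-cli].freeze          # assumption: redis-cli on PATH
QUEUE         = 'resque:queue:post_receive'   # assumption: queue key name

repo_path = ARGV[0] # assumption: passed in by the hook wrapper
key_id    = ARGV[1]

$stdin.each_line do |line|
  oldrev, newrev, ref = line.strip.split(' ', 3)
  payload = JSON.dump(
    'class' => 'PostReceive',
    'args'  => [repo_path, oldrev, newrev, ref, key_id]
  )
  # One short command per change stays well under the kernel's argument-size
  # limit, at the cost of one subprocess per change.
  system(*REDIS_COMMAND, 'rpush', QUEUE, payload)
end
```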

I tried adding a test case but ran into some open issues:

  • how do I run a second Redis server listening on TCP for the tests, alongside my production one listening on a socket, without messing with it? (I am self-hosted and only have one machine acting as server/desktop/router/whatever, and migrated to GitLab 7.3; see the first sketch after this list)
  • I should probably also write a Redis queue consumer to check that all events are effectively queued (see the second sketch after this list)
  • I can test that the generated command runs, but it would be better to portably get the maximum command-line size and check the command against it
  • maybe this belongs in the actual code rather than in a test: instead of going from "one process spawned with everything" to "one process spawned for each little change, no matter if it results in a fork bomb", portably get the maximum command-line size and build successive commands, scheduling as many changes as possible per command while staying under the argument-size limit (see the third sketch after this list). I think I read that system() is blocking, so it probably won't actually fork-bomb, but it might still result in a DoS if the hooks are single-threaded or similar and I deliberately push, say, a billion tags
  • why not add some throttling, to be sure this cannot end in a fork bomb/DoS?
  • "BUT IT'S ALMOST 1AM HERE AND I JUST WANT TO PUSH LOTS OF TAGS LET ME PUSH MY TAGS OK"

Feel free to legitimately reject this on the basis of any item above! I hope it will at least inspire a perfectly clean solution from some Ruby/Redis/shell guru.
