Timeout on huge repository clone
We have an issue when cloning a big repository (> 1 GB) over HTTP (nginx) over a slow line, where the clone takes a long time. SSH might also be affected, but we haven't tested it.
This is related to the timeout defined in unicorn.rb (in our case we had already increased it to 10 minutes; we have since raised it to 42 minutes):
# nuke workers after 600 seconds instead of 60 seconds (the default)
timeout 600 # 10 minutes
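For reference, a minimal sketch of how the same line looks with our current 42-minute value; apart from the value itself, the comment only illustrates the conversion to seconds and is not taken from the stock file:

# reap workers after 42 minutes (42 * 60 seconds)
timeout 2520 # 42 minutes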
Output from the client:
Cloning into 'foobar'...
Username for 'http://gitlab...': (http://gitlab...) username
Password for 'http://username@gitlab...': (http://username@gitlab...:)
remote: Counting objects: 45809, done.
remote: Compressing objects: 100% (36934/36934), done.
fatal: The remote end hung up unexpectedly0 GiB | 410.00 KiB/s
fatal: early EOF
fatal: index-pack failed
Output from unicorn.stderr.log:
E, [2014-04-25T08:21:29.571311 #1828] ERROR -- : worker=1 PID:15182 timeout (601s > 600s), killing
error: git-upload-pack died of signal 13
E, [2014-04-25T08:21:29.581546 #1828] ERROR -- : reaped #<Process::Status: pid 15182 SIGKILL (signal 9)> worker=1
I, [2014-04-25T08:21:29.595028 #2615] INFO -- : worker=1 ready
E, [2014-04-25T08:56:23.867716 #1828] ERROR -- : worker=3 PID:1871 timeout (601s > 600s), killing
E, [2014-04-25T08:56:23.878486 #1828] ERROR -- : reaped #<Process::Status: pid 1871 SIGKILL (signal 9)> worker=3
I, [2014-04-25T08:56:23.884657 #16523] INFO -- : worker=3 ready
error: git-upload-pack died of signal 13
Now the interesting part:
The clone process on the client side stays alive for the full time. The client keeps receiving data even after the server has already killed the unicorn worker. It seems that only the final step needed to complete the clone on the client side fails, because by then the server has killed the unicorn process. As a result, the whole clone fails.
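A back-of-envelope calculation (not part of the original report; the 1 GiB pack size is an assumption based on the repository size, the transfer rate is the one shown in the clone output) illustrates why the worker timeout is exceeded:

# Rough estimate of how long the pack transfer alone takes on this line.
pack_size_bytes = 1 * 1024**3        # assume roughly 1 GiB of pack data
rate_bytes_per_sec = 410 * 1024      # 410.00 KiB/s, as seen in the clone output
transfer_seconds = pack_size_bytes.fdiv(rate_bytes_per_sec)
puts transfer_seconds.round          # => 2558, i.e. about 43 minutes

With the original 10-minute timeout the worker is therefore reaped long before the client has finished receiving the pack, and even the 42-minute setting is barely enough for a 1 GiB repository at this transfer rate.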