This is a meta issue to discuss possible regressions in this monthly release and any patch versions.
Please do not raise issues directly in this issue but link to issues that might warrant a patch release.
The decision to create a patch release or not is with the release manager who is assigned to this issue. The release manager will comment here about the plans for patch releases.
@sammcj I'm sorry to hear that. Regarding #2336 (closed), this was the only report of corruption we received; it is not yet clear whether it was caused by the upgrade, and multiple people have looked into it so far. Regarding 294, CI is undergoing heavy development and unfortunately it will be less stable than GitLab itself for some time. It is adding features quickly and some major surgery is planned (integration of CI into CE in 8.0). But if you report an issue as a customer it should be fixed in a timely manner. We'll see what the problem is with 294 (did it ship, did it solve the problem) and if needed make sure a patch lands in 7.14.2 /cc @ayufan. Please email me at sytse@gitlab.com if you would prefer a call to discuss this.
@sytses Don't get me wrong, we love GitLab and we appreciate the product. It's just that there are so many broken things every single release, and they don't get picked up in the tests, so it's really hard to call an upgrade successful until people have been using it for a while. While I could understand this for custom installs, I would expect the level of isolation and control that an Omnibus installer gives to help with supporting the product and limiting the number of regressions through both major and minor releases.
Perhaps if there were a way for two GitLab servers to have their data and configuration clustered, backwards compatible by one major version or something along those lines, people could easily switch back if problems were detected after an upgrade. This would also allow you to do A/B functional testing, and even to have one of the nodes located elsewhere for reliability. It is a complex change, but something I would be considering for the ongoing development of the product.
@sammcj We love you too! We understand there are many regressions each release; we catch many with tests, QA and GitLab.com, but there are always some that we need a patch release for, which we try to get out quickly. If you want fewer regressions, consider updating after the major release has been out for a while. In the future the migration barometer should indicate how easy a rollback to the previous version is. In that case installing the old Omnibus package should be a quick and simple way to switch back.
@sytses Thaaankkkssss maaannnn :) Yeah, the other suggestion I had was to break GitLab up into component-level releases. You could do this by shipping each component in a Docker image; then, when you want to send out an update for, say, just the CI part of GitLab, only the latest layer for that part of the application would need to be pulled and updated.
This has some advantages:

- Rollbacks are easier (except for database rollbacks, which are always hard).
- Your release cycles can increase in pace without as much disruption to the wider code base.
- You no longer need to maintain packages for several different distributions.
- Updates are differential, as you only need to pull the latest layers on top of the base image.
- Managing Docker image builds is (IMO) a lot easier than managing large, complex packages from build systems.
- You can easily load balance / online-upgrade stateless components.
- Standard components such as Sidekiq no longer need to be maintained and packaged by GitLab; you can just use an upstream Docker image and pass in the config to suit GitLab.
@sammcj Thanks for the suggestion. Breaking it up certainly makes upgrades more gradual, but we're afraid of the coordination costs, and managing components and Docker images is not acceptable to some of our customers and users.
@sytses All they need is Docker installed (`yum install docker`) and then to run whatever they choose to manage the images and start the containers, for example `docker-compose up`.
@sytses It's common to provide both. E.g. there are PostgreSQL containers that are extremely convenient for Docker users, but nobody is forced to use them. For those that do use Docker, though, it makes deploys much easier to manage (and testing for development as well!).
@brann Someday we might do a composed docker container (app/nginx/redis/db/file) but it is complex and not a priority for us right now. Feel free to contribute it.
merge requests do not show subsequent commits after the source branch changes: #2379 (closed)
I know it was working in 6.9.2 but I do not know at which point it stopped working (or if it is an issue with our install?).
-> it was an issue with our install.
This might be a dumb question, but I've been mostly successfully using omniauth-kerberos on a GitLab CE instance as a custom omniauth provider for a while now (since before full Kerberos support landed in EE; see the documentation here from a while ago). We tried to upgrade to 7.14.1 today and seem to have been hit by a version of this bug: without `ldap_enabled` set to true in gitlab.rb, the login page throws an error 500, but with `ldap_enabled` set to true, the Kerberos auth provider doesn't seem to show up at all.
I'm not positive that the latter is a consequence of the former, but it seems not impossible.
Is this the same bug? Is there any chance of the fix landing in gitlab-ce, if it didn't? (If it did, I can confirm that the bug still exists on 7.14.1). Like I said, I understand that what I'm doing isn't officially supported, but it's still a regression of sorts.
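For reference, here is a minimal sketch of the relevant gitlab.rb settings in our failing configuration; the key names are from the omnibus docs as I remember them, so they may differ slightly for your version:

```ruby
# /etc/gitlab/gitlab.rb (sketch only; key names may vary by omnibus version)

# Kerberos wired up as a plain omniauth provider, not the EE integration.
gitlab_rails['omniauth_enabled'] = true
gitlab_rails['omniauth_providers'] = [
  { "name" => "kerberos" }
]

# With this left at false, the 7.14.1 login page gives us an error 500;
# setting it to true avoids the crash, but the Kerberos form no longer shows up.
gitlab_rails['ldap_enabled'] = false
```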
To fix the crash, I would guess a similar change needs to be applied to this file in gitlab-ce; namely, the `ldap_servers.each_with_index do` loop should become (analogous to the gitlab-ee MR):

```haml
- if ldap_enabled?
  - @ldap_servers.each_with_index do |server, i|
```
As for getting the Kerberos sign-in form to show... a long time ago, yes (that is to say, around this time last year when I first set up our gitlab instance), but since gitlab... 7.10.x (at the latest, probably before), it just sort of worked. It showed up as an omniauth provider; see here as an example, now that I managed to downgrade our instance back to 7.13.x.
I am not certain, but I suspect this MR, or a similar change, might be related to the omniauth provider button for Kerberos not showing up anymore. The comment "Renamed OauthHelper to AuthHelper since LDAP, SAML, Kerberos aren't OAuth" makes me somewhat suspicious, because we were using omniauth-kerberos.
I don't know that I'll have much time to look into this in more detail in the near future, sadly.
@TC01 Can you maybe create an issue and link it here? From the description: "Please do not raise issues directly in this issue but link to issues that might warrant a patch release."