We've shipped Cycle Analytics in 8.12 and we'll make it better in 8.13.
Specifications for devs
Priority: The difference from the old flow is that the first commit we track should no longer have to be linked to something that has been pushed to production within the given time range.
We need to measure everything that happened in the given time range, not only what's been pushed to production.
All stages except staging and production will basically remain the same.
The staging and production stages will only take into account the issues that have been pushed to production.
We should make sure that the explanations for the last two stages (staging and production) are clear.
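To make the intent concrete, here's a minimal sketch of that filtering in Python, with hypothetical field names (not the actual implementation):

```python
from datetime import datetime, timedelta

# Staging and production keep the old behaviour: only count what reached production.
PRODUCTION_ONLY_STAGES = {"staging", "production"}

def events_for_stage(stage, events, days=30, now=None):
    """Return the events a stage should count for the selected time range.

    `events` is assumed to be a list of dicts with hypothetical keys
    `finished_at` (datetime) and `deployed_to_production` (bool).
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=days)
    in_range = [e for e in events if e["finished_at"] >= cutoff]
    if stage in PRODUCTION_ONLY_STAGES:
        # Last two stages: only what actually got pushed to production.
        return [e for e in in_range if e["deployed_to_production"]]
    # All other stages: everything that happened in the range, whether or not
    # it has been pushed to production yet.
    return in_range
```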
Differences from the initial design
Still no chart parts: not sure we can ship it for 8.13
No code coverage at the top: not sure that this information belongs here
No 95th percentile: I don't find this information useful at this time
@regisF which issues / events do we show? Most recently completed in this step (this means that this doesn't contribute yet to the cycle analytics time as shown, right?)? Most recent of this event that has completed the entire cycle (then why would we need multiple lists?)? What changes if we change the range from 30 to 90 days?
which issues / events do we show? Most recently completed in this step (this means that this doesn't contribute yet to the cycle analytics time as shown, right?)? Most recent of this event that has completed the entire cycle (then why would we need multiple lists?)
Cycle Analytics will only show what's been pushed to production anyway. That means we start from the end: we see what's been pushed to production, then check which issues started this cycle. That means we can have multiple issues that have led to something being pushed to production in the same range.
Each issue is followed by a dedicated commit, MR, test time, etc. Hence the multiple items in each stage, again.
Considering this, each stage will show the totality of the events in it - what is used today to calculate the median. Let me know if I'm completely off here.
What changes if we change the range from 30 to 90 days?
The entire table is being recalculated with everything that has been pushed to production in the last 90 days, as it is now.
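As a rough sketch of the mechanics (hypothetical data model, not the actual query): the median shown for each stage is computed over whatever falls inside the selected window, so widening the range from 30 to 90 days only moves the cutoff:

```python
from datetime import datetime, timedelta
from statistics import median

def stage_median(stage_durations, days=30, now=None):
    """stage_durations: list of (finished_at, duration_in_seconds) tuples for one stage."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=days)
    in_range = [duration for finished_at, duration in stage_durations
                if finished_at >= cutoff]
    return median(in_range) if in_range else None

# Switching the dropdown from 30 to 90 days just recomputes over the wider window:
# stage_median(durations, days=30)  vs  stage_median(durations, days=90)
```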
I think there's some confusion around "Ability to see events in each stage" vs showing X recent ideas that have been shipped to production. This was discussed on the original issue, but could use some more discussion, and even better, actual and target customer feedback. I'd like us to understand what the customer actually wants here, or what would benefit the customer most.
Related: Personally, now that I've seen this live for a few projects, and seen that some projects have completely empty analytics because they don't deploy to production (I'm looking at you, gitlab-ce), I wonder if it's worth re-evaluating that decision too. Wouldn't it be more helpful to show people data for every stage they do have? It's a different type of analytics, e.g. independent statistics vs cohort analysis. But it may be just as valuable (and certainly more valuable for people with zero analytics today). That might make it easier to reason about the above question about seeing "events in each stage".
One obvious complication is that it's hard to know how many ideas haven't been turned into issues yet. And the statistics will be skewed for issues that never turn into code.
But I guess the overall message here is that now that we have an iteration out there, let's talk to customers and see what to ship next rather than following the design from before we had much customer feedback.
Sorry if this is off-topic, but I didn't see this anywhere: Is an MR counted as being "in review" if it's a WIP? In our workflow, we create the MR right away after the first commit and tag it as WIP, so having that count as "in review" would muddy the metrics. Hopefully the reviewing time is the time between the last removal of the WIP label and the MR being merged.
@sytses we have a few concerns for the second iteration:
First of all, we haven't received a lot of feedback from the community for this feature. So it's a bit hard to figure out what the community wants at the moment. However:
One user has expressed a desire to use Cycle Analytics even if they don't use the Deploy to production stage.
We also don't use our own tool to ship to production, so Cycle Analytics for gitlab-ce and gitlab-ee does not show any data.
As @markpundsack said above, "Wouldn't it be more helpful to show people data for every stage they do have?", rather than only what's been shipped to production? That way, every project would find value in this dashboard.
Just to provide some more feedback: The feature itself sounds great and I can't wait to use it. However in the current state it is not very useful. Making the analytics available for parts of the whole cycle would definitely help.
I would also like to point out my other issue #22581 (moved) - this could help bring Cycle Analytics to a lot of people with several smaller repos. I just began transferring one of my projects (CxxProf) to GitLab. The group will consist of about 5 to 10 repos in the end. Having Cycle Analytics available for each repo separately will not provide much usable feedback. Having it available for the whole group would be a lot more useful.
At work we've set up a GitLab instance with ~150 repos. Each repo contains a small component. Cycle Analytics does not provide anything useful at this stage. However, being able to get the data for the whole group, or for repos where a specific user is participating, would turn this around 180°.
We need to measure everything that happened in the given time range, not only what's been pushed to production.
The difference from the old flow is that the first commit we track should no longer have to be linked to something that has been pushed to production within the given time range.
All stages except staging and production will basically remain the same.
The staging and production stages will only take into account the issues that have been pushed to production.
We should make sure that the explanations for the last two stages (staging and production) are clear.
The "events" per stage stay the same as previously defined.
Issue: list of issues created in the last XX days that have been labeled or added to a milestone.
Plan: list of commits that reference an issue from the previous stage for the first time.
Code: list of MRs created in this stage.
Test: list of unique builds triggered by the commits.
Review: list of MRs merged.
Staging: list of deployed builds.
Production: list of issues, with the time from idea to production.
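For reference, here is the same mapping as a compact lookup table; this is purely illustrative, not the actual GitLab code, just a restatement of the list above:

```python
# Illustrative stage -> "what the sidebar lists" mapping (not actual GitLab code).
STAGE_EVENTS = {
    "issue":      "issues created in the last XX days that got a label or milestone",
    "plan":       "commits that reference an issue from the previous stage for the first time",
    "code":       "merge requests created in this stage",
    "test":       "unique builds triggered by those commits",
    "review":     "merge requests merged",
    "staging":    "deployed builds",
    "production": "issues, with the time from idea to production",
}

def sidebar_events(stage):
    """Look up what the right-hand list should show for a given stage."""
    return STAGE_EVENTS[stage]
```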
@timothyandrew can you tell me if this makes sense to you?
@monsdar I understand your point, thanks for sharing.
I wonder, though, how unique your use case is compared to how projects and repositories are generally used. I think we should work towards having relevant Cycle Analytics at the project level first, then see how we can expand this to the group level. What do you think?
I agree - it's probably the easiest to first get it right for single repos and then find a way to scale it up to multiple repos at once.
I'm not sure how unique my way of doing things is. I can imagine that bigger projects that make use of the group-feature could be interested in aggregated cycle analytics. I guess there are differences between OSS and Enterprise too.
@regisF sure, I'll be happy to help. Where exactly will the tooltips go? Over each stage? Only where I can see the question marks on the mockup above (Stage, Median, Delta, Related Issues, Total Time)? Somewhere else?
If they show up when hovering over each stage, how do they differ from the stage description?
Btw, I think we should fix the descriptions in some cases, as they are not clear enough.
For example, stage "Review": "Time between merge request creation and merge/close" -> if we are not measuring closed MRs, that "close" should not be there.
But we can go one by one if you think we could improve them.
We'll certainly need to understand exactly what's what to be able to describe everything accurately.
I don't see an MR for this issue yet; when will the implementation start? When should it be done, so we can estimate a timeframe for working on the marketing assets?
@marcia To prevent a gazillion back-and-forths about the copy, I'll share a link to a Google Doc with you, and once it's final, I'll post the comment with our notes here.
@alfredo1 design is done, we just need to finalize the wording of the tooltips, so to me it's good to go. @hazelyang have you put the assets somewhere?
Very glad to see that CA is being changed to separately measure the individual stages rather than just issues that have been pushed to production.
With that change in mind, would it be advantageous to have specific empty states displayed for the stages you don't have? i.e. on the right hand side, if you try to click into a stage that isn't being measured yet. I think that could be beneficial to those of us who have only just come across gitlab flow.
From my initial impressions, the documentation for some of the stages (Test -> Production) isn't that easy to follow, but I think it would still be helpful to link users through to the documentation for the respective stages. It may not be immediately clear to users why a specific stage isn't being tracked.
I was wondering how we could have some quick visualization of all of the stages, to see where the cycle is losing or gaining speed. I know you can read the medians, but it's not so quick and you have to establish the comparison on your own. I thought about horizontal stacked bars, where each stage is associated with a different color, like GitHub does for repo languages.
If this is not feasible for 8.13, maybe it can be considered for the next iteration?
@pedroms from a visual sense, it sounds great, but how would you do it? I don't see how it can be achieved. If you want to give more details about this, create a new issue, add the label cycle analytics and let's discuss it :-)
Are we going to keep this empty state? Since we are showing data based on stages, I think the copy and steps should be updated to better reflect the new approach.
Also would be nice to add a link to our documentation or the pages where we can actually set up a CI, etc.
@alfredo1 perhaps would be better if we link to the webpage: https://about.gitlab.com/solutions/cycle-analytics/? The minimum information on how to set up everything is there, as well as links to the corresponding docs.
@regisF Can you clarify what scope is going to ship in 8.13? The description still lists everything, but at least the event list MR is marked as 8.14 and I don't see a MR for deltas.
Thanks @regisF! You don't need to update the wireframe, but can you make it clear that the events shown in the wireframe and original design are NOT part of this issue's scope?
@hazelyang is there a reason why the items on the sidebar for Code and Review are slightly different? Both seem to be Merge Requests. Shouldn't they be the same?
@hazelyang I don't think you need to have the commit hash (62f115dd) if the commit title on top is linked to the commit (“Merge branch 'cannonical-typo' …”). If we can make a link out of it, maybe the description could just read “First commit pushed by <username>”.
@alfredo1 indeed it was First commit. To avoid confusion we can probably use First commit instead of First <icon>; we have enough horizontal real estate anyway, I think. By the way, the current working issue is https://gitlab.com/gitlab-org/gitlab-ce/issues/23449, so any other discussion should be moved there.