Cycle analytics tracks the median time between the steps from idea to production. This iteration does not ship a ChatOps integration, so we leave that out for now.
Idea to production consists of several events, each labeled with a name and the span of time it represents. E.g. Plan represents the time spent in the planning phase: the time during which the idea exists and is planned, but no one has started implementing it yet.
Issue (Tracker)
from issue creation until given a milestone or list label (the first assignment of any milestone counts; a milestone date or assignee is not required)
Plan (Board)
from given a milestone or list label until the first commit
Code (IDE)
from first commit until the merge request is created (this might exclude coding time if you use WIP, so be it)
Test (CI)
total test time for all commits/merges
Review (MR)
from merge request creation until the MR is merged (closed MRs won't be deployed)
Staging (CD)
from MR merge until deploy to production (production is last stage/environment)
Production (Total)
sum of the above excluding Test (CI) time
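Taken together, the stages chain into one pipeline of timestamps. A minimal Ruby sketch of how the durations could be derived (event names and data shapes here are hypothetical, not GitLab's actual schema):

```ruby
require 'time'

# A rough sketch of the stage boundaries above. Event names and data
# shapes are hypothetical, not GitLab's actual schema.
def stage_durations(events)
  {
    issue:   events[:milestone_or_label_added] - events[:issue_created],
    plan:    events[:first_commit]             - events[:milestone_or_label_added],
    code:    events[:mr_created]               - events[:first_commit],
    test:    events[:ci_total_runtime], # a total runtime, not a gap between two events
    review:  events[:mr_merged]                - events[:mr_created],
    staging: events[:production_deploy]        - events[:mr_merged]
  }
end

# Production (Total) is the sum of the above, excluding Test (CI) time.
def production_total(durations)
  durations.reject { |stage, _| stage == :test }.values.sum
end
```

Note that Test (CI) is a total pipeline runtime rather than the gap between two events, which is why Production (Total) excludes it.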
The dropdown at the top lets users view the summary for the:
Last 30 days (the things that went to production in the last 30 days)
Last 90 days
We will add more items in this dropdown in the future - probably things like Current year and All time.
Differences from the original design
no chat part => not possible without a ChatOps integration; needs at least another release
no feed of events => to reduce scope of iteration
reduced information on top => ditto, but if code coverage is easy to add, we should. Collaborator info seems useless
no deltas => to reduce scope
less statistics => to reduce scope and reserve for EE enhancements
Questions
How do we track whether a branch corresponds to an issue?
Not sure. I'm thinking the best strategy is to use either the suggested branch name (when you create a new one from an issue) or to match it with a regex, for instance *5-branch-permissions*, so you can still do things like jobv/5-branch-permissions.
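A hedged sketch of what such a regex might look like in Ruby (the pattern is illustrative, not a settled spec):

```ruby
# Match branch names like "5-branch-permissions" or
# "jobv/5-branch-permissions", capturing the leading issue ID.
# The pattern is illustrative only, not GitLab's actual matching rule.
ISSUE_BRANCH_RE = %r{\A(?:[\w.-]+/)?(\d+)-[\w-]+\z}

def issue_id_for_branch(branch_name)
  match = ISSUE_BRANCH_RE.match(branch_name)
  match && match[1].to_i
end
```

With this sketch, both `5-branch-permissions` and `jobv/5-branch-permissions` resolve to issue 5, while a branch without a leading ID matches nothing.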
@JobV A few questions about how this will translate into reality.
Issue (Tracker) => from issue creation to existing on a board (having a list-label)
Shouldn't planning concern milestones in some way?
Is there any chance of an issue being assigned a board label for a long time (like we sometimes plan stuff for a specific release a long time beforehand)?
Is this an interesting metric at all, since a user may request a feature in GitLab that we won't prioritize and plan until months later, for whatever reason?
Is this "Issue" time affected by longstanding issues that never get planned, like meta issues?
Plan (Board) => from ^ until branch_creation OR from ^ until IDE started (Run on Koding in Issue)
Are we gonna force users to use the ID-branch-name syntax for branches then if they want to use this feature? Isn't the opening of a WIP MR a better indicator, since in our flow we encourage people to do that as early as possible?
How about using an assignee being set as an indicator of development having started?
Code (IDE) => from ^ until MR created_at
This is a problem, since we encourage people to create MRs soon, way before they move into "Test" or "Review"
How about looking at the time the MR is no longer marked WIP?
Test (CI) => CI Run time (not related to others) I propose to base this on the default branch
All of the other times are measured in days or weeks, and can vary a lot. This one will be measured in minutes or hours, and will be pretty stable. It seems too insignificant, fixed, and external to include in this cycle analytics timing.
Review (MR) => from MR created_at until MR merged or MR closed
Review doesn't start when the MR is created; review starts when the WIP label is removed, or when someone other than the developer is assigned to the MR
Staging (CD) => from MR merged until deploy to any environment
How do MRs that never get deployed affect this, like MRs that are reverted before they could be deployed? Or are issues and MRs only considered in these timings once they make a transition, not when the transition still has to happen?
Production (Total) => Total time from issue creation to production deploy (sum of the above)
@DouweM note that I got these specifications straight from @sytses, so I'll answer with what I think makes sense.
Shouldn't planning concern milestones in some way?
I agree that we should also cover this. I've added an "OR in a milestone".
Is there any chance of an issue being assigned a board label for a long time (like we sometimes plan stuff for a specific release a long time beforehand)?
Yes. Doesn't matter, I think.
Is this an interesting metric at all, since a user may request a feature in GitLab that we won't prioritize and plan until months later, for whatever reason?
If we take the median time, we'll get an idea of how long most issues take to get implemented. This might be a little strange for gitlab-ce, but for other projects this makes more sense.
Is this "Issue" time affected by longstanding issues that never get planned, like meta issues?
It's median that we measure, so outliers shouldn't count heavily.
Are we gonna force users to use the ID-branch-name syntax for branches then if they want to use this feature? Isn't the opening of a WIP MR a better indicator, since in our flow we encourage people to do that as early as possible?
I think we can just look at whether a branch with a commit mentioning the issue, or an MR mentioning the issue, exists. That's more inclusive than a WIP MR, right?
How about using an assignee being set as an indicator of development having started?
Ah that's a nice idea as well! This would be easier to track, but less precise.
This is a problem, since we encourage people to create MRs soon, way before they move into "Test" or "Review"
How about looking at the time the MR is no longer marked WIP?
I think it's not a problem that something is quick. We do that, but not everyone does. Our metric would just be 'good'.
WIP merge request can be an extra layer to this.
All of the other times are measured in days or weeks, and can vary a lot. This one will be measured in minutes or hours, and will be pretty stable. It seems too insignificant, fixed, and external to include in this cycle analytics timing.
I strongly disagree here. It looks insignificant in the total maybe, yes, but in itself is super important. Just imagine that in a future iteration we add graphs to historical timings or the delta. This is valuable information. The relative size doesn't really matter.
Review doesn't start when the MR is created; review starts when the WIP label is removed, or when someone other than the developer is assigned to the MR
I'm OK with making it: from MR without WIP until merged or closed, but it seems complex to me. Same with assignee.
How do MRs that never get deployed affect this, like MRs that are reverted before they could be deployed? Or are issues and MRs only considered in these timings once they make a transition, not when the transition still has to happen?
I'd think we'd only count things that do get deployed.
I'd like to lock each of these stages down with a very clear and unambiguous start event and end event, so that actual development can start. "That's a nice idea as well" isn't quite precise enough.
Shouldn't planning concern milestones in some way?
I agree that we should also cover this. I've added an "OR in a milestone".
Do we take the first or last time? I think the first, please add this.
Is there any chance of an issue being assigned a board label for a long time (like we sometimes plan stuff for a specific release a long time beforehand)?
Yes. Doesn't matter, I think.
Yes, in this case an issue spends a lot of time in the planning stage.
Is this an interesting metric at all, since a user may request a feature in GitLab that we won't prioritize and plan until months later, for whatever reason?
If we take the median time, we'll get an idea of how long most issues take to get implemented. This might be a little strange for gitlab-ce, but for other projects this makes more sense.
So it spends a lot of time in the planning stage; I think that is good to know.
Is this "Issue" time affected by longstanding issues that never get planned, like meta issues?
It's median that we measure, so outliers shouldn't count heavily.
If it doesn't get shipped it doesn't count for the cycle analytics.
Are we gonna force users to use the ID-branch-name syntax for branches then if they want to use this feature? Isn't the opening of a WIP MR a better indicator, since in our flow we encourage people to do that as early as possible?
I think we can just look at whether a branch with a commit mentioning the issue, or an MR mentioning the issue, exists. That's more inclusive than a WIP MR, right?
People might not use a WIP MR, or might start development way before opening one.
How about using an assignee being set as an indicator of development having started?
Ah that's a nice idea as well! This would be easier to track, but less precise.
People might not use assigning. Or assign someone weeks before starting to work on it. So I don't think we should use this.
This is a problem, since we encourage people to create MRs soon, way before they move into "Test" or "Review"
How about looking at the time the MR is no longer marked WIP?
I think it's not a problem that something is quick. We do that, but not everyone does. Our metric would just be 'good'.
WIP merge request can be an extra layer to this.
I don't think we should use WIP merge requests at all. Not everyone uses them.
All of the other times are measured in days or weeks, and can vary a lot. This one will be measured in minutes or hours, and will be pretty stable. It seems too insignificant, fixed, and external to include in this cycle analytics timing.
I strongly disagree here. It looks insignificant in the total maybe, yes, but in itself is super important. Just imagine that in a future iteration we add graphs to historical timings or the delta. This is valuable information. The relative size doesn't really matter.
I agree with Job. There might very well be other stages that are also measured in minutes.
Review doesn't start when the MR is created; review starts when the WIP label is removed, or when someone other than the developer is assigned to the MR
I'm OK with making it: from MR without WIP until merged or closed, but it seems complex to me. Same with assignee.
I also think it is too complex to do something else. I don't expect WIP to be used widely.
How do MRs that never get deployed affect this, like MRs that are reverted before they could be deployed? Or are issues and MRs only considered in these timings once they make a transition, not when the transition still has to happen?
I'd think we'd only count things that do get deployed.
Indeed, if it isn't deployed (yet) it doesn't matter.
I have the same question as @axil. I don't think Pipelines is the obvious place for people to find this dashboard, especially if a workflow doesn't have CI. Moreover, everything under Pipelines is CI-related, whereas this report is about the whole project.
One solution would be to rename the Graphs tab to Reports, like so
That would also be a good change for future reports. Because really, a graph is a kind of report too.
I'm assuming that we're going to be using our regular Postgres / MySQL database for this to begin with, but I'd love to hear everyone's thoughts on this.
Postgres / MySQL doesn't have a built-in median function.
Do we have a reason to use median over average, given that it will add to development (and query) time?
median would allow us to disregard numbers that are either too small or too big compared to the overall numbers; average would not give us that. And in the case of a project, we will probably have extreme data points that are not representative of the overall data (like issues that are solved in one minute or that take 45 months to be solved). This site sums it up better than I could:
Let us say that there are nine students in a class with the following scores on a test: 2, 4, 5, 7, 8, 10, 12, 13, 83. In this case the average score (or the mean) is the sum of all the scores divided by nine. This works out to 144/9 = 16. Note that even though 16 is the arithmetic average, it is distorted by the unusually high score of 83 compared to other scores. Almost all of the students' scores are below the average. Therefore, in this case the mean is not a good representative of the central tendency of this sample. The median, on the other hand, is the value which is such that half the scores are above it and half the scores below. So in this example, the median is 8. There are four scores below and four above the value 8. So 8 represents the mid point or the central tendency of the sample.
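The example above can be checked with a small Ruby sketch computing both statistics:

```ruby
# Mean and median of a list of numbers, as in the quoted example.
def mean(values)
  values.sum.to_f / values.size
end

def median(values)
  sorted = values.sort
  mid = sorted.size / 2
  sorted.size.odd? ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2.0
end

scores = [2, 4, 5, 7, 8, 10, 12, 13, 83]
mean(scores)   # => 16.0 — pulled up by the outlier 83
median(scores) # => 8 — the central tendency of the sample
```

(If the database is PostgreSQL 9.4 or later, an ordered-set aggregate like `percentile_cont(0.5) WITHIN GROUP (ORDER BY duration)` can compute a median in SQL; otherwise it has to be done application-side, as sketched above, or with a query workaround.)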
@fatihacet: I've started working on this in !5986 (merged). We can probably work in parallel here: I can get the controller to set up some dummy data for you to use in the UI, and I can replace the dummy data with real data as I go along. WDYT? If this makes sense, I'll stop force-pushing to the 21170-cycle-analytics branch, so there are no conflicts.
@sytses @JobV Thanks for your thoughts! The problem I have with the current definitions of the stage transitions is that they will not work for gitlab-ce:
Issue (Tracker)
from issue creation to existing on a board (having a list-label)
OR
from the first time being associated to a milestone
A milestone with an expiration date, maybe? We use the Backlog milestone as a kind of catch-all for ideas we like.
Plan (Board)
from ^ until branch_creation with a commit mentioning the issue
OR
from ^ until IDE started (Run on Koding in Issue)
We don't use the 123-branch-name syntax, or use Koding. We do, however, use assigning and WIP MRs.
Code (IDE)
from ^ until MR created_at
We actively encourage people to create WIP MRs way before they finish development. Coding doesn't end when the MR is created, it ends when the WIP flag is removed, or the MR is assigned to someone other than the MR author.
Test (CI)
CI Run time (not related to others) I propose to base this on the default branch
Review (MR)
from MR created_at until MR merged
OR
from MR created_at until MR closed
As above, review doesn't start when the MR is created, it starts when the WIP flag is removed, or the MR is assigned to someone other than the MR author.
Staging (CD)
from MR merged until deploy to any environment (only count MR that are deployed)
Production (Total)
Total time from issue creation to production deploy (sum of the above)
Of course not everyone uses our exact gitlab-ce workflow, but it's not a strange one at all, especially since we actively use GitLab features like WIP MRs and assigning reviewers, and I think that most companies will not follow the current transition definitions to the letter. I think we should have more heuristics to detect a transition, or maybe even allow it to be configurable.
I think it's a bad sign if we're developing something that we would like to use because the functionality is awesome, but won't be able to because it's "overfitted" to a specific workflow.
from issue creation to existing on a board (having a list-label)
OR
from the first time being associated to a milestone
@JobV @sytses I think we need something more concrete, because in my experience there are a lot of things that would skew these numbers/measurements.
We often add the milestone early.
Maybe it's when an assignee gets assigned. Often I find that is the clearest indication that something is starting to be worked on... With a mixture of milestone and assignees you might get a clearer picture: if it has an assignee and the milestone is set, then you can be sure. Sometimes people assign themselves to things without milestones because they don't know our flow perfectly yet.
Just worried there will be a lot of little things that people will do to issues that could mess it up. I know because it happens now. How do we account for that?
I think it's a bad sign if we're developing something that we would like to use because the functionality is awesome, but won't be able to because it's "overfitted" to a specific workflow.
That's the strength and the weakness of building features that are made to be used in a variety of different ways. Perhaps our software is not opinionated enough then. To solve this problem for this new feature, we can either encourage people to use GitLab flow more, or let them customize their own definition of what Issue/Plan/Code/... should contain. The latter solution, although exciting, is also much more complex to put in place. So exciting though.
from issue creation to existing on a board (having a list-label)
OR
Changed: from the first time being associated to a milestone with an expiration date AND assigned to someone
Plan:
from ^ until branch_creation with a commit mentioning the issue (would work on GitLab CE if people mention the issue in the commit)
OR
from ^ until IDE started (Run on Koding in Issue)
Code
Was: from ^ until MR created_at. Has to be: from ^ until the WIP flag is removed. (You are right on this one @DouweM. The code phase does not end when the MR is created. It ends when someone can review it. The only way we can measure that is the removal of the WIP flag.)
Review (MR)
Changed: from the moment the MR is assigned to someone other than the MR author until MR merged
OR
New: from the moment the WIP is removed until MR merged
OR
Changed: from the moment the WIP is removed until MR is closed without merge
The more I think about it, the more this should be configurable.
The more I think about it, the more this should be configurable.
Let's try as hard as we can to avoid that. :)
Issue:
Changed: from the first time being associated to a milestone with an expiration date AND assigned to someone
I don't see how we can require it to be assigned to someone before it's considered planned. We even recommend leaving the assignee blank until someone is starting work on it.
Plan:
from ^ until branch_creation with a commit mentioning the issue (would work on GitLab CE if people mention the issue in the commit)
To be clear, it shouldn't just be commits that mention the issue, but any description or comment on the associated MR that says it fixes the issue.
Also to be clear, if someone adds the "fixes #X" later in the thread, we should retroactively count the time from the branch being created since that presumably most accurately reflects when work started.
Code
Was: from ^ until MR created_at. Has to be: from ^ until the WIP flag is removed.
More clarity: if the WIP flag was never present, then MR created_at should be used. That's sort of obvious as a degenerate case implying it was removed at creation. But the point is that we need to support flows that may or may not use WIP flags.
The only way we can measure that is the removal of the WIP flag.
Not exactly. If they're not using WIP, then the time between branch creation and MR creation would be the "code" time. For some flows, it won't be entirely accurate. e.g. some teams use labels to indicate when someone should review. Or they just @ mention someone to ask for a review. But for those flows, their code time will just skew shorter and their review time will be longer. Total cycle time will be accurate, and trends of each component changing over time will still be relevant.
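That fallback logic is simple enough to sketch (the field names here are hypothetical stand-ins, not GitLab's actual attributes):

```ruby
# End of the "code" stage: when the WIP flag was removed, or, if the
# MR was never marked WIP, when the MR was created. `mr` is a
# stand-in hash; field names are illustrative only.
def code_stage_end(mr)
  mr[:wip_removed_at] || mr[:created_at]
end
```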
Review (MR)
Changed: from the moment the WIP is removed until MR is closed without merge
If it's closed without merge, it's not included in analytics at all, is it?
from MR merged until deploy to any environment (only count MR that are deployed)
Hmm... the time spent in Staging would be the time after deploy to staging until it's deployed to production. Unfortunately, users are in control of their environment names, so not sure if we want to hard code any names there. But measuring until deploy to "any" environment feels very wrong.
Maybe we need people to declare which is their "production" environment; then we can measure from the first deploy to any environment until the deploy to production. We can include any QA, UAT, etc. environment as staging. But exclude Review Apps deploys from that.
I've implemented a basic set of heuristics (a mixture of the ones in the issue description and @DouweM's comment above). These are not intended to be final; it's better to get started with something than to wait for the final list of heuristics. I'll leave these as-is for now, until we all agree on that final list of heuristics for the first iteration.
More detailed progress information is in the MR description.
When we're filtering by date (Last 30 days, for example), how exactly is the filtering supposed to be done? I can think of two ways to do it:
Exclude all issues (and all other derivative data) created before 30 days ago, for all stages. If an issue was created 31 days ago, and an MR for it was created 29 days ago, the issue is not counted for any stage.
For a stage like "Issue", exclude all issues created before 30 days ago (since it is based on issue created_at). For a stage like "Review", exclude all merge requests created before 30 days ago (since it is based on MR created_at). If an issue was created 31 days ago, and an MR for it was created 29 days ago, the issue is not counted for "Plan", but the MR is counted for "Review".
Option 1 feels "right" to me, but I could be wrong. I'd like to hear your thoughts here!
How about filtering by MRs shipped to production within the last 30 days, then working backwards to get the related issues, whenever they were created, and using created_at only for time calculation, not filtering?
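Sketched out, that suggestion filters on the deployment timestamp alone (the hash shapes here are illustrative, not GitLab's actual models):

```ruby
SECONDS_PER_DAY = 24 * 3600

# Keep only cycles whose production deploy happened within the window,
# regardless of when the issue or MR was created. Hash shapes are
# illustrative only.
def cycles_in_window(cycles, now:, days: 30)
  cutoff = now - days * SECONDS_PER_DAY
  cycles.select { |c| c[:deployed_at] && c[:deployed_at] >= cutoff }
end
```

Cycles that never deployed drop out entirely, which matches the earlier point that we only count things that do get deployed.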
The more I think about it, the more this should be configurable.
@regisF I don't think that's a good idea. It'd make this harder to understand and review across projects.
@markpundsack I don't feel strongly about which labels are applied, but P1 is the highest we have.
@timothyandrew @markpundsack Interesting. If we'd only show what @markpundsack suggests, we'd be biasing all the times towards things that ship. What about things that don't ship? Are we not interested in those?
This would also imply that if you ship slowly, you won't see any statistics, unless the range is long enough.
(this is just to play devil's advocate; I can see the advantages as well)
Last 30 days means the things that went to production in the last 30 days. We don't measure anything that is not in production yet.
Production can maybe be defined as the last stage defined in .gitlab-ci.yml
I tried to simplify everything; it is not ideal, but I think it is better to keep it simple:
Issue (Tracker) from issue creation until given a milestone or list label (the first assignment of any milestone counts; a milestone date or assignee is not required)
Plan (Board) from given a milestone or list label until the first commit
Code (IDE) from first commit until the merge request is created (this might exclude coding time if you use WIP, so be it)
Test (CI) total test time for all commits/merges
Review (MR) from merge request creation until the MR is merged (closed MRs won't be deployed)
Staging (CD) from MR merge until deploy to production (production is last stage/environment)
Production (Total) sum of the above excluding Test (CI) time
@sytses I like the simplifications. Sure, it means we're counting it as planned when we put it in Backlog, but so what? That's a form of planning. But, it conflicts with our recently released issue boards. Moving something from the backlog to a column in the issue board is planning, isn't it? Thinking myopically about how I'm going to demo this, it doesn't make sense to show off the "Plan (Board)" step, but then also have to assign something to a milestone, out of the flow.
@markpundsack good point. By the way, I think the backlog should not be a milestone; the backlog is all items without a milestone. But to accommodate the issue board maybe we can change the following:
until given a milestone => until given a milestone or list label
Off the cuff, to embrace the issue board, we should replace our Backlog milestone with a label... we can't call it Backlog because we use that for unlabeled issues, so maybe Pending or Good Idea or something. (Or rename the left-most list, perhaps to Inbox; not for everyone, but rename at a project level).
Issue Boards should interact with Milestones beyond just filtering based on them. Like have a column tied to a milestone instead of a label. e.g. be able to drag something from the inbox to a Backlog milestone, to an 8.12 milestone, a 9.0 milestone, etc. It's a different use of the boards, but a good flow to support; eventually.
@hazelyang actually, after having taken another look at your designs, I think the blank state should also indicate what Cycle Analytics does. That way, for first-time users who have no issues in their projects, the blank state will be an opportunity to educate them on the goal of the feature.
We could have a section on the blank screen saying something like (totally open to help with the wording of this message):
Cycle Analytics gives an overview of how much time it takes to go from an idea to production in your project.
@markpundsack can you elaborate on why we should put this under Pipelines? I don't understand the reasoning behind this, as it's an overview of the entire project, not specifically something related to CI.
Primarily it's so it doesn't take another top-level menu item. Open to suggestions for better placement. Renaming Graphs to Reports and putting it there could make sense. Or heck, maybe even without the rename. But I have the sneaking suspicion that putting something under Graphs is similar to sweeping it under a rug; nobody looks there. Pipelines isn't a great fit, but if you're using CI/CD, then at least you're using it, and you kinda need to be doing CD for the cycle time analytics to have meaning.
@regisF @hazelyang Good points about the empty state requiring more description. A realistic situation would be that people have issues, merge requests, etc., but they haven't set up deploys to environments, thus their report would be totally empty since nothing will have gone to "production" yet. You might want to focus the help in that direction.
@markpundsack about the positioning of this feature.
I think renaming Graphs to Reports makes sense because
We will need, in the near future, a place to store a bunch of reports: time tracking if we do it, code analytics reports, advanced analytics, ...
Graphs are reports.
I'm worried that if we place this under Pipelines, it'll look bad to change the place of this feature next month because we will have a report section by then. Perhaps it's not a big deal; what do you think @markpundsack @JobV ?
I know it's a bet on the future, so it's perhaps not ideal.
@hazelyang We need another thing from you. The cycle analytics will load in JS. That means that, after page load, we'll have a loading symbol in place of the cycle analytics. Can you do something for this? Unless we simply use the default loading symbol we have, but it does not look that pretty to me. Thanks!
@regisF I'm fine with renaming. But, I'll also point out that I've complained elsewhere about the Graphs tab being kind of a crappy place to put stuff. People don't think, hey I want to look at graphs today, they have some job to be done, and the graph may help them complete that job, but likely, they'll have gone through some other path first. e.g. I'm looking at commits and contributors, then want to see a graph of contributor trends. We haven't really embraced that, and putting cycle analytics in Pipelines may not address the job-to-be-done any better anyway. But wanted to point it out. What do people want to accomplish when they look at this report? Maybe it's more closely related with issue boards...
@hazelyang actually we have two options for loading data:
either we load the whole view in JS. So after page load, we display a loading symbol covering the entire cycle analytics
or we have loading symbols in every row, waiting for data to load.
I'm unsure about what the best solution is in this case. @fatihacet, is the data in each row loaded individually, or will you receive all the data at once?
In any case, what do you think we should do UX-wise, @hazelyang? Then please provide a design for this.
I tried having loading symbols in every row, but it looks a bit busy on the page, so I think we can load the data in Pipeline Health first and then load the rest of the data in the second block. (I am not sure if this works well from the technical perspective.)
Graphs tab being kind of a crappy place to put stuff
@regisF @markpundsack let's try to deprecate the graphs page as soon as possible. There's no point in collecting all graphs together. Information should live near its context, graph or not.
@marcia: We don't have a staging environment set up right now. I've started performance testing by generating mock data, and we'll probably set up a staging-like environment to test it on once the performance looks acceptable with the mock data.
@JobV @regisF @markpundsack are we gonna ship it under Pipelines > Cycle Analytics eventually? I don't feel this is the right place, and moving it to another location in 1-2 months is a bad user experience.
If we deprecate the Graphs page as we want to in the future, it will make sense to relocate Cycle Analytics at the same time. A kind of header cleanup.
We don't have anywhere else to put it at the moment. Removing the Graphs tab right now is way too much work. And we can't put cycle analytics in the header directly, we already have a lot of options.
For some projects that already have data, we won't show the blank state, and people won't have any indication of what Cycle Analytics is.
Therefore, I propose to add something like this at the top of the Cycle analytics page in the case where we don't display the blank state. This message can be hidden forever by the user.
@axil Valid concern about putting it in one place, only to move it a few months later. I can't see any better place to put it currently, though, so I'm not sure we have much of a choice. Putting it under Pipelines isn't bad. It only works if you've set up Pipelines and Environments, so it's not as illogical as it sounds.
The current call to action "Set up" is unclear to me. What happens when that button is clicked @hazelyang? The copy should reflect what is going to happen.
For now, I think we can place it under pipelines. Eventually I think it would be nice if we revamped the project tab to be more of a smart dashboard (https://gitlab.com/gitlab-org/gitlab-ce/issues/19734#note_15571220) with useful information rather than just the readme (which is duplicated from the repository tab). Perhaps cycle analytics would be a good aspect of this since it provides a good overview.
@tauriedavis we decided to remove the Setup call to action entirely from the blank state. We don't have any call to action now. That being said, I'll ask to put a "Read more" button in the blank state, so we can link to the documentation.
@hazelyang @tauriedavis if I align the analytics values to the right and we don't have available data, I think those dashes - are aligned too far to the right and don't look good. Compare the screenshots below. Let me know what you think.
a) what exactly does step 1 “Issue” measure (Time before an issue gets scheduled)? It’s the time between creating an issue and:
adding a due date for that issue?
adding a label to the issue?
assigning it to someone?
creating a list in the issue board related to the label given to that issue?
b) what exactly does step 2 “Plan” measure (Time before an issue starts implementation)?
the time between step 1 (whatever the right answer above is) and pushing the first commit? Right?
What if that commit does not contain fixes #xxx? Will this be taken into account when creating an MR that closes #xxx?
What if the MR does not contain closes/fixes #xxx? The process will not be tracked by Cycle Analytics?
UPDATE: another question: what if that commit closes an issue that lives in another project? How can we add the fixes/closes #xxx tag to the commit message? With the full URL to that issue? Example: an MR submitted to www-gitlab-com closes an issue for a blog post described in blog-posts.
c) What exactly does step 4, “Test”, measure (total test time for all commits/merges)?
Is it the time CI takes to run the whole pipeline in all branches except master?
Or is it related specifically to the stage: build and stage: test jobs defined in the CI configuration?
A Deployment record is created in the database, with an associated Environment named production. EDIT: specifically, the first deployment that's made for the MR's target branch after the MR is merged in.
If this is related to the production environment specifically, what if we don't have any environments set up for our project?
a) It's the time between creating an issue and either one of these conditions (whichever comes first):
the first time a milestone was added to the issue
the first time a list label was added to the issue
b) It's the time between the condition above and the first time a commit is pushed mentioning the given issue. If the commit does not contain the "fixes #xx" string, it is not considered. I haven't tested the case of the issue being in a different project, but my hunch is that it will work as expected - I will test this and get back to you, though!
c) The start->finish time for all pipelines. master is not excluded. It does not attempt to track time for any particular stages.
d) The time between an MR being merged, and the very next deployment to production. If we don't have a 'production' environment, this is not tracked.
Please let me know if I can make any of this any clearer, or if you have any more questions.
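The stage boundaries described above can be sketched roughly as follows. All field names here are hypothetical and purely illustrative; this is not GitLab's actual schema or implementation:

```python
from datetime import datetime

def stage_durations(events):
    """Compute per-stage durations for one <issue, merge request> pair.

    `events` is a dict of hypothetical timestamps; every field name is
    illustrative, not GitLab's actual data model.
    """
    return {
        # Issue: creation until first milestone or list label
        "issue": events["first_associated_at"] - events["issue_created_at"],
        # Plan: milestone/list label until the first mentioning commit
        "plan": events["first_commit_at"] - events["first_associated_at"],
        # Code: first commit until the merge request is created
        "code": events["mr_created_at"] - events["first_commit_at"],
        # Review: MR creation until the MR is merged
        "review": events["mr_merged_at"] - events["mr_created_at"],
        # Staging: merge until the next deploy to the production environment
        "staging": events["production_deployed_at"] - events["mr_merged_at"],
    }
```

Each value is a timedelta for a single pair; the per-stage summary is then taken across all pairs.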
Thanks @timothyandrew! Yeah, a few more questions, if I may:
a)
the first time a list label was added to the issue
What exactly is a "list label"? Is it a label that has already been added to an Issue Board list?
b)
If the commit does not contain the "fixes #xx" string, it is not considered.
Will it not be considered for this stage specifically, or not be considered at all? What if I commit multiple times and create an MR that closes/fixes #xxx? Will those commits to that feature branch in the MR count then? Or not? How is this handled?
d)
If we don't have a 'production' environment, this is not tracked.
Hm. So every project must configure a production environment to make this stage work? Or to have any data tracked by Cycle Analytics?
Okay, a couple more questions:
Is the median taken stage by stage? For all 7 stages?
When you say "this is not tracked" or "not considered", is the stage not tracked, or the whole cycle?
If I understood correctly, anything "loose" won't be tracked at all. Can you please confirm this? By "loose" I mean: an MR that doesn't close any issue, an issue without a label/milestone, or a project with no production environment won't present any data at all. Is this right? Sorry if this is redundant; I need to make sure we're describing Cycle Analytics accurately.
What exactly is a "list label"? Is it a label that has already been added to an Issue Board list?
Yes, exactly.
Will it not be considered for this stage specifically, or not be considered at all? What if I commit multiple times and create an MR that closes/fixes #xxx? Will those commits to that feature branch in the MR count then? Or not? How is this handled?
Won't be considered for this stage. Even if the MR has "Fixes #xx" in the description, the commit will not be counted, because the heuristic for the stage (currently) only looks for commit messages.
So every project must configure a production environment to make this stage work? Or to have any data tracked by Cycle Analytics?
At the moment, yes.
Is the median taken stage by stage? For all 7 stages?
Yes, we take a separate median for each stage.
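A sketch of what "a separate median for each stage" means in practice. This is illustrative only, not GitLab's actual implementation; the input shape is a hypothetical list of per-pair duration dicts:

```python
def median(values):
    """Median of a list of numbers; None when the list is empty."""
    if not values:
        return None
    s = sorted(values)
    mid = len(s) // 2
    if len(s) % 2 == 1:
        return s[mid]
    return (s[mid - 1] + s[mid]) / 2.0

def stage_medians(durations_per_pair):
    """durations_per_pair: a list of {stage_name: seconds} dicts, one per
    <issue, merge request> pair. A pair that is missing a stage (e.g. no
    production deploy yet) simply doesn't contribute to that stage's median.
    """
    stages = {}
    for durations in durations_per_pair:
        for stage, value in durations.items():
            stages.setdefault(stage, []).append(value)
    return {stage: median(vals) for stage, vals in stages.items()}
```

Because each stage is aggregated independently, a slow outlier in one stage doesn't distort the medians of the others.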
When you say "this is not tracked" or "not considered", is the stage not tracked, or the whole cycle?
This depends on the case. For the commit message, it only pertains to the stage. For a merge request's "Fixes #xx" message, it pertains to the cycle.
Let me explain how this works behind the scenes - maybe that'll make it clearer:
We group issues and merge requests together in pairs, such that for each <issue, merge request> pair, the merge request has "fixes #xx" for the corresponding issue. All other issues and merge requests are not considered.
For the remaining <issue, merge request> pairs, we check the information that we need for the stages, like issue creation date, merge request merge time, etc.
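The grouping described above could be sketched like this. The data model is hypothetical (in particular the `closes_issue_iids` field, which stands in for however the closing relationship is detected); GitLab's real query works differently:

```python
def pair_issues_with_mrs(issues, merge_requests):
    """Group issues and MRs into <issue, merge request> pairs.

    Assumes each MR dict carries a precomputed set of issue iids it
    closes (hypothetical field `closes_issue_iids`). Anything "loose"
    -- an MR that closes no issue, or an issue closed by no MR --
    simply drops out of the result.
    """
    by_iid = {issue["iid"]: issue for issue in issues}
    pairs = []
    for mr in merge_requests:
        for iid in mr.get("closes_issue_iids", ()):
            if iid in by_iid:
                pairs.append((by_iid[iid], mr))
    return pairs
```

Only the surviving pairs are then inspected for the timestamps each stage needs (issue creation date, merge time, and so on).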
If I understood correctly, anything "loose" won't be tracked at all.
Yes, this is right!
Please keep the questions coming, if you have any more!
There are some permission issues. While the project is public and the visibility is set to "Everyone with access", you cannot visit the Cycle Analytics page when signed out.
If people have disabled public viewing of CI/CD Pipelines ("Builds" in the settings currently) then perhaps we should disable public viewing of Cycle Analytics.
As we track environment: production for the last stage, does that mean the "test" stage will track the time to run every script in GitLab CI, except the job(s) related to the production environment? Like the script here:
We group issues and merge requests together in pairs, such that for each pair, the merge request has "fixes #xx" for the corresponding issue. All other issues and merge requests are not considered.
the "test" stage will track the time to run every script in GitLab CI, except the job(s) related to the production environment, correct?
@marcia: A small correction: the "test" stage will track the time to run every build on CI, for every build run after the "closing merge request" is created.
So we don't take into account whether a user has set up other keywords for closing issues?
@axil: We hook into the same issue closing code that is already in place, so this should work. I'll test it and confirm, though.
Edit: Confirmed that this works as expected.
Edit 2: I just realised that my original comment ("the merge request has "fixes #xx" for the corresponding issue") was confusing. I only intended "fixes #xx" to be an example there. To be clear, we count all merge requests that close issues, in any of the supported ways.
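For reference, here is a simplified version of the kind of closing pattern involved. This is an illustration only, not GitLab's exact default regex, which supports more forms (comma-separated issue lists, cross-project references) and can be overridden by admins:

```python
import re

# Simplified issue-closing pattern: matches e.g. "Closes #12",
# "fixes #3", "Resolved #7". Illustrative only; GitLab's real
# default pattern is more permissive and configurable.
CLOSING_PATTERN = re.compile(
    r"\b(?:clos(?:e[sd]?|ing)|fix(?:e[sd]|ing)?|resolv(?:e[sd]?|ing))\s+#(\d+)",
    re.IGNORECASE,
)

def closed_issue_iids(text):
    """Return the issue iids a commit message or MR description closes."""
    return [int(m.group(1)) for m in CLOSING_PATTERN.finditer(text)]
```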