This includes the aggregate Conversational Development Index.
Design
Colors
Colors apply to the individual metrics (the boxes), and also the convdev index score itself, and the I2P icons.
[0, 33.33): Red
[33.33, 66.66): Orange
[66.66, 100]: Green
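The bucketing above can be sketched as a small helper. This is a minimal illustration, not GitLab's actual implementation; the method name and color strings are assumptions:

```ruby
# Map a metric score in [0, 100] to its display color, per the ranges above.
# Hypothetical helper for illustration only.
def score_color(score)
  if score < 33.33
    'red'
  elsif score < 66.66
    'orange'
  else
    'green'
  end
end

score_color(10)    # "red"
score_color(50)    # "orange"
score_color(90)    # "green"
```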
Top information appears on first visit until the user dismisses it:
View without top information:
Hovering over an item in the bottom timeline graphic expands the graphic to show the I2P stage name. The color of the icon reflects the average of the features it relates to, per the color scheme above. The per-stage averages are:
Idea - issues
Issue - average of issues and comments
Plan - average of milestones and boards
Code - merge requests
Commit - merge requests
Test - pipelines
Review - average of pipelines and environments
Staging - average of environments and deployments
Production - deployments
Feedback - average of monitoring and service desk
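The stage-to-feature mapping above can be expressed directly. This is a sketch under the assumption that each feature already has a numeric score; the hash keys and method names are illustrative, not GitLab code:

```ruby
# Per-stage averaging, mirroring the list in this comment.
# Feature score values are hypothetical inputs.
STAGE_FEATURES = {
  'Idea'       => %w[issues],
  'Issue'      => %w[issues comments],
  'Plan'       => %w[milestones boards],
  'Code'       => %w[merge_requests],
  'Commit'     => %w[merge_requests],
  'Test'       => %w[pipelines],
  'Review'     => %w[pipelines environments],
  'Staging'    => %w[environments deployments],
  'Production' => %w[deployments],
  'Feedback'   => %w[monitoring service_desk]
}.freeze

# Average the scores of the features a stage relates to.
def stage_score(stage, feature_scores)
  features = STAGE_FEATURES.fetch(stage)
  features.sum { |f| feature_scores.fetch(f) } / features.size.to_f
end
```

For example, with `issues` at 80 and `comments` at 40, the Issue stage icon would reflect a score of 60.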
The timeline graphic will only appear on larger screens. Columns wrap at smaller screen sizes, so the timeline graphic would no longer align with the features correctly.
@regisF : Do you know if there are any legal considerations to using usage ping data like this? It is aggregated and totally anonymized. But do we need to update our copy in the usage ping UI? Who should review this?
@smcgivern: For this, I see a few net new flows and processes. Any immediate concerns / blockers?
We would need a system to aggregate and crunch data. We already have version.gitlab.com that's collecting usage ping data. I suppose we would leverage that to do the work, and it would perform it on a schedule somehow?
The individual on-prem EE instances would need to get this data from us (from version.gitlab.com I guess)? Would this be a problem? I suppose we could send back data to the EE instances as part of the usage ping flow? Like piggybacking off that ping as a response back to the instance with updated cohort data?
These calculations seem pretty straightforward. We could calculate them on the instance or on version.gitlab.com right? If we start adding more functionality to version.gitlab.com, and start having more traffic hit it for more features, should we start planning for that now? What are the next steps in terms of architecting that?
@ernstvn @JobV @sytses : This data seems to be already readily available in some format on version.gitlab.com. (Might have to clean up some usage pings along the way.)
Have we considered sending an email communication with this information to our customers, at least in the short-term, as we build out the infrastructure and features? That would allow us to iterate on the "feature" itself and get good feedback earlier, and would be I think great from a product development perspective.
Proposal:
Establish metrics we want to share with our customers. Create these reports. Largely manual process with lots of SQL queries.
Work with marketing, sales, and/or support teams to send these reports regularly to our customers. We can piggyback off of any existing communication channels. Let them know we are building something into the product itself and ask for feedback. We can market this initiative as "free consulting" or however it should be branded (e.g., your monthly "GitLab Cohort Analysis Report").
Iterate on the above, with more metrics and automating different pieces of the process.
Use the learning above to build into the product iteratively.
This content could be recycled into other marketing materials like blog posts and webcasts.
What is "leader"? Is that just the highest among all customers who are sending data in?
What is "score"? Is that already a percentile (per the slides), or is that some other metric?
@victorwu , these are up for more rigorous definition, but indeed the leader would be highest (or best, e.g. shortest time) score from amongst the customers that send in data. The score is - in my mind - a percentile; but in any case should provide some sense of how you're doing w.r.t. the leader.
Have we considered sending an email communication with this information to our customers, at least in the short-term, as we build out the infrastructure and features?
@victorwu I don't like the sound of that... it seems like it would be a lot of work compared to initially just making the metrics of one's own instance available within one's own instance. But we can probably crunch some numbers on version.gitlab.com and have those crunched numbers display in a public manner; that would remove the manual SQL queries and remove the sending out reports...
I, however, don't see the purpose of comparing the usage of features between customers, from a customer point of view. What benefits would the customer have from knowing that the industry in general, has 26% more comments made on issues, or uses 67% more issue boards? I get and like the idea of indicating improvements here and there(like "You don't seem to use builds much, here are some tips"), but I don't think comparing to others is how we should influence them.
They should want to use GitLab more because of the positive impact it could make on their company, not to be on par with an industry standard.
@regisF : The metrics on how to achieve the above are yet to be defined in detail. I agree that the scenario you laid out is not helpful. But we should be able to establish some metrics that are useful in achieving the business goals I mentioned (https://gitlab.com/gitlab-org/gitlab-ce/issues/30469#note_26986758).
@ernstvn : After the discussion, I have further clarity:
Help customers see how they are doing with respect to themselves. This has to be built into the product because we have the most flexibility to build visibility into the data on their own instances. Everything is self contained. Let's focus on this first, iterating on top of https://gitlab.com/gitlab-org/gitlab-ce/issues/29551.
Help customers see how they are doing with respect to each other. This is more complex since GitLab has to aggregate the data from many different customers, and then disseminate it back to the customer. I'm not sure what's the best way to do this. But there might be an opportunity to build this outside of the product in parallel while we communicate the same information through email. The technical analysis wouldn't be wasted because it would be rolled into the product eventually. Let's worry about this later and focus on the first item.
I still don't really understand the business problem. Is 'using all the features of GitLab' really a customer goal? I see why it benefits us, but I don't really understand how it benefits the customer. For instance, I think GitLab issues are great - but many people can't use them, or won't use them, because we don't support something like custom fields, or because the rest of their company uses a single issue tracking system, or whatever. If I'm a customer in that situation, what benefit do I get from being told that other people are using GitLab issues? Similarly, if a team prefers bigger issues with task lists to smaller issues (as we don't support issue relationships), why is it a benefit to them?
It seems like:
Help our sales team see the above information.
Is actually the real goal, but it's just thrown on last, like it's no big deal. But that's what we actually want here.
To me this feels like putting quantitative data beyond where it can reasonably go, if the goal is to make people use GitLab better. If the goal is to help our sales team, then fine, but let's be direct about it, in line with our values.
Sid Sijbrandij changed title from Cohort analysis to Conversational Development Index
Victor Wu changed title from Conversational Development Index to Conversational Development Maturity
@JobV : Thoughts on that one big number at the top of the view? My thoughts + proposal:
It should be big, prominent, and memorable, so that a customer can see it and share with their organization, and a sales person can also use it in communications.
It should not be a ranking. Even though it is based on comparative analysis, we want this to imply objectiveness against some standard. So an absolute number is better.
It should not be a percentage. Because percentage implies some type of fraction / partial-ness. And we don't want to mislead customers to think this represents anything like that.
So we probably don't want to use out of 100 because it reminds people of percentages.
So it should be some score or rating.
A score out of 5 causes people to think of Yelp or general human reviews. So that's no good.
So it seems something out of 10 is what remains.
Proposal:
Take a set of metrics we care about. (All of them?)
For each metric, divide the customer value by the best value. The result is a number in [0, 1].
Average all these results over all metrics.
Multiply the result by 10 to get a number in [0, 10].
Just based on math, most customers will get a really low score. But then we also show their relative ranking next to that big number. So that helps convey that you may have a low score, but may not be that far off from the best.
Another strategy to bump up the rankings is to round the final number to some precision. Then you will get a lot of ties in your rankings. And so you can say that you are ranked 20th out of many, many customers, but that there are really like 50 customers all ranked 20th, and the next customer would be ranked 71st.
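The proposed scoring could look something like this. A minimal sketch, assuming each metric has a customer value and a leader ("best") value; metric names and values are hypothetical, and the rounding precision is an assumption:

```ruby
# Proposed index: per metric, divide the customer's value by the leader's,
# average the ratios, scale to [0, 10], and round to create ranking ties.
# Illustrative sketch, not GitLab's actual calculation.
def convdev_index(customer, leader, precision: 1)
  ratios = customer.map { |metric, value| value / leader.fetch(metric).to_f }
  (ratios.sum / ratios.size * 10).round(precision)
end

customer = { issues: 30.0, boards: 1.0 }
leader   = { issues: 60.0, boards: 4.0 }
convdev_index(customer, leader)  # ratios 0.5 and 0.25 average to 0.375 -> 3.8
```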
I still don't really understand the business problem. Is 'using all the features of GitLab' really a customer goal? I see why it benefits us, but I don't really understand how it benefits the customer. For instance, I think GitLab issues are great - but many people can't use them, or won't use them, because we don't support something like custom fields, or because the rest of their company uses a single issue tracking system, or whatever. If I'm a customer in that situation, what benefit do I get from being told that other people are using GitLab issues? Similarly, if a team prefers bigger issues with task lists to smaller issues (as we don't support issue relationships), why is it a benefit to them?
GitLab offers something unique: a single place to go from idea to production and back to idea through monitoring. You're used to many of the powers that GitLab gives you now, but for many of our customers the world is still all about having many different applications, which do not integrate well, which require separate authentication, which do not share information yet duplicate so much, which all look, feel and work differently.
GitLab removes the thresholds of these steps and allows anyone to see and contribute to everything between idea and production. That's a super power that makes shipping faster - reducing the cycle time. This makes lives of devs, managers and executives easier, as everything happens in a single conversation. And we even show you this through cycle analytics.
But how would a customer know this? Sure, we can tell them, but what is more convincing than that? It's showing them. Showing them that their peers are doing better by adopting more of GitLab's tools. Showing them that some of their teams are working better because they've adopted more. etc.
We want to give organisations the power to see how they are performing, show them tools that will help them improve and give them actual data on how others have used this to do better than them.
Sure, not all of their needs might be met today, but let them try it and request those features. Let us build them and before you know it, it's not us competing with others - it's other trying to do what GitLab has done.
@JobV thanks, I appreciate the response. I want to be very clear that I'm not trying to be obtuse here, but I genuinely don't see how these statistics are:
Showing them that their peers are doing better by adopting more of GitLab's tools. Showing them that some of their teams are working better because they've adopted more. etc.
Unless we mean that they already know that their peers are doing better, and then they find out that their peers are using GitLab more. But then we seem very much a hostage to the possibility that their peers are doing better despite using GitLab less, or that they are doing better than their peers even though their peers are using GitLab more
Like I said, I think teams would benefit more from using more of GitLab's features - I know we would! But I don't understand the direct connection between this data and team performance that is presented here.
@smcgivern it's fair to say that if we aren't able to make teams ship more successfully by using our features, we should be doing something to change that situation. That, and we might not be able to visualize it with incomplete data.
@JobV sure, my concern is just that there are too many confounding variables across different users of GitLab, so that it's hard to prove what I believe, that GitLab makes teams ship better software. Within a team, sure, I would expect to see the effects you mentioned
I worry about the 'embedded in GitLab'. I'd prefer it if we'd just replicate the view to version.gitlab.com rather than having the customer work through an iframe of some sort and needing to see a page within a page. Might also be harder and slower than a simple json body loaded in the UI of GitLab from version.gitlab.com
remove the CTAs like "create a board"; I don't think that makes sense when we have no idea of scope / namespace
I think rank should be a single number without comparative. Won't look good to see 1/3 for the first users and gives you nothing of importance back and reduces our options in terms of showing you a particular rank (e.g. if we want to switch to showing it for similar companies or anything else). We also don't want to share all our customer numbers all the time.
not sure about the status bars, as this is not something that automatically fills up over time or something that progresses. A single indicator showing whether something is low, missing, high, on track, or good might actually stand out more.
I think what can be better is focusing on the big picture first, then iterating towards detail. For instance, having a big overview of what you're doing well and what you're not will enable you to explore the (eventual) detail that is below it. A quick example:
Then from those big lines, we could move forward with the detail as you showed in later iterations, allowing one to see what parts of their index could be improved. We could even float badly performing parts to the top with concrete examples and actionable content. E.g. if CI isn't used, instead of just "Learn more", it should say "Users with CI deploy more often and build more resilient software; learn how to start using GitLab CI" in a line somewhere.
@JobV : I like your idea of focusing on big picture and using t-shirt sizing. It definitely aligns with the audience. See the description of the latest design:
I removed the ranking. The problems you mentioned are not worth the effort to get a version of it at the outset.
I'm showing the actual number for each feature usage metric. That's a well-defined number that the audience understands. So it makes sense to show it, as opposed to an index.
For a given metric, the audience should glance at the color/t-shirt size. And then if they care, they can look at the specific number and see it compared to the "best" number. This allows for comparison, but we don't need to give away granularity details of rank or percentiles for now.
For the entire view, there's an index value out of 10. That is an index, but it also has a color.
@victorwu I think the current mockup is missing a fundamental part of this. It's the flow from left to right in convdev. I think that is not clear at all from your mockup.
We have a sum, a total index (btw, I'd remove "Maturity"), which is the result of a number of other factors.
Visually, intuitively, this has to be clear from our design, and that isn't the case currently. See my proposal above; I think that should be the start of this and everything should flow from there, so that if I see this simple little table (as seen in the slide), I immediately SEE where I have to improve. Not after scrolling through a longer page.
With vertical, you can still see everything without scrolling. You can include UI to the right, because the text flows horizontally, so this works very well per the existing design.
You can certainly fit 10 rows without scrolling. So you can see the entire screen right away.
This design is scalable in the future when you have multiple rows, to group them, so you can still see a summary or highlights without scrolling.
Vertical is better for mobile.
With horizontal, it is difficult to include other UI with each item, and more difficult to display links to other pages/information. One design is to have horizontal, with it serving essentially as a top navigation with more info below. When you click on an item, you get different information/UI. This has the negative of not presenting everything at the same time.
I think the only benefit of horizontal is the inherent left-to-right metaphor of idea to production of ConvDev. I think that's pretty important. So I think it overrides the other concerns, and we should use horizontal and try our best to make the other pieces work with that. Thoughts?
@victorwu I agree with the problems of a vertical layout, but my worry is that your current mockup is less clear than the one presented in the slides. You can imagine a combination:
Having this flow on top and details below. In the one in your comment, I worry that requiring a click is hiding information too much. Maybe just have a few simple things below each box (a simplification as you saw on the slides) and have a full detail section below.
We can optimize it so that for most screen sizes, you can see everything in one horizontal line. The design is nice for responsive layouts. We just wrap each section as needed. And on mobile with a thin screen, it would all just stack.
Not sure what to do with that idea-to-production/feedback graphic there. I'm sure UX can help us design something that's not that ugly, but helps drive home the metrics are laid out horizontally and in accordance with the desired left to right flow. Maybe something more subtle but still communicates the flow.
We just had a call with @sytses, @victorwu, @regisF, and @klawrence. We chatted about the problem we are trying to solve and how this is what our customers desire. We reviewed the existing design. As for next steps:
@victorwu will finalize the issue scopes by next Monday (April 24) for @sytses 's review. In particular:
Separate issues
ConvDev Index page inside GitLab with static leader score. No new data flows required. Static score generated from data in version.gitlab.com. (This issue)
Has anyone from Production been included in this discussion? Keep in mind that version.gitlab.com is currently a single VM hosted on AWS, so it's a bit of a snowflake and not at all prepared for a substantial increase in load, as far as I know. So when we say things like
The metrics are calculated on version.gitlab.com and returned back to the customer's GitLab instance to display.
without even talking to Production first, and moving straight to implementation, we're setting ourselves up for trouble.
But the plan is not to calculate those on every view: it's to return them in response to the usage ping. So this won't lead to an increase in requests to the version app, although processing those requests may take longer.
I didn't want to waste time coming up with an illustration so I'm using the Cycle Analytics graphic as a placeholder. I think @hazelyang could come up with something much better and faster than I could that represents the full development cycle.
Like Cycle Analytics, the message is dismissible since it is not important after viewing once.
The reason I did not include all columns in one row like the wireframe is because they would either be really thin and difficult to digest or the page would scroll horizontally which would be difficult to get an overview from. This way, the design can also be responsive and work at all screen sizes.
@tauriedavis I think what lacks here is the idea of it being a single "flow", which is why the mockups all insisted on the horizontal approach. Do you think we could introduce something that'd almost gamify having all these green / showing more of a flow or showing that these things together form the overall index score?
What is the importance of representing a single flow? How does that help the user get an overview of features, discover what they aren't using, and learn best practices? I'm not sure why it is necessary to show it as one flow.
As for gamification, I think we would need more data to provide call to actions. For example, we could rotate CTAs like Create a board in [Project] to increase your overall index score by #%
I get that is how I2P moves, but why is that a design restraint? What does it provide the user? It seems like there are other mediums to explain and sell the flow.
Restricted to our container size, 10 rows would look like this at the max screen size (and this is even using less padding than we usually use between columns). They would quickly wrap at anything less.
Alternatively, they could use the full screen width (expand outside of the container) and scroll horizontally like issue boards. But it does not feel more important to show a flow than to show an overview in this context.
@tauriedavis : That looks great. I would say wrapping for smaller screens is fine, and so vertical scrolling is better than horizontal scrolling (as in issue boards).
The horizontal design is the preferred layout if at all possible, with the branding of I2P. So when you load the page, at a glance, you see the association with I2P. In the wireframe, we made that association really explicit where we just copied the homepage graphic. But perhaps there can be something more subtle and creative to make that association. Horizontal helps, and maybe some other arrows / graphics, etc.?
I've quickly added some of the UI to help demonstrate why a horizontal layout does not work. Using our base font size, the content does not fit. "Environments" and "Deployments" are a great example of this.
This layout also does not provide a means for adding to the I2P flow over time. At GitLab we are continually improving the flow by adding features that help improve the process. If we add more, do we decrease the width again? It simply isn't scalable.
For the graphic, I think that can be there when the user first comes to the page and is something that can replace the launch placeholder graphic. I'm not sure it is something the user needs to be reminded of every time they want to view their feature usage. But we can certainly keep it there if it's helpful to sell the message.
I can work on adding some sort of arrows that show the flow if that is necessary to product.
@tauriedavis : Thanks. We're starting with just 10 metrics so that it is glance-able, and like @JobV said, it can be a quick gamified view. Not sure we will ever go beyond 10, because we want something that is glance-able even over time. So we might swap out metrics or just have different pages of metrics.
Would it be really stretching it to simply shrink the font sizes as necessary to make it fit?
Thanks @tauriedavis : What do you think works best? The goal of the page is to allow the user (who is a senior manager / executive) to at a glance, see how their GitLab usage is performing, and within an I2P context, and quickly zoom into the problem. The horizontal flow immediately gives you that I2P context.
I think the horizontal view is the easiest way to do all this. So maybe even just using an ellipsis to cut off section headings would be worth it. But if you really think the horizontal view is bad, can you explain why, and how https://gitlab.com/gitlab-org/gitlab-ce/issues/30469#note_29129481 or a variation of it could better achieve the goal of the page?
+1 for even trying to reduce it further since this is supposed to be a glanceable thing... and the one word titles of the tiles aren't so clear... Maybe time to condense the I2P steps into 3-5 steps for easier conversation?
Yeah, I agree with @ernstvn. I'm not sure 10 is very glance-able (even within one row).
From these comments it sounds like there are two goals of the product.
Provide the following to the user:
- Customers view how they are using GitLab from a feature perspective.
- View how they compare with other organizations, and see how far they are away from a better metric.
- Discover features that they are not already using, or using incompletely.
- Learn about best practices by visiting relevant blog posts and white papers.
- Self-service inside their GitLab instance.
Communicate the idea of I2P by providing a glance-able overview in a single flow
The design I proposed speaks towards the list provided in the description and shows customers how they are using features, how they compare with others, whether or not they are using a feature yet, and the ability to learn through blog posts/white papers.
Truncating the name of the feature seems like the opposite of the intention of this product - which is to provide insight into how customers are using features.
I will work on another design that represents the flow. I'm just trying to understand how showing a single flow benefits the user or helps them use this product.
Impact on the production server of version.gitlab.com:
As Sean said, we will perform the calculations only when receiving the usage ping. In the first iteration we will use static data for the leader score, so the only thing we have to get from the database is the usage ping records for one uuid. Considering that the usage ping happens weekly and that the time window for calculations is 35 days, it won't be more than 6 records to fetch from the database. We have to add an index on uuid.
We received 26371 usage pings in the last 7 days and they are evenly distributed. That's 2.62 requests per minute. I expect the impact of the changes to be negligible.
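The figures above can be sanity-checked with quick back-of-envelope arithmetic (both numbers are taken from the comments above):

```ruby
# Sanity-check of the quoted request rate and record count.
pings_per_week   = 26_371
minutes_per_week = 7 * 24 * 60                                  # 10,080
requests_per_min = (pings_per_week / minutes_per_week.to_f).round(2)  # 2.62

# A 35-day window with weekly pings holds at most 6 records:
# 5 full weekly intervals plus the boundary ping.
max_records = (35 / 7) + 1                                      # 6
```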
Thanks @tauriedavis . I know the design is not easy for this one. If you can take one more stab at it, that would be great. We already have a few versions from this comment thread that you've created. We can pick the best one.
@tauriedavis if you think a vertical design is better I'm open to that.
@victorwu Some suggestions from someone that is very experienced in this field.
Based on feedback from multiple customers, we're migrating the index from "feature usage" to "use cases". Use cases vary by customer and are thus harder to track. However, it is the job of Sales (when selling to a new customer) and the Customer Success team (for existing customers) to keep tabs on customers' use cases and how these evolve over time. This change would be a better enabler of this conversation, and would ensure that we're constantly looking at things from the perspective of the customer.
From a purely internal perspective: When viewed from the perspective of use cases, and compared with the relevant "Leader" within the same space, there is a more meaningful conversation about potential upsell and cross-sell.
The Leader needs to be identified with care; customers often ask us the criteria for selecting their group. Typically we go with a standard industry or sub-industry, with some size criteria (so that large firms are in the same bucket).
A time series of the overall index (or a sub-index) tells a story, whereas a number at a point in time is more static.
Let's not do it in this iteration, but I once saw a screen in another product that showed usage and ROI. It would show you the number of FTEs saved. I thought it was really clever.
I've come up with a horizontal version; I placed the columns closer together to make room for copy.
First time viewers see the top section until they dismiss it:
Without top section:
The bottom "timeline graphic" shows the icons of each I2P step near the features it relates to. Hovering will show the state name, and the icon is the color of the median percentage of the features that relate to it (if possible).
I imagine the "timeline graphic" would only be viewable at the largest screen size since it wouldn't correlate to the features as soon as the columns begin to wrap.
Thanks @tauriedavis ! That looks really good! Could you update the description with the mockups and any assets @psimyn will need for the FE? Thanks!
We won't need the 30 days dropdown for this iteration (even if ever). We'll be able to give more information regarding the calculation when the user clicks ? and goes to the docs.
@tauriedavis : Do you mind adding two empty states to the design:
When the usage ping is not enabled, they see nothing on the screen, and should probably have a button/link to be re-directed to activate it. Right now on the user cohorts page we have something similar:
When the usage ping is enabled, but there is not yet any data for any calculations to appear. In that case, maybe some messaging to indicate that they will see some data in approximately 1 to 2 weeks.
@tauriedavis : I think if we have the two states above, we should include the top information panel Introducing Your Conversational Development Index, especially for the usage ping disabled scenario, because it incentivizes people to turn on the usage ping. Do you agree? Would that look weird though?
I think it makes sense to include the intro on these pages as well! Especially the usage ping one, like you mentioned.
The first graphic was very similar to the intro graphic, so I just removed the first column so it wouldn't look as duplicated. I've added the svgs and the comps to the description.
Please note that service desk is EE-only so the last number (Service desk issues created per active user) will be always 0 in CE, without any possibility to improve it other than upgrading to EE. I'm not sure if that's intentional or incidental.
I thought that we wanted to promote the whole Idea to Production flow as accessible both in CE and EE, so maybe that's worth another look @victorwu.
Thanks @adamniedzielski . That's totally fine. One of the use cases is for the user to see exactly that fact. So it is intentional. In the future some of these metrics may be changed, added, or removed, and they would be impacted by different tiers in our product lineup. So this would definitely help users see what they lack and how they can improve by going for a higher tier. Service desk here is the first one. Thanks for calling it out.
Regarding I2P, I've never considered / heard it having to be complete in all tiers of our product. But definitely for this iteration, it should be fine. Thanks!
@victorwu That works for me. How about highlighting this fact in the design? In other words, if the current edition does not contain service desk, communicate this fact visually right in this screen.
But for something else like boards, EES has multiple boards, so those users might get a higher score. And at some point we do want to promote CE users to use multiple boards (and thus upgrade to EES). Do you think there's still a chance (at least for this iteration), to start bringing that type of design in? It's messy because the metrics are not one to one mappings to our product tiers.
I do believe that I2P features are supposed to be available in CE. This is why we at first were going to put Cycle Analytics in EE, but then it became available in CE because it is a part of I2P.
should this be styled the same as the "Customize your experience" callout/popup? Currently they are almost the same except for font size and background color.
Cards and Stages
cards wrap at <1600px, and stages list is hidden
max width of everything is 1920px
where does 'index score' help link go?
what should the info/docs buttons on cards do on hover?
Are the svgs for the empty states just really messed up?
The link looks good for enabling usage pings
@victorwu is there a link to learn more about usage data for when we are still collecting information?
You are right. I thought the callout for Cycle Analytics and other features still used the white background, but it looks like they share CSS, so let's keep this consistent with those and make it blue.
Isn't our container width 1280? With 16px padding, so the content is 1248? I think the columns are wider than the mockup. I made sure to keep everything within our normal container.
The doc for index score may not exist yet. @victorwu Is there a doc yet for the index score?
Hover state for info/docs buttons is like our btn-default styling:
This should link to the new docs page for convdev, as part of this feature. Again, should be in-app docs right? If you start a page in the docs for this feature, then you can at least link it to a new and existing page. And then @axil and I can work together to finish up the docs, probably in a separate merge request since this may take a little bit more work.
@axil : Any thoughts where the docs should live for this?
where does 'index score' help link go?
This should link to the same docs page for convdev. I think both this link and the Learn more link can go to the same docs page, where we can explain everything all in one location, including what this feature is, and why we need extra time to calculate it for new users.
@psimyn : Please re-ping me and @tauriedavis if we missed anything. Thanks!!!!!
@hazelyang are you able to re-export the empty state SVGs without the use attribute (maybe some sort of export settings). It's making them display strangely on the actual page, due to ID conflicts.
It looks great on white, but on the blue callout background - the grays blend in and it's difficult to look at.
Thanks @psimyn - from what I can tell from that screenshot it's looking pretty good. The stages list icons look too big, so some of them are touching. I would also be happy to check out your MR and make CSS tweaks if that's easiest!
Hovering over an item in the bottom timeline graphic expands the graphic to show I2P stage name. The color of the icon reflects the average of the features it relates to.
@victorwu Based on the mockups and positions of respective steps of Idea to Production I assume that the formulas for "the average of the features it relates to" are following:
@adamniedzielski : Thanks for asking. That's something that @tauriedavis designed that I didn't consider carefully when reviewing the design. Sorry for not looking at that in more detail. Yes what you wrote is exactly right. I updated the description per your spec.
@axil I think that I misunderstood https://gitlab.com/gitlab-org/gitlab-ce/issues/30469#note_30545854 a bit, because I didn't write anything (as opposed to writing something and letting you finish it). We link to user/admin_area/monitoring/convdev, but this page doesn't exist. We should write these docs before 9.3. Can you or @victorwu do that?