Commit 47e35934 authored by GitLab Bot

Add latest changes from gitlab-org/gitlab@master

parent a1565a82
Showing changed files with 111 additions and 90 deletions
@@ -207,7 +207,7 @@ export default {
<gl-form-input class="hidden" name="issue[title]" :value="issueTitle" />
<input name="issue[description]" :value="issueDescription" type="hidden" />
<gl-form-input
-:value="GQLerror.id"
+:value="GQLerror.sentryId"
class="hidden"
name="issue[sentry_issue_attributes][sentry_issue_identifier]"
/>
---
title: Identify correct sentry id in error tracking detail
merge_request: 23280
author:
type: fixed
@@ -266,13 +266,13 @@ There is an [issue where support is being discussed](https://gitlab.com/gitlab-o
 
1. SSH into your GitLab **secondary** server and login as root:
 
```
```sh
sudo -i
```
 
1. Stop application server and Sidekiq
 
```
```sh
gitlab-ctl stop unicorn
gitlab-ctl stop sidekiq
```
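
   As a quick sanity check (not part of the change above), you can confirm both services are down before continuing. This assumes an Omnibus installation where `gitlab-ctl` is available:

   ```shell
   # Verify that the application server and Sidekiq are no longer running
   gitlab-ctl status unicorn
   gitlab-ctl status sidekiq
   ```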
@@ -295,7 +295,7 @@ There is an [issue where support is being discussed](https://gitlab.com/gitlab-o
 
1. Create a file `server.crt` in the **secondary** server, with the content you got on the last step of the **primary** node's setup:
 
```
```sh
editor server.crt
```
 
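   If you want to double-check that the pasted certificate is intact, a small sketch using the standard `openssl` CLI (assuming it is installed on the node):

   ```shell
   # Print the certificate subject and validity window to confirm the file was copied correctly
   openssl x509 -in server.crt -noout -subject -dates
   ```
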
@@ -13,13 +13,13 @@ developed and tested. We aim to be compatible with most external
 
1. SSH into a GitLab **primary** application server and login as root:
 
```sh
```bash
sudo -i
```
 
1. Execute the command below to define the node as **primary** node:
 
```sh
```bash
gitlab-ctl set-geo-primary-node
```
 
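   After the node is flagged as primary, you can run the built-in Geo health checks; this sketch assumes an Omnibus installation:

   ```shell
   # Run GitLab's Geo health checks on the primary node
   gitlab-rake gitlab:geo:check
   ```
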
@@ -47,7 +47,7 @@ configures the **primary** node's database to be replicated by making changes to
`pg_hba.conf` and `postgresql.conf`. Make the following configuration changes
manually to your external database configuration:
 
```
```plaintext
##
## Geo Primary Role
## - pg_hba.conf
@@ -55,7 +55,7 @@ manually to your external database configuration:
host replication gitlab_replicator <trusted secondary IP>/32 md5
```
 
```
```plaintext
##
## Geo Primary Role
## - postgresql.conf
@@ -75,7 +75,7 @@ hot_standby = on
Make the following configuration changes manually to your `postgresql.conf`
of external replica database:
 
```
```plaintext
##
## Geo Secondary Role
## - postgresql.conf
@@ -357,7 +357,7 @@ is prepended with the relevant node for better clarity:
1. **(secondary)** Save the snippet below in a file, let's say `/tmp/replica.sh`. Modify the
embedded paths if necessary:
 
```
```bash
#!/bin/bash
 
PORT="5432"
@@ -37,10 +37,9 @@ service is already configured to accept the `GIT_PROTOCOL` environment and users
need not do anything more.
 
For Omnibus GitLab and installations from source, you have to manually update
-the SSH configuration of your server:
+the SSH configuration of your server by adding the line below to the `/etc/ssh/sshd_config` file:

-```
-# /etc/ssh/sshd_config
+```plaintext
AcceptEnv GIT_PROTOCOL
```
 
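To apply and verify the setting, something like the following should work on most distributions (the service name and init system may differ on yours):

```shell
# Check that sshd now accepts the GIT_PROTOCOL environment variable
sudo sshd -T | grep -i acceptenv

# Reload the SSH daemon so the new configuration takes effect
sudo systemctl reload sshd
```
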
@@ -69,7 +68,7 @@ GIT_TRACE_CURL=1 git -c protocol.version=2 ls-remote https://your-gitlab-instanc
 
You should see that the `Git-Protocol` header is sent:
 
```
```plaintext
16:29:44.577888 http.c:657 => Send header: Git-Protocol: version=2
```
 
@@ -105,7 +104,7 @@ GIT_SSH_COMMAND="ssh -v" git -c protocol.version=2 ls-remote ssh://your-gitlab-i
 
You should see that the `GIT_PROTOCOL` environment variable is sent:
 
```
```plaintext
debug1: Sending env GIT_PROTOCOL = version=2
```
 
@@ -208,7 +208,7 @@ Git operations in GitLab will result in an API error.
 
On `gitaly1.internal`:
 
```
```ruby
git_data_dirs({
'default' => {
'path' => '/var/opt/gitlab/git-data'
@@ -221,7 +221,7 @@ Git operations in GitLab will result in an API error.
 
On `gitaly2.internal`:
 
```
```ruby
git_data_dirs({
'storage2' => {
'path' => '/srv/gitlab/git-data'
@@ -519,7 +519,7 @@ To configure Gitaly with TLS:
To observe what type of connections are actually being used in a
production environment you can use the following Prometheus query:
 
```
```prometheus
sum(rate(gitaly_connections_total[5m])) by (type)
```
 
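If you prefer to run the query from a shell rather than the Prometheus UI, a minimal sketch using the Prometheus HTTP API (assuming Prometheus listens on `localhost:9090`):

```shell
# Evaluate the connection-type query against the Prometheus HTTP API
curl --silent --get http://localhost:9090/api/v1/query \
  --data-urlencode 'query=sum(rate(gitaly_connections_total[5m])) by (type)'
```
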
@@ -648,14 +648,14 @@ machine.
Use Prometheus to see what the current authentication behavior of your
GitLab installation is.
 
```
```prometheus
sum(rate(gitaly_authentications_total[5m])) by (enforced, status)
```
 
In a system where authentication is configured correctly, and where you
have live traffic, you will see something like this:
 
```
```prometheus
{enforced="true",status="ok"} 4424.985419441742
```
 
@@ -684,7 +684,7 @@ gitaly['auth_transitioning'] = true
After you have applied this, your Prometheus query should return
something like this:
 
```
```prometheus
{enforced="false",status="would be ok"} 4424.985419441742
```
 
@@ -730,7 +730,7 @@ gitaly['auth_transitioning'] = false
Refresh your Prometheus query. You should now see the same kind of
result as you did in the beginning:
 
```
```prometheus
{enforced="true",status="ok"} 4424.985419441742
```
 
@@ -870,7 +870,7 @@ gitaly-debug -h
 
### Commits, pushes, and clones return a 401
 
```
```plaintext
remote: GitLab: 401 Unauthorized
```
 
@@ -902,7 +902,7 @@ Assuming your `grpc_client_handled_total` counter only observes Gitaly,
the following query shows you which RPCs are (most likely) internally
implemented as calls to `gitaly-ruby`:
 
```
```prometheus
sum(rate(grpc_client_handled_total[5m])) by (grpc_method) > 0
```
 
@@ -64,7 +64,7 @@ command to verify all server nodes are communicating:
 
The output should be similar to:
 
```
```plaintext
Node Address Status Type Build Protocol DC
CONSUL_NODE_ONE XXX.XXX.XXX.YYY:8301 alive server 0.9.2 2 gitlab_consul
CONSUL_NODE_TWO XXX.XXX.XXX.YYY:8301 alive server 0.9.2 2 gitlab_consul
@@ -80,8 +80,8 @@ check the [Troubleshooting section](#troubleshooting) before proceeding.
 
To see which nodes are part of the cluster, run the following on any member in the cluster
 
-```
-# /opt/gitlab/embedded/bin/consul members
+```shell
+$ /opt/gitlab/embedded/bin/consul members
Node Address Status Type Build Protocol DC
consul-b XX.XX.X.Y:8301 alive server 0.9.0 2 gitlab_consul
consul-c XX.XX.X.Y:8301 alive server 0.9.0 2 gitlab_consul
@@ -127,7 +127,7 @@ By default, the server agents will attempt to [bind](https://www.consul.io/docs/
 
You will see messages like the following in `gitlab-ctl tail consul` output if you are running into this issue:
 
```
```plaintext
2017-09-25_19:53:39.90821 2017/09/25 19:53:39 [WARN] raft: no known peers, aborting election
2017-09-25_19:53:41.74356 2017/09/25 19:53:41 [ERR] agent: failed to sync remote state: No cluster leader
```
@@ -154,7 +154,7 @@ In the case that a node has multiple private IPs the agent be confused as to whi
 
You will see messages like the following in `gitlab-ctl tail consul` output if you are running into this issue:
 
```
```plaintext
2017-11-09_17:41:45.52876 ==> Starting Consul agent...
2017-11-09_17:41:45.53057 ==> Error creating agent: Failed to get advertise address: Multiple private IPs found. Please configure one.
```
@@ -181,10 +181,10 @@ If you lost enough server agents in the cluster to break quorum, then the cluste
 
By default, GitLab does not store anything in the Consul cluster that cannot be recreated. To erase the Consul database and reinitialize
 
-```
-# gitlab-ctl stop consul
-# rm -rf /var/opt/gitlab/consul/data
-# gitlab-ctl start consul
+```shell
+gitlab-ctl stop consul
+rm -rf /var/opt/gitlab/consul/data
+gitlab-ctl start consul
```
 
After this, the cluster should start back up, and the server agents rejoin. Shortly after that, the client agents should rejoin as well.
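
Once the agents are back, you can confirm the cluster reformed with the same `consul members` command used earlier:

```shell
# All server and client agents should eventually report as "alive"
/opt/gitlab/embedded/bin/consul members
```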
@@ -229,7 +229,7 @@ available database connections.
 
In this document we are assuming 3 database nodes, which makes this configuration:
 
```
```ruby
postgresql['max_wal_senders'] = 4
```
 
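To confirm the value PostgreSQL actually picked up after reconfiguring, a hedged example assuming the bundled PostgreSQL and its `gitlab-psql` wrapper are in use:

```shell
# Show the runtime value of max_wal_senders on the database node
gitlab-psql -c 'SHOW max_wal_senders;'
```
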
@@ -352,7 +352,7 @@ When installing the GitLab package, do not supply `EXTERNAL_URL` value.
to inform `gitlab-ctl` that they are standby nodes initially and it need not
attempt to register them as primary node
 
```
```ruby
# HA setting to specify if a node should attempt to be master on initialization
repmgr['master_on_initialization'] = false
```
@@ -396,7 +396,7 @@ Select one node as a primary node.
 
The output should be similar to the following:
 
```
```plaintext
Role | Name | Upstream | Connection String
----------+----------|----------|----------------------------------------
* master | HOSTNAME | | host=HOSTNAME user=gitlab_repmgr dbname=gitlab_repmgr
@@ -442,7 +442,7 @@ Select one node as a primary node.
 
The output should be similar to the following:
 
```
```plaintext
Role | Name | Upstream | Connection String
----------+---------|-----------|------------------------------------------------
* master | MASTER | | host=MASTER_NODE_NAME user=gitlab_repmgr dbname=gitlab_repmgr
@@ -463,7 +463,7 @@ gitlab-ctl repmgr cluster show
 
The output should be similar to:
 
```
```plaintext
Role | Name | Upstream | Connection String
----------+--------------|--------------|--------------------------------------------------------------------
* master | MASTER | | host=MASTER port=5432 user=gitlab_repmgr dbname=gitlab_repmgr
@@ -652,7 +652,7 @@ On secondary nodes, edit `/etc/gitlab/gitlab.rb` and add all the configuration
added to primary node, noted above. In addition, append the following
configuration:
 
```
```ruby
# HA setting to specify if a node should attempt to be master on initialization
repmgr['master_on_initialization'] = false
```
@@ -706,7 +706,7 @@ After deploying the configuration follow these steps:
gitlab-psql -d gitlabhq_production
```
 
```
```shell
CREATE EXTENSION pg_trgm;
```
 
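   You can verify the extension was created by querying `pg_extension` with the same `gitlab-psql` wrapper used above:

   ```shell
   # List the pg_trgm extension if it is installed in the production database
   gitlab-psql -d gitlabhq_production -c "SELECT extname, extversion FROM pg_extension WHERE extname = 'pg_trgm';"
   ```
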
@@ -804,7 +804,7 @@ consul['configuration'] = {
On secondary nodes, edit `/etc/gitlab/gitlab.rb` and add all the information added
to primary node, noted above. In addition, append the following configuration
 
```
```ruby
# HA setting to specify if a node should attempt to be master on initialization
repmgr['master_on_initialization'] = false
```
@@ -908,7 +908,7 @@ after it has been restored to service.
 
It will output something like:
 
```
```plaintext
959789412
```
 
@@ -1052,7 +1052,7 @@ Now there should not be errors. If errors still occur then there is another prob
You may get this error when running `gitlab-rake gitlab:db:configure` or you
may see the error in the PgBouncer log file.
 
```
```plaintext
PG::ConnectionBad: ERROR: pgbouncer cannot connect to server
```
 
@@ -1063,13 +1063,13 @@ You can confirm that this is the issue by checking the PostgreSQL log on the mas
database node. If you see the following error then `trust_auth_cidr_addresses`
is the problem.
 
```
```plaintext
2018-03-29_13:59:12.11776 FATAL: no pg_hba.conf entry for host "123.123.123.123", user "pgbouncer", database "gitlabhq_production", SSL off
```
 
To fix the problem, add the IP address to `/etc/gitlab/gitlab.rb`.
 
```
```ruby
postgresql['trust_auth_cidr_addresses'] = %w(123.123.123.123/32 <other_cidrs>)
```
 
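After editing `/etc/gitlab/gitlab.rb`, the change only takes effect once the node is reconfigured; a minimal follow-up sketch, assuming an Omnibus installation:

```shell
# Regenerate the database configuration with the new trusted addresses,
# then re-run the setup task that previously failed
gitlab-ctl reconfigure
gitlab-rake gitlab:db:configure
```
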
@@ -11,7 +11,7 @@ these additional steps before proceeding with GitLab installation.
1. If necessary, install the NFS client utility packages using the following
commands:
 
```
```shell
# Ubuntu/Debian
apt-get install nfs-common
 
@@ -24,7 +24,7 @@ these additional steps before proceeding with GitLab installation.
to configure your NFS server. See [NFS documentation](nfs.md) for the various
options. Here is an example snippet to add to `/etc/fstab`:
 
```
```plaintext
10.1.0.1:/var/opt/gitlab/.ssh /var/opt/gitlab/.ssh nfs4 defaults,soft,rsize=1048576,wsize=1048576,noatime,nofail,lookupcache=positive 0 2
10.1.0.1:/var/opt/gitlab/gitlab-rails/uploads /var/opt/gitlab/gitlab-rails/uploads nfs4 defaults,soft,rsize=1048576,wsize=1048576,noatime,nofail,lookupcache=positive 0 2
10.1.0.1:/var/opt/gitlab/gitlab-rails/shared /var/opt/gitlab/gitlab-rails/shared nfs4 defaults,soft,rsize=1048576,wsize=1048576,noatime,nofail,lookupcache=positive 0 2
@@ -35,7 +35,7 @@ these additional steps before proceeding with GitLab installation.
1. Create the shared directories. These may be different depending on your NFS
mount locations.
 
```
```shell
mkdir -p /var/opt/gitlab/.ssh /var/opt/gitlab/gitlab-rails/uploads /var/opt/gitlab/gitlab-rails/shared /var/opt/gitlab/gitlab-ci/builds /var/opt/gitlab/git-data
```
 
@@ -132,7 +132,7 @@ For supported database architecture, please see our documentation on
Below is an example of an NFS mount point defined in `/etc/fstab` we use on
GitLab.com:
 
```
```plaintext
10.1.1.1:/var/opt/gitlab/git-data /var/opt/gitlab/git-data nfs4 defaults,soft,rsize=1048576,wsize=1048576,noatime,nofail,lookupcache=positive 0 2
```
 
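To check how an existing mount was actually mounted, and with which options, `findmnt` gives a compact view; a hedged example for the path used above:

```shell
# Show the source, filesystem type, and mount options for the Git data mount
findmnt /var/opt/gitlab/git-data
```
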
@@ -149,7 +149,7 @@ Note there are several options that you should consider using:
It's recommended to nest all GitLab data dirs within a mount, which allows automatic
restore of backups without manually moving existing data.
 
```
```plaintext
mountpoint
└── gitlab-data
├── builds
@@ -83,7 +83,7 @@ In a HA setup, it's recommended to run a PgBouncer node separately for each data
 
The output should be similar to the following:
 
```
```plaintext
name | host | port | database | force_user | pool_size | reserve_pool | pool_mode | max_connections | current_connections
---------------------+-------------+------+---------------------+------------+-----------+--------------+-----------+-----------------+---------------------
gitlabhq_production | MASTER_HOST | 5432 | gitlabhq_production | | 20 | 0 | | 0 | 0
@@ -102,7 +102,7 @@ If you're running more than one PgBouncer node as recommended, then at this time
 
As an example here's how you could do it with [HAProxy](https://www.haproxy.org/):
 
```
```plaintext
global
log /dev/log local0
log localhost local1 notice
@@ -391,7 +391,7 @@ The prerequisites for a HA Redis setup are the following:
prevent database migrations from running on upgrade, add the following
configuration to your `/etc/gitlab/gitlab.rb` file:
 
```
```ruby
gitlab_rails['auto_migrate'] = false
```
 
@@ -439,7 +439,7 @@ The prerequisites for a HA Redis setup are the following:
 
1. To prevent reconfigure from running automatically on upgrade, run:
 
```
```shell
sudo touch /etc/gitlab/skip-auto-reconfigure
```
 
@@ -569,7 +569,7 @@ multiple machines with the Sentinel daemon.
 
1. To prevent database migrations from running on upgrade, run:
 
```
```shell
sudo touch /etc/gitlab/skip-auto-reconfigure
```
 
@@ -898,14 +898,14 @@ Before proceeding with the troubleshooting below, check your firewall rules:
You can check if everything is correct by connecting to each server using
`redis-cli` application, and sending the `info replication` command as below.
 
```
```shell
/opt/gitlab/embedded/bin/redis-cli -h <redis-host-or-ip> -a '<redis-password>' info replication
```
 
When connected to a `master` Redis, you will see the number of connected
`slaves`, and a list of each with connection details:
 
```
```plaintext
# Replication
role:master
connected_slaves:1
@@ -920,7 +920,7 @@ repl_backlog_histlen:1048576
When it's a `slave`, you will see details of the master connection and if
it's `up` or `down`:
 
```
```plaintext
# Replication
role:slave
master_host:10.133.1.58
@@ -959,7 +959,7 @@ To make sure your configuration is correct:
1. SSH into your GitLab application server
1. Enter the Rails console:
 
```
```shell
# For Omnibus installations
sudo gitlab-rails console
 
@@ -985,7 +985,7 @@ To make sure your configuration is correct:
 
1. Then back in the Rails console from the first step, run:
 
```
```ruby
redis.info
```
 
@@ -96,7 +96,7 @@ request that have been performed and how much time it took. This task is
more useful for GitLab contributors and developers. Use part of this log
file when you are going to report a bug. For example:
 
```
```plaintext
Started GET "/gitlabhq/yaml_db/tree/master" for 168.111.56.1 at 2015-02-12 19:34:53 +0200
Processing by Projects::TreeController#show as HTML
Parameters: {"project_id"=>"gitlabhq/yaml_db", "id"=>"master"}
@@ -151,7 +151,7 @@ installations from source.
It helps you discover events happening in your instance such as user creation,
project removal, and so on. For example:
 
```
```plaintext
October 06, 2014 11:56: User "Administrator" (admin@example.com) was created
October 06, 2014 11:56: Documentcloud created a new project "Documentcloud / Underscore"
October 06, 2014 11:56: Gitlab Org created a new project "Gitlab Org / Gitlab Ce"
@@ -167,7 +167,7 @@ installations from source.
 
It contains information about [integrations](../user/project/integrations/project_services.md) activities such as Jira, Asana and Irker services. It uses JSON format like the example below:
 
``` json
```json
{"severity":"ERROR","time":"2018-09-06T14:56:20.439Z","service_class":"JiraService","project_id":8,"project_path":"h5bp/html5-boilerplate","message":"Error sending message","client_url":"http://jira.gitlap.com:8080","error":"execution expired"}
{"severity":"INFO","time":"2018-09-06T17:15:16.365Z","service_class":"JiraService","project_id":3,"project_path":"namespace2/project2","message":"Successfully posted","client_url":"http://jira.example.com"}
```
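
Because each line is a self-contained JSON object, you can filter it with `jq`. A sketch, assuming an Omnibus log path of `/var/log/gitlab/gitlab-rails/integrations_json.log` (adjust the path for installations from source) and that `jq` is installed:

```shell
# Show only failed integration calls
sudo jq 'select(.severity == "ERROR")' /var/log/gitlab/gitlab-rails/integrations_json.log
```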
@@ -276,7 +276,7 @@ installations from source.
GitLab Shell is used by GitLab for executing Git commands and providing
SSH access to Git repositories. For example:
 
```
```plaintext
I, [2015-02-13T06:17:00.671315 #9291] INFO -- : Adding project root/example.git at </var/opt/gitlab/git-data/repositories/root/dcdcdcdcd.git>.
I, [2015-02-13T06:17:00.679433 #9291] INFO -- : Moving existing hooks directory and symlinking global hooks directory for /var/opt/gitlab/git-data/repositories/root/example.git.
```
@@ -294,7 +294,7 @@ serving the GitLab application. You can look at this log if, for
example, your application does not respond. This log contains all
information about the state of Unicorn processes at any given time.
 
```
```plaintext
I, [2015-02-13T06:14:46.680381 #9047] INFO -- : Refreshing Gem list
I, [2015-02-13T06:14:56.931002 #9047] INFO -- : listening on addr=127.0.0.1:8080 fd=12
I, [2015-02-13T06:14:56.931381 #9047] INFO -- : listening on addr=/var/opt/gitlab/gitlab-rails/sockets/gitlab.socket fd=13
@@ -57,14 +57,14 @@ repository.
 
To use this repository you must first clone it:
 
```
```shell
git clone https://gitlab.com/gitlab-org/influxdb-management.git
cd influxdb-management
```
 
Next you must install the required dependencies:
 
```
```shell
gem install bundler
bundle install
```
@@ -139,7 +139,7 @@ echo "0" > /var/opt/gitlab/grafana/CVE_reset_status
 
To reinstate your old data, move it back into its original location:
 
```
```shell
sudo mv /var/opt/gitlab/grafana/data.bak.xxxx/ /var/opt/gitlab/grafana/data/
```
 
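Grafana will not pick up the restored data until it is restarted; on Omnibus installations that is typically:

```shell
# Restart the bundled Grafana service so it reads the restored data directory
sudo gitlab-ctl restart grafana
```
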
@@ -48,7 +48,7 @@ upcoming InfluxDB releases.
 
Make sure you have the following in your configuration file:
 
```
```toml
[data]
dir = "/var/lib/influxdb/data"
engine = "tsm1"
@@ -60,7 +60,7 @@ Production environments should have the InfluxDB admin panel **disabled**. This
feature can be disabled by adding the following to your InfluxDB configuration
file:
 
```
```toml
[admin]
enabled = false
```
@@ -71,7 +71,7 @@ HTTP is required when using the [InfluxDB CLI] or other tools such as Grafana,
thus it should be enabled. When enabling it, make sure to _also_ enable
authentication:
 
```
```toml
[http]
enabled = true
auth-enabled = true
@@ -85,7 +85,7 @@ admin user](#create-a-new-admin-user)._
GitLab writes data to InfluxDB via UDP and thus this must be enabled. Enabling
UDP can be done using the following settings:
 
```
```toml
[[udp]]
enabled = true
bind-address = ":8089"
@@ -138,7 +138,7 @@ allowing traffic from members of said VLAN.
If you want to [enable authentication](#http), you might want to [create an
admin user][influx-admin]:
 
```
```shell
influx -execute "CREATE USER jeff WITH PASSWORD '1234' WITH ALL PRIVILEGES"
```
 
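To confirm the admin user exists, you can list users with the same `influx` CLI; a sketch reusing the example credentials above (supply your own once authentication is enabled):

```shell
# List InfluxDB users and whether they have admin privileges
influx -username jeff -password '1234' -execute 'SHOW USERS'
```
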
@@ -168,7 +168,7 @@ influx -execute 'SHOW DATABASES'
 
The output should be similar to:
 
```
```plaintext
name: databases
---------------
name
@@ -43,7 +43,7 @@ while the method name is stored in the tag `method`. The tag `action` contains
the full name of the transaction action. Both the `method` and `action` fields
are in the following format:
 
```
```plaintext
ClassName#method_name
```
 
@@ -22,7 +22,7 @@ settings outlined in
 
First we define a shell function with the proper Redis connection details.
 
```
```shell
rcli() {
# This example works for Omnibus installations of GitLab 7.3 or newer. For an
# installation from source you will have to change the socket path and the
@@ -37,7 +37,7 @@ rcli ping
Now we do a search to see if there are any session keys in the old format for
us to clean up.
 
```
```shell
# returns the number of old-format session keys in Redis
rcli keys '*' | grep '^[a-f0-9]\{32\}$' | wc -l
```
@@ -45,7 +45,7 @@ rcli keys '*' | grep '^[a-f0-9]\{32\}$' | wc -l
If the number is larger than zero, you can proceed to expire the keys from
Redis. If the number is zero there is nothing to clean up.
 
```
```shell
# Tell Redis to expire each matched key after 600 seconds.
rcli keys '*' | grep '^[a-f0-9]\{32\}$' | awk '{ print "expire", $0, 600 }' | rcli
# This will print '(integer) 1' for each key that gets expired.
@@ -53,7 +53,7 @@ Add the following to your `sshd_config` file. This is usually located at
`/etc/ssh/sshd_config`, but it will be `/assets/sshd_config` if you're using
Omnibus Docker:
 
```
```plaintext
AuthorizedKeysCommand /opt/gitlab/embedded/service/gitlab-shell/bin/gitlab-shell-authorized-keys-check git %u %k
AuthorizedKeysCommandUser git
```
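
Before restarting SSH it is worth validating the file, since a typo here can lock out SSH access; a cautious sketch (the service name may vary by distribution):

```shell
# Validate the sshd configuration, then reload it only if the check passes
sudo sshd -t && sudo systemctl reload sshd
```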
@@ -117,7 +117,7 @@ the database. The following instructions can be used to build OpenSSH 7.5:
 
1. First, download the package and install the required packages:
 
```
```shell
sudo su -
cd /tmp
curl --remote-name https://mirrors.evowise.com/pub/OpenBSD/OpenSSH/portable/openssh-7.5p1.tar.gz
@@ -127,7 +127,7 @@ the database. The following instructions can be used to build OpenSSH 7.5:
 
1. Prepare the build by copying files to the right place:
 
```
```shell
mkdir -p /root/rpmbuild/{SOURCES,SPECS}
cp ./openssh-7.5p1/contrib/redhat/openssh.spec /root/rpmbuild/SPECS/
cp openssh-7.5p1.tar.gz /root/rpmbuild/SOURCES/
@@ -136,7 +136,7 @@ the database. The following instructions can be used to build OpenSSH 7.5:
 
1. Next, set the spec settings properly:
 
```
```shell
sed -i -e "s/%define no_gnome_askpass 0/%define no_gnome_askpass 1/g" openssh.spec
sed -i -e "s/%define no_x11_askpass 0/%define no_x11_askpass 1/g" openssh.spec
sed -i -e "s/BuildPreReq/BuildRequires/g" openssh.spec
@@ -144,19 +144,19 @@ the database. The following instructions can be used to build OpenSSH 7.5:
 
1. Build the RPMs:
 
```
```shell
rpmbuild -bb openssh.spec
```
 
1. Ensure the RPMs were built:
 
```
```shell
ls -al /root/rpmbuild/RPMS/x86_64/
```
 
You should see something like the following:
 
```
```plaintext
total 1324
drwxr-xr-x. 2 root root 4096 Jun 20 19:37 .
drwxr-xr-x. 3 root root 19 Jun 20 19:37 ..
@@ -170,7 +170,7 @@ the database. The following instructions can be used to build OpenSSH 7.5:
with its own version, which may prevent users from logging in, so be sure
that the file is backed up and restored after installation:
 
```
```shell
timestamp=$(date +%s)
cp /etc/pam.d/sshd pam-ssh-conf-$timestamp
rpm -Uvh /root/rpmbuild/RPMS/x86_64/*.rpm
@@ -179,7 +179,7 @@ the database. The following instructions can be used to build OpenSSH 7.5:
 
1. Verify the installed version. In another window, attempt to login to the server:
 
```
```shell
ssh -v <your-centos-machine>
```
 
@@ -191,7 +191,7 @@ the database. The following instructions can be used to build OpenSSH 7.5:
sure everything is working! If you need to downgrade, simply install the
older package:
 
```
```shell
# Only run this if you run into a problem logging in
yum downgrade openssh-server openssh openssh-clients
```
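
   Whichever version you end up on, you can confirm what is actually installed with:

   ```shell
   # Confirm which OpenSSH server package is installed after the upgrade or downgrade
   rpm -q openssh-server

   # Print the local OpenSSH version string
   ssh -V
   ```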
@@ -47,8 +47,12 @@ An epic's page contains the following tabs:
 
## Adding an issue to an epic
 
-Any issue that belongs to a project in the epic's group, or any of the epic's
-subgroups, are eligible to be added. New issues appear at the top of the list of issues in the **Epics and Issues** tab.
+You can add an existing issue to an epic, or, from an epic's page, create a new issue that is automatically added to the epic.
+
+### Adding an existing issue to an epic
+
+Existing issues that belong to a project in an epic's group, or any of the epic's
+subgroups, are eligible to be added to the epic. Newly added issues appear at the top of the list of issues in the **Epics and Issues** tab.
 
An epic contains a list of issues and an issue can be associated with at most
one epic. When you add an issue that is already linked to an epic,
@@ -64,6 +68,19 @@ To add an issue to an epic:
If there are multiple issues to be added, press <kbd>Spacebar</kbd> and repeat this step.
1. Click **Add**.
 
+### Creating an issue from an epic
+
+> [Introduced](https://gitlab.com/gitlab-org/gitlab/issues/5419) in GitLab 12.7.
+
+Creating an issue from an epic enables you to maintain focus on the broader context of the epic while dividing work into smaller parts.
+
+To create an issue from an epic:
+
+1. On the epic's page, under **Epics and Issues**, click the arrow next to **Add an issue** and select **Create new issue**.
+1. Under **Title**, enter the title for the new issue.
+1. From the **Project** dropdown, select the project in which the issue should be created.
+1. Click **Create issue**.
+
To remove an issue from an epic:
 
1. Click on the <kbd>x</kbd> button in the epic's issue list.