Commit 41a8b111 authored by John Northrup

Merge branch 'add-design' into 'master'

Add storage and networking documentation with our latest agreements

This is what I mean: instead of keeping the infrastructure documentation in the chef repo, we can use the infrastructure repo, which can be public.

We could make it all public this way and keep the documents up to date by writing Markdown and just using git.

cc/ @stanhu

See merge request !3
parents 3d1a0f3d 6ccb9536
# GitLab Infrastructure Design
* [Storage](design/storage.md)
* [Networking](design/networking.md)
# Networking
## Edge Routing
We will take delivery of two diverse 1Gb network connections, each receiving a full BGP feed.
Routers will need to terminate the 1Gb Ethernet handoffs, with uplinks into the core network.
## Core Routing & Switching
We will have a two-node collapsed-core architecture composed of 40Gb Open Network Switch
hardware running the Cumulus Networks OS. The ASIC chipset should be a Broadcom Tomahawk or Broadcom Trident2+.
## Host Connectivity
Hosts will be dual-connected, with a 40Gb interconnect to each of the core switches.
Hosts will run Cumulus Quagga for end-to-end L3 connectivity and dynamic routing.
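
The interface names and addressing plan are not settled in this document, so the following is only a minimal sketch, assuming two hypothetical uplink interfaces and per-link peer addresses, of how a host could verify that both 40Gb uplinks are up and that each core switch answers over its link:

```python
#!/usr/bin/env python3
"""Minimal dual-uplink check for a host attached to both core switches.

The interface names and peer addresses below are placeholders, not the
real deployment values.
"""
import subprocess

# Hypothetical uplink interfaces and the core-switch address reachable over each.
UPLINKS = {
    "ens1f0": "169.254.0.1",  # core switch 1 (placeholder)
    "ens1f1": "169.254.0.5",  # core switch 2 (placeholder)
}


def link_is_up(iface: str) -> bool:
    """Report whether the kernel sees the link as up (via sysfs operstate)."""
    try:
        with open(f"/sys/class/net/{iface}/operstate") as f:
            return f.read().strip() == "up"
    except FileNotFoundError:
        return False


def peer_reachable(addr: str) -> bool:
    """Send a single ICMP echo request to the peer and report success."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", addr],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0


if __name__ == "__main__":
    for iface, peer in UPLINKS.items():
        link = "up" if link_is_up(iface) else "DOWN"
        reach = "reachable" if peer_reachable(peer) else "UNREACHABLE"
        print(f"{iface}: link {link}, peer {peer} {reach}")
```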
# Storage
## CephFS in hardware
This is our general plan and reasoning:
* We are moving forward with the CephFS cluster in hardware.
* Our general architecture for each node is:
  * 12-core processors
  * 2 sockets
  * 96 GB of RAW storage
  * Minimum spindle count of 16 drives
  * Minimum HBA count of 2
  * 2 drives for the OS as a RAID 1
  * NVMe drive on the PCIe bus for the Ceph journal and frequently used Ceph PGs
  * 40Gb NIC for general networking
  * 1Gb NIC for management
* To reduce the blast radius we will have 4 independent clusters, accessed through the sharding feature (see the first sketch after this list).
  * If one goes down, we still have 3 left.
  * This is particularly useful to reduce load and improve recovery times.
  * This can also be useful for migrating CephFS data onto BlueStore when it's stable and available.
* As a backup for git repos we will use the GitLab Geo feature, pushing into a secondary node hosted at Amazon with an EFS drive (we don't care if it's slow).
  * This makes Amazon Direct Connect a critical feature for our colo, as we will need high bandwidth to it.
  * We will start backfilling this Amazon instance as soon as we finish draining CephFS, so that when we are done we can start moving from Amazon to the colo.
* To prevent a total loss in the case of another MDS meltdown, we will create snapshots periodically so we can recover (hourly, daily, whatever makes sense); see the second sketch after this list.
* We will push forward with the Geo feature to use object storage, in which case we will use RADOS as the object store to simplify our installation.
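
To make the 4-cluster sharding idea concrete, here is a minimal sketch of mapping a repository to one of four independent CephFS mount points with a deterministic hash. The mount paths and the hash-based assignment are illustrative assumptions; the actual assignment would live in GitLab's repository-storage configuration rather than in code like this.

```python
#!/usr/bin/env python3
"""Sketch: deterministically map a repository to one of 4 CephFS clusters.

The mount points and hashing scheme are assumptions for illustration only.
"""
import hashlib

# Hypothetical mount points, one per independent CephFS cluster.
CEPH_SHARDS = [
    "/var/opt/gitlab/git-data-ceph0",
    "/var/opt/gitlab/git-data-ceph1",
    "/var/opt/gitlab/git-data-ceph2",
    "/var/opt/gitlab/git-data-ceph3",
]


def shard_for(repo_path: str) -> str:
    """Pick one cluster per repository, stable across calls."""
    digest = hashlib.sha256(repo_path.encode("utf-8")).digest()
    return CEPH_SHARDS[digest[0] % len(CEPH_SHARDS)]


if __name__ == "__main__":
    for repo in ("gitlab-org/gitlab-ce.git", "gitlab-com/www-gitlab-com.git"):
        print(f"{repo} -> {shard_for(repo)}")
```

The point is only that each repository lives on exactly one cluster, so losing a cluster affects roughly a quarter of the repositories and recovery can proceed cluster by cluster.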
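
For the periodic snapshots, CephFS exposes snapshots as directories created under a hidden `.snap` directory, so a simple job can create and prune them. Below is a minimal sketch, assuming a hypothetical mount point, hourly cadence, and a 24-snapshot retention window:

```python
#!/usr/bin/env python3
"""Sketch: hourly CephFS snapshots with simple retention.

Mount point, naming, and retention are assumptions; CephFS creates a
snapshot when a directory is made under ".snap" and removes it on rmdir.
"""
import os
from datetime import datetime, timezone

MOUNT_POINT = "/mnt/cephfs"                    # hypothetical CephFS mount
SNAP_DIR = os.path.join(MOUNT_POINT, ".snap")
KEEP = 24                                      # e.g. keep one day of hourly snapshots


def create_snapshot() -> str:
    """mkdir under .snap takes a read-only snapshot of the whole tree."""
    name = datetime.now(timezone.utc).strftime("hourly-%Y%m%dT%H%M%SZ")
    os.mkdir(os.path.join(SNAP_DIR, name))
    return name


def prune_snapshots() -> None:
    """Drop the oldest snapshots beyond the retention window."""
    snaps = sorted(s for s in os.listdir(SNAP_DIR) if s.startswith("hourly-"))
    for old in snaps[:-KEEP]:
        os.rmdir(os.path.join(SNAP_DIR, old))  # rmdir deletes the snapshot


if __name__ == "__main__":
    print("created snapshot", create_snapshot())
    prune_snapshots()
```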
## CephFS in the cloud
**Don't do it.**
* Latencies will kill you.
* Random hosts going down at any time will double your workload.
* Network-attached storage, as premium as it is, is shared and slow.
* CephFS will lock when it can't write to the journal.
The good side:
* CephFS survives locking and injecting latencies remarkably well.