H1 CDS
jonathan.hanks@LIGO.ORG - posted 15:45, Tuesday 27 January 2026 (88919)
WP 12998 (cont.) - Installing the new VM cluster

The earlier test stand log book entries document the setup and testing done prior to production.

Physical layout

The following components are racked together in rack 12.

  1. sw-msr-pve0 - switch that connects the nodes to each other and to the core.  It carries the internal cluster traffic and contains the links to the outside world.
    1. Connected to the core via a LAG.  I have spare ports in both the core and the switch so that I can increase the bandwidth if needed.
  2. pve-node[0,1,2] - the nodes

Basic setup

When I updated the IPMI addresses in the BIOS I also installed Proxmox on the nodes, following the basic routine from the earlier rework of the Proxmox setup on pve-node0.

Repeat this for pve-node1 & 2, incrementing the IP addresses.
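As a sketch of the incrementing scheme, the node addressing ends up looking something like the /etc/hosts entries below.  The subnet here is a documentation placeholder, not the production values:

    # /etc/hosts sketch - addresses are placeholders, not the real ones
    192.0.2.10  pve-node0
    192.0.2.11  pve-node1
    192.0.2.12  pve-node2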

Cluster network configuration creation

Configured the niccluster0 interface on pve-node0.

Did the same on each of the other nodes, then ran a ping test between the nodes to verify connectivity.
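A minimal sketch of the per-node configuration in /etc/network/interfaces, assuming a dedicated /24 for the cluster network (the subnet is a placeholder; niccluster0 is the interface named above):

    # /etc/network/interfaces stanza for the cluster link (subnet is a placeholder)
    auto niccluster0
    iface niccluster0 inet static
        address 10.10.10.10/24

    # quick connectivity check from pve-node0 to the other nodes
    ping -c 3 10.10.10.11
    ping -c 3 10.10.10.12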

Cluster creation

On pve-node0 go to Datacenter / Cluster and click on Create Cluster.

Then on the other nodes go to Datacenter / Cluster and click Join Cluster.
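For reference, the CLI equivalent of those GUI clicks is pvecm; the cluster name and link addresses below are assumptions, reusing the placeholder cluster subnet from above:

    # on pve-node0 (cluster name and link address are assumptions)
    pvecm create pve-cluster --link0 10.10.10.10

    # on pve-node1 (and similarly pve-node2), join via pve-node0's address
    pvecm add 10.10.10.10 --link0 10.10.10.11
    pvecm status    # confirm all three nodes are present and quorate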

Support subscription activation and updates

Now that the basic pieces are in place and will not need a reinstall, it is time to enable the subscription on each node.

On each node, go to the Subscription section and upload the subscription key.

Now run updates against the enterprise repo and reboot each machine.
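With the enterprise repository active, the update itself is the standard apt cycle:

    # run on each node in turn
    apt update
    apt full-upgrade
    reboot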

Configure the Ceph network

Configure the dedicated Ceph interface on pve-node0, then repeat for all nodes.
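Same idea as the cluster network: one dedicated interface per node.  A sketch, assuming a separate /24 for Ceph traffic (the interface name and subnet are placeholders, not the production values):

    # /etc/network/interfaces stanza for the Ceph link (name and subnet are placeholders)
    auto nicceph0
    iface nicceph0 inet static
        address 10.10.20.10/24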

Install Ceph

Go to the Datacenter / Ceph page.  When prompted, select Install Ceph.
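The CLI equivalent is pveceph, pointing the initialization at the dedicated Ceph subnet from the previous step.  This sketch assumes a recent Proxmox where pveceph install takes a --repository flag, and reuses the placeholder subnet from above:

    # install the Ceph packages (repository choice assumes the subscription above)
    pveceph install --repository enterprise

    # initialize Ceph on the dedicated network (placeholder subnet from above)
    pveceph init --network 10.10.20.0/24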

Clean up disks

We had used these machines for other testing in the test stand, so we went through and cleaned up the disks prior to use.

This was done on /dev/sdc and /dev/sdd on each of the systems.
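One way to do the cleanup from the shell, assuming nothing on the disks is still needed (wipefs is destructive, so double-check the device names first):

    # clear old partition tables and filesystem signatures (destructive)
    wipefs -a /dev/sdc
    wipefs -a /dev/sdd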

Create the Ceph OSDs (Object Storage Daemons)

On each machine go to Ceph / OSD and create an OSD on /dev/sdc.

Repeat for /dev/sdd.

After a minute they should show up in the UI.
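The same step from the CLI, run on each node, with a quick check that the OSDs come up:

    # create an OSD on each cleaned disk
    pveceph osd create /dev/sdc
    pveceph osd create /dev/sdd

    # verify - all OSDs should eventually show as "up"
    ceph osd tree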

Add more Ceph monitors and managers

Add pve-node1 and pve-node2 as Ceph monitors and managers.
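From the CLI this is one command each, run locally on pve-node1 and then on pve-node2:

    # run on pve-node1 and pve-node2
    pveceph mon create
    pveceph mgr create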

Add a Ceph pool

On pve-node0 go to Ceph / Pools and create the pool.
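CLI equivalent; the pool name and replication settings below are assumptions rather than what was necessarily used (size 3 / min_size 2 is a common choice for a three-node cluster):

    # create a replicated pool and register it as PVE storage (name and sizes are assumptions)
    pveceph pool create vmpool --size 3 --min_size 2 --add_storages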

Setting up the VM data links: a bonded, bridged network

Go to Network / Create / Linux Bond.

Create a data bridge: Network / Create / Linux Bridge.

Do this on each of the nodes.
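The resulting /etc/network/interfaces stanzas look roughly like this.  The slave NIC names and the LACP bond mode are assumptions; vmbr1 and the VLAN-aware setting follow from the notes below:

    # bond the two data NICs (slave names and bond mode are assumptions)
    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-mode 802.3ad
        bond-miimon 100

    # VLAN-aware bridge carrying VM traffic over the bond
    auto vmbr1
    iface vmbr1 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094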

Notes

When creating VMs we want to connect the network interface to vmbr1 and specify the VLAN tag that should be used, since this setup gives us access to more than one VLAN.
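For example, attaching a VM NIC from the CLI (the VM ID and VLAN tag here are placeholders):

    # attach net0 to vmbr1 with a VLAN tag (VM ID and tag are placeholders)
    qm set 100 --net0 virtio,bridge=vmbr1,tag=30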