RMU Build: Best Practices and Reference

Recommended hardening and operational add-ons for the Gen-1.5 RMU — Proxmox firewall, RACADM, Dell OpenManage Enterprise, SuperMicro firmware updates, BMC console access, OPNsense GUI, and network/cabling reference tables.

This page covers section V (best practices) and section VI (reference information) of the Gen-1 to Gen-1.5 RMU build runbook.

The best-practice steps significantly improve the operator experience — particularly the ability to update firmware on every server remotely.

[!WARNING] These steps cover foundational hardening only. They do not constitute comprehensive security hardening, nor do they replace ongoing system maintenance. Each node provider is responsible for their own secure and well-maintained environment.

Enable Proxmox firewall — Datacenter level

  1. In the Proxmox WebUI, select Datacenter in the left panel.
  2. In the middle panel, scroll to Firewall > Options and select the top Firewall configuration option in the main panel. Click Edit at the top of the list.
  3. In the dialog, tick the Firewall box and click OK.

Enable Proxmox firewall — RMU level

  1. Select the RMU in the left panel.
  2. In the middle panel, scroll to Firewall > Options and select the top Firewall configuration option in the main panel. Click Edit.
  3. In the dialog, tick the Firewall box and click OK.
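Both toggles are reflected in the pve firewall configuration files, which you can inspect from a shell. A sketch, assuming the standard Proxmox config locations (the actual files may contain additional options):

```ini
# Datacenter level: /etc/pve/firewall/cluster.fw
[OPTIONS]
enable: 1

# Node (RMU) level: /etc/pve/nodes/<node>/host.fw
[OPTIONS]
enable: 1
```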

Disable RPCBIND

  1. Select the RMU in the left panel.

  2. Click Shell in the top right.

  3. Run these commands one at a time:

    systemctl disable rpcbind.target
    systemctl disable rpcbind.socket
    systemctl disable rpcbind.service
    systemctl stop rpcbind.target
    systemctl stop rpcbind.socket
    systemctl stop rpcbind.service
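The six commands above can also be expressed as a loop. The leading `echo` makes this a dry run that prints each command for review; remove it to actually disable and stop the units:

```shell
# Dry-run sketch of the rpcbind shutdown above; remove "echo" to execute.
for action in disable stop; do
  for unit in rpcbind.target rpcbind.socket rpcbind.service; do
    echo systemctl "$action" "$unit"
  done
done
```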
    

B. Install RACADM tool on RMU (Optional, Dell nodes)

This section applies to sites with Dell node machines. Installing RACADM lets the RMU access the iDRAC interface on Dell servers — for example, to update iDRAC settings or open an emergency console shell.

  1. In the Proxmox WebUI, select the RMU in the left panel and click Shell in the top right.

  2. Execute the following commands:

    echo 'deb http://linux.dell.com/repo/community/openmanage/11010/jammy jammy main' | sudo tee -a /etc/apt/sources.list.d/linux.dell.com.sources.list
    wget https://linux.dell.com/repo/pgp_pubkeys/0x1285491434D8786F.asc
    apt-key add 0x1285491434D8786F.asc
    apt-get update
    apt install gpg libssl-dev -y
    apt install srvadmin-idracadm8 -y
    
  3. To allow the Teleport server to forward the iDRAC web interface, disable the SSL header check on each Dell node. Run this once per Dell node from a shell on the RMU:

    racadm -r 10.10.100.55 -u root -p <password> set idrac.webserver.HostHeaderCheck 0
    

    [!NOTE] <password> is the iDRAC/BMC password for the Dell node. If the password has not been reset, it is on the pull-out tab on the front face of the server. If it has been reset and the current password is unknown, you must use a physical console ("crash cart") and boot the server into iDRAC recovery mode to reset it. See iDRAC Access and TSR Logs.
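To repeat this for every Dell node without typing each IP, a small loop over the BMC range can help. This is a sketch assuming the standard MaaS BMC range (10.10.100.53-10.10.100.99), the root user, and a single shared iDRAC password (`IDRAC_PASS` is a placeholder). It only prints the commands; review the output, then pipe it to `sh` to execute for real:

```shell
# Print the HostHeaderCheck command for every iDRAC in the BMC range.
# Review the output, then pipe to "sh" to execute.
IDRAC_PASS='<password>'
for last_octet in $(seq 53 99); do
  echo "racadm -r 10.10.100.${last_octet} -u root -p ${IDRAC_PASS} set idrac.webserver.HostHeaderCheck 0"
done
```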

Install Dell OpenManage Enterprise (Optional, Dell nodes)

Dell OpenManage Enterprise (OME) communicates with the Baseboard Management Controllers (BMCs) of your nodes. It can discover the iDRACs of Dell servers and apply firmware updates remotely.

Create the OME VM

  1. From the main Proxmox page, click Create VM.

  2. Tick Advanced and Start at boot. Configure:

    • VM ID: 123
    • Name: <dc>-ome
    • Start/Shutdown Order: 10

    Click Next.

  3. On the OS tab, select Do not use any media. Set Type to Other. Click Next.

  4. On System: ensure Graphic card is Default, SCSI Controller is VirtIO SCSI single, Machine is Default (i440fx), BIOS is Default (SeaBIOS). Click Next.

  5. On Disks: Bus/Device IDE and 0, Storage local-zfs, Disk size (GiB) 8, tick Backup, Async IO Default (io_uring). Click Next.

  6. On CPU: Sockets 1, Type X86-64-v2-AES, Cores 8. Click Next.

  7. On Memory: Memory (MiB) 16767, Minimum memory (MiB) 2048. Click Next.

  8. On Network: Bridge vmbr1, Model Intel E1000, untick Firewall. Click Next.

  9. Do not tick Start after created — the OME image still needs to be mapped to a disk. Click Finish.

Download and import OME

  1. Open a shell console on the RMU and download the OME virtual image:

    wget --user-agent="Mozilla" -O ome.zip 'https://dl.dell.com/FOLDER07474001M/1/openmanage_enterprise_kvm_format_3.6.1.zip?uid=c802e350-6536-4f28-7a66-93b4f844cd30&fn=openmanage_enterprise_kvm_format_3.6.1.zip'
    
  2. Unzip the download:

    apt update && apt install unzip -y
    unzip ome.zip
    
  3. Import the QCOW2 image into the OME VM's storage:

    cd appliance/qemu-kvm/
    qm importdisk 123 openmanage_enterprise.qcow2 local-zfs
    
  4. Configure the imported disk as the boot disk:

    • Select the OME VM in the left panel.
    • Click Hardware.
    • Double-click Unused Disk.
    • Set Bus/Device to VirtIO Block. Click Add.
    • Click Options, then double-click Boot Order.
    • Move the device described as local-zfs:vm-123-disk-1… to position 1 and tick Enable next to it.
    • Untick the enable box for all other boot devices. Click OK.
  5. Select the OME VM and start it.
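The GUI work in step 4 can also be done from the RMU shell with qm. A dry-run sketch: the disk name vm-123-disk-1 is an assumption (confirm it with `qm config 123`), and the leading `echo` must be removed to apply the changes:

```shell
# Attach the imported disk as virtio0 and make it the only boot device.
# "vm-123-disk-1" is an assumption; confirm with "qm config 123".
echo qm set 123 --virtio0 local-zfs:vm-123-disk-1
echo qm set 123 --boot order=virtio0
```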

Configure OME

  1. Select the OME VM and click Console. Allow time for the first boot.

  2. On EULA, select Accept.

  3. Select your keyboard type.

  4. Set a non-trivial password and store it in your password manager. Tab to Apply and press Enter.

  5. Use the arrow keys to scroll to Set Networking Parameters and press Enter.

  6. Press Enter on the available network adapter.

  7. Enter the password you just created. Tab to Continue and press Enter.

  8. Configure the network. Use the arrow keys to navigate to DHCP, then Tab into the editable text fields and set:

    • IPv4 Address (Static): 10.10.100.23
    • Static Gateway: 10.10.100.1
    • Static Subnet Mask: 255.255.255.0
    • Static Preferred DNS server: 1.1.1.1

  9. Select Apply.

  10. Re-enter your non-trivial password and select Continue.

Share OME with Teleport

  1. Open a shell on the RMU server and edit /etc/teleport.yaml.

  2. Add the following section under the app_service -> apps stanza, matching the indentation of the existing name: bo1-rmu section. Replace bo1 and dfinity.network with your own data center code and domain, as used earlier in the runbook:

    - name: bo1-ome
      uri: https://10.10.100.23:443
      public_addr: "bo1-ome.teleport.bo1.dfinity.network"
      insecure_skip_verify: true
      rewrite:
        redirect:
        - "10.10.100.23"
        - "bo1-ome.teleport.bo1.dfinity.network"
      labels:
        dc: "bo1"
    
  3. Restart the Teleport service:

    [!WARNING] If you are accessing the RMU shell via Teleport, your connection will be disconnected when the service restarts.

    systemctl restart teleport
    

Configure the OME web GUI

  1. From the Teleport Resources page, find the OME tile and select it to open the web UI.
  2. Sign in. The default user is admin with the password set during OME installation.
  3. Click Initial Settings under Step 1.
  4. Expand Time Configuration and set the timezone — UTC is recommended (DFINITY uses UTC; if you operate more than one data center, a single timezone keeps cross-DC reasoning simpler).

Discover your nodes

  1. On Step 2 click Discover Devices to open the Create Discovery Job dialog.

  2. Under Device Type, select Server. Ensure Dell iDRAC is selected, click OK.

  3. Under IP/Hostname/Range, enter 10.10.100.53-10.10.100.99.

  4. Under Services API Credentials, enter the iDRAC username and password. If iDRACs use different passwords, click Add and add each IP individually.

  5. Click Finish.

    [!TIP] If not all nodes are discovered on the first attempt, narrow the range or add each host separately with its own credentials.

  6. Monitor the discovery from Monitor > Jobs in the top navigation bar.
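If you end up adding hosts one at a time, the full discovery range can be enumerated from any shell (a sketch; adjust the bounds to the hosts that are still missing):

```shell
# Enumerate the iDRAC discovery range 10.10.100.53-10.10.100.99 (47 hosts).
seq -f '10.10.100.%g' 53 99
```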

Create a baseline

  1. Click Configuration > Firmware/Driver Compliance.
  2. Click Create Baseline.
  3. In the dialog, click Add next to Catalog.
  4. Enter a name for the catalog (e.g. Dell Catalog).
  5. Ensure Latest Component versions on Dell.com is enabled. Click Finish.
  6. Enter a baseline name. Including the <dc> code is recommended. Click Next.
  7. Click Select Devices.
  8. Click the small box and tick Across all pages to select all devices. Click OK.
  9. Click Finish.

Applying firmware updates

Single server

  1. Click Devices.
  2. Click the IP address in the Name column to open a single device.
  3. Click Firmware/Drivers.
  4. Select the baseline you created earlier in the drop-down.
  5. Tick the firmware updates you wish to apply.
  6. Click Update, then Update Now.
  7. Set Reboot Server immediately to Graceful Reboot with Forced Shutdown.
  8. Tick Reset idrac.
  9. Tick Clear Job Queue.
  10. Click Update.

Multiple servers

  1. Click Configuration > Firmware/Driver Compliance.
  2. Tick the box next to the baseline you created earlier.
  3. Click Check Compliance.
  4. Click View Report.
  5. Tick the boxes next to the servers you wish to update.
  6. Click Make Compliant, then Update Now.
  7. Set Reboot Server immediately to Graceful Reboot with Forced Shutdown.
  8. Tick Reset idrac.
  9. Tick Clear Job Queue.
  10. Click Update.
Update SuperMicro BMC firmware (Optional, SuperMicro nodes)

  1. Open and log in to the BMC of the SuperMicro node you wish to update.

  2. Click System > FRU Reading.

  3. Note the Board Product Name on this page.

  4. Search the SuperMicro firmware page for the latest BMC firmware for that board: supermicro.com/support/resources/bios_ipmi.php?type=BMC.

  5. Once you locate your motherboard, click Resources.

  6. Under Software, click BMC Firmware.

  7. Download and extract the firmware archive.

  8. In the BMC, click Maintenance > Firmware Update.

  9. Click Enter Update Mode.

  10. Click Yes in the confirmation dialog.

  11. Click Upload Firmware.

  12. Click Yes in the confirmation dialog.

  13. Untick Preserve Configuration (unchecking this restores the BMC's factory default settings).

  14. Click Start Upgrade.

    [!NOTE] After the firmware update completes, clear the cache of any browser that previously connected to that BMC web console.

Allow iDRACs to be browsable through Teleport

These steps depend on Dell OpenManage Enterprise being installed.

  1. In OME, click Devices in the top navigation bar.

  2. Click the checkbox in the middle of the screen, then Across all pages to select every node.

  3. Click the More Actions drop-down and select RACADM CLI.

  4. In the Arguments box, enter:

    set idrac.webserver.HostHeaderCheck 0
    
  5. Confirm that all your nodes are listed under Selected devices.

  6. Click Finish.

  7. Click Monitor in the top navigation bar, then Jobs.

  8. Find the Remote command line job and click it. Click View Details. Selecting any individual node shows execution details on the right.

Add Teleport entries for BMC/iDRAC access

  1. (No longer needed for the idrac.sh script) On the MaaS server, in a shell (access via Teleport MaaS SSH access as user admin), configure the maas command:

    sudo apt install jq -y
    
    sudo maas apikey --username dfnadmin
    stuff:stuff:stuff
    
    maas login maas http://localhost:5240/MAAS
    API key (leave empty for anonymous access):  <enter the api key above>
    
  2. Rename the existing Teleport configuration to a base file:

    sudo mv /etc/teleport.yaml /etc/teleport.yaml-base
    
  3. Save the following script as idrac.sh on the MaaS server:

    #! /usr/bin/env bash
    
    # Start with the base /etc/teleport.yaml file and add
    # the iDRAC device entries to it.
    
    if [ -r /etc/teleport.yaml-base ] ; then
      cp /etc/teleport.yaml-base /etc/teleport.yaml
    else
      echo "/etc/teleport.yaml-base not found.  Aborting."
      exit 1
    fi
    
    # Add a comment at the end of the base configuration to
    # indicate where the automagic script configuration starts.
    echo "### Automation added below ###" >> /etc/teleport.yaml
    
    echo "Finding iDRAC devices..."
    
    # Only pull out the DHCP Dynamic Range (from MaaS)
    grep '10.10.100.[56789][0-9]' /var/log/syslog | grep DHCPACK |
    
    while read a a a a a a a IP a a NAME stuff
    do
      echo "$NAME $IP"
    done |
    
    # Get rid of the duplicates
    sort -u |
    
    # Walk the list of NAME/IP and clean up the values
    while read NAME IP
    do
      # Sanitize the NAME variable by removing the '()' and lower casing it
      NAME="$(echo "$NAME" | sed -e 's/[()]//g' | tr '[:upper:]' '[:lower:]')"
      echo "$NAME $IP"
    done |
    
    # Add the host to the /etc/teleport.yaml
    while read NAME IP
    do
      echo "Adding ${NAME} to /etc/teleport.yaml ..."
      tee -a /etc/teleport.yaml <<EOF
      - name: ${NAME}-idrac
        uri: https://${IP}:443
        #public_addr: ""
        public_addr: "${NAME}-idrac.teleport.<dc>.dfinity.network"
        insecure_skip_verify: true
        rewrite:
          headers:
          - "Host: ${NAME}-idrac.teleport.<dc>.dfinity.network"
          - "Origin: https://${NAME}-idrac.teleport.<dc>.dfinity.network"
        labels:
          dc: "<dc>"
          type: "bmc"
    EOF
    done
    
  4. Edit <dc> in the script to match your data center code, save, and execute it:

    chmod +x ./idrac.sh
    sudo ./idrac.sh
    
  5. When the script completes, reload Teleport:

    sudo systemctl reload teleport
    
  6. Within a few minutes, your iDRAC devices appear in the Teleport web UI.
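The parsing pipeline inside idrac.sh can be sanity-checked offline with a fabricated syslog line. The line below is an assumption modeled on ISC dhcpd's DHCPACK logging (the hostname NODE01 and the MAC address are made up); feeding it through the same read/sed/tr pipeline confirms the field positions line up:

```shell
# One fake DHCPACK syslog line through the script's pipeline.
line='Jan 2 03:04:05 maas dhcpd[123]: DHCPACK on 10.10.100.61 to aa:bb:cc:dd:ee:ff (NODE01) via eth0'
echo "$line" | grep '10.10.100.[56789][0-9]' | grep DHCPACK |
while read a a a a a a a IP a a NAME stuff
do
  NAME="$(echo "$NAME" | sed -e 's/[()]//g' | tr '[:upper:]' '[:lower:]')"
  echo "$NAME $IP"   # prints "node01 10.10.100.61"
done
```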

OPNsense GUI access through Teleport

This section adds Teleport entries for managing the OPNsense firewalls through the web GUI.

Add IPv6 host entries

Add the IPv6 address of each OPNsense device to the end of the /etc/hosts file:

<IPv6_of_1st_OPNsense_device> br2-fw01
<IPv6_of_2nd_OPNsense_device> br2-fw02

Add Teleport entries for the OPNsense firewalls

  1. Edit /etc/teleport.yaml on the RMU and add:

    - name: <dc>-fw01
      uri: https://[<IPv6_of_1st_OPNsense_device>]:443
      public_addr: "<dc>-fw01.teleport.<dc>.<domain>"
      insecure_skip_verify: true
      rewrite:
        redirect:
        - "[<IPv6_of_1st_OPNsense_device>]"
        - "<dc>-fw01.teleport.<dc>.<domain>"
      labels:
        dc: "<dc>"
    
    - name: <dc>-fw02
      uri: https://[<IPv6_of_2nd_OPNsense_device>]:443
      public_addr: "<dc>-fw02.teleport.<dc>.<domain>"
      insecure_skip_verify: true
      rewrite:
        redirect:
        - "[<IPv6_of_2nd_OPNsense_device>]"
        - "<dc>-fw02.teleport.<dc>.<domain>"
      labels:
        dc: "<dc>"
    
  2. Reload the Teleport service:

    sudo systemctl reload teleport
    
  3. The OPNsense GUIs are now visible in Teleport, and clicking each one opens the OPNsense login page.

Reference: internal network layout

We recommend that all Gen-1 data centers use the following private static IP addresses for the management network.

The MaaS DHCPv4 BMC range is 10.10.100.50-10.10.100.99. This allows for an RMU, fw01, fw02, and 47 IC BMC addresses.

| Name | "Management" Private IPv4 (10.10.100.0/24) | "BMC" IPv4 (10.10.100.0/24) | Management Public IPv4 | Public IPv4 Uplink (/28) | Public IPv6 Uplink | "Delegated" IPv6 Prefix (::/64) | Notes |
|---|---|---|---|---|---|---|---|
| Upstream Provider IPv6 Uplink Default Router | | | ${MGMTIPV4}+1 | ${PUBLICIPV4}+1 | ${UPLINKIPV6}::1 | | Upstream provider may use something like VRRP on :2/:3 |
| RMU | 10.10.100.1 | 10.10.100.50 | ${MGMTIPV4}+2 | | | ${DELEGATED}::EIP64 | |
| fw01/router01 | ? | 10.10.100.51 | | | ${UPLINKIPV6}::4 | ${DELEGATED}::1 | |
| fw02/router02 or bn01 | ? | 10.10.100.52 | | | ${UPLINKIPV6}::5 | ${DELEGATED}::2 | |
| msw01 | 10.10.100.10 | | | | | | |
| sw01 | 10.10.100.11 | | | | | | |
| sw02 | 10.10.100.12 | | | | | | |
| sw03 | 10.10.100.13 | | | | | | |
| sw04 | 10.10.100.14 | | | | | | |
| teleport | 10.10.100.22 | | | | | ${DELEGATED}::EIP64 | |
| maas | 10.10.100.20 | | | | | | |
| monitoring | 10.10.100.21 | | | | | | |
| ome | 10.10.100.23 | | | | | | |
| iDRAC / BMC | | 10.10.100.53-10.10.100.99 | | | | ${DELEGATED}::EIP64 | |
| Unallocated | 10.10.100.100-10.10.100.254 | | | | | | |

Reference: RMU cabling

If this is a Gen-1 site that already has a SuperMicro RMU, the cabling should already be in place.

[!NOTE] This cabling is not mandatory, but it is the recommended layout.

| Device ID | Network/Port | Device ID | Port |
|---|---|---|---|
| CoreSite Fiber Internet | WAN | mm1-MSW01 | 51 |
| mm1-RMU | WAN | mm1-MSW01 | 2 |
| mm1-MSW01 (Dell 3048) | Management | mm1-MSW01 | 3 |
| mm1-SW02 (Dell 4148) | Management | mm1-MSW01 | 4 |
| mm1-RMU | LOM | mm1-MSW01 | 6 |
| mm1-FW01 | Management | mm1-MSW01 | 7 |
| mm1-FW02 | Management | mm1-MSW01 | 8 |
| mm1-RMU | Management | mm1-MSW01 | 10 |
| mm1-RMU | VLAN 66 | mm1-SW02 | 20 |
| mm1-RMU | LAN | mm1-SW02 | 52 |
| mm1-FW01 | LAN | mm1-SW02 | 53 |
| mm1-FW02 | LAN | mm1-SW02 | 54 |

Reference: checking number of internal drives

  1. Ensure the RMU is powered off.

  2. Connect a crash cart to the RMU. Connect both the VGA and keyboard.

  3. Power it on and press the Del key repeatedly to enter System Setup (BIOS).

  4. Use the arrow keys to scroll over to Save and Exit.

  5. Under Boot Override, count the number of hard drive entries.

    [!NOTE] Normally an RMU has one or two drives in total.

  6. Power off the server.