# px1-silverstone Hardware Inventory
Node: px1-silverstone (REDACTED_IP)
Role: Primary cluster node - VM/CT host + Ceph OSD.1
Last Updated: 2026-01-18
## System Specifications
| Property | Value |
|---|---|
| Model | Dell OptiPlex 3000 |
| CPU | Intel 12th Gen Core i3-12100T @ 3.1 GHz (4 cores, 4 threads) |
| RAM | 32 GB DDR4 |
| BIOS Version | 1.37.0 |
| Dell Service Tag | GTVGWP3 |
| TPM | 2.0 (Nuvoton, firmware 7.2.2.0) |
| Motherboard | Dell 0PW9RR (A00 revision) |
| Boot Mode | UEFI |
| SecureBoot | ✅ Enabled |
## BIOS & Firmware Management
### fwupd - Firmware Update Daemon
px1 has fwupd 2.0.8-3 installed from Proxmox repositories, enabling automated BIOS and firmware updates via UEFI capsule updates.
Installed packages:
fwupd 2.0.8-3+pmx1 Firmware update daemon
fwupd-amd64-signed 1:1.7+1+pmx1 UEFI firmware tools
libfwupd3 2.0.8-3+pmx1 Firmware library
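To confirm these package versions on the node (read-only; assumes the `px1` SSH alias used throughout this page):
# List installed fwupd packages and their versions
ssh px1 "dpkg -l | grep -i fwupd"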
Capabilities:
- System firmware (BIOS) updates
- TPM firmware updates
- NVMe/SSD firmware updates
- UEFI security database updates (Microsoft UEFI dbx)
- Cryptographic verification of all updates
### Checking Firmware Status
# Check all firmware devices and current versions
ssh px1 fwupdmgr get-devices
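# Refresh update metadata from LVFS first - get-updates reads the local cache
ssh px1 fwupdmgr refresh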
# Check for available updates
ssh px1 fwupdmgr get-updates
Current Status:
- BIOS 1.37.0 ✅ (current)
- TPM 7.2.2.0 (current)
- NVMe/SSD firmware (up to date)
### BIOS Update Procedure
⚠️ Before updating:
1. Migrate HA resources if running on px1 (CT1112, CT1113, CT1118, CT3102)
2. Ensure continuous power supply during update
3. Keep SSH access available - update will reboot
Update procedure:
# Step 1: Migrate HA resources (if applicable)
ha-manager status | grep px1
ha-manager migrate ct:1112 px3-suzuka
ha-manager migrate ct:1113 px3-suzuka
ha-manager migrate ct:1118 px3-suzuka
ha-manager migrate ct:3102 px3-suzuka
# Step 2: Check available update
ssh px1 "fwupdmgr get-updates"
# Step 3: Apply update (schedules UEFI reboot)
ssh px1 "fwupdmgr update"
# Step 4: Monitor - system will reboot into firmware updater (~5 min)
# System reboots back to Proxmox automatically
# Step 5: Verify update success
ssh px1 "fwupdmgr get-devices | grep -A 2 'System Firmware'"
# Should show the newly installed version (1.37.0 as of the last update)
# Step 6: Migrate resources back (if applicable)
ha-manager migrate ct:1112 px1-silverstone
# Repeat for ct:1113, ct:1118, and ct:3102
### Manual BIOS Configuration
To access BIOS settings manually (if needed):
1. Power on node
2. Watch for Dell splash screen
3. Press F2 (BIOS Setup) or F12 (Boot Menu) during POST
4. Configure settings as needed
5. Save & Exit (F10)
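The running BIOS version can also be read from the OS without a reboot (a quick check, assuming dmidecode is installed on the node):
# Query SMBIOS for BIOS version and release date
ssh px1 "dmidecode -s bios-version"
ssh px1 "dmidecode -s bios-release-date"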
## Storage Drives
px1 has 3 physical storage drives with specific purposes:
### 1. Primary NVMe (Boot + Local Storage)
| Property | Value |
|---|---|
| Device | /dev/nvme0n1 |
| Model | Samsung SSD 990 PRO 2TB |
| Serial | S7DNNU0X736656V |
| Type | NVMe M.2 (PCIe 4.0) |
| Interface | Direct to motherboard |
| Total Capacity | 1.8 TiB |
| Health | ✅ PASSED |
Partitioning:
nvme0n1p1 1007K BIOS boot partition
nvme0n1p2 1.0G /boot/efi
nvme0n1p3 1.8T LVM pve (system + local-nvme)
Logical Volumes:
pve-swap 8GB Swap space
pve-root 250GB Root filesystem (/)
pve-local--nvme 1.6TB Local backup storage (local-nvme)
Mount Points:
/ pve-root
/boot/efi nvme0n1p2
/mnt/local-nvme pve-local--nvme (backup-nvme storage)
Proxmox Storage: local (host system), backup-nvme (backups)
Use Case: System boot, primary VM/CT disks, local backup cache
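To verify the partition layout and mounts match the above (read-only):
# Partitions, sizes, and mount points for the boot NVMe
ssh px1 "lsblk -o NAME,SIZE,TYPE,MOUNTPOINT /dev/nvme0n1"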
### 2. Ceph OSD Drive (SATA SSD)
| Property | Value |
|---|---|
| Device | /dev/sda |
| Model | Samsung SSD 870 EVO 2TB |
| Serial | S754NX0XB07933J |
| Type | SATA SSD |
| Interface | Direct SATA connection |
| Total Capacity | 1.8 TiB |
| Health | ✅ PASSED |
Partitioning:
sda is used entirely for Ceph OSD (no partitions)
Logical Volumes:
ceph-<id>-osd-block 1.8TB Ceph OSD.1 data storage
Mount Points:
/var/lib/ceph/osd/ceph-1 (Ceph daemon)
Proxmox Storage: ceph-pool (cluster shared storage)
Use Case: Persistent storage for all cluster VMs/CTs (replicated 3x across px1, px2, px3)
Important: Do NOT partition or modify this drive - Ceph manages it entirely.
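To confirm which device backs OSD.1 and that the daemon is healthy (read-only, run on px1):
# Map OSD.1 to its backing LVM device
ssh px1 "ceph-volume lvm list"
# Check the OSD service itself
ssh px1 "systemctl status ceph-osd@1"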
### 3. Backup Drive (USB-Connected SSD)
| Property | Value |
|---|---|
| Device | /dev/sdb |
| Model | Micron CT2000X9SSD9 |
| Serial | 2334E8D61929 |
| Type | SSD (in USB enclosure) |
| Connection | USB 3.0 Enclosure (ID 0634:5605) |
| Total Capacity | 1.8 TiB |
| Health | ⚠️ Cannot read via SMART (USB bridge) |
Partitioning:
sdb1 1.8TB ext4 backup archive
Mount Points:
/mnt/sdb1-backup (backup-storage Proxmox storage)
Current Usage: 736 GB (40% full)
Proxmox Storage: backup-storage (currently disabled - see Backup Strategy)
Use Case: Legacy backup location (being phased out)
Status: ⚠️ DEPRECATED - backups have moved to NVMe for speed and reliability
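Many USB-SATA bridges will pass SMART through when smartctl is given an explicit device type; worth trying before treating the drive as unmonitorable (this particular bridge may still refuse):
# Attempt SMART passthrough via the SAT translation layer
ssh px1 "smartctl -a -d sat /dev/sdb"
# Fallback: query via the generic SCSI layer
ssh px1 "smartctl -a -d scsi /dev/sdb"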
## Storage Summary Table
| Drive | Model | Interface | Size | OSD | Purpose | Status |
|---|---|---|---|---|---|---|
| nvme0n1 | Samsung 990 PRO | NVMe | 1.8T | — | System + backups | ✅ Active |
| sda | Samsung 870 EVO | SATA | 1.8T | OSD.1 | Cluster storage | ✅ Active |
| sdb | Micron | USB 3.0 | 1.8T | — | Legacy backups | ⚠️ Disabled |
## Ceph OSD Details
OSD Configuration on px1:
# Check OSD status
ssh root@REDACTED_IP
ceph osd tree | grep px1
ceph osd metadata | grep -A 10 '"id": 1'
Current State:
OSD.1:
Hostname: px1-silverstone
Weight: 1.81940
Status: up
Device: Samsung SSD 870 EVO 2TB (S754NX0XB07933J)
Cluster Pool:
- Pool: ceph-pool
- Replication: 3x (px1, px2, px3)
- Total Capacity: 1.73 TiB usable
- Current Usage: 201.88 GiB (11.39%)
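To reproduce the pool numbers above (read-only):
# Cluster-wide and per-pool capacity/usage
ssh px1 "ceph df"
# Confirm the 3x replication setting on ceph-pool
ssh px1 "ceph osd pool get ceph-pool size"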
## LVM Layout
Physical Volume:
/dev/nvme0n1p3 (1.8T)
Volume Group:
pve (1.8T total)
Logical Volumes:
pve/root 250GB /
pve/swap 8GB Swap
pve/local-nvme 1.6TB /mnt/local-nvme (backup pool)
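To dump the same LVM picture from the node (read-only):
# Physical volume, volume group, and logical volumes
ssh px1 "pvs && vgs pve && lvs pve"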
## Backup Storage Status
### backup-nvme (Active Primary)
| Property | Value |
|---|---|
| Storage | Local backup-nvme |
| Device | /mnt/local-nvme/backup-pool |
| Capacity | 1.6 TiB |
| Used | 51 GB (3.26%) |
| Free | 1.41 TiB |
| Retention | 3 daily, 2 weekly, 1 monthly |
| Schedule | Daily at 02:00 UTC |
| Type | Primary backups (speed optimized) |
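The retention and schedule above are defined in the cluster-wide backup job config; to double-check them (assumes PVE 7.2+, where vzdump jobs live in jobs.cfg):
# Inspect backup jobs, schedules, and retention settings
ssh px1 "cat /etc/pve/jobs.cfg"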
### backup-storage (Disabled Legacy)
| Property | Value |
|---|---|
| Storage | USB backup-storage |
| Device | /mnt/sdb1-backup |
| Capacity | 1.8 TiB |
| Used | 736 GB (40%) |
| Free | 1.0 TiB |
| Schedule | DISABLED |
| Type | Legacy (being phased out) |
### pikvm-backup (Off-site)
| Property | Value |
|---|---|
| Storage | Remote NFS |
| Host | pikvm (REDACTED_IP, France) |
| Export | /mnt/external |
| Capacity | 2.7 TiB |
| Used | 1.39 TiB (51.6%) |
| Retention | 5 daily, 3 weekly, 2 monthly |
| Schedule | Backup at 03:00 + rsync at 04:00 UTC |
| Type | Off-site disaster recovery |
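To confirm the off-site export is reachable from px1 (substitute the pikvm address, redacted above; showmount needs the nfs-common package):
# List NFS exports offered by the pikvm host
ssh px1 "showmount -e <pikvm-address>"
# Confirm Proxmox sees the storage as active
ssh px1 "pvesm status --storage pikvm-backup"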
## Proxmox Storage Configuration
# View all storage
pvesm status
# Check storage.cfg
cat /etc/pve/storage.cfg | grep -A 5 'backup\|ceph-pool'
Current Storage Pool Summary:
| Storage | Type | Nodes | Content | Status |
|---|---|---|---|---|
| ceph-pool | RBD | All 3 | VM disks | ✅ Active |
| backup-nvme | dir | px1 | Backups | ✅ Active |
| backup-storage | dir | px1 | Backups | ⚠️ Disabled |
| pikvm-backup | NFS | All | Backups | ✅ Active |
| local | dir | px1 | System | ✅ Active |
| px3-nas | NFS | All | Shared | ✅ Active |
## Health & Monitoring
### SMART Status
| Drive | Status | Last Check |
|---|---|---|
| nvme0n1 (990 PRO) | ✅ PASSED | 2026-01-16 |
| sda (870 EVO) | ✅ PASSED | 2026-01-16 |
| sdb (Micron) | ⚠️ Unavailable | (USB bridge) |
Check drive health:
ssh root@REDACTED_IP
smartctl -a /dev/nvme0n1
smartctl -a /dev/sda
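For the NVMe drive, nvme-cli gives a more detailed health log than smartctl (assuming the nvme-cli package is installed):
# NVMe-native SMART log: temperature, wear, media errors
nvme smart-log /dev/nvme0n1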
### Disk Space Monitoring
Check current usage:
ssh root@REDACTED_IP
df -h /mnt/local-nvme /mnt/sdb1-backup
ceph df
## Future Optimization
### Current Issues
- ⚠️ USB-connected sdb (backup-storage) is legacy and not ideal for frequent I/O
- Backup operations should prioritize NVMe for speed
- USB drive better suited for cold archive (if needed)
### Options Under Review
- Keep sdb disabled - rely on NVMe + France off-site
- Use sdb as monthly cold archive - slow but cheap (see the sketch after this list)
- Repurpose sdb elsewhere in infrastructure
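If the monthly cold-archive option is chosen, a cron-driven sync along these lines would cover it (a minimal sketch, not deployed; assumes sdb1 stays mounted at /mnt/sdb1-backup and the NVMe backup pool path from above):
# /etc/cron.d/cold-archive (hypothetical) - monthly copy of current backups to the USB drive
# 05:00 UTC on the 1st, after the nightly backup and off-site sync windows
0 5 1 * * root rsync -a --delete /mnt/local-nvme/backup-pool/ /mnt/sdb1-backup/cold-archive/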