px2-monza Hardware Inventory
Node: px2-monza (REDACTED_IP)
Role: Secondary cluster node - VM/CT host + Ceph OSD
Status: ✅ MIGRATION COMPLETE
Last Updated: 2026-01-18
System Specifications
| Property | Value |
|---|---|
| Model | Dell OptiPlex 3060 |
| CPU | Intel Core i3-8100T @ 3.1 GHz (4 cores, 4 threads) |
| RAM | 32 GB DDR4 (2x 16GB Micron 8ATF2G64HZ-3G2E2 @ 2400 MT/s) |
| BIOS Version | 1.32.0 (2024-09-03) |
| TPM | 2.0 (Nuvoton, firmware 7.2.0.1) |
| Motherboard | Dell 03KWTV (A00 revision) |
| Serial Number | 1RH0BW2 |
| Boot Mode | UEFI |
| SecureBoot | ✅ Enabled |
BIOS & Firmware Management
fwupd - Firmware Update Daemon
px2 has fwupd 2.0.8-3 installed from Proxmox repositories, enabling automated BIOS and firmware updates via UEFI capsule updates.
Installed packages:
fwupd 2.0.8-3+pmx1 Firmware update daemon
fwupd-amd64-signed 1:1.7+1+pmx1 UEFI firmware tools
libfwupd3 2.0.8-3+pmx1 Firmware library
Capabilities:
- System firmware (BIOS) updates
- TPM firmware updates
- NVMe/SSD firmware updates
- UEFI security database updates
- Cryptographic verification of all updates
Checking Firmware Status
# Check all firmware devices and current versions
ssh px2 fwupdmgr get-devices
# Check for available updates
ssh px2 fwupdmgr get-updates
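The local LVFS metadata can be refreshed first if the update check looks stale:
# Refresh firmware metadata from the LVFS
ssh px2 fwupdmgr refresh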
Current Status:
- BIOS 1.32.0 ✅ (current)
- TPM 7.2.0.1 (current)
- NVMe/SSD firmware (all up to date)
Updating BIOS (When Available)
If future BIOS updates become available:
⚠️ Before updating:
1. Migrate any HA resources currently running on px2 (see the sketch below)
2. Ensure continuous power supply during the update
3. Keep SSH access available - the update will reboot the node
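A minimal pre-update sketch, assuming guests should land on px1-silverstone; the VM ID is a placeholder, not a real guest:
# See which HA resources are currently placed on px2
ssh px2 "ha-manager status"
# Live-migrate a guest off px2 (VM ID 100 is hypothetical)
ssh px2 "qm migrate 100 px1-silverstone --online"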
Update procedure:
# Check for available update
ssh px2 "fwupdmgr get-updates"
# If updates exist, apply
ssh px2 "fwupdmgr update"
# Monitor - system will reboot into firmware updater (~5 min)
# Then reboot back to Proxmox automatically
# Verify update success
ssh px2 "fwupdmgr get-devices | grep -A 2 System Firmware"
Manual BIOS Configuration
To access BIOS settings manually (if needed):
1. Power on node
2. Watch for Dell splash screen
3. Press F2 (BIOS Setup) or F12 (Boot Menu) during POST
4. Configure settings as needed
5. Save & Exit (F10)
Storage Drives (Post-Migration)
px2 now has two storage drives in an optimized layout matching px1:
1. Primary NVMe (Boot + System + Ceph OSD)
| Property | Value |
|---|---|
| Device | /dev/nvme0n1 |
| Model | Crucial P2 2TB (CT2000P2SSD8) |
| Serial | 2113E590A7F5 |
| Type | NVMe M.2 (PCIe 3.0) |
| Interface | Direct to motherboard |
| Total Capacity | 1.8 TiB |
| Health | ✅ PASSED |
| Available Spare | 100% |
| Percentage Used | ~8% |
Partitioning:
nvme0n1p1 1007K BIOS boot (bios_grub)
nvme0n1p2 1G /boot/efi (vfat)
nvme0n1p3 1.8T LVM pve
Logical Volumes (Production):
pve-root 100GB Root filesystem (/)
pve-swap 16GB Swap space
pve-ceph-osd ~1.71TB Ceph OSD.1 data ✅ ACTIVE
Mount Points:
/ pve-root (5.5% used)
/boot/efi nvme0n1p2
Proxmox Storage: local (host system)
Ceph Storage: ceph-pool via OSD.1 (NVMe-based, up/in)
Use Case: High-performance system boot and Ceph OSD storage
Status: ✅ Production - Fast, low-latency OSD for cluster storage
2. Secondary SATA (Backup Mirror)
| Property | Value |
|---|---|
| Device | /dev/sda |
| Model | Samsung SSD 870 EVO 2TB |
| Serial | S6P4NL0T804366J |
| Type | SATA SSD |
| Interface | Direct SATA connection |
| Total Capacity | 1.8 TiB |
| Health | ✅ PASSED |
Filesystem (Post-Phase 4):
sda ext4, labeled: backup-secondary
Mount Point:
/mnt/backup-secondary (ext4, 1.7T available)
Proxmox Storage: backup-secondary (backup content type)
Synchronization:
Daily rsync at 03:30 UTC from px1-backup-primary → px2-backup-secondary
Log: /var/log/backup-mirror.log (on px1)
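The job itself lives on px1; a representative sketch of the cron entry (the rsync flags and SSH target are assumptions, not copied from px1):
# /etc/cron.d/backup-mirror on px1 (illustrative)
30 3 * * * root rsync -a --delete /mnt/backup-primary/ px2:/mnt/backup-secondary/ >> /var/log/backup-mirror.log 2>&1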
Use Case: Off-site backup mirror for disaster recovery
Status: ✅ Production - Syncing backups via nightly cron job
Storage Summary Table
| Drive | Model | Interface | Size | OSD | Ceph | Purpose | Status |
|---|---|---|---|---|---|---|---|
| nvme0n1 | Crucial P2 | NVMe | 1.8T | OSD.1 ✅ | active | System + fast OSD | ✅ Production |
| sda | Samsung 870 EVO | SATA | 1.8T | — | — | Backup mirror | ✅ Production |
Migration Summary:
- ✅ Phase 1: Removed 1.66TB local-lvm, created 1.71TB Ceph OSD on NVMe
- ✅ Phase 2: Rebalanced 208 GiB data to OSD.1 (1h 15min)
- ✅ Phase 3: Removed old SATA OSD.2, zeroed drive
- ✅ Phase 4: Formatted SATA as ext4 backup mirror
- ✅ Phase 5: Integrated px2 mon into 3-mon quorum
- ✅ Phase 6: Enabled nightly backup sync cron job
Ceph Configuration
Current OSD Cluster
| Name | OSD | Location | Device | Weight | Status |
|---|---|---|---|---|---|
| px1-silverstone | osd.3 | Host 3 | Crucial P2 NVMe SSD | 1.705 TiB | up/in |
| px2-monza | osd.1 | Host 2 | Crucial P2 NVMe SSD | 1.705 TiB | up/in ✅ |
| px3-suzuka | osd.0 | Host 1 | Samsung SATA SSD | 1.819 TiB | up/in |
Cluster Pool: ceph-pool (3x replication)
Total Raw: ~5.2 TiB
Usable: ~1.73 TiB (3x replication)
Usage: 208 GiB (12%)
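Per-OSD weight and utilization can be cross-checked at any time:
# Usage, weight and PG count per OSD
ssh px2 "ceph osd df tree"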
Monitor Cluster
px1-silverstone: [v2:REDACTED_IP:3300/0,v1:REDACTED_IP:6789/0]
px2-monza: [v2:REDACTED_IP:3300/0,v1:REDACTED_IP:6789/0] ✅
px3-suzuka: [v2:REDACTED_IP:3300/0,v1:REDACTED_IP:6789/0]
Status: 3-mon quorum (px1, px2, px3), HEALTH_OK
Election: leader px1-silverstone
Quick Health Check
ssh px2
# Cluster status
ceph -s
# OSD tree
ceph osd tree
# Monitor status
ceph mon stat
# Detailed OSD stats
ceph osd metadata | grep -A 10 '"id": 1'
LVM Layout
Physical Volume:
/dev/nvme0n1p3 (1.8 TiB)
Volume Group:
pve (1.8 TiB total)
Logical Volumes:
pve/root 100 GB /
pve/swap 16 GB Swap
pve/ceph-osd ~1.71 TB Ceph OSD.1 (NVMe-based, high-performance)
NO local-lvm: ✅ Removed in migration Phase 1 (no longer needed)
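The layout can be confirmed on the node with the standard LVM tools:
ssh px2
pvs                                   # physical volume on nvme0n1p3
vgs pve                               # volume group totals
lvs -o lv_name,lv_size,devices pve    # root, swap and ceph-osd LVs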
Proxmox Storage Configuration
# View all storage
pvesm status
# Check storage definitions
grep -E '^\w|path' /etc/pve/storage.cfg
Active Storage Pools:
| Storage | Type | Status | Total | Used | Purpose |
|---|---|---|---|---|---|
| ceph-pool | RBD | ✅ Active | 1.73T | 12% | Cluster VM storage (OSD.1 active) |
| local | dir | ✅ Active | 98G | 5.5% | System files |
| backup-secondary | dir | ✅ Active | 1.8T | <1% | Backup mirror (px2 SATA) |
| pikvm-backup | NFS | ✅ Active | 2.7T | 52.9% | Off-site backups |
| px3-nas | NFS | ✅ Active | 1.8T | 16.1% | Shared NAS storage |
Health & Monitoring
SMART Status
| Drive | Status | Last Check |
|---|---|---|
| nvme0n1 (Crucial P2) | ✅ PASSED | 2026-01-18 |
| sda (870 EVO) | ✅ PASSED | 2026-01-18 |
Check drive health:
ssh px2
smartctl -a /dev/nvme0n1
smartctl -a /dev/sda
Disk Space Monitoring
Check current usage:
ssh px2
df -h /
pvesm status
ceph df
Final Status (Post-Migration)
# Cluster is HEALTH_OK with:
# - 3 mons (px1, px2, px3) in quorum
# - 3 OSDs (osd.0, osd.1, osd.3) all up/in
# - 65 PGs all active+clean
# - 208 GiB data fully redundant on fast SSDs
# - Backup mirror syncing nightly
Memory Configuration
| Slot | Size | Type | Speed | Part Number |
|---|---|---|---|---|
| DIMM A | 16 GB | DDR4 | 2400 MT/s | Micron 8ATF2G64HZ-3G2E2 |
| DIMM B | 16 GB | DDR4 | 2400 MT/s | Micron 8ATF2G64HZ-3G2E2 |
| Total | 32 GB | | | |
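If a DIMM is ever swapped, the same details can be re-read from the DMI tables:
# Per-slot size, speed and part number
ssh px2 "dmidecode --type memory | grep -E 'Size|Speed|Part Number'"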
Performance Characteristics
| Component | Spec | Improvement |
|---|---|---|
| CPU | Intel i3-8100T 4-core 3.1GHz | Sufficient for medium workloads |
| RAM | 32GB DDR4-2400 | Adequate for cluster node |
| NVMe | Crucial P2 2TB | ⬆️ OSD now on fast NVMe (was on SATA) |
| SATA | Samsung 870 EVO 2TB | ⬇️ Backup storage (was OSD) |
Migration Impact:
- ✅ OSD Performance: ~3-5x faster (NVMe vs SATA; latency spot-check below)
- ✅ Cluster Latency: Reduced (faster OSD operations)
- ✅ Reliability: Improved (3-mon quorum vs 2)
- ✅ Backup Safety: Enhanced (dedicated mirror drive)
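A quick, read-only way to spot-check OSD latency after the move:
# Per-OSD commit/apply latency in milliseconds
ssh px2 "ceph osd perf"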
Related Documentation
- px1-silverstone Hardware Inventory
- px3-suzuka Hardware Inventory
- Ceph Cluster Architecture
- Network Layout
Migration Completion Summary
Phases Completed
Phase 1: Remove local-lvm, Create Ceph OSD ✅ COMPLETE
- Removed pve/data (1.66TB local-lvm thin pool)
- Created pve/ceph-osd (1.71TB LV on NVMe)
- OSD.1 created and initialized (command sketch below)
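A sketch of the general Phase 1 procedure on px2, assuming the OSD was built on the new LV with ceph-volume (the exact invocation was not recorded here):
# Destroy the local-lvm thin pool and reuse the space as a single LV (destructive)
lvremove pve/data
lvcreate -n ceph-osd -l 100%FREE pve
# Build a new OSD on that LV; it came up as osd.1 in this cluster
ceph-volume lvm create --data pve/ceph-osd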
Phase 2: Rebalance Ceph Data ✅ COMPLETE
- Rebalanced 208 GiB from px3/px1 → px2
- Duration: 1 hour 15 minutes
- Result: All 65 PGs active+clean, HEALTH_OK
Phase 3: Remove Old SATA OSD ✅ COMPLETE
- Purged OSD.2 from cluster
- Zapped SATA drive clean
- Cluster stable with 3 OSDs (command sketch below)
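A representative sequence for retiring the old SATA OSD on px2, using standard Ceph commands (the exact flags used at the time were not recorded):
# Take osd.2 out of service, stop the daemon, then purge it from the cluster
ceph osd out osd.2
systemctl stop ceph-osd@2
ceph osd purge 2 --yes-i-really-mean-it
# Wipe the old SATA drive so it can be reused (destructive)
ceph-volume lvm zap /dev/sda --destroy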
Phase 4: Setup Backup Storage ✅ COMPLETE
- Formatted SATA as ext4
- Mounted as /mnt/backup-secondary (1.7T available)
- Added to Proxmox storage config (command sketch below)
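A minimal sketch of the Phase 4 setup; the fstab entry and pvesm options are illustrative assumptions, only the label, mount point and storage name come from above:
# Format and label the old SATA SSD, then mount it persistently
mkfs.ext4 -L backup-secondary /dev/sda
mkdir -p /mnt/backup-secondary
echo 'LABEL=backup-secondary /mnt/backup-secondary ext4 defaults 0 2' >> /etc/fstab
mount /mnt/backup-secondary
# Register it with Proxmox as a backup-only directory storage
pvesm add dir backup-secondary --path /mnt/backup-secondary --content backup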
Phase 5: Fix px2 Monitor Integration ✅ COMPLETE
- Created 3-mon monmap (px1, px2, px3)
- Initialized px2 mon database
- Injected monmap on px3
- px2 mon joined quorum (command sketch below)
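This roughly follows the standard monmap-injection procedure; a hedged sketch with illustrative paths (REDACTED_IP stands for px2's monitor address, redacted above):
# On px3: export the current monmap and add the px2 monitor to it
ceph mon getmap -o /tmp/monmap
monmaptool --add px2-monza REDACTED_IP /tmp/monmap
# On px3: stop the local mon, inject the patched map, restart
systemctl stop ceph-mon@px3-suzuka
ceph-mon -i px3-suzuka --inject-monmap /tmp/monmap
systemctl start ceph-mon@px3-suzuka
# On px2: initialise the mon database from the same map (keyring path illustrative), then start it
ceph-mon -i px2-monza --mkfs --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
systemctl start ceph-mon@px2-monza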
Phase 6: Enable Backup Mirroring ✅ COMPLETE
- Created cron job on px1 (daily 03:30 UTC)
- rsync: /mnt/backup-primary → /mnt/backup-secondary
- Test rsync confirmed working
Verification Results
✅ Cluster Health: HEALTH_OK
✅ OSD Tree: 3 OSDs up/in (osd.0, osd.1, osd.3)
✅ Monitors: 3 daemons in quorum (px1, px2, px3)
✅ PGs: 65 active+clean (0% degraded)
✅ Storage:
- NVMe root (100GB) + swap (16GB) + OSD.1 (1.71TB)
- SATA backup (1.7T) ext4
- NO local-lvm
✅ Proxmox: All nodes online, quorate
✅ Backups: Mirror sync configured and tested
Update History
| Date | Phase | Update | Status |
|---|---|---|---|
| 2026-01-16 | — | Initial hardware inventory | ✅ |
| 2026-01-18 | 1 | Removed local-lvm, created OSD.1 on NVMe | ✅ |
| 2026-01-18 | 2 | Rebalanced 208 GiB data to OSD.1 | ✅ |
| 2026-01-18 | 3 | Removed OSD.2, zapped SATA drive | ✅ |
| 2026-01-18 | 4 | Formatted SATA as ext4 backup storage | ✅ |
| 2026-01-18 | 5 | Integrated px2 mon into 3-mon quorum | ✅ |
| 2026-01-18 | 6 | Enabled nightly backup mirror sync | ✅ |
| 2026-01-18 | — | MIGRATION COMPLETE | ✅ |
Last verified: 2026-01-18 18:05 UTC - All 6 phases complete, cluster HEALTH_OK