Lab Build · HPE DL380 G7 + MSA 2050 · Proxmox VE 8.x
Most home labs are one crash away from starting over.
// I built mine to run like production — bare metal to 2TB iSCSI SAN, zero hand-waving.
2TB
iSCSI Pool
8
Build Stages
2x
MSA Controllers
DL380
HPE G7 Host
DL380 G7
HPE Host
Proxmox VE
Hypervisor
iSCSI
Block Storage
MSA 2050
SAN Array
LVM / VG
Volume Mgmt
devlab
Storage Pool
// Network Architecture — iSCSI Topology
DL380 G7 Proxmox Host
→ ens2f0 → 192.168.1.40 →
iSCSI NIC 1
MSA Ctrl B 192.168.1.51
DL380 G7 same host
→ ens3f1 → 192.168.1.41 →
iSCSI NIC 2
MSA Ctrl A 192.168.1.50
// NO DEFAULT GATEWAY ON iSCSI NICS · /32 HOST ROUTES · rp_filter=2 · DIRECT CABLE, NO SWITCH
Full Build — Step by Step
01
Bare-Metal Install of Proxmox VE on HPE DL380 G7
Download the latest Proxmox VE ISO from the official site. Write it to USB (Rufus, Ventoy, Etcher, or dd). Boot the DL380 from USB, select "Install Proxmox VE", choose local disks (keep this separate from the MSA), set root password and management IP on your LAN — never on the iSCSI subnet. After install, log in at https://<mgmt-ip>:8006.
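The write step can be sketched with dd. The ISO filename and /dev/sdX here are placeholders, not the exact values for your download — confirm the device name with lsblk before writing, because dd overwrites its target without asking.

```shell
# verify the ISO against the SHA256SUMS value published on the download page
sha256sum proxmox-ve_8.2-1.iso   # example filename, use your downloaded version

# write raw to the USB stick; /dev/sdX is the stick, NOT a system disk
dd if=proxmox-ve_8.2-1.iso of=/dev/sdX bs=4M status=progress oflag=sync
```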
Foundation
02
Update Proxmox and Adjust Repositories
Proxmox ships with the enterprise repository enabled. On a lab without a subscription, disable it and add the no-subscription repository, then fully patch the host before touching storage or drivers.
// disable enterprise repo, add no-subscription, upgrade, reboot
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
  > /etc/apt/sources.list.d/pve-no-sub.list

apt update && apt full-upgrade -y
reboot
Patch First
03
Install Dependencies and Verify NICs
On older hardware like the G7, confirm NIC visibility and install iSCSI tooling before layering any storage config on top. Install firmware packages if NIC or HBA behavior is abnormal.
// install iscsi + multipath, confirm interfaces
// firmware-linux-nonfree needs the non-free-firmware component in your APT sources
apt install -y open-iscsi multipath-tools firmware-linux-free firmware-linux-nonfree
systemctl enable --now open-iscsi

ip a
lspci | grep -i ethernet
# target NICs: ens2f0 (→ Ctrl B .51) and ens3f1 (→ Ctrl A .50)
Verify First
04
Design the iSCSI Network Layout Before Touching Interfaces
Map out IPs, cabling, and routing rules before editing a single config file. The critical rules: iSCSI NICs carry no default gateway, /32 host routes keep traffic on the correct interface, and rp_filter is relaxed to prevent Linux from dropping asymmetric return traffic.
Interface     IP                Route To              Purpose
ens2f0        192.168.1.40/24   → MSA Ctrl B (.51)    iSCSI Path 1
ens3f1        192.168.1.41/24   → MSA Ctrl A (.50)    iSCSI Path 2
MSA Ctrl A    192.168.1.50                            Primary Controller
MSA Ctrl B    192.168.1.51                            Secondary Controller
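The "no default gateway" rule from the plan can be sanity-checked at any point. A rough inspection sketch, using the interface names from the table above:

```shell
# the only default route should point out the management NIC;
# if ens2f0 or ens3f1 shows up here, iSCSI traffic can leak onto the LAN path
ip route show default

# per-interface routes planned above: .51 pinned to ens2f0, .50 to ens3f1
ip route show | grep -E 'ens2f0|ens3f1'
```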
Plan It Out
05
Configure iSCSI NICs in /etc/network/interfaces
With the layout locked, apply static IPs, per-interface host routes, and rp_filter settings. After writing the file, restart networking or reboot, then validate every route lands on the expected NIC.
// /etc/network/interfaces — iSCSI NIC config
auto ens2f0
iface ens2f0 inet static
    address 192.168.1.40/24
    post-up ip route replace 192.168.1.51/32 dev ens2f0 src 192.168.1.40
    post-up sysctl -w net.ipv4.conf.ens2f0.rp_filter=2

auto ens3f1
iface ens3f1 inet static
    address 192.168.1.41/24
    post-up ip route replace 192.168.1.50/32 dev ens3f1 src 192.168.1.41
    post-up sysctl -w net.ipv4.conf.ens3f1.rp_filter=2
// validate — both pings and routes must pass
ping -c 3 192.168.1.50  # → should reply
ping -c 3 192.168.1.51  # → should reply
ip route get 192.168.1.50  # → must show ens3f1
ip route get 192.168.1.51  # → must show ens2f0
Wire It Up
06
Configure the HPE MSA 2050 — LUN, Pool, and Host Mapping
On the MSA side: confirm controller IPs, subnet mask, and link health. Create your RAID-backed disk group and a ~2TB volume. Then create a host entry using the Proxmox iSCSI initiator IQN and map the LUN to that host. The IQN must match exactly what Proxmox reports — one character off and the LUN will never map to the host.
// get proxmox initiator IQN to use on MSA
cat /etc/iscsi/initiatorname.iscsi
# copy this value into MSA host entry verbatim
SAN Side
07
iSCSI Discovery and Login — Bring the LUN Online
Discover targets from both controllers, then log in. If discovery hangs, the problem is almost always routing or rp_filter — go back to Step 5. Once logged in, the kernel should report a new block device. Confirm with dmesg and lsblk before touching LVM.
// discover + login + verify block device
iscsiadm -m discovery -t sendtargets -p 192.168.1.50
iscsiadm -m discovery -t sendtargets -p 192.168.1.51
# both should return the MSA IQN

iscsiadm -m node --login

dmesg | tail       # look for new scsi device
lsblk              # new /dev/sdX should appear
fdisk -l           # confirm 2TB size
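One caveat worth checking here: with two logged-in paths, the same LUN can surface as two block devices (e.g. sdb and sdc). multipath-tools, installed back in step 03, folds them into a single device. This is a sketch of the check, not a full multipath.conf tuning guide:

```shell
# list multipath topology; the MSA LUN should appear once, with two paths
multipath -ll

# if a /dev/mapper/mpathX (or WWID-named) device exists, use that in step 08
# instead of a raw /dev/sdX, so either path can fail without losing the LUN
lsblk
```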
Go Live
08
Build LVM Volume Group and Register devlab in Proxmox
Initialize the iSCSI block device as a physical volume, create the vg_iscsi volume group, then wire it into the Proxmox GUI as the devlab storage target. Future expansion is non-destructive: extend the LUN on the MSA and resize LVM in place.
// LVM setup — replace sdX with your device
pvcreate /dev/sdX
vgcreate vg_iscsi /dev/sdX
vgs && pvs
# should show ~2TB free in vg_iscsi
In the Proxmox web UI: Datacenter → Storage → Add → LVM. Set ID to devlab, volume group to vg_iscsi, enable for disk images and containers. Save — devlab is now live and backed by the MSA 2050.
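If you prefer the CLI to the GUI, pvesm can register the same storage. A sketch, assuming the vg_iscsi volume group created above:

```shell
# CLI equivalent of Datacenter → Storage → Add → LVM
pvesm add lvm devlab --vgname vg_iscsi --content images,rootdir

# confirm the new storage is listed and active
pvesm status
```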
Done