Proxmox with OPNsense as pci-passthrough setup used as Firewall/Router/IPsec/PrivateLAN/MultipleExtIPs


This setup is based on a Proxmox host sitting behind an OPNsense VM hosted on Proxmox itself. The OPNsense VM protects Proxmox, offers a firewall, a private LAN and DHCP/DNS to the VMs, and offers an IPsec connection into the LAN to access all VMs/Proxmox hosts which are not NATed. The server is a typical Hetzner server, so there is only one NIC, but multiple IPs or subnets on this NIC.

  1. Proxmox Server with 1 NIC(eth0)
  2. 3 public IPs; IP2/IP3 are routed by MAC in the datacenter (to eth0)
  3. eth0 is passed through via PCI to the OPNsense KVM guest
  4. A private network on vmbr30, 10.1.7.0/24
  5. An IPsec mobile client connection (172.16.0.0/24) into the LAN

To better outline the setup, I created this [drawing][1] (not sure it's perfect, tell me what to improve):

Questions:

How do you set up such a scenario using PCI passthrough instead of the bridged mode?

Follow-ups

I) Why can I not access PROXMOX.2 while I can access VMEXT.11 (ARP?)

II) Why do I need a from-* to-* IPsec chain rule to get IPsec running? That is most probably a very OPNsense-specific question.

III) I tried to handle the 2 additional external IPs by adding virtual IPs in OPNsense, adding a 1:1 NAT to the internal LAN IP and opening the firewall for the ports needed (for each private LAN IP) - but I could not get it running. The question is: should each private IP have a separate MAC or not? What is specifically needed to get a multi-IP setup on WAN?

Solution

General high-level perspective

Adding the PCI passthrough

A bit out of scope, but what you will need is:

  • a serial console/LARA to the proxmox host.
  • a working LAN connection from OPNsense (in my case vmbr30) to the Proxmox private IP (10.1.7.2) and vice versa. You will need this when you only have the tty console and need to reconfigure the OPNsense interfaces to add em0 as the new WAN device.
  • a working IPsec connection from before, or WAN ssh/gui opened for further configuration of OPNsense after the passthrough.

In general it is this guide - in short:

vi /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

update-grub

vi /etc/modules
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

Then reboot and ensure you have an IOMMU table:

find /sys/kernel/iommu_groups/ -type l
  /sys/kernel/iommu_groups/0/devices/0000:00:00.0
  /sys/kernel/iommu_groups/1/devices/0000:00:01.0
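Each group in that listing is the unit of passthrough: every device in your card's group moves to the guest together. A small sketch that condenses the printed paths into group/device pairs - shown here on the two sample paths above; on a real host, pipe the `find` output in instead:

```shell
# Condense iommu_groups paths into "group N -> PCI address" lines
printf '%s\n' \
  /sys/kernel/iommu_groups/0/devices/0000:00:00.0 \
  /sys/kernel/iommu_groups/1/devices/0000:00:01.0 |
while IFS= read -r p; do
  g=${p#/sys/kernel/iommu_groups/}   # strip the common prefix
  g=${g%%/*}                         # keep only the group number
  echo "group $g -> ${p##*/}"        # PCI address is the last path component
done
```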

Now find your network card

lspci -nn

in my case

00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet Connection (2) I219-LM [8086:15b7] (rev 31)

The following command detaches eth0 from Proxmox, and you lose the network connection. Ensure you have a tty! Replace "8086 15b7" and 00:1f.6 with your vendor/device pair and PCI slot (see above):

echo "8086 15b7" > /sys/bus/pci/drivers/pci-stub/new_id && echo 0000:00:1f.6 > /sys/bus/pci/devices/0000:00:1f.6/driver/unbind && echo 0000:00:1f.6 > /sys/bus/pci/drivers/pci-stub/bind
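If you want to script this step, the "vvvv dddd" pair is just the bracketed vendor:device id from the `lspci -nn` output; a sketch extracting it with sed from the sample line above:

```shell
# Pull the vendor/device id pair out of an `lspci -nn` line
line='00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet Connection (2) I219-LM [8086:15b7] (rev 31)'
id=$(printf '%s\n' "$line" |
  sed -n 's/.*\[\([0-9a-f]\{4\}\):\([0-9a-f]\{4\}\)\].*/\1 \2/p')
echo "$id"   # the pair pci-stub's new_id expects, here "8086 15b7"
```

On a real host you would feed `lspci -nn -s 00:1f.6` in instead of the literal sample line.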

Now edit your VM and add the PCI network card:

vim /etc/pve/qemu-server/100.conf

and add (replace 00:1f.6 with your PCI slot):

machine: q35
hostpci0: 00:1f.6

Boot OPNsense, connect using ssh root@10.1.7.1 from your tty on the Proxmox host, edit the interfaces, add em0 as your WAN interface and set it to DHCP - reboot your OPNsense instance and it should be up again.

Add a serial console to your OPNsense

In case you need fast disaster recovery or your OPNsense instance is borked, a CLI-based serial console is very handy, especially if you connect using LARA/iLO or similar.

To get this done, open

vim /etc/pve/qemu-server/100.conf

and add

serial0: socket

Now in your opnsense instance

vim /conf/config.xml

and add / change this

<secondaryconsole>serial</secondaryconsole>
<serialspeed>9600</serialspeed>
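If you would rather do this edit non-interactively, a sed one-liner works too; sketched here against a sample line (on the firewall the file is /conf/config.xml, so back it up first):

```shell
# Replace whatever serialspeed is currently configured with 9600
printf '<serialspeed>115200</serialspeed>\n' |
  sed 's|<serialspeed>[0-9]*</serialspeed>|<serialspeed>9600</serialspeed>|'
```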

Be sure you replace the current serialspeed with 9600. Now reboot your OPNsense VM and then

qm terminal 100

Press Enter again and you should see the login prompt

Hint: you can also set your primary console to serial; this helps you get into boot prompts and more, and debug from there.

More on this at https://pve.proxmox.com/wiki/Serial_Terminal

Network interfaces on Proxmox

auto vmbr30
iface vmbr30 inet static
    address  10.1.7.2
    netmask  255.255.255.0
    bridge_ports none
    bridge_stp off
    bridge_fd 0
    pre-up sleep 2
    metric 1

OPNsense

  1. WAN is External-IP1, attached to em0 (eth0 via PCI passthrough), DHCP
  2. LAN is 10.1.7.1, attached to vmbr30

Multi IP Setup

So far, I only cover the extra-IP part, not the extra-subnet part. To be able to use the extra IPs, you have to disable separate MACs for each IP in the Hetzner Robot - so all extra IPs share the same MAC (IP1, IP2, IP3).

Then, in OPNsense, for each extra external IP you add a virtual IP under Firewall -> Virtual IPs (for every extra IP, not the main IP you bound WAN to). Give each virtual IP a good description, since it will show up in select boxes later.

Now you can go to Firewall -> NAT -> Port Forward and set, for each port:

  • Destination: the external IP you want to forward from (IP2/IP3)
  • Destination port range: the ports to forward, e.g. ssh
  • Redirect target IP: the LAN VM/IP to map to, e.g. 10.1.7.52
  • Redirect target port: e.g. ssh

Now you have two options; the first is considered better, but can mean more maintenance.

For every domain you access the IP2/IP3 services with, you should define local DNS "overrides" mapping to the actual private IP. This ensures that you can reach your services from the inside and avoids the issues you would otherwise get from the NATing.
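For reference, in Unbound (the resolver OPNsense ships by default) such a host override boils down to a local-data entry; the hostname and IP below are examples:

```
local-data: "app.example.com. IN A 10.1.7.52"
```

In the OPNsense GUI this is maintained under the Unbound DNS host overrides rather than by editing unbound.conf directly.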

Otherwise you need to take care of NAT reflection - without it, your LAN boxes will not be able to reach the external IP2/IP3, which leads to issues at least in web applications. In that case, do this setup and activate outbound rules and NAT reflection:

What is working:

  • OPN can access the internet and has the right IP on WAN
  • OPN can access any client in the LAN (VMPRIV.151 and VMEXT.11 and PROXMOX.2)
  • I can connect with an IPsec mobile client to OPNsense, getting access to the LAN (10.1.7.0/24) from a virtual IP range 172.16.0.0/24
  • I can access 10.1.7.1 (OPNsense) while connected via IPsec
  • I can access VMEXT using the IPsec client
  • I can forward ports or 1:1-NAT from the extra IP2/IP3 to specific private VMs

Bottom Line

This setup works out a lot better than the alternative with the bridged mode I described. There is no more async routing, no need for a Shorewall on Proxmox, no need for a complex bridge setup on Proxmox, and it performs a lot better since we can use checksum offloading again.

Downsides

Disaster recovery

For disaster recovery, you need some more skills and tools. You need a LARA/iLO serial console to the Proxmox hypervisor (since you have no internet connection), and you will need to configure your OPNsense instance to allow serial consoles as mentioned here, so you can access OPNsense while you have no VNC connection at all and no SSH connection either (even from the local LAN, since the network could be broken). It works fairly well, but it needs to be trained once to be as fast as the alternatives.

Cluster

As far as I can see, this setup cannot be used in a clustered Proxmox environment. You can set up a cluster initially; I did so by using a tinc switch setup locally on the Proxmox hypervisors as a separate cluster network. Setting up the first node is easy, with no interruption. The second join already needs the LARA/iLO console, since you need to shut down and remove the VMs for the join (so the gateway will be down). You can do so by temporarily using the eth0 NIC for internet access. But after you have joined and moved your VMs back in, you will not be able to start the VMs (and thus the gateway will not be started). You cannot start the VMs since you have no quorum - and you have no quorum since you have no internet to join the cluster. So in the end it is a chicken-and-egg issue I cannot see how to overcome. If it can be handled at all, then only with a KVM guest that is not part of the Proxmox VMs but a standalone qemu - not something I want right now.
