
Chaining ProxyVMs between NetVM and AppVMs to forward traffic #10

Open
tlaurion opened this issue Apr 14, 2020 · 4 comments

@tlaurion

From #1 (comment)

A PoC by @Rudd-O for QubesOS 4.0.

Regarding the QubesOS 3.2 version of this project, and going forward:

The previous code also did an ungodly thing that the current code does not do: it recursed through the connected VMs to add routes and rules for the specific IP/VIF.

It did so in a very poor way (really only supporting this behavior upon NetVM/ProxyVM domain start).

Fundamentally, if you had a Net -> Proxy -> App chain, the previous code would walk that path from Net to App and add, on each running domain, hard-coded routes and firewall rules for the IP/VIF assigned to the App, so that traffic coming in from outside the Net would know which interface to send the traffic to at each hop, all the way up to the App.

This should be relatively easy to re-do now that we have an admin core extension to inform the running domains of what's actually up. At the moment, the admin core code assumes that only the NetVM to which the AppVM is attached needs to know that there's an IP which needs routing. This is the fundamental source of the current limitation that forces you to attach your networked AppVM to the NetVM instead of to a ProxyVM. That IP information should instead be fanned down the chain from the AppVM to the final NetVM, so that each VM along the path knows where to route the traffic and sets the correct routes.
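The fan-down described above amounts to walking the AppVM's netvm chain and informing every hop. A minimal sketch, assuming each VM object exposes a `netvm` attribute as in the Qubes admin API; the `FakeVM` class and VM names are hypothetical stand-ins, not the real admin extension code:

```python
# Sketch: every upstream VM that must learn an AppVM's IP, from the
# AppVM's direct NetVM down to the final NetVM. Hypothetical model.

def chain_to_netvm(appvm):
    """Yield each hop between appvm and the final NetVM, in order."""
    hop = appvm.netvm
    while hop is not None:
        yield hop
        hop = hop.netvm

class FakeVM:  # stand-in for a qubes VM object
    def __init__(self, name, netvm=None):
        self.name, self.netvm = name, netvm

net = FakeVM("sys-net")
proxy = FakeVM("sys-firewall", netvm=net)
app = FakeVM("work", netvm=proxy)

# Every hop in this list needs a route/rule for the AppVM's IP.
print([vm.name for vm in chain_to_netvm(app)])  # ['sys-firewall', 'sys-net']
```

Today's code would effectively stop after the first hop; the point of the fan-down is that the loop continues to the final NetVM.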

@marmarek: Any collaboration in making this go forward for the 4.0 and 4.1 branches?

@Rudd-O (Owner) commented Nov 16, 2020

I would love to re-add that feature. If someone has working code, I am happy to review and merge it.

@Rudd-O (Owner) commented Feb 6, 2024

This is very complicated to do because it requires chaining ARP / NDP between VMs. That is, if you have this:

physlaptop <-> netvm <-> another netvm <-> appvm

To get physlaptop to talk to appvm, not only does another netvm need to proxy-ARP appvm; netvm also needs to proxy-ARP appvm, add firewall rules to permit traffic towards appvm, and add routing table entries that point traffic for appvm at the right network interfaces.
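The three pieces of per-hop state listed above (proxy ARP, a firewall rule, a route) can be sketched as the commands each intermediate VM would need. This only builds the command strings for illustration; the interface names, the IP, and the nft table/chain names are assumptions, not the project's actual rules:

```python
# Sketch of the per-hop state described above: a proxy-ARP entry,
# a forwarding rule, and a route. All names/addresses are hypothetical.

def hop_state(app_ip: str, upstream_if: str, downstream_if: str) -> list[str]:
    """Commands one hop would need so traffic for app_ip gets through."""
    return [
        # answer ARP for the AppVM's IP on the upstream-facing interface
        f"ip neigh add proxy {app_ip} dev {upstream_if}",
        # permit forwarded traffic towards the AppVM (table/chain assumed)
        f"nft add rule ip qubes forward ip daddr {app_ip} accept",
        # send traffic for the AppVM down the interface that leads to it
        f"ip route add {app_ip}/32 dev {downstream_if}",
    ]

# Both netvm and "another netvm" need the same three pieces of state,
# each with its own downstream VIF:
for vm, up, down in [("netvm", "eth0", "vif2.0"),
                     ("another netvm", "eth0", "vif5.0")]:
    print(vm, hop_state("10.137.0.20", up, down))
```

The difficulty Rudd-O describes is not any single command but keeping this state consistent on every hop as VMs start, stop, and re-attach.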

By design in Qubes OS, netvm has no visibility into what is attached behind another netvm. Nothing prevents the code in the qubesnetworkserver module (installed in dom0) from telling each NetVM to proxy-ARP these addresses in addition to its immediately attached VIFs, or telling those same NetVMs "hey, you also need to add forwarding rules for the following IPs in addition to your immediately attached VIFs"... but this sort of thing is actually quite tricky to get right.

Ultimately, it all comes down to making the qubesnetworkserver qubesd plugin boss the qubesroutingmanager on each NetVM around. So if dom0 wanted to tell the entire downstream topology to each and every NetVM that needed to know it, sure, that would work.
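The "tell the entire downstream topology to each and every NetVM" idea can be sketched as a dom0-side computation over the attachment graph. A minimal sketch with a hypothetical data model, not the actual qubesnetworkserver plugin:

```python
# Sketch: for each NetVM, compute the full set of downstream IPs it
# must proxy-ARP/route, given vm -> netvm attachments. Hypothetical.

def downstream_ips(attachments: dict[str, str],
                   ips: dict[str, str]) -> dict[str, set[str]]:
    """attachments maps a VM to its netvm; ips maps leaf VMs to IPs.
    Returns netvm -> set of IPs reachable anywhere below it."""
    result: dict[str, set[str]] = {}
    for vm, ip in ips.items():
        hop = attachments.get(vm)
        while hop is not None:          # fan the IP down every hop
            result.setdefault(hop, set()).add(ip)
            hop = attachments.get(hop)
    return result

topo = downstream_ips(
    {"work": "sys-firewall", "sys-firewall": "sys-net"},
    {"work": "10.137.0.20"},
)
print(topo)  # both sys-firewall and sys-net learn the App's IP
```

With a map like this, the qubesd plugin could push each NetVM exactly the IP set its qubesroutingmanager needs, instead of only the first hop's.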

Oh, and there's routing tables to manage too.

It's not easy.

@tlaurion (Author) commented

> This is very complicated to do because it requires chaining ARP / NDP between VMs. That is, if you have this:
>
> physlaptop <-> netvm <-> another netvm <-> appvm
>
> To get physlaptop to talk to appvm, not only does another netvm need to proxy-ARP appvm; netvm also needs to proxy-ARP appvm, add firewall rules to permit traffic towards appvm, and add routing table entries that point traffic for appvm at the right network interfaces.
>
> By design in Qubes OS, netvm has no visibility into what is attached behind another netvm. Nothing prevents the code in the qubesnetworkserver module (installed in dom0) from telling each NetVM to proxy-ARP these addresses in addition to its immediately attached VIFs, or telling those same NetVMs "hey, you also need to add forwarding rules for the following IPs in addition to your immediately attached VIFs"... but this sort of thing is actually quite tricky to get right.
>
> Ultimately, it all comes down to making the qubesnetworkserver qubesd plugin boss the qubesroutingmanager on each NetVM around. So if dom0 wanted to tell the entire downstream topology to each and every NetVM that needed to know it, sure, that would work.
>
> Oh, and there's routing tables to manage too.
>
> It's not easy.

@marmarek: any guidance on doing so?

@tlaurion (Author) commented

@Rudd-O what are the challenges here?
