On 9/22/20 8:54 AM, Steffen Nurpmeso wrote:
> My understanding also is about MAC changing, but it seems there are
> drivers which can do something about it. Not my area, sorry.
That matches my understanding.
> This is exactly the nice thing about the approach i use, the one side
> of the veth pair that is not in the namespace simply plugs into
> whatever network environment there is on the host, as long as the
> host has a route to it.
Sorry, I was asking for more clarification on what you do with the host
end of the veth to connect it to the rest of your environment.
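Just so we're talking about the same plumbing, the bare veth / network
namespace setup I picture is something like this (illustrative names):

    ip netns add vm1
    ip link add v_host1 type veth peer name v_i   # one end stays on the host
    ip link set v_i netns vm1                     # the other end goes into the namespace
    ip link set v_host1 up
    ip -n vm1 link set v_i up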
> The other end, jailed in the namespace, can regularly be used by a
> bridge device.
Yes.
I'm wondering why you are attaching the NetNS end of the veth to a
bridge instead of just using the veth directly.
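I.e. I would expect the namespace side to be as simple as this, no
bridge involved (made-up addresses; 10.0.0.1 assumed to be on the host
end of the pair):

    ip -n vm1 addr add 10.0.0.2/24 dev v_i
    ip -n vm1 route add default via 10.0.0.1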
> And is constant and self-sufficient. I use fixed addresses, for
> example. (Parsed from a hosts.txt that is also read in by dnsmasq(8)
> so that the host and the VMs find each other. All i have to adjust is
> the hosts.txt. Having said that, i can improve this by deriving the
> MAC address from that file, too.)
Sure.
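Deriving the MAC from that file should be easy enough. An untested
sketch that hashes the guest name into a stable address under the
52:54:00 prefix conventionally used for QEMU guests:

    name=vm1
    h=$(printf %s "$name" | md5sum | cut -c1-6)
    mac="52:54:00:$(printf %s "$h" | sed 's/../&:/g; s/:$//')"
    printf '%s\n' "$mac"   # e.g. fed to qemu's mac= option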
I'm about 98% certain that all of that applies equally as well to the
veth interface inside of the network namespace as it does to the bridge
inside of the network namespace.
So ... why use a bridge inside of the network namespace?
I completely get why you use a bridge on the host and the veth interface
outside of the network namespace. I just don't understand why you are
using a bridge /inside/ the network namespace.
> Yes. It is just that when you search the internet for Linux and
> bridges you will find mostly brctl or systemd things. (Generally
> speaking the amount of, let me say, whisked crap is near hundred
> percent in fact.)
The other thing that I find is how to configure bridging in distro init
scripts.
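For what it's worth, everything brctl used to do is plain iproute2
these days:

    ip link add bri0 type bridge
    ip link set bri0 up
    ip link set eth0 master bri0   # the old "brctl addif bri0 eth0"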
> Yes.
> You have seen all the configuration there is. It is isolated, it is
> not affected by the firewall rules of the host, the firewall rules of
> the host do not take care of this thing at all, attack surface is
> thus only kernel bugs, i think, and anything on the inside can be
> hardwired.
No worries.
Depending on how you are connecting the host side veth to the network,
there is a very real chance that the host firewall will influence what
goes into / comes out of the emulator in the network namespace
(~container). Particularly if you are routing. Less so if you are
bridging. But bridging can still be affected by the firewall.
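Whether bridged frames get run through iptables at all is governed by
the br_netfilter sysctls, assuming that module is loaded:

    sysctl net.bridge.bridge-nf-call-iptables    # 1 = bridged IPv4 goes through iptables
    sysctl net.bridge.bridge-nf-call-ip6tables   # same for IPv6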
> Yes it is cool. The "Linux Advanced Routing & Traffic Control HOWTO"
> (twenty years ago such great things were written and said "Welcome,
> gentle reader", unfortunately that all stopped when the billions came
> over Linux i think, but, of course, today the manual pages are great)
> says
Ya. The Linux Documentation Project and their How-To's were (arguably
still are) great. It's not /quite/ timeless. But much of the stuff
there is still viable. Some of it is woefully out of date though.
>   /proc/sys/net/ipv4/conf/DEV/proxy_arp
>     If you set this to 1, this interface will respond to ARP requests
>     for addresses the kernel has routes to. Can be very useful when
>     building 'ip pseudo bridges'. Do take care that your netmasks are
>     very correct before enabling this! Also be aware that the
>     rp_filter, mentioned elsewhere, also operates on ARP queries.
*nod*
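In other words the pseudo bridge boils down to something like this on
the host (made-up address for the namespace end):

    sysctl -w net.ipv4.conf.eth0.proxy_arp=1     # answer ARP on the LAN for routed addresses
    sysctl -w net.ipv4.conf.v_host1.proxy_arp=1  # and on the veth for LAN addresses
    sysctl -w net.ipv4.ip_forward=1
    ip route add 192.0.2.10/32 dev v_host1       # the address living in the namespace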
> Compared to proxy_arp it was 40 percent here. However, this was with
> kernel 4.19 and the network driver (R8822BE) was in staging, now with
> 5.8 it is not (RTW88) but terribly broken or buggy or whatever. It
> must be said this driver seems to be very complicated, the R8822BE
> had a megabyte of code infrastructure iirc. But it is terrible, and
> now i know that audio via bluetooth and wlan throughput are highly
> dependent. (Or can be.)
Sounds like you've got some issues that I typically don't run into with
more traditional RTL 8129 / 8139 / 8169 drivers.
> I did not know that, will try it out.
I figured that you would appreciate it.
> I assign an address to the interface, and make that interface
> routable from the host.
The fact that the prefix is on a directly attached network should be
sufficient to make it routable to the host.
Unless you are also using the same 10.0.0.0/8 on the other
{wired,wireless} network that your system is connected to.
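I.e. the moment the host end of the veth has an address in that
prefix, the connected route is already there; there is nothing extra
to add (made-up names and prefix):

    ip addr add 10.1.1.1/24 dev v_host1
    ip route show dev v_host1   # lists "10.1.1.0/24 ... scope link" automatically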
> This is where all the VMs plug into.
I disagree.
The VMs plug into the host bridge.
I'm asking about why you have a bridge /inside/ of each of the VMs.
> That is the other side of the veth interface pair of course.
Yes.
But you can use the v_i interface /directly/. I'm not seeing any /need/
for the bridge /inside/ the network namespace.
> No. The purpose is to be able to create a network of any number of
> VMs somewhere, so those go via the bridge, no? This network is
> self-sufficient
>
>   ANY HOST HOWEVER ONLINE <---> VETH -|- VETH IN NAMESPACE
>                                            ^
>               ANY NUMBER OF VMS <-> BRIDGE <
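So, if I'm reading that correctly, inside the namespace you do roughly
this (illustrative names; qemu may well create the taps itself):

    ip -n vm_ns link add br0 type bridge
    ip -n vm_ns link set v_i master br0      # in-namespace end of the veth pair
    ip -n vm_ns tuntap add tap0 mode tap     # one tap per qemu guest
    ip -n vm_ns link set tap0 master br0
    ip -n vm_ns link set br0 up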
Why would you want to do this? I do not understand.
        +----------------------------------------+
        |host    +------+           +-----------+|
        |        |      +---v_ns1---+v_i   v_ns1||
        |        |      |           +-----------+|
        |        |      |           +-----------+|
(LAN)---+eth0----+ bri0 +---v_ns2---+v_i   v_ns2||
  |     |        |      |           +-----------+|
  |     |        |      |           +-----------+|
  |     |        |      +---v_ns3---+v_i   v_ns3||
  |     |        +------+           +-----------+|
  |     +----------------------------------------+
  |
  |     +---------------+
(LAN)---+eth0   notebook|
        +---------------+
Each network namespace (v_ns#) has its own vEth pair. The host side of
each vEth pair is connected to the bridge on the host. The bridge on
the host is connected to the host's eth0 interface. Thus, each of the
network namespaces has a layer 2 network connection to the LAN.
Meaning that each of the network namespaces is a proper member of the
LAN. No routing is needed. No proxy ARP is needed. Notebook, host,
v_ns1, v_ns2, v_ns3 can all be on the same subnet without doing anything
fancy.
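In iproute2 terms, roughly this per namespace (names match the
picture; the address is made up, and the host's own address would move
from eth0 to bri0):

    ip link add bri0 type bridge
    ip link set eth0 master bri0             # uplink to the LAN
    ip link set bri0 up
    ip netns add ns1
    ip link add v_ns1 type veth peer name v_i
    ip link set v_i netns ns1
    ip link set v_ns1 master bri0            # host end of the pair onto the bridge
    ip link set v_ns1 up
    ip -n ns1 link set v_i up
    ip -n ns1 addr add 192.168.1.11/24 dev v_i   # same subnet as the LAN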
> Yeah, this is a leftover from the proxy_arp based pseudo-bridge. Not
> to forget it, maybe. I should have removed it before posting.
> I am not a network expert ok, especially not so Linux-specific.
*nod*
I was just trying to confirm that's historic. Seeing as how it's
commented out.
Trying to deduce what, and more so why, can be non-trivial at times.
> This is just the startup of a VM, it registers at the bridge.
Yep. Adding the host end of the vEth pair to the host bridge.
> I am _not_ using proxy_arp anymore. This is a leftover, at the
> beginning i made it configurable and could switch between the
> different approaches via a setting in /x/vm/.run.sh. I should have
> removed it before posting.
It's cool. Methods, scripts thereof, evolve.
> That one is isolated, i can reach it from the host, and they can talk
> to the host, each other and the internet. I never placed servers
> addressable from the outside in such a thing, this is only a laptop
> without fixed address. I guess in order to allow a publicly
> accessible server to live in the namespace i would maybe need an
> ipfilter rule on the host.
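If you stay with routing, that host rule would probably be a DNAT
along these lines (untested, made-up address):

    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
        -j DNAT --to-destination 192.0.2.10:80
    iptables -A FORWARD -d 192.0.2.10 -p tcp --dport 80 -j ACCEPT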
Or bridging, as depicted above. ;-)
> Yes --bind mounting is cool also. But i definitely do not want to
> give the VM an entire /dev, it only needs u?random and that _only_
> because libcrypt (linked into qemu) needs it, even though it is not
> actually used (the Linux getrandom system call is used instead).
You can copy the /dev/urandom device, or make a new one, or bind mount
the device inside the network namespace. ;-)
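E.g. either of these should do, assuming the guest's tree lives under
/x/vm (path made up):

    mknod -m 0666 /x/vm/dev/urandom c 1 9          # a fresh node, major 1 minor 9
    # or
    touch /x/vm/dev/urandom
    mount --bind /dev/urandom /x/vm/dev/urandom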
Even if it's not used for anything other than to make the kernel happy,
you are going to need it.
> Yes. But no, i do not really need it, i use it only for qemu
> instances. Interesting would be imposing hard CPU and memory
> restrictions, especially if i would use this approach for creating
> servers, too. This is a future task, however.
Now you're hedging on cgroups.
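A rough, untested cgroup v2 sketch, assuming the cpu and memory
controllers are enabled and $QEMU_PID is the emulator's PID:

    mkdir /sys/fs/cgroup/vm1
    echo "200000 100000" > /sys/fs/cgroup/vm1/cpu.max    # at most two CPUs worth of time
    echo 2G > /sys/fs/cgroup/vm1/memory.max
    echo "$QEMU_PID" > /sys/fs/cgroup/vm1/cgroup.procs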
> Yeah like i said, i could impose more restrictions on programs
> running inside that namespace, qemu also shows some security flaws at
> times, but this is really just for testing purposes etc.
IMHO /everything/ has security flaws at one point or another. It's just
a matter of when.
--
Grant. . . .
unix || die