How to configure networking for a Linux Application Container?

So I'm trying to begin learning about Linux Containers and tinkering with them.

I'm running a Fedora 25 Workstation with the required LXC packages installed, i.e. this:

sudo dnf install lxc lxc-templates lxc-extra debootstrap libvirt perl

From here: https://fedoraproject.org/wiki/LXC, although that page looks a bit out of date.

So, after doing that and starting the libvirtd service, I opened Virtual Machine Manager (virt-manager) and connected to the LXC session. Then I proceeded to create the container using virt-manager's GUI (as a first step to tinkering with containers).

The container is up and running. I had selected Application Container as I don't want to virtualize an entire OS, just a single service to run on my machine. In this case, a MySQL database (because I'm also learning SQL stuff and that'd be useful).

The default binary the container loaded was a shell, and the default storage mapping mounted the host's root file system as the container's root file system. So even the file system isn't "virtualized" (or rather, remade on its own), meaning I can access my home directory and the entire host OS directly through the container.

Not only that, but the network is down within the container, and I can't find a guide on how to configure a container network for an application container. I find many for OS containers though.

Any advice would be appreciated.

Note: My goal here is to have mysql run inside its own container as a service. From my host's and other containers' perspectives, it should be like connecting to another computer using an IP address. That is my goal anyway.

Maybe Application Containers aren't for that? But then what are they for?

You can bridge the host's net to an lxc container with lxc-net (rather than lxc) iirc.
Also, since you're on Fedora you should already have the SELinux sandbox installed; just run a program with 'sandbox -n program' (-n (iirc lol, check the man page) allows net access).

So I have a QEMU VM running Windows atm which accesses the internet through a bridge called br_0. Creating the LXC created a new network interface called vnet0@if8. The container has the network connection eth0@if9 when I run ip link.

eth0@if9 is in a DOWN state while vnet0@if8 is in a LOWERLAYERDOWN state, and its master is br_0.

I assume I just need to assign one of those interfaces a static IP address because DHCP will not work (it's disabled on my network for varying reasons). I had to do the same thing for my QEMU VM connection.

I'm just not sure which I would configure in this instance. The host's virtual interface or the guest's.

Attempting to use iproute2 within the container to assign eth0@if9 a static IP address results in "Cannot find device eth0@if9". I can't set it to UP mode either.
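
(Side note on that error: the @if9 suffix is just how ip link displays the peer interface index; the device itself is named plain eth0, which is why iproute2 can't find a device called eth0@if9. A minimal sketch from inside the container, where the address and gateway are only placeholder assumptions:)

ip link set eth0 up
ip addr add 192.168.1.50/24 dev eth0   # example address, adjust for your LAN
ip route add default via 192.168.1.1   # example gateway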

The same happens for vnet0@if8 on the host if I try there.

Normally I'd go to the container configuration file and apply network settings there, but it seems that using virt-manager to create a Linux container doesn't create a config file for it. It's as bare-bones as can be somehow.

Running brctl show gives me this:

bridge name     bridge id       STP enabled     interfaces
br_0            <bridge_id>     no              enp7s0
                                                vnet0

That's good to know. Thanks.

Yup, it's basically lxd instead of lxc. I think lxd was a development name; the packages are still lxc, qemu and selinux, but the feature you're using is "unprivileged lxc", aka "lxd", aka Linux sandboxing.

So can you clarify something for me?

If the linux container has access to the entire host file system, how is it sandboxed?

If it requires usage of packages installed on the host machine, how can I container-ize the process?

Edit:

After some reading, I'm seeing that you can point the container's rootfs anywhere and it will still be able to run, since that's how containers work: they run on the host's kernel and use the resources given to them through namespaces and cgroups.

So regarding the specific LXC I'm talking about, I just need to point the rootfs elsewhere. I'd also need to mount the desired program into the container to run it, i.e. mysql in this case.
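
(A rough sketch of that idea, using the classic lxc.* keys that show up later in this thread -- the rootfs location and the mysql paths here are assumptions, not a tested setup:)

# hypothetical: a dedicated rootfs plus bind mounts for the mysql bits
lxc.rootfs = /var/lib/lxc/mysql/rootfs
lxc.utsname = mysql
# destination paths are relative to the container's rootfs
lxc.mount.entry = /usr/bin/mysqld usr/bin/mysqld none bind,ro,create=file 0 0
lxc.mount.entry = /var/lib/mysql var/lib/mysql none bind,rw,create=dir 0 0

Something like lxc-execute -n mysql -- mysqld could then run just that one process inside the container (the exact mysql command being another assumption about the install).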

So I still can't figure this out.

This is the current setup:
Arch-host
- Arch-containers

Arch-host has NetworkManager installed and the networking is configured using that. The Arch-containers only have netctl for configuring networking.

I've disabled lxc-net because it creates a NAT'd network and I'm going for a host-bridged network such that the containers are accessible from the LAN.

NAME                 UUID                                  TYPE             DEVICE      
br0                  4a132a0b-cfe8-4248-b91c-941e8ce85cad  bridge           br0         
br0-httpd            42275e1e-3ef1-49fe-8bbe-7d3cdf9fe6b0  802-3-ethernet   httpd.veth0 
br0-tether           921dd1df-d65e-451b-8711-f8972e3330fd  802-3-ethernet   enp10s0u1

That's my current networking setup. I used nmcli (NetworkManager's CLI interface) to create the bridge br0. I then added each of my containers' virtual Ethernet devices to the bridge as slaves, as well as my main internet connections (tethering, ethernet, and WiFi).
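
(For reference, the nmcli commands for that look roughly like the following; the connection and interface names are taken from the output above, and the exact properties may need tweaking:)

nmcli connection add type bridge ifname br0 con-name br0
nmcli connection add type ethernet ifname enp10s0u1 con-name br0-tether master br0 slave-type bridge
nmcli connection add type ethernet ifname httpd.veth0 con-name br0-httpd master br0 slave-type bridge
nmcli connection up br0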

# Distribution configuration
lxc.include = /usr/share/lxc/config/archlinux.common.conf
lxc.arch = x86_64

# Container specific configuration
lxc.rootfs = /var/lib/lxc/httpd/rootfs
lxc.rootfs.backend = btrfs
lxc.utsname = httpd

# Network Configuration
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.hwaddr = 00:16:3e:49:17:f1
lxc.network.veth.pair = httpd.veth0
lxc.network.name = veth0

That is my config file for one of the containers as an example.
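
(For reference, with a config like that the container is started and entered with the standard LXC tools:)

lxc-start -n httpd
lxc-attach -n httpd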

Description='A basic dhcp ethernet connection'
Interface=httpd.veth0
Connection=ethernet
IP=dhcp

That's my netctl profile within the container itself. I've enabled netctl as a service and attempted to start the profile.
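
(For reference, assuming the profile above is saved as /etc/netctl/veth0-dhcp inside the container -- the file name is just an example -- that looks like:)

netctl enable veth0-dhcp
netctl start veth0-dhcp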

I believe the issue is that the ethernet device within the container is a Virtual Ethernet device. If I try to list the network device within the container, it is listed as veth.

I read that these require two sides, meaning one end usually has to be attached to a bridge.

I'm not sure how to continue with configuring this how I want. Nothing covers using NetworkManager for this. The Arch Wiki just says "use the GUI" but the GUI doesn't let me create bridge devices.

veth by itself won't work.
It means there is no adapter found.
There are multiple ways to configure this.
Look at what connectors are available on your system. If you configure a bridged adapter, the bridged connection will be listed under the system name for it; that name is the config name. It will get the veth designation from the user side inside the LXC, but you can't configure the LXC against that name.

So I'm not sure what you mean here.

For a host-bridged LXC network? That's what all the guides I find say to do.

That makes sense since there is no bridge device within the container itself. Just a veth device.

Yes, I've found multiple examples, just none for my specific configuration. Most utilize /etc/network/interfaces because they're using Ubuntu LXCs, but Arch doesn't use that by default. So I'm trying to functionally replace that with netctl (in the containers) and NetworkManager (on the host).

The guides I've found usually say configuring the device from inside the container works with DHCP, but those aren't using netctl so I don't know if that matters or not.

I'm not sure what you mean here. I've configured a bridge and added the host's veth for the containers to it along with my physical ethernet connection. I figured that's all I needed but it seems not.

Not sure what you mean.

I'm not sure if that's assuming I use lxc-net, but I'm not using that because it creates a NAT'd network. I need host-bridged so the services in the containers are accessible from the LAN.

The /var/lib/lxc/CONTAINER/config file has the following line in it:
lxc.network.veth.pair = CONTAINER.veth0

From here https://linuxcontainers.org/lxc/manpages/man5/lxc.container.conf.5.html#lbAM :
you can tell lxc to set a specific name with the lxc.network.veth.pair option (except for unprivileged containers where this option is ignored for security reasons).

Essentially my requirements are the following:

  1. Non-NATd connection to LAN/WAN from the containers.
  2. DHCP used to get IP Address in containers.
  3. Container to Container host name resolution.

The thing is that in my current setup, #3 is working.

If I ping one container by host name while in another container, I get a response. It's via MAC address though.

@Zoltan

So, I got networking to work by doing the following (rough commands are sketched below the list):

  1. Go into each container and give the veth0 in each a static IP address and gateway.
  2. Go into each container and edit /etc/hosts, manually adding entries for the first Arch package mirrors.
  3. Install NetworkManager and whatever other packages were needed.
  4. Start the NetworkManager service.
  5. Assign DNS servers to the veth0 connections using nmcli.
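
(Roughly, for steps 2 and 5, from inside a container -- the mirror hostname, IP addresses, DNS servers, and connection name below are all placeholder assumptions, not values from this setup:)

# /etc/hosts entry so pacman can reach a mirror without working DNS
203.0.113.10    mirror.example.org

# point the veth0 connection at some DNS servers via NetworkManager
nmcli connection modify veth0 ipv4.dns "192.168.1.1 8.8.8.8"
nmcli connection up veth0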

So now my containers can see the outside world, have their own static IP address, and get full WAN DNS resolution (not LAN though yet, but I think I know why that's failing).

So basically I did it all manually by hand.

This doesn't solve my problem of wanting the containers to just use DHCP to get all that information, but it means I can start using the containers for what they were made for (at least for now).

I was looking into that last night.

I'm not really using Arch in a network application for the moment, so I haven't been following up on how they do things, but there are a few things that intrigued me about NetworkManager, as I've had some weird issues with it myself.

It seems that NM has a few bugs that make normally straightforward functionality a bit weird, so it's better to go about things carefully and basically set up critical routing and internal DNS and DHCP settings by hand. It seems like these bugs have been around for a while too.

The problem I was having on SuSE and Fedora basically comes down to the fact that NM does not report to or interact with systemd like it should for certain functionality. It's really odd though, because on Fedora, for instance, it looks like it works, but cross-checking the system logs reveals another story. On SuSE it was quite obvious that, for instance, setting some parameters in YaST did not change the same parameters in NM, so it was clear there was a problem there.

Because there is a difference between YaST and, for instance, the firewall-config used by Fedora, I decided to install firewalld and firewall-config on an Ubuntu system instead of ufw. It turned out I got the exact same problem there as on SuSE. For instance, something that is really visible and easy to see: when I change a zone for an adapter in firewall-config on a system with NM, that zone (which is a firewalld/systemd thing) should be copied over and reported back, because it is not set directly (like when you would set it manually) but through NM. What happened is that the setting was copied by NM, so it showed up in the GUI, but it was not actually implemented, even after a full reboot (which is overkill, because restarting the services and restarting the connection should suffice). So that was sneaky lol. On Fedora that exchange was not flawless either, but only after adding an extra layer of settings, e.g. manually configuring a firewalld service and implementing it as the only enabled service in a zone.

When I checked the bug trackers, I found several reports that point in the same direction, but in different contexts. It seems like this has been going on for 5-6 months across different releases of the packages concerned.

Basically, I think it might be a systemd problem. Some communication channel between services and applications seems to be broken, or at least unreliable. I guess it just takes a while for the devs to trace the issue upstream to the right package.

When you told me your containers configure a connection, like they should, but then that connection doesn't connect, I thought it might be caused by this problem. But I haven't found any reports of exactly that problem in Fedora, probably because Fedora is less affected by it, or at least less visibly affected by it.

In conclusion, probably the best way is to configure things manually like you did. You could run DHCP internally over the internal connector for comfort, but I wouldn't, because of the overhead and the extra work to roll back after the bugs are fixed or NM is updated with a solution.

I don't have precise info on what's going on in your system of course, but it seems to me that maybe the issue could be related, as your containers are not copying the host's settings they should be copying.

That sounds accurate.

Doing the same process on my Fedora machine as on my Arch machine, I get the same results, i.e. the containers work fine once I've manually set up the bridge -> started the containers -> set the bridged connections to "up" with nmcli -> manually set the veth settings within the container.

lxc-net works in and of itself and I don't know how it'd be different. So yeah that seems to be pointing to NetworkManager, though I'm not sure if that functionality has been added with lxc.

That's a bigger thing, I think. Since most LXC support seems focused on Ubuntu distros, the Arch templates (as an example) aren't set up to be as "ready to go" as the Ubuntu ones. lxc-net is fine as long as you are fine with NAT.

I could modify the templates to do the manual setup for me, but that only helps at creation time.

Right now, it's annoying to use the containers because I have to manually reverse the process otherwise they'll hang when I try to lxc-stop them.

Specifically, if the container's veth on the host is still in the bridged connection, or the container still has a static IP set using iproute2, it will not stop with that command, even if I use --kill. I've even tried sending SIGTERM and SIGKILL to the container process, but it won't stop running until the connection times out (presumably).

So that means each time I want to restart my containers or reboot my machine, I have to manually undo the bridge and IP stuff (just remove the address at least).
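
(For what it's worth, the teardown I mean is roughly the following; the connection and device names match the httpd example above, and which steps are strictly required is still an open question:)

# inside the container: drop the static address
ip addr flush dev veth0

# on the host: pull the container's veth out of the bridge, then stop it
nmcli connection down br0-httpd
lxc-stop -n httpd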

I could make a script to do that, but I'd have to look into how attaching to a container would complicate it. I guess I could just set it as a job to run on container start. However, the reason I needed DHCP to work is that I connect to different networks, since this is a laptop. So sometimes I need my containers to use a 192.168.X.X IP and other times I need them to use a 10.X.X.X IP.

Overall I'll probably just rough it out until they fix this and try to restart my machine as little as possible. That, or finally set up my personal server and get that going so restarts are unnecessary.

You could add a USB ethernet adapter and reserve that for external DHCP addressing of just the containers, and use the internal adapter for static addressing on the 192.168.x.x range. That would simplify things a bit.

But yeah, things point in the direction of some systemd communications issue. Ubuntu might even have done some systemd workarounds in LXD that break things. Systemd integration should be complete in Fedora, Arch and SuSE, but that's not the case in Ubuntu. That's also why I cross-checked the issues I was having on Ubuntu; making things work with systemd on Ubuntu usually reveals stuff, because they haven't yet addressed many things that have been addressed in Arch and the RPM distros.
