LXC

LinuX Containers (LXC) is an operating-system-level virtualization method for running multiple isolated Linux systems (containers) on a single control host (the LXC host). It does not provide a virtual machine, but rather a virtual environment with its own CPU, memory, block I/O, and network space. This is provided by the cgroups feature of the Linux kernel on the LXC host. It is similar to a chroot, but offers much more isolation.

Setup

Required software

Install LXC from main repository.
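
On OpenMandriva this can typically be done with urpmi (assuming the package is simply named lxc):

# urpmi lxc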

Verify that the running kernel is properly configured to run a container:

$ lxc-checkconfig

The output should look similar to:

--- Namespaces ---
Namespaces: enabled
Utsname namespace: enabled
Ipc namespace: enabled
Pid namespace: enabled
User namespace: enabled
Network namespace: enabled
Multiple /dev/pts instances: enabled
--- Control groups ---
Cgroup: enabled
Cgroup clone_children flag: enabled
Cgroup device: enabled
Cgroup sched: enabled
Cgroup cpu account: enabled
Cgroup memory controller: enabled
Cgroup cpuset: enabled
--- Misc ---
Veth pair device: enabled
Macvlan: enabled
Vlan: enabled
File capabilities: enabled

Host Network Configuration

LXC supports different virtual network types. A bridge device on the host is required for most of them. The bridge examples below are illustrative rather than exhaustive; other tools can be used to achieve the same results. A wired example is provided below, but other setups, such as wireless, are also possible.
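
For instance, a bridge matching the wired example below can also be created manually with iproute2 (the lxcbr0 name and 10.0.3.1/24 address are taken from that example):

# ip link add name lxcbr0 type bridge
# ip addr add 10.0.3.1/24 dev lxcbr0
# ip link set lxcbr0 up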

Example for a wired network

/etc/sysconfig/network-scripts/ifcfg-lxcbr0
DEVICE="lxcbr0"
TYPE="Bridge"
BOOTPROTO="static"
IPADDR=10.0.3.1
NETMASK=255.255.255.0
ONBOOT="yes"

The following rules (in iptables-save format) masquerade traffic from the 10.0.3.0/24 container network and fix DHCP checksums on the bridge; they can be placed, for example, in /etc/sysconfig/iptables:

*mangle
:PREROUTING ACCEPT [372373:331272879]
:INPUT ACCEPT [117810:59572113]
:FORWARD ACCEPT [254563:271700766]
:OUTPUT ACCEPT [128818:17713660]
:POSTROUTING ACCEPT [383387:289416394]
-A POSTROUTING -o lxcbr0 -p udp -m udp --dport 68 -j CHECKSUM --checksum-fill
COMMIT
*nat
:PREROUTING ACCEPT [230:182343]
:INPUT ACCEPT [135:176556]
:OUTPUT ACCEPT [12551:777266]
:POSTROUTING ACCEPT [12549:776610]
-A POSTROUTING -s 10.0.3.0/24 ! -d 10.0.3.0/24 -j MASQUERADE
COMMIT
*filter
:INPUT ACCEPT [117806:59570521]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [128830:17715306]
-A FORWARD -o lxcbr0 -j ACCEPT
-A FORWARD -i lxcbr0 -j ACCEPT
COMMIT
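
Note that for the MASQUERADE rule above to route container traffic to the outside world, IPv4 forwarding must be enabled on the host (it may already be; persist the setting in sysctl configuration if needed):

# sysctl -w net.ipv4.ip_forward=1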


Container creation

Select a template from /usr/share/lxc/templates that matches the target distro to containerize.

  • Debian-based templates require debootstrap on the host.
  • Fedora-based templates require yum on the host.
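
The templates available on the host can be listed directly, for example:

$ ls /usr/share/lxc/templates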

Run lxc-create to create the container; this installs the root filesystem of the container to /var/lib/lxc/CONTAINER_NAME/rootfs by default. Example: creating a cooker container named omv0:

# lxc-create -t openmandriva -n omv0 -- --arch x86_64 --release cooker

Container configuration

Basic config with networking

System resources to be virtualized/isolated when a process is using the container are defined in /var/lib/lxc/CONTAINER_NAME/config. By default, the creation process will make a minimum setup without networking support. Below is an example config with networking:

/var/lib/lxc/omv0/config
lxc.rootfs = /var/lib/lxc/omv0/rootfs
lxc.utsname = omv0
lxc.autodev = 1
lxc.tty = 4
lxc.pts = 1024
lxc.mount = /var/lib/lxc/omv0/fstab
lxc.cap.drop = sys_module mac_admin mac_override sys_time

# When using LXC with apparmor, uncomment the next line to run unconfined:
#lxc.aa_profile = unconfined

#networking
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = lxcbr0
lxc.network.name = eth0
lxc.network.mtu = 1500
#cgroups
lxc.cgroup.devices.deny = a
# /dev/null and zero
lxc.cgroup.devices.allow = c 1:3 rwm
lxc.cgroup.devices.allow = c 1:5 rwm
# consoles
lxc.cgroup.devices.allow = c 5:1 rwm
lxc.cgroup.devices.allow = c 5:0 rwm
lxc.cgroup.devices.allow = c 4:0 rwm
lxc.cgroup.devices.allow = c 4:1 rwm
# /dev/{,u}random
lxc.cgroup.devices.allow = c 1:9 rwm
lxc.cgroup.devices.allow = c 1:8 rwm
lxc.cgroup.devices.allow = c 136:* rwm
lxc.cgroup.devices.allow = c 5:2 rwm
# rtc
lxc.cgroup.devices.allow = c 10:135 rwm
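
The config above references /var/lib/lxc/omv0/fstab through lxc.mount. A minimal sketch of such a file, assuming only proc and sysfs need to be mounted inside the container:

/var/lib/lxc/omv0/fstab
proc  proc  proc  nodev,noexec,nosuid 0 0
sysfs sys   sysfs defaults            0 0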


Systemd conflicts in the /dev tree

To avoid conflicts between systemd and LXC in the /dev tree, it is highly recommended to enable autodev mode. This causes LXC to create its own device tree, which also means that the traditional way of manually creating device nodes in the container rootfs /dev tree will not work, because /dev is overmounted by LXC.

Any required device nodes that are not created by LXC by default must be created by the autodev hook script, as shown below.
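
A minimal sketch of such a hook, assuming the container config points to it via lxc.hook.autodev and that an extra /dev/fuse node is wanted (the path and device are only illustrative):

lxc.hook.autodev = /var/lib/lxc/omv0/autodev

/var/lib/lxc/omv0/autodev
#!/bin/sh
# Runs after LXC has populated the container's /dev; LXC_ROOTFS_MOUNT is set by LXC.
cd "${LXC_ROOTFS_MOUNT}/dev" || exit 1
# Create the extra device node (here /dev/fuse, char 10:229).
mknod -m 666 fuse c 10 229

Remember to make the hook script executable.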

It is also important to disable services that are not supported inside a container. Either attach to the running LXC, or chroot into the container rootfs, and mask those services:

ln -s /dev/null /etc/systemd/system/systemd-udevd.service
ln -s /dev/null /etc/systemd/system/systemd-udevd-control.socket
ln -s /dev/null /etc/systemd/system/systemd-udevd-kernel.socket
ln -s /dev/null /etc/systemd/system/proc-sys-fs-binfmt_misc.automount

This disables udev and mounting of /proc/sys/fs/binfmt_misc.
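
Alternatively, the same masking can be done from inside a running container with systemctl, assuming systemd is the guest's init:

# systemctl mask systemd-udevd.service systemd-udevd-control.socket systemd-udevd-kernel.socket proc-sys-fs-binfmt_misc.automount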

Maintain devpts consistency

Additionally, ensure that a pty declaration is present in the container configuration; its presence causes LXC to mount devpts as a new instance. Without it, the container shares the host's devpts, which can lead to unexpected behavior.

lxc.pts = 1024
Prevent excess journald activity

By default, LXC symlinks /dev/kmsg to /dev/console, which causes journald inside the container to run at 100% CPU usage. To prevent the symlink, use:

lxc.kmsg = 0

Xorg program considerations (optional)

In order to run programs on the host's display, some bind mounts need to be defined so that the containerized programs can access the host's resources. Add the following section to /var/lib/lxc/omv0/config:

## for xorg
## fix overmounting see: https://github.com/lxc/lxc/issues/434
lxc.mount.entry = tmpfs tmp tmpfs defaults
lxc.mount.entry = /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry = /dev/snd dev/snd none bind,optional,create=dir
lxc.mount.entry = /tmp/.X11-unix tmp/.X11-unix none bind,optional,create=dir
lxc.mount.entry = /dev/video0 dev/video0 none bind,optional,create=file

Managing Containers

To list all installed LXC containers:

# lxc-ls -f

Systemd can be used to start and to stop LXCs via lxc@CONTAINER_NAME.service. Enable lxc@CONTAINER_NAME.service to have it start when the host system boots.
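
For the omv0 container from the examples above, and assuming the lxc package ships the lxc@.service template unit, that would be:

# systemctl start lxc@omv0.service
# systemctl enable lxc@omv0.service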

Users can also start/stop LXCs without systemd. Start a container:

# lxc-start -n CONTAINER_NAME

Stop a container:

# lxc-stop -n CONTAINER_NAME

To attach to a container:

# lxc-attach -n CONTAINER_NAME

Once attached, treat the container like any other Linux system: set the root password, create users, install packages, and so on.
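
For example, a minimal first setup from inside the attached container (the user name is a placeholder; the package manager depends on the guest distribution):

# passwd root
# useradd -m someuser
# passwd someuser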

Running Xorg programs

Either attach to or SSH into the target container and prefix the program call with the DISPLAY value of the host's X session. For most simple setups, the display is :0.

An example of running Firefox from the container in the host's display:

$ DISPLAY=:0 firefox
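
If the X server refuses the connection, the host may first need to allow local clients, for example with xhost (depending on the host's X access control setup):

$ xhost +local: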

Alternatively, to avoid directly attaching to or connecting to the container, the following can be used on the host to automate the process:

# lxc-attach -n omv0 --clear-env -- sudo -u YOURUSER env DISPLAY=:0 firefox

See also