debops.lxc default variables

APT packages

lxc__base_packages

List of base APT packages to install for LXC support.

lxc__base_packages:
  - [ 'lxc', 'lxcfs', 'debootstrap', 'xz-utils' ]
  - '{{ [ "dnsmasq-base" ] if (lxc__net_deploy_state == "present") else [] }}'
  - '{{ []
        if (ansible_distribution_release in
            [ "wheezy", "jessie", "stretch",
              "precise", "trusty", "xenial" ])
        else [ "apparmor", "lxc-templates" ] }}'
lxc__packages

List of additional APT packages to install with LXC.

lxc__packages: []
lxc__version

The variable that exposes the version of the LXC service installed on a host. It will be set automatically using the Ansible local facts.

lxc__version: '{{ ansible_local.lxc.version|d("0.0.0") }}'

Unprivileged LXC containers

lxc__root_subuid_start

Specify the first subUID of the unprivileged LXC containers managed by the root system account.

lxc__root_subuid_start: '{{ ansible_local.root_account.subuids[0]["start"]
                            if (ansible_local.root_account.subuids|d())
                            else "100000" }}'
lxc__root_subuid_count

Specify the number of subUIDs reserved for the unprivileged LXC containers managed by the root system account.

lxc__root_subuid_count: '{{ ansible_local.root_account.subuids[0]["count"]
                            if (ansible_local.root_account.subuids|d())
                            else "65536" }}'
lxc__root_subgid_start

Specify the first subGID of the unprivileged LXC containers managed by the root system account.

lxc__root_subgid_start: '{{ ansible_local.root_account.subgids[0]["start"]
                            if (ansible_local.root_account.subgids|d())
                            else "100000" }}'
lxc__root_subgid_count

Specify the number of subGIDs reserved for the unprivileged LXC containers managed by the root system account.

lxc__root_subgid_count: '{{ ansible_local.root_account.subgids[0]["count"]
                            if (ansible_local.root_account.subgids|d())
                            else "65536" }}'

LXC internal network configuration

These variables define the configuration of the internal LXC network for local containers. If you want to support container networking on multiple LXC hosts at once, you should set up an external DHCP/DNS server and connect the LXC containers to the host bridge interfaces instead.

lxc__net_deploy_state

If present, the role will configure the internal LXC network using the lxc-net service. If absent, the internal LXC network will be disabled and de-configured.

The lxc-net service is not configured inside a container environment, or if the host is also configured to support the LXD service using the debops.lxd role.

lxc__net_deploy_state: '{{ "absent"
                           if ((ansible_virtualization_type in [ "lxc", "docker", "openvz" ] and
                                ansible_virtualization_role == "guest") or
                               (inventory_hostname in groups["debops_service_lxd"]|d([])) or
                               (ansible_local.lxd.installed|d(False))|bool)
                           else "present" }}'
lxc__net_bridge

Name of the network bridge which will be configured for the internal LXC network.

lxc__net_bridge: 'lxcbr0'
lxc__net_address

The IPv4 address and CIDR mask of the internal LXC bridge, which will be the NAT gateway for the LXC containers. This value will be used to configure the internal dnsmasq DHCP server.

You should ensure that the internal LXC network does not collide with other local networks, to avoid routing issues.

lxc__net_address: '10.0.3.1/24'
lxc__net_router

Enable or disable router advertisement on the lxc-net bridge. You should disable this if the primary network route of the LXC containers should go through normal host bridges instead of the bridge managed by the lxc-net service.

lxc__net_router: True
lxc__net_dhcp_start

The index of the IP address from the above subnet that starts the DHCP range. The first IP address is usually the gateway address, therefore you need to start with a higher number.

lxc__net_dhcp_start: '2'
lxc__net_dhcp_end

The index of the IP address from the above subnet that ends the DHCP range. A negative number tells the role to count from the end of the subnet; the last IP address is usually the broadcast address, therefore the range should end at least one address before it.

lxc__net_dhcp_end: '-2'
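
As a worked example, with the default 10.0.3.1/24 network a start index of 2 and an end index of -2 should result in a DHCP range of roughly 10.0.3.2 - 10.0.3.254, assuming the role computes the addresses with the Ansible ipaddr filter, where negative indices count back from the broadcast address:

lxc__net_address: '10.0.3.1/24'
lxc__net_dhcp_start: '2'   # -> 10.0.3.2
lxc__net_dhcp_end: '-2'    # -> 10.0.3.254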
lxc__net_base_domain

The DNS domain used as a base for the internal LXC domain. By default it is based on the LXC host domain.

lxc__net_base_domain: '{{ ansible_domain }}'
lxc__net_domain

The DNS domain used in the internal LXC network.

lxc__net_domain: '{{ "lxc" + (("." + lxc__net_base_domain)
                              if lxc__net_base_domain|d()
                              else "") }}'
lxc__net_fqdn

The FQDN of the internal LXC bridge / internal LXC gateway registered in the dnsmasq service. It can be seen, for example, in traceroutes.

The resolvconf service will be used to add or remove the LXC domain in /etc/resolv.conf; with a local DNS resolver (for example dnsmasq) configured on the LXC host, the containers can then be accessed by their hostnames instead of their IP addresses.

lxc__net_fqdn: '{{ ansible_hostname + "."
                   + ansible_local.lxc.net_domain|d(lxc__net_domain) }}'
lxc__net_dnsmasq_conf

Absolute path to the custom dnsmasq configuration file used by the internal LXC network.

lxc__net_dnsmasq_conf: '/etc/lxc/lxc-net-dnsmasq.conf'
lxc__net_dnsmasq_options

String or YAML text block with additional dnsmasq(8) options added to its custom configuration file.

lxc__net_dnsmasq_options: ''
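
For example, to pin a specific container to a fixed IP address in the internal network, an additional dhcp-host option could be passed to dnsmasq through the inventory (the container name and address below are hypothetical):

lxc__net_dnsmasq_options: |
  dhcp-host=webserver,10.0.3.10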

System-wide configuration

These variables define the contents of the configuration files in the /etc/lxc/ directory. See lxc__configuration for more details.

lxc__default_configuration

The default LXC configuration files defined by the role.

lxc__default_configuration:

  - name: 'lxc'
    comment: 'System-wide LXC configuration options'
    options:

      - name: 'lxc.lxcpath'
        comment: 'Path where LXC containers are stored'
        value: '/var/lib/lxc'

      - name: 'lxc.cgroup.use'
        value: '@all'
        state: '{{ "present"
                   if (lxc__version is version("2.1.0", "<"))
                   else "absent" }}'

      - name: 'lxc.default_config'
        comment: |
          Default configuration file used for new LXC containers if not
          specified otherwise
        value: '/etc/lxc/privileged.conf'

      - name: 'lxc.default_unprivileged_config'
        comment: |
          Default configuration file used for new unprivileged LXC containers,
          created by the 'lxc-new-unprivileged' script
        value: '/etc/lxc/internal-unprivileged.conf'

    # Default configuration for unprivileged LXC containers.
  - name: 'unprivileged'
    options:

      - name: '{{ "lxc.network.type"
                  if (lxc__version is version("2.1.0", "<"))
                  else "lxc.net.0.type" }}'
        value: 'veth'

      - name: '{{ "lxc.network.link"
                  if (lxc__version is version("2.1.0", "<"))
                  else "lxc.net.0.link" }}'
        value: '{{ "br0"
                   if (ansible_local|d() and ansible_local.ifupdown|d() and
                       (ansible_local.ifupdown.configured|d())|bool)
                   else (lxc__net_bridge
                         if (lxc__net_deploy_state == "present")
                         else "br0") }}'

      - name: '{{ "lxc.network.flags"
                  if (lxc__version is version("2.1.0", "<"))
                  else "lxc.net.0.flags" }}'
        value: 'up'

      - name: 'lxc.id_map_user'
        alias: '{{ "lxc.id_map"
                   if (lxc__version is version("2.1.0", "<"))
                   else "lxc.idmap" }}'
        value: 'u 0 {{ lxc__root_subuid_start }} {{ lxc__root_subuid_count }}'
        separator: True

      - name: 'lxc.id_map_group'
        alias: '{{ "lxc.id_map"
                   if (lxc__version is version("2.1.0", "<"))
                   else "lxc.idmap" }}'
        value: 'g 0 {{ lxc__root_subgid_start }} {{ lxc__root_subgid_count }}'

      - name: 'lxc.start.auto'
        value: '0'
        separator: True

      - name: 'lxc.cap.drop_secure'
        alias: 'lxc.cap.drop'
        value: '{{ [ "mknod", "sys_rawio", "syslog", "wake_alarm", "sys_time" ]
                  + ([] if (lxc__version is version("3.0.0", "<") or
                            lxc__version is version("4.0.0", ">="))
                  else [ "sys_admin" ]) }}'

      # Select how many CPUs are available inside of the container
      - name: 'lxc.cgroup.cpuset.cpus'
        value: '0-{{ ansible_processor_vcpus|int - 1 }}'
        state: 'comment'
        separator: True

      # Limit the amount of memory available inside of the container
      - name: 'lxc.cgroup.memory.limit_in_bytes'
        value: '{{ (ansible_memtotal_mb|int / 1024) | round | int }}G'
        state: 'comment'

      # Limit available swap space inside of the container. The value is
      # a combined memory (above) + swap size, it should be bigger than the
      # amount of memory allocated to the container.
      - name: 'lxc.cgroup.memory.memsw.limit_in_bytes'
        value: '{{ ((ansible_memtotal_mb|int + ansible_swaptotal_mb|int) / 1024) | round | int }}G'
        state: 'comment'

      # Required in AppArmor environment to allow systemd services to start
      # inside of the LXC unprivileged containers.
      # See also: https://bugs.debian.org/916644,
      # https://bugs.debian.org/918839, https://bugs.debian.org/911806
      - name: 'lxc.apparmor.profile'
        value: 'generated'
        state: '{{ "absent"
                   if (lxc__version is version("3.0.0", "<") or
                       lxc__version is version("4.1.0", ">="))
                   else "present" }}'

      # Required in AppArmor environment to allow systemd services to start
      # inside of the LXC unprivileged containers.
      # See also: https://bugs.debian.org/916644,
      # https://bugs.debian.org/918839, https://bugs.debian.org/911806
      - name: 'lxc.apparmor.allow_nesting'
        value: '1'
        state: '{{ "absent"
                   if (lxc__version is version("3.0.0", "<") or
                       lxc__version is version("4.1.0", ">="))
                   else "present" }}'

    # Default configuration for privileged LXC containers
  - name: 'privileged'
    options:

      - name: '{{ "lxc.network.type"
                  if (lxc__version is version("2.1.0", "<"))
                  else "lxc.net.0.type" }}'
        value: 'veth'

      - name: '{{ "lxc.network.link"
                  if (lxc__version is version("2.1.0", "<"))
                  else "lxc.net.0.link" }}'
        value: '{{ "br0"
                   if (ansible_local|d() and ansible_local.ifupdown|d() and
                       (ansible_local.ifupdown.configured|d())|bool)
                   else (lxc__net_bridge
                         if (lxc__net_deploy_state == "present")
                         else "br0") }}'

      - name: '{{ "lxc.network.flags"
                  if (lxc__version is version("2.1.0", "<"))
                  else "lxc.net.0.flags" }}'
        value: 'up'

      - name: 'lxc.start.auto'
        value: '0'
        separator: True

      - name: 'lxc.cap.drop_secure'
        alias: 'lxc.cap.drop'
        value: '{{ [ "mknod", "sys_rawio", "syslog", "wake_alarm", "sys_time" ]
                  + ([] if (lxc__version is version("3.0.0", "<") or
                            lxc__version is version("4.0.0", ">="))
                            else [ "sys_admin" ]) }}'

    # Configuration for unprivileged LXC containers that uses only the internal
    # bridge managed by the 'lxc-net' service
  - name: 'internal-unprivileged'
    state: '{{ "present" if (lxc__net_deploy_state == "present") else "absent" }}'
    options:

      - name: '{{ "lxc.network.type"
                  if (lxc__version is version("2.1.0", "<"))
                  else "lxc.net.0.type" }}'
        value: 'veth'

      - name: '{{ "lxc.network.link"
                  if (lxc__version is version("2.1.0", "<"))
                  else "lxc.net.0.link" }}'
        value: '{{ lxc__net_bridge }}'

      - name: '{{ "lxc.network.flags"
                  if (lxc__version is version("2.1.0", "<"))
                  else "lxc.net.0.flags" }}'
        value: 'up'

      - name: 'lxc.id_map_user'
        alias: '{{ "lxc.id_map"
                   if (lxc__version is version("2.1.0", "<"))
                   else "lxc.idmap" }}'
        value: 'u 0 {{ lxc__root_subuid_start }} {{ lxc__root_subuid_count }}'
        separator: True

      - name: 'lxc.id_map_group'
        alias: '{{ "lxc.id_map"
                   if (lxc__version is version("2.1.0", "<"))
                   else "lxc.idmap" }}'
        value: 'g 0 {{ lxc__root_subgid_start }} {{ lxc__root_subgid_count }}'

      - name: 'lxc.start.auto'
        value: '0'
        separator: True

      - name: 'lxc.cap.drop_secure'
        alias: 'lxc.cap.drop'
        value: '{{ [ "mknod", "sys_rawio", "syslog", "wake_alarm", "sys_time" ]
                  + ([] if (lxc__version is version("3.0.0", "<") or
                            lxc__version is version("4.0.0", ">="))
                            else [ "sys_admin" ]) }}'

      # Select how many CPUs are available inside of the container
      - name: 'lxc.cgroup.cpuset.cpus'
        value: '0-{{ ansible_processor_vcpus|int - 1 }}'
        state: 'comment'
        separator: True

      # Limit the amount of memory available inside of the container
      - name: 'lxc.cgroup.memory.limit_in_bytes'
        value: '{{ (ansible_memtotal_mb|int / 1024) | round | int }}G'
        state: 'comment'

      # Limit available swap space inside of the container. The value is
      # a combined memory (above) + swap size, it should be bigger than the
      # amount of memory allocated to the container.
      - name: 'lxc.cgroup.memory.memsw.limit_in_bytes'
        value: '{{ ((ansible_memtotal_mb|int + ansible_swaptotal_mb|int) / 1024) | round | int }}G'
        state: 'comment'

      # Required in AppArmor environment to allow systemd services to start
      # inside of the LXC unprivileged containers.
      # See also: https://bugs.debian.org/916644,
      # https://bugs.debian.org/918839, https://bugs.debian.org/911806
      - name: 'lxc.apparmor.profile'
        value: 'generated'
        state: '{{ "absent"
                   if (lxc__version is version("3.0.0", "<") or
                       lxc__version is version("4.1.0", ">="))
                   else "present" }}'

      # Required in AppArmor environment to allow systemd services to start
      # inside of the LXC unprivileged containers.
      # See also: https://bugs.debian.org/916644,
      # https://bugs.debian.org/918839, https://bugs.debian.org/911806
      - name: 'lxc.apparmor.allow_nesting'
        value: '1'
        state: '{{ "absent"
                   if (lxc__version is version("3.0.0", "<") or
                       lxc__version is version("4.1.0", ">="))
                   else "present" }}'

    # Default configuration for privileged LXC containers with multiple network
    # interfaces
  - name: 'external-internal'
    options:

      - name: 'lxc.network.type_net0'
        alias: '{{ "lxc.network.type"
                   if (lxc__version is version("2.1.0", "<"))
                   else "lxc.net.0.type" }}'
        value: 'veth'

      - name: 'lxc.network.link_net0'
        alias: '{{ "lxc.network.link"
                   if (lxc__version is version("2.1.0", "<"))
                   else "lxc.net.0.link" }}'
        value: '{{ "br0"
                   if (ansible_local|d() and ansible_local.ifupdown|d() and
                       (ansible_local.ifupdown.configured|d())|bool)
                   else (lxc__net_bridge
                         if (lxc__net_deploy_state == "present")
                         else "br0") }}'

      - name: 'lxc.network.flags_net0'
        alias: '{{ "lxc.network.flags"
                   if (lxc__version is version("2.1.0", "<"))
                   else "lxc.net.0.flags" }}'
        value: 'up'

      - name: 'lxc.network.type_net1'
        alias: '{{ "lxc.network.type"
                   if (lxc__version is version("2.1.0", "<"))
                   else "lxc.net.1.type" }}'
        value: 'veth'
        separator: True

      - name: 'lxc.network.link_net1'
        alias: '{{ "lxc.network.link"
                   if (lxc__version is version("2.1.0", "<"))
                   else "lxc.net.1.link" }}'
        value: 'br1'

      - name: 'lxc.network.flags_net1'
        alias: '{{ "lxc.network.flags"
                   if (lxc__version is version("2.1.0", "<"))
                   else "lxc.net.1.flags" }}'
        value: 'up'

      - name: 'lxc.start.auto'
        value: '0'
        separator: True

      - name: 'lxc.cap.drop_secure'
        alias: 'lxc.cap.drop'
        value: '{{ [ "mknod", "sys_rawio", "syslog", "wake_alarm", "sys_time" ]
                  + ([] if (lxc__version is version("3.0.0", "<") or
                            lxc__version is version("4.0.0", ">="))
                  else [ "sys_admin" ]) }}'
lxc__configuration

List of LXC configuration files which should be present on all hosts in the Ansible inventory.

lxc__configuration: []
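
Entries use the same format as lxc__default_configuration above. A minimal sketch of a custom configuration file defined through the inventory, assuming LXC 2.1.0 or later which uses the lxc.net.* option names (the file name, comment and bridge are hypothetical):

lxc__configuration:

  - name: 'second-bridge'
    comment: 'Containers attached to the br1 bridge'
    options:

      - name: 'lxc.net.0.type'
        value: 'veth'

      - name: 'lxc.net.0.link'
        value: 'br1'

      - name: 'lxc.net.0.flags'
        value: 'up'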
lxc__group_configuration

List of LXC configuration files which should be present on hosts in a specific Ansible inventory group.

lxc__group_configuration: []
lxc__host_configuration

List of LXC configuration files which should be present on specific hosts in the Ansible inventory.

lxc__host_configuration: []
lxc__combined_configuration

The variable that combines all of the configuration lists and is used in the role tasks.

lxc__combined_configuration: '{{ lxc__default_configuration
                                 + lxc__configuration
                                 + lxc__group_configuration
                                 + lxc__host_configuration }}'

Common container configuration

These variables define the contents of the /usr/share/lxc/config/common.conf.d/ directory. Files in this directory that end in .conf are included in the common configuration of each LXC container.

The syntax is the same as the system-wide configuration. See lxc__configuration for details.

lxc__common_default_conf

Default configuration for all LXC containers defined by the role.

lxc__common_default_conf:

  - name: 'static-hwaddr'
    comment: |
      Generate static, predictable MAC addresses for container network
      interfaces before the container is started. Containers will have to be
      restarted for new MAC addresses to be used.
    options:

      - name: 'lxc.hook.pre-start_hwaddr'
        alias: 'lxc.hook.pre-start'
        value: '/usr/local/bin/lxc-hwaddr-static'

  - name: 'destroy-systemd-instance'
    comment: |
      At container destruction, ensure that its corresponding systemd service
      instance is disabled.
    options:

      - name: 'lxc.hook.destroy_systemd_instance'
        alias: 'lxc.hook.destroy'
        value: '/usr/local/lib/lxc/lxc-destroy-systemd-instance'
lxc__common_conf

Common configuration for all LXC containers defined on all hosts in the Ansible inventory.

lxc__common_conf: []
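
A minimal sketch of an additional common configuration file defined through the inventory; it would be included by every LXC container on the host (the file name and proxy URL are hypothetical, lxc.environment is a standard LXC option):

lxc__common_conf:

  - name: 'proxy-environment'
    comment: 'Pass a proxy configuration to all LXC containers'
    options:

      - name: 'lxc.environment'
        value: 'http_proxy=http://proxy.example.org:3128'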
lxc__common_group_conf

Common configuration for all LXC containers defined on hosts in a specific Ansible inventory group.

lxc__common_group_conf: []
lxc__common_host_conf

Common configuration for all LXC containers defined on specific hosts in the Ansible inventory.

lxc__common_host_conf: []
lxc__common_combined_conf

The variable which combines all of the common configuration lists and is used in the role tasks.

lxc__common_combined_conf: '{{ lxc__common_default_conf
                               + lxc__common_conf
                               + lxc__common_group_conf
                               + lxc__common_host_conf }}'

LXC container defaults

These variables define the default configuration of LXC containers created by the role. By default, if no other options are specified, the role will create unprivileged LXC containers based on the host OS distribution and release, with SSH access configured.

lxc__default_container_config

Specify the default LXC configuration file used during container creation.

lxc__default_container_config: '/etc/lxc/unprivileged.conf'
lxc__default_container_ssh

If True, the LXC containers created by the role will be configured with SSH access by the lxc-prepare-ssh script. If False, the LXC containers will not have direct SSH access configured. This can be overridden for individual containers using the item.ssh parameter.

lxc__default_container_ssh: True
lxc__default_container_ssh_root_sshkeys

The list of SSH public keys stored in the /etc/lxc/root_authorized_keys file. If the file is not empty, its contents will be added to the /root/.ssh/authorized_keys file in all new LXC containers. This list can be used to define SSH identities of system administrators who should have access to all LXC containers created on the host.

lxc__default_container_ssh_root_sshkeys: []
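
For example, to grant an administrator's SSH identity access to all new containers created on the host, the public key could be read from a file on the Ansible controller (the key path is hypothetical):

lxc__default_container_ssh_root_sshkeys:

  - "{{ lookup('file', '~/.ssh/id_ed25519.pub') }}"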
lxc__default_container_template

Specify the default LXC container template to use if none is defined. By default the role creates unprivileged LXC containers.

lxc__default_container_template: 'download'
lxc__default_container_distribution

The default OS distribution used during container creation.

lxc__default_container_distribution: '{{ ansible_local.core.distribution|d(ansible_distribution) }}'
lxc__default_container_release

The default OS release used during container creation.

lxc__default_container_release: '{{ ansible_local.core.distribution_release|d(ansible_distribution_release) }}'
lxc__default_container_architecture

Specify the default processor architecture used during container creation (required by the "download" LXC template).

lxc__default_container_architecture: '{{ "amd64"
                                         if (ansible_architecture == "x86_64")
                                         else ansible_architecture }}'
lxc__default_container_backing_store

Specify the default backing store used during container creation.

lxc__default_container_backing_store: 'dir'

LXC container instances

These variables can be used to create and manage LXC containers on LXC hosts. See lxc__containers for more details.

lxc__containers

The list of LXC containers which should be configured on LXC hosts. Be aware that containers with the same name on multiple LXC hosts might cause issues due to the same static MAC addresses configured on each of them. This situation should be avoided.

lxc__containers: []
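
The exact container parameters are documented separately. As a minimal sketch, a host could define two containers through the inventory (the container names are hypothetical; the ssh parameter corresponds to the lxc__default_container_ssh override mentioned above):

lxc__containers:

  - name: 'webserver'

  - name: 'database'
    ssh: False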

Configuration for other Ansible roles

lxc__apt_preferences__dependent_list

Configuration for the debops.apt_preferences role.

lxc__apt_preferences__dependent_list: []
lxc__python__dependent_packages3

Configuration for the debops.python Ansible role.

lxc__python__dependent_packages3:

  - 'python3-lxc'
lxc__python__dependent_packages2

Configuration for the debops.python Ansible role.

lxc__python__dependent_packages2:

  - 'python-lxc'
lxc__ferm__dependent_rules

Configuration for the debops.ferm role.

lxc__ferm__dependent_rules:

  - type: 'custom'
    by_role: 'debops.lxc'
    filename: 'lxc_bootp_checksum'
    weight: '30'
    rule_state: 'absent'

  - type: 'custom'
    by_role: 'debops.lxc'
    name: 'bootp_checksum'
    weight: '30'
    rules: |
      # Add checksums to BOOTP packets for LXC containers
      # https://www.redhat.com/archives/libvir-list/2010-August/msg00035.html
      @hook post "iptables -A POSTROUTING -t mangle -p udp --dport bootpc -j CHECKSUM --checksum-fill";
lxc__sysctl__dependent_parameters

Configuration for the debops.sysctl Ansible role.

lxc__sysctl__dependent_parameters:

  - name: 'lxc-inotify'
    divert: True
    weight: '30'
    options:

      - name: 'fs.inotify.max_user_instances'
        comment: |
          Defines the maximum number of inotify listeners.
          By default, this value is 128, which is quickly exhausted when using
          systemd-based LXC containers (15 containers are enough).
          When the limit is reached, systemd becomes mostly unusable, throwing
          "Too many open files" all around (both on the host and in containers).
          See https://kdecherf.com/blog/2015/09/12/systemd-and-the-fd-exhaustion/
          Increase the user inotify instance limit to allow for about
          100 containers to run before the limit is hit again
        value: 1024