debops.libvirtd_qemu default variables
libvirt features
- libvirtd_qemu__kvm_support
Enable or disable support for KVM-based virtual machines.
libvirtd_qemu__kvm_support: '{{ True
if (ansible_virtualization_type == "kvm" and
(ansible_virtualization_role == "host" or
libvirtd_qemu__register_hw_virt.stdout | d()))
else False }}'
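A minimal plain-Python sketch of the same condition (the function and parameter names are illustrative, not part of the role):

```python
def kvm_support(virt_type, virt_role, hw_virt_registered=""):
    # KVM is enabled when running on a KVM-capable machine that is either
    # a virtualization host, or a guest where the role's hardware
    # virtualization check produced any output (nested virtualization).
    return (virt_type == "kvm"
            and (virt_role == "host" or bool(hw_virt_registered)))

print(kvm_support("kvm", "host"))          # hypervisor host
print(kvm_support("kvm", "guest"))         # plain KVM guest
print(kvm_support("kvm", "guest", "vmx"))  # nested virtualization available
```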
APT packages
- libvirtd_qemu__base_packages
List of libvirtd packages which will be installed on all distribution releases, unless overridden.
libvirtd_qemu__base_packages:
- 'libvirt-daemon-system'
- libvirtd_qemu__base_packages_map
Override list of base packages for specific distribution releases.
libvirtd_qemu__base_packages_map:
'trusty': [ 'libvirt-bin' ]
'xenial': [ 'libvirt-bin' ]
- libvirtd_qemu__kvm_packages
List of QEMU KVM packages to install. They will be installed on all hosts apart from KVM guests, to avoid installing redundant virtualization support inside virtual machines.
libvirtd_qemu__kvm_packages: '{{ ["qemu-system-x86", "qemu-utils"]
+ ["qemu-kvm"]
if ansible_distribution_release in
["stretch", "buster", "bionic", "focal"]
else [] }}'
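Note the operator precedence in the expression above: the if/else binds looser than the list concatenation, so the whole combined list is conditional. A plain-Python sketch of how the value is computed:

```python
base = ["qemu-system-x86", "qemu-utils"]
legacy_releases = ["stretch", "buster", "bionic", "focal"]

def kvm_packages(release):
    # (base + ["qemu-kvm"]) if the release matches, else an empty list --
    # the conditional wraps the entire concatenation, not just ["qemu-kvm"].
    return base + ["qemu-kvm"] if release in legacy_releases else []

print(kvm_packages("buster"))
print(kvm_packages("bookworm"))
```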
- libvirtd_qemu__packages
List of custom packages to install.
libvirtd_qemu__packages: []
QEMU environment
- libvirtd_qemu__deployment_mode
Specify the type of the environment a given libvirtd operates in.
Possible choices: libvirt, opennebula.
libvirtd_qemu__deployment_mode: '{{ ansible_local.libvirtd.deployment_mode
if (ansible_local.libvirtd.deployment_mode | d())
else ("opennebula"
if ("debops_service_opennebula_node" in group_names)
else "libvirt") }}'
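The fallback chain above (saved local fact, then inventory group membership, then the built-in default) can be sketched in plain Python; the names are illustrative:

```python
def deployment_mode(local_fact=None, group_names=()):
    # A deployment mode saved as an Ansible local fact always wins.
    if local_fact:
        return local_fact
    # Hosts in the OpenNebula node group get the OpenNebula mode.
    if "debops_service_opennebula_node" in group_names:
        return "opennebula"
    # Everything else runs a plain libvirt deployment.
    return "libvirt"
```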
- libvirtd_qemu__user
The UNIX system user account which will be used to run QEMU processes. The Debian default is libvirt-qemu, however OpenNebula requires the use of the oneadmin management account to ensure correct file access.
libvirtd_qemu__user: '{{ (ansible_local.opennebula.user
if (ansible_local.opennebula.user | d())
else "oneadmin")
if (libvirtd_qemu__deployment_mode == "opennebula")
else "libvirt-qemu" }}'
- libvirtd_qemu__group
The UNIX system group which will be used to run QEMU processes. The Debian default is libvirt-qemu, however OpenNebula requires the use of the oneadmin management group to ensure correct file access.
libvirtd_qemu__group: '{{ (ansible_local.opennebula.group
if (ansible_local.opennebula.group | d())
else "oneadmin")
if (libvirtd_qemu__deployment_mode == "opennebula")
else "libvirt-qemu" }}'
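Both the user and group variables follow the same pattern; a plain-Python equivalent (the function name and the onecustom example value are illustrative), showing the fallback to oneadmin when the OpenNebula local fact is absent:

```python
def qemu_account(deployment_mode, opennebula_fact=None):
    # In OpenNebula mode, prefer the account recorded in the local fact,
    # falling back to the stock "oneadmin" account; otherwise use the
    # Debian default account.
    if deployment_mode == "opennebula":
        return opennebula_fact or "oneadmin"
    return "libvirt-qemu"

print(qemu_account("libvirt"))                  # libvirt-qemu
print(qemu_account("opennebula"))               # oneadmin
print(qemu_account("opennebula", "onecustom"))  # onecustom
```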
VNC/SPICE remote display options
- libvirtd_qemu__remote_display_allow
List of IP addresses or CIDR subnets which will be allowed to connect to the VNC/SPICE services through the firewall. If nothing is specified, nobody can connect to these services.
libvirtd_qemu__remote_display_allow: []
- libvirtd_qemu__remote_display_ports
List of TCP ports which should be opened in the firewall to allow access to the VNC/SPICE services. You can specify single ports or port ranges separated by the : character. Only the hosts specified by the libvirtd_qemu__remote_display_allow variable will have access through these ports.
libvirtd_qemu__remote_display_ports: '{{ libvirtd_qemu__remote_display_port_min | string + ":"
+ libvirtd_qemu__remote_display_port_max | string }}'
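The value is a single min:max string built from the two port bound variables; with the default bounds this renders as 5900:65535:

```python
port_min, port_max = 5900, 65535  # the role's default bounds
remote_display_ports = f"{port_min}:{port_max}"
print(remote_display_ports)  # 5900:65535
```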
- libvirtd_qemu__remote_display_port_min
The lower bound of the port range used by the VNC/SPICE services to allow access to the remote VM console. This value shouldn't be lowered below 5900, otherwise it would result in negative VNC display numbers used by the services.
libvirtd_qemu__remote_display_port_min: 5900
- libvirtd_qemu__remote_display_port_max
The upper bound of the port range used by the VNC/SPICE services to allow access to the remote VM console.
libvirtd_qemu__remote_display_port_max: 65535
- libvirtd_qemu__spice_listen
Specify the IP address on which QEMU processes should listen for SPICE connections.
libvirtd_qemu__spice_listen: '{{ "0.0.0.0"
if libvirtd_qemu__remote_display_allow | d()
else "127.0.0.1" }}'
- libvirtd_qemu__vnc_listen
Specify the IP address on which QEMU processes should listen for VNC connections.
libvirtd_qemu__vnc_listen: '{{ "0.0.0.0"
if libvirtd_qemu__remote_display_allow | d()
else "127.0.0.1" }}'
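Both listen variables apply the same rule: bind to all interfaces only when an allow list is configured, otherwise stay on the loopback address. A minimal sketch:

```python
def listen_address(remote_display_allow):
    # No allow list configured -> keep the service private to the host.
    return "0.0.0.0" if remote_display_allow else "127.0.0.1"

print(listen_address([]))                # 127.0.0.1
print(listen_address(["192.0.2.0/24"]))  # 0.0.0.0
```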
QEMU master configuration
These lists define configuration options present in the
/etc/libvirt/qemu.conf
configuration file.
See libvirtd_qemu__configuration for more details.
- libvirtd_qemu__original_configuration
List of original QEMU configuration options defined by the installed software package.
libvirtd_qemu__original_configuration:
- name: 'default_tls_x509_cert_dir' # [[[
comment: |
Use of TLS requires that x509 certificates be issued. The default is
to keep them in /etc/pki/qemu. This directory must contain
ca-cert.pem - the CA master certificate
server-cert.pem - the server certificate signed with ca-cert.pem
server-key.pem - the server private key
and optionally may contain
dh-params.pem - the DH params configuration file
value: '/etc/pki/qemu'
section: 'tls'
state: 'comment'
weight: 1 # ]]]
- name: 'default_tls_x509_verify' # [[[
comment: |
The default TLS configuration only uses certificates for the server
allowing the client to verify the server identity and establish
an encrypted channel.
It is possible to use x509 certificates for authentication too, by
issuing a x509 certificate to every client who needs to connect.
Enabling this option will reject any client who does not have a
certificate signed by the CA in /etc/pki/qemu/ca-cert.pem
value: True
section: 'tls'
state: 'comment'
weight: 2 # ]]]
- name: 'default_tls_x509_secret_uuid' # [[[
comment: |
Libvirt assumes the server-key.pem file is unencrypted by default.
To use an encrypted server-key.pem file, the password to decrypt
the PEM file is required. This can be provided by creating a secret
object in libvirt and then to uncomment this setting to set the UUID
of the secret.
NB This default all-zeros UUID will not work. Replace it with the
output from the UUID for the TLS secret from a 'virsh secret-list'
command and then uncomment the entry
value: '00000000-0000-0000-0000-000000000000'
section: 'tls'
state: 'comment'
weight: 3 # ]]]
- name: 'vnc_listen' # [[[
comment: |
VNC is configured to listen on 127.0.0.1 by default.
To make it listen on all public interfaces, uncomment
this next option.
NB, strong recommendation to enable TLS + x509 certificate
verification when allowing public access
value: '0.0.0.0'
section: 'vnc'
state: 'comment'
weight: 1 # ]]]
- name: 'vnc_auto_unix_socket' # [[[
comment: |
Enable this option to have VNC served over an automatically created
unix socket. This prevents unprivileged access from users on the
host machine, though most VNC clients do not support it.
This will only be enabled for VNC configurations that have listen
type=address but without any address specified. This setting takes
preference over vnc_listen.
value: True
section: 'vnc'
state: 'comment'
weight: 2 # ]]]
- name: 'vnc_tls' # [[[
comment: |
Enable use of TLS encryption on the VNC server. This requires
a VNC client which supports the VeNCrypt protocol extension.
Examples include vinagre, virt-viewer, virt-manager and vencrypt
itself. UltraVNC, RealVNC, TightVNC do not support this
It is necessary to setup CA and issue a server certificate
before enabling this.
value: True
section: 'vnc'
state: 'comment'
weight: 3 # ]]]
- name: 'vnc_tls_x509_cert_dir' # [[[
comment: |
In order to override the default TLS certificate location for
vnc certificates, supply a valid path to the certificate directory.
If the provided path does not exist then the default_tls_x509_cert_dir
path will be used.
value: '/etc/pki/libvirt-vnc'
section: 'vnc'
state: 'comment'
weight: 4 # ]]]
- name: 'vnc_tls_x509_verify' # [[[
comment: |
The default TLS configuration only uses certificates for the server
allowing the client to verify the server identity and establish
an encrypted channel.
It is possible to use x509 certificates for authentication too, by
issuing a x509 certificate to every client who needs to connect.
Enabling this option will reject any client who does not have a
certificate signed by the CA in /etc/pki/libvirt-vnc/ca-cert.pem
If this option is not supplied, it will be set to the value of
"default_tls_x509_verify".
value: True
section: 'vnc'
state: 'comment'
weight: 5 # ]]]
- name: 'vnc_password' # [[[
comment: |
The default VNC password. Only 8 bytes are significant for
VNC passwords. This parameter is only used if the per-domain
XML config does not already provide a password. To allow
access without passwords, leave this commented out. An empty
string will still enable passwords, but be rejected by QEMU,
effectively preventing any use of VNC. Obviously change this
example here before you set this.
value: 'XYZ12345'
section: 'vnc'
state: 'comment'
weight: 6 # ]]]
- name: 'vnc_sasl' # [[[
comment: |
Enable use of SASL encryption on the VNC server. This requires
a VNC client which supports the SASL protocol extension.
Examples include vinagre, virt-viewer and virt-manager
itself. UltraVNC, RealVNC, TightVNC do not support this
It is necessary to configure /etc/sasl2/qemu.conf to choose
the desired SASL plugin (eg, GSSPI for Kerberos)
value: True
section: 'vnc'
state: 'comment'
weight: 7 # ]]]
- name: 'vnc_sasl_dir' # [[[
comment: |
The default SASL configuration file is located in /etc/sasl2/
When running libvirtd unprivileged, it may be desirable to
override the configs in this location. Set this parameter to
point to the directory, and create a qemu.conf in that location
value: '/some/directory/sasl2'
section: 'vnc'
state: 'comment'
weight: 8 # ]]]
- name: 'vnc_allow_host_audio' # [[[
comment: |
QEMU implements an extension for providing audio over a VNC connection,
though if your VNC client does not support it, your only chance for getting
sound output is through regular audio backends. By default, libvirt will
disable all QEMU sound backends if using VNC, since they can cause
permissions issues. Enabling this option will make libvirtd honor the
QEMU_AUDIO_DRV environment variable when using VNC.
value: False
section: 'vnc'
state: 'comment'
weight: 9 # ]]]
- name: 'spice_listen' # [[[
comment: |
SPICE is configured to listen on 127.0.0.1 by default.
To make it listen on all public interfaces, uncomment
this next option.
NB, strong recommendation to enable TLS + x509 certificate
verification when allowing public access
value: '0.0.0.0'
section: 'spice'
state: 'comment'
weight: 1 # ]]]
- name: 'spice_tls' # [[[
comment: |
Enable use of TLS encryption on the SPICE server.
It is necessary to setup CA and issue a server certificate
before enabling this.
value: True
section: 'spice'
state: 'comment'
weight: 2 # ]]]
- name: 'spice_tls_x509_cert_dir' # [[[
comment: |
In order to override the default TLS certificate location for
spice certificates, supply a valid path to the certificate directory.
If the provided path does not exist then the default_tls_x509_cert_dir
path will be used.
value: '/etc/pki/libvirt-spice'
section: 'spice'
state: 'comment'
weight: 3 # ]]]
- name: 'spice_auto_unix_socket' # [[[
comment: |
Enable this option to have SPICE served over an automatically created
unix socket. This prevents unprivileged access from users on the
host machine.
This will only be enabled for SPICE configurations that have listen
type=address but without any address specified. This setting takes
preference over spice_listen.
value: True
section: 'spice'
state: 'comment'
weight: 4 # ]]]
- name: 'spice_password' # [[[
comment: |
The default SPICE password. This parameter is only used if the
per-domain XML config does not already provide a password. To
allow access without passwords, leave this commented out. An
empty string will still enable passwords, but be rejected by
QEMU, effectively preventing any use of SPICE. Obviously change
this example here before you set this.
value: 'XYZ12345'
section: 'spice'
state: 'comment'
weight: 5 # ]]]
- name: 'spice_sasl' # [[[
comment: |
Enable use of SASL encryption on the SPICE server. This requires
a SPICE client which supports the SASL protocol extension.
It is necessary to configure /etc/sasl2/qemu.conf to choose
the desired SASL plugin (eg, GSSPI for Kerberos)
value: True
section: 'spice'
state: 'comment'
weight: 6 # ]]]
- name: 'spice_sasl_dir' # [[[
comment: |
The default SASL configuration file is located in /etc/sasl2/
When running libvirtd unprivileged, it may be desirable to
override the configs in this location. Set this parameter to
point to the directory, and create a qemu.conf in that location
value: '/some/directory/sasl2'
section: 'spice'
state: 'comment'
weight: 7 # ]]]
- name: 'chardev_tls' # [[[
comment: |
Enable use of TLS encryption on the chardev TCP transports.
It is necessary to setup CA and issue a server certificate
before enabling this.
value: True
section: 'chardev'
state: 'comment'
weight: 1 # ]]]
- name: 'chardev_tls_x509_cert_dir' # [[[
comment: |
In order to override the default TLS certificate location for character
device TCP certificates, supply a valid path to the certificate directory.
If the provided path does not exist then the default_tls_x509_cert_dir
path will be used.
value: '/etc/pki/libvirt-chardev'
section: 'chardev'
state: 'comment'
weight: 2 # ]]]
- name: 'chardev_tls_x509_verify' # [[[
comment: |
The default TLS configuration only uses certificates for the server
allowing the client to verify the server identity and establish
an encrypted channel.
It is possible to use x509 certificates for authentication too, by
issuing a x509 certificate to every client who needs to connect.
Enabling this option will reject any client who does not have a
certificate signed by the CA in /etc/pki/libvirt-chardev/ca-cert.pem
value: True
section: 'chardev'
state: 'comment'
weight: 3 # ]]]
- name: 'chardev_tls_x509_secret_uuid' # [[[
comment: |
Uncomment and use the following option to override the default secret
UUID provided in the default_tls_x509_secret_uuid parameter.
NB This default all-zeros UUID will not work. Replace it with the
output from the UUID for the TLS secret from a 'virsh secret-list'
command and then uncomment the entry
value: '00000000-0000-0000-0000-000000000000'
section: 'chardev'
state: 'comment'
weight: 4 # ]]]
- name: 'nographics_allow_host_audio' # [[[
comment: |
By default, if no graphical front end is configured, libvirt will disable
QEMU audio output since directly talking to alsa/pulseaudio may not work
with various security settings. If you know what you are doing, enable
the setting below and libvirt will passthrough the QEMU_AUDIO_DRV
environment variable when using nographics.
value: True
section: 'display'
state: 'comment'
weight: 1 # ]]]
- name: 'remote_display_port_min' # [[[
comment: |
Override the port for creating both VNC and SPICE sessions (min).
This defaults to 5900 and increases for consecutive sessions
or when ports are occupied, until it hits the maximum.
Minimum must be greater than or equal to 5900 as lower number would
result into negative vnc display number.
Maximum must be less than 65536, because higher numbers do not make
sense as a port number.
value: 5900
section: 'display'
state: 'comment'
weight: 2 # ]]]
- name: 'remote_display_port_max' # [[[
value: 65535
section: 'display'
state: 'comment'
weight: 3 # ]]]
- name: 'remote_websocket_port_min' # [[[
comment: |
VNC WebSocket port policies, same rules apply as with remote display
ports. VNC WebSockets use similar display <-> port mappings, with
the exception being that ports start from 5700 instead of 5900.
value: 5700
section: 'display'
state: 'comment'
weight: 4 # ]]]
- name: 'remote_websocket_port_max' # [[[
value: 65535
section: 'display'
state: 'comment'
weight: 5 # ]]]
- name: 'security_driver' # [[[
comment: |
The default security driver is SELinux. If SELinux is disabled
on the host, then the security driver will automatically disable
itself. If you wish to disable QEMU SELinux security driver while
leaving SELinux enabled for the host in general, then set this
to 'none' instead. It is also possible to use more than one security
driver at the same time, for this use a list of names separated by
comma and delimited by square brackets. For example:
security_driver = [ "selinux", "apparmor" ]
Notes: The DAC security driver is always enabled; as a result, the
value of security_driver cannot contain "dac". The value "none" is
a special value; security_driver can be set to that value in
isolation, but it cannot appear in a list of drivers.
value: 'selinux'
section: 'security'
state: 'comment'
weight: 1 # ]]]
- name: 'security_default_confined' # [[[
comment: |
If set to non-zero, then the default security labeling
will make guests confined. If set to zero, then guests
will be unconfined by default. Defaults to 1.
value: True
section: 'security'
state: 'comment'
weight: 2 # ]]]
- name: 'security_require_confined' # [[[
comment: |
If set to non-zero, then attempts to create unconfined
guests will be blocked. Defaults to 0.
value: True
section: 'security'
state: 'comment'
weight: 3 # ]]]
- name: 'user' # [[[
comment: |
The user for QEMU processes run by the system instance. It can be
specified as a user name or as a user id. The qemu driver will try to
parse this value first as a name and then, if the name does not exist,
as a user id.
Since a sequence of digits is a valid user name, a leading plus sign
can be used to ensure that a user id will not be interpreted as a user
name.
Some examples of valid values are:
user = "qemu" # A user named "qemu"
user = "+0" # Super user (uid=0)
user = "100" # A user named "100" or a user with uid=100
value: 'root'
section: 'user-group'
state: 'comment'
weight: 1 # ]]]
- name: 'group' # [[[
comment: |
The group for QEMU processes run by the system instance. It can be
specified in a similar way to user.
value: 'root'
section: 'user-group'
state: 'comment'
weight: 2 # ]]]
- name: 'dynamic_ownership' # [[[
comment: |
Whether libvirt should dynamically change file ownership
to match the configured user/group above. Defaults to 1.
Set to 0 to disable file ownership changes.
value: True
section: 'user-group'
state: 'comment'
weight: 3 # ]]]
- name: 'cgroup_controllers' # [[[
comment: |
What cgroup controllers to make use of with QEMU guests
- 'cpu' - use for scheduler tunables
- 'devices' - use for device whitelisting
- 'memory' - use for memory tunables
- 'blkio' - use for block devices I/O tunables
- 'cpuset' - use for CPUs and memory nodes
- 'cpuacct' - use for CPUs statistics.
NB, even if configured here, they will not be used unless
the administrator has mounted cgroups, e. g.:
mkdir /dev/cgroup
mount -t cgroup -o devices,cpu,memory,blkio,cpuset none /dev/cgroup
They can be mounted anywhere, and different controllers
can be mounted in different locations. libvirt will detect
where they are located.
value: [ 'cpu', 'devices', 'memory', 'blkio', 'cpuset', 'cpuacct' ]
section: 'cgroup'
state: 'comment'
weight: 1 # ]]]
- name: 'cgroup_device_acl' # [[[
comment: |
This is the basic set of devices allowed / required by
all virtual machines.
As well as this, any configured block backed disks,
all sound device, and all PTY devices are allowed.
This will only need setting if newer QEMU suddenly
wants some device we do not already know about.
RDMA migration requires the following extra files to be added to the list:
"/dev/infiniband/rdma_cm",
"/dev/infiniband/issm0",
"/dev/infiniband/issm1",
"/dev/infiniband/umad0",
"/dev/infiniband/umad1",
"/dev/infiniband/uverbs0"
value: [ '/dev/null', '/dev/full', '/dev/zero',
'/dev/random', '/dev/urandom',
'/dev/ptmx', '/dev/kvm', '/dev/kqemu',
'/dev/rtc', '/dev/hpet', '/dev/vfio/vfio' ]
section: 'cgroup'
state: 'comment'
weight: 2 # ]]]
- name: 'save_image_format' # [[[
comment: |
The default format for Qemu/KVM guest save images is raw; that is, the
memory from the domain is dumped out directly to a file. If you have
guests with a large amount of memory, however, this can take up quite
a bit of space. If you would like to compress the images while they
are being saved to disk, you can also set "lzop", "gzip", "bzip2", or "xz"
for save_image_format. Note that this means you slow down the process of
saving a domain in order to save disk space; the list above is in descending
order by performance and ascending order by compression ratio.
save_image_format is used when you use 'virsh save' or 'virsh managedsave'
at scheduled saving, and it is an error if the specified save_image_format
is not valid, or the requested compression program cannot be found.
dump_image_format is used when you use 'virsh dump' at emergency
crashdump, and if the specified dump_image_format is not valid, or
the requested compression program cannot be found, this falls
back to "raw" compression.
snapshot_image_format specifies the compression algorithm of the memory save
image when an external snapshot of a domain is taken. This does not apply
on disk image format. It is an error if the specified format is not valid,
or the requested compression program cannot be found.
value: 'raw'
section: 'dump'
state: 'comment'
weight: 1 # ]]]
- name: 'dump_image_format' # [[[
value: 'raw'
section: 'dump'
state: 'comment'
weight: 2 # ]]]
- name: 'snapshot_image_format' # [[[
value: 'raw'
section: 'dump'
state: 'comment'
weight: 3 # ]]]
- name: 'auto_dump_path' # [[[
comment: |
When a domain is configured to be auto-dumped when libvirtd receives a
watchdog event from qemu guest, libvirtd will save dump files in directory
specified by auto_dump_path. Default value is /var/lib/libvirt/qemu/dump
value: '/var/lib/libvirt/qemu/dump'
section: 'dump'
state: 'comment'
weight: 4 # ]]]
- name: 'auto_dump_bypass_cache' # [[[
comment: |
When a domain is configured to be auto-dumped, enabling this flag
has the same effect as using the VIR_DUMP_BYPASS_CACHE flag with the
virDomainCoreDump API. That is, the system will avoid using the
file system cache while writing the dump file, but may cause
slower operation.
value: False
section: 'dump'
state: 'comment'
weight: 5 # ]]]
- name: 'auto_start_bypass_cache' # [[[
comment: |
When a domain is configured to be auto-started, enabling this flag
has the same effect as using the VIR_DOMAIN_START_BYPASS_CACHE flag
with the virDomainCreateWithFlags API. That is, the system will
avoid using the file system cache when restoring any managed state
file, but may cause slower operation.
value: False
section: 'dump'
state: 'comment'
weight: 6 # ]]]
- name: 'hugetlbfs_mount' # [[[
comment: |
If provided by the host and a hugetlbfs mount point is configured,
a guest may request huge page backing. When this mount point is
unspecified here, determination of a host mount point in /proc/mounts
will be attempted. Specifying an explicit mount overrides detection
of the same in /proc/mounts. Setting the mount point to "" will
disable guest hugepage backing. If desired, multiple mount points can
be specified at once, separated by comma and enclosed in square
brackets, for example:
hugetlbfs_mount = ["/dev/hugepages2M", "/dev/hugepages1G"]
The size of huge page served by specific mount point is determined by
libvirt at the daemon startup.
NB, within these mount points, guests will create memory backing
files in a location of $MOUNTPOINT/libvirt/qemu
value: '/dev/hugepages'
section: 'proc'
state: 'comment'
weight: 1 # ]]]
- name: 'bridge_helper' # [[[
comment: |
Path to the setuid helper for creating tap devices. This executable
is used to create <source type='bridge'> interfaces when libvirtd is
running unprivileged. libvirt invokes the helper directly, instead
of using "-netdev bridge", for security reasons.
value: '/usr/libexec/qemu-bridge-helper'
section: 'proc'
state: 'comment'
weight: 2 # ]]]
- name: 'clear_emulator_capabilities' # [[[
comment: |
If clear_emulator_capabilities is enabled, libvirt will drop all
privileged capabilities of the QEmu/KVM emulator. This is enabled by
default.
Warning: Disabling this option means that a compromised guest can
exploit the privileges and possibly do damage to the host.
value: True
section: 'proc'
state: 'comment'
weight: 3 # ]]]
- name: 'set_process_name' # [[[
comment: |
If enabled, libvirt will have QEMU set its process name to
"qemu:VM_NAME", where VM_NAME is the name of the VM. The QEMU
process will appear as "qemu:VM_NAME" in process listings and
other system monitoring tools. By default, QEMU does not set
its process title, so the complete QEMU command (emulator and
its arguments) appear in process listings.
value: True
section: 'proc'
state: 'comment'
weight: 4 # ]]]
- name: 'max_processes' # [[[
comment: |
If max_processes is set to a positive integer, libvirt will use
it to set the maximum number of processes that can be run by qemu
user. This can be used to override default value set by host OS.
The same applies to max_files which sets the limit on the maximum
number of opened files.
value: 0
section: 'proc'
state: 'comment'
weight: 5 # ]]]
- name: 'max_files' # [[[
value: 0
section: 'proc'
state: 'comment'
weight: 6 # ]]]
- name: 'max_core' # [[[
comment: |
If max_core is set to a non-zero integer, then QEMU will be
permitted to create core dumps when it crashes, provided its
RAM size is smaller than the limit set.
Be warned that the core dump will include a full copy of the
guest RAM, if the 'dump_guest_core' setting has been enabled,
or if the guest XML contains
<memory dumpcore="on">...guest ram...</memory>
If guest RAM is to be included, ensure the max_core limit
is set to at least the size of the largest expected guest
plus another 1GB for any QEMU host side memory mappings.
As a special case it can be set to the string "unlimited" to
allow arbitrarily sized core dumps.
By default the core dump size is set to 0 disabling all dumps
Size is a positive integer specifying bytes or the
string "unlimited"
value: 'unlimited'
section: 'proc'
state: 'comment'
weight: 7 # ]]]
- name: 'dump_guest_core' # [[[
comment: |
Determine if guest RAM is included in QEMU core dumps. By
default guest RAM will be excluded if a new enough QEMU is
present. Setting this to '1' will force guest RAM to always
be included in QEMU core dumps.
This setting will be ignored if the guest XML has set the
dumpcore attribute on the <memory> element.
value: True
section: 'proc'
state: 'comment'
weight: 8 # ]]]
- name: 'mac_filter' # [[[
comment: |
mac_filter enables MAC addressed based filtering on bridge ports.
This currently requires ebtables to be installed.
value: True
section: 'proc'
state: 'comment'
weight: 9 # ]]]
- name: 'relaxed_acs_check' # [[[
comment: |
By default, PCI devices below non-ACS switch are not allowed to be assigned
to guests. By setting relaxed_acs_check to 1 such devices will be allowed to
be assigned to guests.
value: True
section: 'proc'
state: 'comment'
weight: 10 # ]]]
- name: 'allow_disk_format_probing' # [[[
comment: |
If allow_disk_format_probing is enabled, libvirt will probe disk
images to attempt to identify their format, when not otherwise
specified in the XML. This is disabled by default.
WARNING: Enabling probing is a security hole in almost all
deployments. It is strongly recommended that users update their
guest XML <disk> elements to include <driver type='XXXX'/>
elements instead of enabling this option.
value: True
section: 'proc'
state: 'comment'
weight: 11 # ]]]
- name: 'lock_manager' # [[[
comment: |
In order to prevent accidentally starting two domains that
share one writable disk, libvirt offers two approaches for
locking files. The first one is sanlock, the other one,
virtlockd, is then our own implementation. Accepted values
are "sanlock" and "lockd".
value: 'lockd'
section: 'proc'
state: 'comment'
weight: 12 # ]]]
- name: 'max_queued' # [[[
comment: |
Set limit of maximum APIs queued on one domain. All other APIs
over this threshold will fail on acquiring job lock. Specially,
setting to zero turns this feature off.
Note, that job lock is per domain.
value: 0
section: 'proc'
state: 'comment'
weight: 13 # ]]]
- name: 'keepalive_interval' # [[[
comment: |
This allows qemu driver to detect broken connections to remote
libvirtd during peer-to-peer migration. A keepalive message is
sent to the daemon after keepalive_interval seconds of inactivity
to check if the daemon is still responding; keepalive_count is a
maximum number of keepalive messages that are allowed to be sent
to the daemon without getting any response before the connection
is considered broken. In other words, the connection is
automatically closed approximately after
keepalive_interval * (keepalive_count + 1) seconds since the last
message received from the daemon. If keepalive_interval is set to
-1, qemu driver will not send keepalive requests during
peer-to-peer migration; however, the remote libvirtd can still
send them and source libvirtd will send responses. When
keepalive_count is set to 0, connections will be automatically
closed after keepalive_interval seconds of inactivity without
sending any keepalive messages.
value: 5
section: 'keepalive'
state: 'comment'
weight: 1 # ]]]
- name: 'keepalive_count' # [[[
value: 5
section: 'keepalive'
state: 'comment'
weight: 2 # ]]]
- name: 'seccomp_sandbox' # [[[
comment: |
Use seccomp syscall whitelisting in QEMU.
1 = on, 0 = off, -1 = use QEMU default
Defaults to -1.
value: 1
section: 'seccomp'
state: 'comment'
weight: 1 # ]]]
- name: 'migration_address' # [[[
comment: |
Override the listen address for all incoming migrations. Defaults to
0.0.0.0, or :: if both host and qemu are capable of IPv6.
value: '0.0.0.0'
section: 'migration'
state: 'comment'
weight: 1 # ]]]
- name: 'migration_host' # [[[
comment: |
The default hostname or IP address which will be used by a migration
source for transferring migration data to this host. The migration
source has to be able to resolve this hostname and connect to it so
setting "localhost" will not work. By default, the host configured
hostname is used.
value: 'host.example.com'
section: 'migration'
state: 'comment'
weight: 2 # ]]]
- name: 'migration_port_min' # [[[
comment: |
Override the port range used for incoming migrations.
Minimum must be greater than 0, however when QEMU is not running as root,
setting the minimum to be lower than 1024 will not work.
Maximum must not be greater than 65535.
value: 49152
section: 'migration'
state: 'comment'
weight: 3 # ]]]
- name: 'migration_port_max' # [[[
value: 49215
section: 'migration'
state: 'comment'
weight: 4 # ]]]
- name: 'log_timestamp' # [[[
comment: |
Timestamp QEMU log messages (if QEMU supports it)
Defaults to 1.
value: False
section: 'log'
state: 'comment'
weight: 1 # ]]]
- name: 'nvram' # [[[
comment: |
Location of master nvram file
When a domain is configured to use UEFI instead of standard
BIOS it may use a separate storage for UEFI variables. If
that is the case libvirt creates the variable store per domain
using this master file as image. Each UEFI firmware can,
however, have different variables store. Therefore the nvram is
a list of strings when a single item is in form of:
${PATH_TO_UEFI_FW}:${PATH_TO_UEFI_VARS}.
Later, when libvirt creates per domain variable store, this list is
searched for the master image. The UEFI firmware can be called
differently for different guest architectures. For instance, it is OVMF
for x86_64 and i686, but it is AAVMF for aarch64. The libvirt default
follows this scheme.
value: [ '/usr/share/OVMF/OVMF_CODE.fd:/usr/share/OVMF/OVMF_VARS.fd',
'/usr/share/OVMF/OVMF_CODE.secboot.fd:/usr/share/OVMF/OVMF_VARS.fd',
'/usr/share/AAVMF/AAVMF_CODE.fd:/usr/share/AAVMF/AAVMF_VARS.fd' ]
section: 'nvram'
state: 'comment'
weight: 1 # ]]]
- name: 'stdio_handler' # [[[
comment: |
The backend to use for handling stdout/stderr output from
QEMU processes.
'file': QEMU writes directly to a plain file. This is the
historical default, but allows QEMU to inflict a
denial of service attack on the host by exhausting
filesystem space
'logd': QEMU writes to a pipe provided by virtlogd daemon.
This is the current default, providing protection
against denial of service by performing log file
rollover when a size limit is hit.
value: 'logd'
section: 'stdio'
state: 'comment'
weight: 1 # ]]]
- name: 'gluster_debug_level' # [[[
comment: |
QEMU gluster libgfapi log level; debug levels are 0-9, with 9 being the
most verbose and 0 representing no debugging output.
The current logging levels defined in the gluster GFAPI are:
0 - None
1 - Emergency
2 - Alert
3 - Critical
4 - Error
5 - Warning
6 - Notice
7 - Info
8 - Debug
9 - Trace
Defaults to 4
value: 9
section: 'gluster'
state: 'comment'
weight: 1 # ]]]
- name: 'namespaces' # [[[
comment: |
To enhance security, the QEMU driver is capable of creating a private
namespace for each started domain. So far only the "mount" namespace is
supported. If enabled, the QEMU process is unable to see all the devices
on the system, only those configured for the domain in question. Libvirt
then manages device entries throughout the domain lifetime. This
namespace is turned on by default.
value: [ 'mount' ]
section: 'namespaces'
state: 'comment'
weight: 1 # ]]]
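If the private mount namespace interferes with a setup (for example, custom device passthrough managed outside of libvirt), it can be switched off from the inventory. A minimal sketch, assuming the empty list is an acceptable value as in the upstream qemu.conf:

```yaml
# Inventory sketch: disable the private mount namespace for QEMU
# processes by setting 'namespaces' to an empty list.
libvirtd_qemu__configuration:

  - name: 'namespaces'
    value: []
    state: 'present'
```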
- libvirtd_qemu__default_configuration
List of QEMU configuration options which are changed from their original values by the role.
libvirtd_qemu__default_configuration:
- name: 'spice_listen'
value: '{{ libvirtd_qemu__spice_listen }}'
state: '{{ "present"
if (libvirtd_qemu__remote_display_allow | d())
else "ignore" }}'
- name: 'vnc_listen'
value: '{{ libvirtd_qemu__vnc_listen }}'
state: '{{ "present"
if (libvirtd_qemu__remote_display_allow | d())
else "ignore" }}'
- name: 'remote_display_port_min'
value: '{{ libvirtd_qemu__remote_display_port_min }}'
state: '{{ "present"
if (libvirtd_qemu__deployment_mode == "opennebula")
else "ignore" }}'
- name: 'remote_display_port_max'
value: '{{ libvirtd_qemu__remote_display_port_max }}'
state: '{{ "present"
if (libvirtd_qemu__deployment_mode == "opennebula")
else "ignore" }}'
- name: 'user'
value: '{{ libvirtd_qemu__user }}'
state: '{{ "present"
if (libvirtd_qemu__deployment_mode == "opennebula")
else "ignore" }}'
- name: 'group'
value: '{{ libvirtd_qemu__group }}'
state: '{{ "present"
if (libvirtd_qemu__deployment_mode == "opennebula")
else "ignore" }}'
- name: 'dynamic_ownership'
value: False
state: '{{ "present"
if (libvirtd_qemu__deployment_mode == "opennebula")
else "ignore" }}'
- libvirtd_qemu__configuration
List of QEMU configuration options which should be set on all hosts in the Ansible inventory.
libvirtd_qemu__configuration: []
- libvirtd_qemu__group_configuration
List of QEMU configuration options which should be set on hosts in a specific Ansible inventory group.
libvirtd_qemu__group_configuration: []
- libvirtd_qemu__host_configuration
List of QEMU configuration options which should be set on specific hosts in the Ansible inventory.
libvirtd_qemu__host_configuration: []
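The three lists above are meant for inventory use at different scopes: all hosts, a group of hosts, or a single host. As an illustration, to enable QEMU log timestamps on just one host, an entry like the following could go into that host's host_vars (the file path and parameter choice here are examples, not role requirements):

```yaml
# ansible/inventory/host_vars/<hostname>/libvirtd_qemu.yml (example path)
libvirtd_qemu__host_configuration:

  - name: 'log_timestamp'
    value: True
    state: 'present'
```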
- libvirtd_qemu__combined_configuration
Variable which combines all of the configuration lists and passes them to the configuration file template for processing.
libvirtd_qemu__combined_configuration: '{{ libvirtd_qemu__original_configuration
+ libvirtd_qemu__default_configuration
+ libvirtd_qemu__configuration
+ libvirtd_qemu__group_configuration
+ libvirtd_qemu__host_configuration }}'
- libvirtd_qemu__configuration_sections
List which defines what sections are present in the
/etc/libvirt/qemu.conf
configuration file.
libvirtd_qemu__configuration_sections:
- name: 'tls'
title: 'TLS configuration'
state: 'hidden'
- name: 'vnc'
title: 'VNC display configuration'
state: 'hidden'
- name: 'spice'
title: 'SPICE display configuration'
state: 'hidden'
- name: 'chardev'
title: 'Character device configuration'
state: 'hidden'
- name: 'display'
title: 'Display configuration'
state: 'hidden'
- name: 'security'
title: 'Security configuration'
state: 'hidden'
- name: 'user-group'
title: 'User and group configuration'
state: 'hidden'
- name: 'cgroup'
title: 'Cgroup configuration'
state: 'hidden'
- name: 'dump'
title: 'Data dump configuration'
state: 'hidden'
- name: 'proc'
title: 'Process configuration'
state: 'hidden'
- name: 'keepalive'
title: 'Keepalive configuration'
state: 'hidden'
- name: 'seccomp'
title: 'SecComp configuration'
state: 'hidden'
- name: 'migration'
title: 'Migration configuration'
state: 'hidden'
- name: 'log'
title: 'Logging configuration'
state: 'hidden'
- name: 'nvram'
title: 'NVRAM configuration'
state: 'hidden'
- name: 'stdio'
title: 'IO configuration'
state: 'hidden'
- name: 'gluster'
title: 'GlusterFS configuration'
state: 'hidden'
- name: 'namespaces'
title: 'Namespaces configuration'
state: 'hidden'
- name: 'unknown'
title: 'Other configuration'
state: 'hidden'
- libvirtd_qemu__configuration_comments
Enable or disable inclusion of the parameter comments in the generated configuration file.
libvirtd_qemu__configuration_comments: True
Configuration for other Ansible roles
- libvirtd_qemu__ferm__dependent_rules
Configuration for the debops.ferm
Ansible role.
libvirtd_qemu__ferm__dependent_rules:
- name: 'libvirtd_qemu__remote_display'
type: 'accept'
dport: '{{ libvirtd_qemu__remote_display_ports }}'
saddr: '{{ libvirtd_qemu__remote_display_allow }}'
accept_any: False
rule_state: '{{ "present"
if (libvirtd_qemu__deployment_mode == "opennebula")
else "absent" }}'