Commit 34cfa6c1 authored by Hans Persson

Split to separate files for autoloader.

parent 64b46b9e
/*
* Define a virtualization guest the NSC way.
*
* This is a first prototype. Currently, it defines the VM as a
* Xen domU. It creates two configurations for each guest, one
* named "$name" for running normally, and one named "${name}-install"
* which will boot a kickstarted CentOS installer.
*
*
* PARAMETERS
*
* - ensure::
* Tells if the VM guest should be installed or not. Possible
* values are 'installed' (default) and 'absent'. The values
* 'started' and 'stopped' that xen::domu handles are forbidden
* here. Note though that 'absent' will still destroy a running
* guest with this name.
*
* - autoboot::
* If the guest should be started automatically when the VM host
* boots. Possible values are true (default) and false. This
* parameter only applies to the normal configuration for the
* guest; the -install configuration is never autobooted.
*
* - virtmethod::
* What kind of virtualization technology to use. One of:
* xenpv Xen paravirtualization (default). Requires support in
* the guest OS.
* xenhvm Xen hardware virtualization. Requires support in the
* host CPU.
*
* - systemvg::
* LVM volume group where virtual disks for the guest are stored.
* Defaults to the value of the eponymous parameter of the
* vm::siteconfig class.
*
* - disk::
* List of disk images to export to the guest, starting with xvdb.
* The image for the system disk, xvda, will be automatically
* prepended to this list, using /dev/$systemvg/vm-$name.system,
* where $systemvg is the LVM volume group described above and $name
* is the name of the guest. That image must currently already
* be partitioned into / (xvda1), /var (xvda2) and swap (xvda3).
* The disk images can either be in the normal Xen format for
* disk images, or of the form
* "+" role [ "," devicespec ]
* in which case it is transformed into
* "phy:/dev/${systemvg}/vm-${name}." role [ "," devicespec ]
*
* - interfaces::
* List of MAC addresses, one for each interface given to the
* guest. Passed on as-is to the underlying provider definition.
* May be an empty list, but the -install target won't work with
* that configuration. Must be explicitly specified, unless
* ensure is set to 'absent'.
*
* - mounts::
* A list of extra mount points to create and add to /etc/fstab
* during kickstart. Each entry is of the form
* [ device "=>" ] mountpoint [ "(" fstype ")" ]
* Arbitrary whitespace is allowed between the elements in the entry.
* The meanings of the fields are:
* - device::
* Device, LABEL= or UUID= specification to find the filesystem.
* Defaults to "LABEL=$name-$mountpoint"
* - mountpoint::
* The directory where the filesystem should be mounted.
* It will be created if it doesn't already exist.
* - fstype::
* The file system type. If not specified, the kickstart
* script will try to determine the type by mounting the
* filesystem. If that fails, the type will be set to "auto".
* It should usually be enough to only specify the mountpoint.
* If no mountpoint for /home is specified, vm::guest will try
* to find /home anyway during kickstart, but if it doesn't find any
* suitable filesystem, it will silently ignore the problem and not
* add an fstab entry for it.
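* For example (label and UUID are hypothetical): "/home (ext4)" mounts
* the filesystem labelled "<name>-/home" on /home as ext4, while
* "UUID=1234-ABCD => /srv/data" mounts that filesystem on /srv/data
* with the type autodetected.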
*
* - guest_hostname::
* The hostname/FQDN of the guest. This is used during kickstart to
* configure the hostname and (if netconfig is set to --static-auto)
* the IP address of the guest. If the guest_hostname parameter contains
* any periods, it is assumed to be a fully qualified domain name;
* otherwise the name of the guest with the $domain fact of the VM host
* appended is used instead.
*
* - netconfig::
* Specifies how the network should be configured during kickstart.
* Can be one of:
* --static-auto::
* Look up the IP address of the guest name (at compile time),
* and decide other network parameters based on that. This is
* the default.
* --dhcp::
* Use DHCP on eth0 to configure the network.
* --literal <PARAMS>::
* Pass <PARAMS> directly to the network option in the kickstart
* file, without any interpretation. This is currently the only
* way to use any other interface than eth0 for installation.
* --manual::
* Ask interactively for network configuration during kickstart.
*
* - rootpw::
* The root password to set during kickstart installation.
* Can be either the encrypted password, or a string of the form
* "--plain PASSWD", which will use PASSWD as the plaintext
* password and encrypt it (using the MD5 method).
* If not set, or set to the empty string, kickstart will ask
* interactively for the wanted password on the guest console.
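* For example, "--plain s3cret" (with a hypothetical plaintext password
* "s3cret") would put its MD5-crypted form into the kickstart file.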
*
* - os, arch::
* The OS type and architecture to use when installing the VM guest.
* $os typically includes the OS version, e.g. "centos-6".
*
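*
* EXAMPLE
*
* A minimal usage sketch (the guest name, MAC address, extra disk and
* mount point below are hypothetical and assume the defaults described
* above):
*
*   vm::guest { 'mailhost':
*     memory     => 4096,
*     cpus       => 2,
*     interfaces => ['00:16:3e:12:34:56'],
*     disk       => ['+mailspool'],
*     mounts     => ['/var/spool/mail (ext4)'],
*   }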
*
*
* USAGE NOTES
*
* The system disk is generally presumed to be ephemeral; it should
* not contain any information or state that can't be re-created by
* a re-installation and running Puppet. /, /boot and /var should
* thus not need to be backed up.
*
* Precious information and state should be stored on the disk
* images supplied via the disk parameter. Filesystems on them
* must be created manually, to lessen the risk of automatic
* trashing of information.
*
* Disk images should usually be logical LVM volumes, with names of
* the form "/dev/vg${host}/vm-${guest}.${role}", where
* - ${host} is the hostname of the VM host (dom0)
* - ${guest} is the name of the guest (domU), i.e. the name specified
* for the vm::guest resource, not the guest_hostname parameter.
* - ${role} is what filesystem is hosted on it, like "home",
* "mailspool" or "mailqueue".
*
* Don't partition the disk images unless needed. Instead create
* the filesystem directly on the unpartitioned image, to make it
* easier to access them outside of the guest. (The system disk
* does need to be partitioned, by the way.)
*
* Filesystems should be mounted by label, and the labels should
* be of the form "$guest-/$filesys", so that similar filesystems
* for different guests don't collide on the VM host.
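* For example, the /home filesystem of a (hypothetical) guest
* "mailhost" would be labelled "mailhost-/home".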
*
*
*
* LIMITATIONS/DEFICIENCIES
*
* - Bridging of network interfaces is not handled. Users need to
* set up bridging using networking::rh_interface themselves.
* - Creation of disk image files/devices and partitioning of them
* must be done manually. We should consider instead specifying
* a list of wanted filesystems and sizes, having vm::guest
* create and partition LVs as needed.
* - There is no way to specify mount options in the 'mounts' parameter.
* - The guest OS must be installed manually by running
* 'xl create $name-install'.
* - Only RedHat-based guest OSes can be configured. Specifically, the
* kernel parameters Anaconda expects are hardcoded.
* - We probably want a few more parameters.
* - The magic handling of /home in the 'mounts' parameter should
* be generalized.
*/
define vm::guest($ensure='installed', $autoboot=true, $virtmethod='xenpv',
$guest_hostname='',
$systemvg=undef,
$cpus=1, $memory=':undef', $disk=[], $interfaces=':undef',
$mounts=[], $netconfig='--static-auto', $rootpw='',
$os='centos-7', $arch=$architecture)
{
include vm::anacondafiles
include vm::osparams
# Find out OS specific parameters
include vm::siteconfig
$repolist = $vm::siteconfig::osrepos["${os}.${arch}"]
$osparams = $vm::siteconfig::osparams["${os}.${arch}"]
$ksdiskname = $osparams['ksdiskname']
$ksmethod = inline_template('<%= @repolist[0].split(nil, 2)[-1] %>')
$xsystemvg = $systemvg ? {
undef => $vm::siteconfig::systemvg,
default => $systemvg,
}
include vm::guests::dom0_memory
# Handle magic "+" syntax for $disk:
$xdisk = regsubst($disk,
'^\+([^:,]+)(,.*)?$',
"phy:/dev/${xsystemvg}/vm-${name}.\\1\\2")
$sysdisk = ["phy:/dev/${xsystemvg}/vm-${name}.system"]
$ksdisk = ["file:${vm::osparams::xen_kickstart_dir}/ks.iso,${ksdiskname}:cdrom,r"]
$rundisk = concat($sysdisk, $xdisk)
$installdisk = concat($sysdisk, $xdisk, $ksdisk)
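# Use $guest_hostname as-is if it contains a dot (i.e. already looks
# like an FQDN); otherwise fall back to "<guest name>.<dom0 domain>".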
$vm_fqdn = $guest_hostname ? {
/.*[.].*/ => $guest_hostname,
default => "${name}.${::domain}"
}
case $ensure
{
'installed': {
if $memory == ':undef' {
fail("Vm::Guest[${title}], memory not specified")
}
if $interfaces == ':undef' {
fail("Vm::Guest[${title}], interfaces not specified")
}
$nodash_os = regsubst($os, '-', '_', 'G')
include vm::network_bridging
include "vm::osinst::${nodash_os}::${arch}"
include vm::guests::blacklist_hw_drivers
file {
"${vm::osparams::xen_kickstart_dir}/${name}.ks":
content => template(
"vm/kickstart/xenguest.${os}.ks.erb"),
owner => 'root', group => 'root', mode => '0444',
require => File[$vm::osparams::xen_kickstart_dir],
notify => Exec['vm::create-kickstart-iso'];
}
}
'absent': {
file {
"${vm::osparams::xen_kickstart_dir}/${name}.ks":
ensure => absent,
notify => Exec['vm::create-kickstart-iso'];
}
}
default: {
fail("Vm::Guest[${title}], bad value for ensure: \"${ensure}\"")
}
}
case $virtmethod {
'xenpv': {
$kparams = "text ks=cdrom:/${name}.ks method=${ksmethod}"
$domparams = { }
$xen_domu_type = 'pv'
}
'xenhvm': {
$kparams = "console=ttyS0 text ks=cdrom:/${name}.ks method=${ksmethod}"
$domparams = {
'serial' => '"pty"', 'hdtype' => '"ahci"',
'xen_platform_pci' => '1',
}
$xen_domu_type = 'hvm'
}
'xenpvh': {
$xen_domu_type = 'pvh'
fail("Vm::Guest[${title}]: Xen PVH guests not yet supported")
}
default: {
fail("Vm::Guest[${title}]:",
" Bad value for virtmethod: \"${virtmethod}\"")
}
}
$instdir = "${vm::osparams::xen_anaconda_dir}/${os}.${arch}"
xen::domu {
$name:
ensure => $ensure,
hostname => $vm_fqdn,
vmname => $name,
type => $xen_domu_type,
autoboot => $autoboot,
cpus => $cpus, memory => $memory,
disk => $rundisk, interfaces => $interfaces,
extraparams => $domparams,
;
"${name}-install":
ensure => $ensure,
hostname => $vm_fqdn,
vmname => $name,
type => $xen_domu_type,
autoboot => false,
cpus => $cpus,
memory => 2048, # EL-7 installer needs at least 1.5 Gbyte
disk => $installdisk, interfaces => $interfaces,
bootloader => '',
kernel => "${instdir}/${osparams['installer_kernel']}",
ramdisk => "${instdir}/${osparams['installer_initrd']}",
kernelparams => $kparams,
onreboot => 'destroy', oncrash => 'destroy',
onshutdown => 'destroy',
extraparams => $domparams,
;
}
}
# Helper classes.
class vm::anacondafiles
{
include xen::dom0
include env::package::mkisofs
include vm::osparams
file {
[ $vm::osparams::xen_anaconda_dir, $vm::osparams::xen_kickstart_dir ]:
ensure => directory,
owner => 'root', group => 'root', mode => '0755',
require => Class['xen::dom0'];
}
exec {
'vm::create-kickstart-iso':
command => 'rm -f ks.iso..tmp ; mkisofs -quiet -r -l -no-pad -o ks.iso..tmp *.ks && mv -f ks.iso..tmp ks.iso',
path => ['/bin', '/usr/bin', '/sbin', '/usr/sbin'],
cwd => $vm::osparams::xen_kickstart_dir,
refreshonly => true,
require => Class[env::package::mkisofs];
}
}
class vm::network_bridging
{
# Newer kernels (4.9 does this; the 3.18 kernel in Xen4CentOS6 does not)
# pass bridged packets through the iptables FORWARD chain by default.
# We must disable that behaviour.
# (See https://patchwork.ozlabs.org/patch/29319/ for
# other implications of having it enabled, due to conntrack.)
sysctl {
[ 'net.bridge.bridge-nf-call-iptables',
'net.bridge.bridge-nf-call-ip6tables',
'net.bridge.bridge-nf-call-arptables',
]:
value => 0, ignoremissing => true;
}
}
/*
* Blacklist emulated disk and network hardware (on HVM guests), so the
* Xenfront devices will be used instead.
*/
class vm::guests::blacklist_hw_drivers
{
$hwdrivers = [
'ahci', 'ata_piix', 'libata', '8139cp', '8139too', 'e1000', 'e1000e',
]
$blacklist = inline_template(
"<% @hwdrivers.each do |drv| %>install <%= drv %> /bin/echo <%= drv %>: Unwanted hardware driver on Xen guest >&2; /bin/false\n<% end -%>"
)
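# Each listed driver yields a modprobe line of the form:
#   install ahci /bin/echo ahci: Unwanted hardware driver on Xen guest >&2; /bin/false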
$blacklist_file = '/etc/modprobe.d/xen-blacklist-hwemu.conf'
if ($::virtual == 'xenhvm') {
file {
$blacklist_file:
ensure => file,
mode => '0444', content => $blacklist;
}
} else {
file {
$blacklist_file:
ensure => absent;
}
}
# Need to get the blacklist into the initramfs as well, so *it*
# won't load the unwanted drivers.
exec {
'vm::guests::blacklist_hw_drivers::regenerate_initramfs':
command => shellquote('/sbin/dracut', '-f', '--regenerate-all'),
path => ['/bin', '/usr/bin', '/sbin', '/usr/sbin'],
timeout => 600, # Normally runs in approximately 150-200 s
refreshonly => true,
subscribe => File[$blacklist_file];
}
}
/*
* Increase the amount of memory reserved for Dom0.
* The default 1 Gbyte tends to be a bit too little these days.
*/
class vm::guests::dom0_memory($mem='2048M')
{
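# With the default $mem this is intended to yield a
# "dom0_mem=2048M,max:2048M" hypervisor boot parameter
# (assuming xen::bootparam renders the setting as key=value).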
xen::bootparam {
'dom0_mem':
value => "${mem},max:${mem}";
}
}
/*
* Class configuring a Xen host to not save its guests when
* rebooting. Thus, a reboot of the host will also force the
* guests to reboot.
*/
class vm::guests::nosave
{
include xen::dom0
$xensave_dir = '/var/lib/xen/save'
mount {
$xensave_dir:
ensure => absent;
}
cfgfile::redhat::sysconfig {
'xendomains-save':
subsystem => 'xendomains',
setting => 'XENDOMAINS_SAVE', value => '',
require => Class[xen::dom0];
'xendomains-restore':
subsystem => 'xendomains',
setting => 'XENDOMAINS_RESTORE', value => 'false',
require => Class[xen::dom0];
}
}
class vm::guests::parallel_shutdown
{
include xen::dom0
cfgfile::redhat::sysconfig {
'xendomains-shutdown':
subsystem => 'xendomains',
setting => 'XENDOMAINS_SHUTDOWN', value => '"--halt"',
require => Class[xen::dom0];
}
}
/*
* Class configuring a Xen host to save and restore its guests
* when rebooting, so the guests don't need to reboot then.
*
* Note that you need to manually create the xensave LVM volume,
* and make it large enough to fit all guests.
*/
class vm::guests::save
{
include xen::dom0
$xensave_dir = '/var/lib/xen/save'
fs::mount {
$xensave_dir:
device => "/dev/vg${hostname}/xensave";
}
cfgfile::redhat::sysconfig {
'xendomains-save':
subsystem => 'xendomains',
setting => 'XENDOMAINS_SAVE', value => $xensave_dir,
require => Class[xen::dom0];
'xendomains-restore':
subsystem => 'xendomains',
setting => 'XENDOMAINS_RESTORE', value => 'true',
require => Class[xen::dom0];
}
}
class vm::guests::serial_shutdown
{
include xen::dom0
cfgfile::redhat::sysconfig {
'xendomains-shutdown':
subsystem => 'xendomains',
setting => 'XENDOMAINS_SHUTDOWN', value => '"--halt --wait"',
require => Class[xen::dom0];
}
}
# This file intentionally left (almost) blank.
/*
* Configuration for the vm::guest definition.
*
@@ -40,549 +43,7 @@
* Unlike the other variables, this is a parameter to the class,
* making it possible to override on a per-node basis.
*/
/*
* FIXME! The mixture of variables and parameters is ugly. This should
* be refactored somehow.
*/
#
# ========================================================================
# Internal helper definitions and classes below. Not for general use.
class vm::osparams
{
$osparams = {
'centos-6.x86_64' => {
'installer_kernel' => 'vmlinuz',
'installer_initrd' => 'initrd.img',
'ksdiskname' => 'xvdz',
},
'centos-7.x86_64' => {
'installer_kernel' => 'vmlinuz',
'installer_initrd' => 'initrd.img',
'ksdiskname' => 'hdd',
},
}
$xen_anaconda_dir = '/var/lib/xen/anaconda'
$xen_kickstart_dir = '/var/lib/xen/kickstart'
}
class vm::osinst::centos_6::x86_64
{
include vm::siteconfig
include vm::osparams
$ostag = 'centos-6.x86_64'
$osparams = $vm::siteconfig::osparams[$ostag]
$repolist = $vm::siteconfig::osrepos[$ostag]
$mirrorsrc = inline_template(
'<%= @repolist[0].split(nil, 2)[-1] %>/images/pxeboot')
$mirrordst = "${vm::osparams::xen_anaconda_dir}/${ostag}"
file {
$mirrordst:
ensure => directory,
owner => 'root', group => 'root', mode => '0755';
}
mirror::lftp {
"vm-anaconda-${ostag}":
source => "${mirrorsrc}/.",
target => $mirrordst,
creates => "${mirrordst}/${osparams['installer_kernel']}",
hour => 2, minute => 10,
require => File[$mirrordst];
}
}
class vm::osinst::centos_7::x86_64
{
include vm::siteconfig
include vm::osparams
$ostag = 'centos-7.x86_64'
$osparams = $vm::siteconfig::osparams[$ostag]
$repolist = $vm::siteconfig::osrepos[$ostag]
$mirrorsrc = inline_template(
'<%= @repolist[0].split(nil, 2)[-1] %>/images/pxeboot')
$mirrordst = "${vm::osparams::xen_anaconda_dir}/${ostag}"
file {
$mirrordst:
ensure => directory,
owner => 'root', group => 'root', mode => '0755';
}
mirror::lftp {
"vm-anaconda-${ostag}":
source => "${mirrorsrc}/.",
target => $mirrordst,
creates => "${mirrordst}/${osparams['installer_kernel']}",
hour => 2, minute => 10,
require => File[$mirrordst];
}
}