Virtualization

Platform virtualization

Software-based virtualization (simulation)

Create an empty disk image and then install Fedora onto it, running the procedure in a qemu simulator:

$ qemu-img create -f qcow2 disk.qcow2 4G
$ qemu-system-x86_64 -hda disk.qcow2 \
      -cdrom Fedora-20-x86_64-netinst.iso \
      -boot d \
      -net nic \
      -net user \
      -m 1024

To accelerate qemu when virtualizing the same platform as the host, first use modprobe to load the appropriate KVM modules, and then add the --enable-kvm option to the qemu-system-x86_64 command above.
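
For example, on a host with an Intel CPU (use the kvm_amd module instead on AMD systems), enabling acceleration might look like this:

$ sudo modprobe kvm_intel
$ qemu-system-x86_64 --enable-kvm -hda disk.qcow2 -m 1024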

You might want to run qemu with -nographic when running on a computer with no graphical console. For this to work, the hosted kernel must use the serial device as its console. You can arrange for this by passing console=ttyS0 on the hosted kernel’s command line, likely by editing your bootloader’s configuration.
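
For example, on a Fedora-family guest (an assumption about the guest OS), one way to arrange this is:

$ grubby --update-kernel=ALL --args="console=ttyS0"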

You can also set the simulated host's MAC address by using -net nic,macaddr=aa:bb:cc:dd:ee:ff.

Another option allows you to configure a network between two simulated hosts without root access on the computer running QEMU. Start one with -device e1000,netdev=n1,mac=52:54:00:12:34:56 -netdev socket,id=n1,listen=:1024, and start the other with -device e1000,netdev=n1,mac=52:54:00:12:34:57 -netdev socket,id=n1,connect=:1024.
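
For example, reusing the disk image from above, the listening side might be started like this (the second simulated host is started the same way, substituting the connect=:1024 variant):

$ qemu-system-x86_64 -hda disk.qcow2 -m 1024 \
      -device e1000,netdev=n1,mac=52:54:00:12:34:56 \
      -netdev socket,id=n1,listen=:1024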

Simulating other architectures

Qemu can simulate one architecture on another. For example, qemu can facilitate experimenting with the RISC-V architecture on an AMD64 computer. Fedora provides RISC-V kernels and disk images that are suitable for running in qemu at https://dl.fedoraproject.org/pub/alt/risc-v/repo/virt-builder-images/images/. After gathering and uncompressing a related pair of .elf and .raw files, you can boot them using qemu by running:

$ qemu-system-riscv64 -nographic \
        -machine virt \
        -smp 4 \
        -m 4G \
        -kernel riscv.elf \
        -bios none \
        -object rng-random,filename=/dev/urandom,id=rng0 \
        -device virtio-rng-device,rng=rng0 \
        -device virtio-blk-device,drive=hd0 \
        -drive file=riscv.raw,format=raw,id=hd0 \
        -device virtio-net-device,netdev=usernet \
        -netdev user,id=usernet,hostfwd=tcp::10000-:22

(Replace riscv.elf and riscv.raw with the names of the files you downloaded.)
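
The download and decompression step might look like the following, where FILE is a placeholder for whichever image pair you choose (the repository's actual file names differ, and this assumes the raw image is xz-compressed):

$ wget https://dl.fedoraproject.org/pub/alt/risc-v/repo/virt-builder-images/images/FILE.elf
$ wget https://dl.fedoraproject.org/pub/alt/risc-v/repo/virt-builder-images/images/FILE.raw.xz
$ unxz FILE.raw.xz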

“Real” networking in qemu

Qemu can easily simulate a network connection in userspace with the help of the host computer, but this approach has limitations. Sometimes it is helpful to tie the simulated computer’s network adapter into the host computer kernel’s view of networking. This is done using bridge and tap interfaces. Assuming the host computer uses NetworkManager, define a bridge interface by creating a file such as /etc/NetworkManager/system-connections/br0.nmconnection:

[connection]
id=br0
type=bridge
interface-name=br0

[bridge]
stp=false

[ipv4]
method=auto

Ensure /etc/NetworkManager/system-connections/br0.nmconnection is readable only by root. Next, configure a physical interface to be a member of the bridge, such as by editing /etc/NetworkManager/system-connections/enp3s0f0.nmconnection:

[connection]
id=enp3s0f0
type=ethernet
interface-name=enp3s0f0
master=br0
slave-type=bridge

In this way, the bridge can obtain an IPv4 address through the physical interface, which is defined to be a member of the bridge.

Next, create a tap interface for the simulated host, and add it to the bridge by running these commands:

$ tunctl -t tap0 -u root
$ brctl addif br0 tap0
$ ifconfig tap0 up
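
If tunctl and brctl are not installed, the iproute2 tools provide a rough equivalent:

$ ip tuntap add dev tap0 mode tap user root
$ ip link set tap0 master br0
$ ip link set tap0 up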

Finally, start the simulated host and associate it with the tap device by running:

$ qemu-system-riscv64 -nographic \
        -machine virt \
        -smp 4 \
        -m 4G \
        -kernel riscv.elf \
        -bios none \
        -object rng-random,filename=/dev/urandom,id=rng0 \
        -device virtio-rng-device,rng=rng0 \
        -device virtio-blk-device,drive=hd0 \
        -drive file=riscv.raw,format=raw,id=hd0 \
        -device e1000,netdev=net0,mac=aa:bb:cc:dd:ee:ff \
        -netdev tap,id=net0,ifname=tap0,script=no,downscript=no

Notice the -device and -netdev options have changed from the earlier example.

Xen

Running OpenWrt as a Xen HVM DomU guest

The following Xen DomU configuration defines a guest named OpenWrt:

name    = "OpenWrt"
memory  =  1024
vcpus   =  1
builder = "hvm"
vif     = [ "model=e1000,script=vif-bridge" ]
disk    = [ "tap2:tapdisk:aio:/path/to/openwrt-x86-generic-combined-ext4.img,xvda,w" ]
serial  = "pty"

To select a network bridge on a host that has more than one configured, add a statement of the form bridge=brname to the list of network parameters. To hard-code an Ethernet MAC address, add mac=MAC-ADDRESS.
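
For example, a vif statement that selects a bridge named xenbr0 (an assumed name) and hard-codes a MAC address might look like this:

vif = [ "model=e1000,script=vif-bridge,bridge=xenbr0,mac=aa:bb:cc:dd:ee:ff" ]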

Running CentOS as a Xen HVM DomU guest

The following Xen DomU configuration defines a guest named CentOS, which includes an SDL-based graphics console:

name    = "CentOS"
memory  =  4096
vcpus   =  1
builder = "hvm"
vif     = [ "model=e1000,script=vif-bridge" ]
disk    = [ "tap2:tapdisk:aio:/path/to/disk.img,xvda,w" ]
serial  = "pty"
sdl     = 1

If you click on the SDL window, then the Xen interface will capture your mouse. To release the mouse, press Ctrl-Alt. Ctrl-Alt-F enters or leaves full-screen mode. Alternatively, you can omit sdl = 1 and configure GRUB to boot the Linux kernel with console=ttyS0.

Running OpenBSD as a Xen HVM DomU guest

The following Xen DomU configuration defines a guest named OpenBSD:

name    = "OpenBSD"
memory  =  4096
vcpus   =  1
builder = "hvm"
vif     = [ "model=e1000,script=vif-bridge" ]
disk    = [ "tap2:tapdisk:aio:/path/to/disk.img,xvda,w" ]
serial  = "pty"
sdl     = 1

See the description of CentOS above for how to use the SDL console. Alternatively, you can omit sdl = 1 and configure OpenBSD to use a serial console. To do this, add tty00 "/usr/libexec/getty std.9600" vt220 on secure to /etc/ttys and add:

stty com0 19200
set tty com0

to /etc/boot.conf.

Networking

The Xen domain configurations above assume bridged networking. This requires some configuration on the host. The examples here assume the use of NetworkManager.

Bridged

You can set up a network bridge on Dom0 by defining a bridge interface, for example by creating /etc/NetworkManager/system-connections/xenbr0.nmconnection:

[connection]
id=xenbr0
type=bridge
interface-name=xenbr0

[ipv4]
method=auto

[ipv6]
dhcp-iaid=mac
method=auto

Replace the use of method=auto with method=link-local if you do not want the Dom0 host to obtain an IP address.

Associate a physical interface with the bridge, for example by creating /etc/NetworkManager/system-connections/bridge-slave-eno1.nmconnection:

[connection]
id=bridge-slave-eno1
type=ethernet
interface-name=eno1
master=xenbr0
slave-type=bridge

NAT

Alternatively, you can configure a Xen guest to connect to a network through Dom0 with Dom0 acting as a NAT router.

  1. Configure the guest with vif = [ "model=e1000,script=vif-nat,ip=10.0.0.1/32,gatewaydev=INTERFACE" ], where INTERFACE is the network interface which links to your default Internet router.
  2. Add net.ipv4.ip_forward=1 to /etc/sysctl.conf on Dom0, and run sysctl -p.
  3. Run iptables -t nat -A POSTROUTING -o INTERFACE -j MASQUERADE, where INTERFACE is the interface from step one. (If you use firewalld, then run firewall-cmd --add-masquerade instead.)
  4. Boot the guest and set its IP address to 10.0.0.1, its default gateway to 10.0.0.129 (Dom0’s virtual interface), and its DNS resolver to a valid server.
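
On a Linux guest, for example, step four might be carried out manually like this (the interface name eth0 and the /24 mask are assumptions, and the nameserver shown is only an example):

$ ip addr add 10.0.0.1/24 dev eth0
$ ip link set eth0 up
$ ip route add default via 10.0.0.129
$ echo "nameserver 8.8.8.8" > /etc/resolv.conf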

Boot from an installation CD-ROM

Add the following to your Xen DomU guest configuration:

disk = [ "tap2:tapdisk:aio:/path/to/cdrom.iso,hdc:cdrom,r" ]

You might want to instead add this entry to an existing disk list, as this will provide access to both the virtual CD-ROM and the disk.
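
For example, a disk list that provides both the installation CD-ROM and the OpenWrt disk from earlier might look like this (you may also need boot = "d" to boot from the CD-ROM first):

disk = [ "tap2:tapdisk:aio:/path/to/openwrt-x86-generic-combined-ext4.img,xvda,w",
         "tap2:tapdisk:aio:/path/to/cdrom.iso,hdc:cdrom,r" ]
boot = "d"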

Pass an entire logical volume into a Xen guest

If you have an entire logical volume on Dom0 set aside for the guest, then you can pass it to the guest with the following configuration fragment:

disk = [ "phy:/dev/mapper/lv-name,xvdb,w" ]

Pass a USB device into a Xen guest

Add the following to your Xen DomU guest configuration:

usb       = 1
usbdevice = "host:xxxx:yyyy"

or

usb       = 1
usbdevice = "host:x.y"

In the first example, xxxx:yyyy represents the USB device’s vendor and product IDs. In the second example, x.y represents the USB device’s bus and device numbers. You can learn these identifiers by using lsusb.
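
For example, given an lsusb line like the following (illustrative output), the vendor and product IDs yield host:8087:0024 and the bus and device numbers yield host:1.2:

Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub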

Ensuring DomU virtual machines start after booting Dom0

  1. Place the configurations which you want to start upon booting in /etc/xen/.
  2. Make a symlink for each configuration from /etc/xen/ to /etc/xen/auto/.
  3. Run systemctl enable xendomains to ensure the xendomains script executes when Dom0 boots.
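
For example, to auto-start the OpenWrt guest defined earlier (assuming its configuration is saved as /etc/xen/openwrt.cfg):

$ ln -s /etc/xen/openwrt.cfg /etc/xen/auto/openwrt.cfg
$ systemctl enable xendomains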

OpenStack

Extracting a disk image that can be imported into other virtualization platforms

  1. Generate a snapshot of a running instance.
  2. Run glance image-list to find the identifier of the snapshot you want to extract.
  3. Run glance image-download ID --file FILENAME.qcow2

Create an OpenStack image from an image on disk

In order for a disk image to interact fully with OpenStack it must contain a few utilities. On Fedora, the acpid, cloud-init, and cloud-utils-growpart packages provide them. Enable the acpid service, edit /etc/cloud/cloud.cfg accordingly (pay attention to the default username), and add NOZEROCONF=yes to /etc/sysconfig/network. Also ensure an SSH server is present.
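
On Fedora, that preparation might look like this:

$ dnf install acpid cloud-init cloud-utils-growpart
$ systemctl enable acpid
$ echo "NOZEROCONF=yes" >> /etc/sysconfig/network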

Additionally, the disk image must include the virtual I/O drivers in its initial ramdisk. Edit /etc/dracut.conf.d/openstack.conf, and add add_drivers="virtio_blk virtio_gpu". Run dracut --regenerate-all --force on the computer to update its existing initial ramdisks to reflect this. You can inspect an initial ramdisk by running lsinitrd /boot/initramfs-VERSION.img.

To load the image into OpenStack, run glance image-create --name NAME --visibility=private --disk-format=qcow2 --container-format=bare --file=IMAGE-FILE.qcow2.

Manage OpenStack quotas

  1. View quotas for a project using openstack quota show $(openstack project show -f value -c id PROJECT).
  2. Set a quota using openstack quota set --QUOTANAME N $(openstack project show -f value -c id PROJECT).
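
For example, to raise the instance quota to 20 for a project named myproject (a hypothetical name):

$ openstack quota set --instances 20 $(openstack project show -f value -c id myproject)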

VirtualBox

Share a folder from host to Linux guest

  1. Select guest Settings→Shared Folders.
  2. Add the folder on your host which you would like to share with your guest; remember the folder name.
  3. Ensure the VirtualBox Guest Additions are installed on the guest.
  4. On the Linux guest, run mount -t vboxsf folder-name mount-point.
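
To mount the share automatically whenever the guest boots, you could instead add a line like this to the guest’s /etc/fstab (the share name and mount point are assumptions):

folder-name   /mnt/folder-name   vboxsf   defaults   0 0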

Pass a USB device from host to Linux guest

  1. If you need USB 2 and 3 support, then install the VirtualBox extension pack from Oracle on the host: sudo VBoxManage extpack install path-to-extpack.
  2. Add the user running VirtualBox to the vboxusers group: sudo gpasswd -a $USER vboxusers. You might need to log out and log back in for this change to take effect.
  3. After booting the guest, look for the USB icon in VirtualBox’s guest control panel at the bottom of the guest’s window. Right click on it to select a USB device to pass through.

You might want to always pass a certain USB device to the guest. To do this, first identify the device’s properties using VBoxManage list usbhost, and then create a filter using the interface at guest Settings→USB.

Disk images

  • Convert a raw disk image such that it can be used with VirtualBox or VMware: qemu-img convert -f raw FOO.img -O vmdk FOO.vmdk (This will allow the use of an OpenWrt image such as openwrt-x86-generic-combined-ext4.img.gz if you uncompress it first.)
  • Create a sparse QCOW image for use with Xen: qcow-create $((1024*1024)) vm-disk.qcow

Eucalyptus

Administrative commands

  • Reset the password on a Eucalyptus account: euare-usermodloginprofile --as-account ACCOUNT-NAME -u admin -p "PASSWORD".
  • List the instances: euca-describe-instances verbose
  • List the security groups: euca-describe-groups verbose
  • List the keypairs: euca-describe-keypairs verbose
  • List the snapshots: euca-describe-snapshots verbose
  • List the volumes: euca-describe-volumes verbose

Publishing base images

  1. Create a disk image containing an OS install; here we use fedora-37.img as an example.
  2. With root privileges, run euca-import-volume --format raw --availability-zone ZONE --bucket fedora-37-3gb-ebs --description "Fedora 37 3 GB EBS" fedora-37.img.
  3. Run euca-describe-conversion-tasks import-vol-ID, where ID is the value reported by the previous step.
  4. Run euca-create-snapshot vol-ID.
  5. Run euca-register --name "Fedora37-3GB-EBS" --snapshot snap-ID -a x86_64 --root-device-name /dev/sda --description "Fedora 37 3 GB EBS", where ID is the value reported by the previous step.
  6. Edit the image’s details, and set the access controls to “public”.

As an alternative to steps 1–4, you can snapshot an existing instance’s volume.

As written, the commands above will make an image available from within the administrator account, and the administrator can elect to mark it public and thus available to other accounts. Adding the -I ACCESS-KEY and -S SECRET-KEY arguments (and additionally the --owner-akid=ACCESS-KEY and --owner-sak=SECRET-KEY, in the case of euca-import-volume) will instead associate the image with another account. You can generate access and secret keys using Eucalyptus’s “Security Credentials” feature within the “Users” panel.

Extracting a true disk image that can be imported into other virtualization platforms

  1. Inspect the instance to discover the volume name.
  2. Stop the instance.
  3. Find the instance’s volume file, which should exist in /var/lib/eucalyptus/volumes/.
  4. Copy the volume file to a computer that has the utilities required by the remaining steps.
  5. Associate the volume file with a loopback device by running losetup -f -P VOLUME-FILE.
  6. Associate the loopback device with the computer’s LVM subsystem by running pvscan --cache and vgchange -ay.
  7. Run losetup and pvdisplay and observe the links in /dev/mapper/ to identify the associations between disk images, loopback devices, physical devices, volume groups, and device mappings.
  8. Extract the disk image using dd if=/dev/dm-N of=IMAGE, where N is the correct device number.
  9. Make use of the disk image. It should boot in QEMU, for example. If it makes use of cloud-init, then a full virtualization suite should be able to set up SSH keys and other material after booting a virtual machine based around the disk image.
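
For example, a quick test boot under QEMU, reusing options shown earlier in this document, might look like this:

$ qemu-system-x86_64 --enable-kvm -m 2048 -hda IMAGE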