Virtualization in the cloud

note

This lab has two sections - one for native virtualization, and one for cloud virtualization in OpenStack. For the second section you will create the virtual machines from the command line, and not the web interface.

Native virtualization

Setup

We will be using a virtual machine in the faculty's cloud.

When creating a virtual machine in the Launch Instance window:

  • Name your VM using the following convention: scgc_lab<no>_<username>, where <no> is the lab number and <username> is your institutional account.
  • Select Boot from image in the Instance Boot Source section
  • Select SCGC Template in the Image Name section
  • Select the m1.xlarge flavor.
info

There will not be a zip archive for this lab. We will work using the existing virtual machine disk images.

Preparation - using X11 forwarding

Please make sure to enable X11 forwarding on all SSH connections. Use the -X parameter when running ssh.

tip

Activating compression for the video stream can improve performance. You can use compression by also appending the -C parameter to the SSH command. You can get more details in the Working with OpenStack lab.
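
For example, a connection with both X11 forwarding and compression enabled might look like this (a sketch; replace the address with your virtual machine's IP):

ssh -X -C student@<vm-ip>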

If you intend to use the root account to run the commands in this lab, you must fetch the xauth token created for the student user.

student@lab-virt-host:~$ sudo -i
root@lab-virt-host:~$ xauth merge /home/student/.Xauthority
info

An alternative to SSH tunneling or X11 forwarding is Chrome Remote Desktop, which allows you to connect to the graphical interface of your VM.

If you want to use this method, follow the steps from here.

Managing virtual machines with KVM

Computational centers use virtualization on a large scale because it provides flexibility in managing compute resources. To improve performance in virtualized environments, processor vendors have introduced hardware features and dedicated instructions that allow guest operating systems to run unmodified and without interruption. The software component responsible for mediating this interaction between the hardware and the guest operating system is called a hypervisor.

KVM stands for "Kernel-based Virtual Machine" and is a kernel-level hypervisor that implements native virtualization. In this lab, we will explore using this virtualization solution to handle various use cases.

First of all, we must verify that the underlying hardware supports native virtualization. The virtualization extensions' name depends on the hardware manufacturer:

  • INTEL: VMX (Virtual Machine eXtensions)
  • AMD: SVM (Secure Virtual Machine)

Verify that the system supports virtualization

To verify that the processor supports the hardware extensions we can run the following command:

student@lab-virt-host:~$ grep -E 'vmx|svm' /proc/cpuinfo
flags : fpu vme [...] vmx ssse3 [...]

The flags section must include vmx (Virtual Machine eXtensions) on Intel systems, or svm (Secure Virtual Machine) on AMD systems to be able to fully take advantage of virtualization.

To use KVM we need to install the qemu-kvm package that contains the qemu userspace tool. qemu can be used to create and manage virtual machines by interacting with the kernel module of the hypervisor.

student@lab-virt-host:~$ sudo apt update
student@lab-virt-host:~$ sudo apt install qemu-kvm

Before we can start a virtual machine, the KVM kernel module must be loaded. Verify that it is loaded:

student@lab-virt-host:~$ lsmod | grep kvm
kvm_intel 282624 0
kvm 663552 1 kvm_intel

qemu is able to emulate or virtualize multiple processor architectures. As you can see in the output of the command above, the kvm_intel module is loaded in addition to the kvm module, which means that this machine can currently run x86 guests using KVM. On other platforms, a different vendor- or architecture-specific module is loaded instead (e.g., kvm_amd on AMD systems). Loading the KVM kernel module creates the /dev/kvm character device, which is used to communicate with the hypervisor through ioctl operations:

student@lab-virt-host:~$ ls -l /dev/kvm
crw-rw---- 1 root kvm 10, 232 Feb 30 15:06 /dev/kvm

We will use the kvm command to start a virtual machine. The user that starts the virtual machine must either be root or be a part of the group that owns the /dev/kvm character device (the kvm group in our case).
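
If you prefer not to prefix every command with sudo, one option (a sketch; the group name matches the owner of /dev/kvm shown above, and you must log out and back in for the change to apply) is to add the student user to the kvm group:

student@lab-virt-host:~$ sudo usermod -aG kvm student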

Note

From this point on, most commands will require running as the root user. The text will highlight this using sudo, but you can switch users to root as mentioned above.

Starting a virtual machine

Let's create a virtual machine that has 512MB of RAM (the -m parameter), 2 virtual CPU cores (the -smp parameter) and a virtual disk backed by the debian-12.qcow2 disk image (the -hda parameter):

student@lab-virt-host:~/work$ sudo kvm -hda debian-12.qcow2 -m 512 -smp 2
qemu-system-x86_64: warning: dbind: Couldn't connect to accessibility bus: Failed to connect to socket 0000a: Connection refused
qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.80000001H:ECX.svm [bit 2]
qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.80000001H:ECX.svm [bit 2]

If the command executes successfully, a new window should be shown on your system and you can see the guest's output.

Warnings

You may see some warning messages when running the kvm command. These messages can usually be ignored.

We can inspect the processes / threads created by kvm to see how it manages the virtual machine. After opening a new terminal, check the KVM threads by running the following command:

student@lab-virt-host:~/work$ ps -efL | grep kvm
root 5368 5344 5368 0 1 00:50 pts/1 00:00:00 sudo kvm -m 512 -smp 2 -hda debian-12.qcow2
root 5369 5368 5369 4 5 00:50 pts/1 00:00:00 qemu-system-x86_64 -enable-kvm -m 512 -smp 2 -hda debian-12.qcow2
root 5369 5368 5370 0 5 00:50 pts/1 00:00:00 qemu-system-x86_64 -enable-kvm -m 512 -smp 2 -hda debian-12.qcow2
root 5369 5368 5371 1 5 00:50 pts/1 00:00:00 qemu-system-x86_64 -enable-kvm -m 512 -smp 2 -hda debian-12.qcow2
root 5369 5368 5374 87 5 00:50 pts/1 00:00:09 qemu-system-x86_64 -enable-kvm -m 512 -smp 2 -hda debian-12.qcow2
root 5369 5368 5375 0 5 00:50 pts/1 00:00:00 qemu-system-x86_64 -enable-kvm -m 512 -smp 2 -hda debian-12.qcow2
Inspect

Stop the virtual machine by pressing CTRL+C in the terminal that you started it in. Start a new virtual machine with 4 virtual CPU cores and compare the number of threads. How do you explain the difference?
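
A possible sequence for this comparison (a sketch, reusing the same disk image; run the second command from another terminal while the virtual machine is running):

student@lab-virt-host:~/work$ sudo kvm -hda debian-12.qcow2 -m 512 -smp 4
student@lab-virt-host:~/work$ ps -efL | grep qemu-system | grep -v grep | wc -l

Since each virtual CPU is serviced by its own host thread, expect roughly one extra thread per additional core.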

Display export via VNC

When interacting with virtual machines, we do not usually want to start them in the foreground. Instead, the virtual machine is started in the background and in case we need to access its terminal, we connect to its console. Using the -vnc option, kvm will start a VNC server and export the virtual machine's console through it.

student@lab-virt-host:~/work$ sudo kvm -m 512 -smp 2 -hda debian-12.qcow2 -vnc :1
qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.80000001H:ECX.svm [bit 2]
qemu-system-x86_64: warning: host doesn't support requested feature: CPUID.80000001H:ECX.svm [bit 2]

When starting the virtual machine like this, its console is not displayed, but the process still runs in the foreground. To avoid this, we add the --daemonize parameter:

student@lab-virt-host:~/work$ sudo kvm -m 512 -smp 2 -hda debian-12.qcow2 -vnc :1 --daemonize

The -vnc :1 parameter starts a VNC server on the first VNC port.

Connect to the VNC server

Find the port that the VNC server uses and connect to it. You can start a VNC client on the server using the vncviewer command, provided by the xtightvncviewer package.

Hint: You can find the VNC port by inspecting the listening TCP ports.
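
One possible approach (a sketch; QEMU maps VNC display :N to TCP port 5900 + N, so display :1 corresponds to port 5901):

student@lab-virt-host:~$ sudo ss -tlnp | grep qemu
student@lab-virt-host:~$ vncviewer localhost:1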

Virtual machine disk storage

In the previous section we started a virtual machine using an already existing disk image - debian-12.qcow2. The qcow2 format ("QEMU Copy-on-Write", version 2) allows us to create multiple layered images on top of a read-only base image. Using debian-12.qcow2 as the base, we will create a new qcow2 image for each virtual machine that we want to start; that image will host all changes made by the specific virtual machine. Examples of how to create this layered setup are shown in the following sections.

Creating a new disk image

To start, we will create a new qcow2 image that we will use for a new virtual machine, on which we will install an operating system from an ISO image. Create the disk image using the qemu-img tool (if it is not already installed, install the qemu-utils package).

student@lab-virt-host:~/work$ qemu-img create -f qcow2 virtualdisk.qcow 2G
Formatting 'virtualdisk.qcow', fmt=qcow2 size=2147483648 cluster_size=65536 lazy_refcounts=off refcount_bits=16

The first argument of qemu-img is the subcommand that we want to use, in this case it is create. When creating a new image you must specify its format (using the -f parameter), name and maximum size (2G).

The installation process requires an installation medium (in ISO format). Begin by downloading the latest Debian installer ISO, SHA512 checksums and signature files from the Debian download page. Verify the ISO's integrity and that the checksum is signed using an official signature.

student@lab-virt-host:~/work$ sha512sum -c --ignore-missing SHA512SUMS
debian-11.3.0-amd64-netinst.iso: OK
student@lab-virt-host:~/work$ gpg --keyserver keyring.debian.org --receive-keys 0x11CD9819
gpg: /home/student/.gnupg/trustdb.gpg: trustdb created
gpg: key DA87E80D6294BE9B: public key "Debian CD signing key <debian-cd@lists.debian.org>" imported
gpg: Total number processed: 1
gpg: imported: 1
student@lab-virt-host:~/work$ gpg --verify SHA512SUMS.sign
gpg: assuming signed data in 'SHA512SUMS'
gpg: Signature made Sat 26 Mar 2022 09:22:41 PM UTC
gpg: using RSA key DF9B9C49EAA9298432589D76DA87E80D6294BE9B
gpg: Good signature from "Debian CD signing key <debian-cd@lists.debian.org>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: DF9B 9C49 EAA9 2984 3258 9D76 DA87 E80D 6294 BE9B

After the ISO disk image has been verified, you can start a new virtual machine that uses it as a CD-ROM drive via the -cdrom argument:

student@lab-virt-host:~/work$ sudo kvm -hda virtualdisk.qcow -smp 2 -m 512 -cdrom debian-11.3.0-amd64-netinst.iso

The virtual machine will boot from the CD because the disk image that we have created above does not have a bootloader. You can continue the installation process as normal.

Note

You can stop the installation process after it begins.

Adding a new disk image

KVM is able to use multiple disk images on a single virtual machine.

For this task we will start a new virtual machine with the debian-12.qcow2 image as its primary boot device. Create an additional 1GB qcow2 disk image and include it in the virtual machine's parameters. Hint: use the -hdb parameter.
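
A possible sequence (a sketch; the name of the additional image is arbitrary):

student@lab-virt-host:~/work$ qemu-img create -f qcow2 extradisk.qcow2 1G
student@lab-virt-host:~/work$ sudo kvm -hda debian-12.qcow2 -hdb extradisk.qcow2 -m 512 -smp 2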

Inspect the size of the disk image. Notice that it is initially small; the qcow2 format expands the file only as data is written to it.

student@lab-virt-host:~/work$ du -sh image-name.qcow2
196K image-name.qcow2
Format the disks

After the virtual machine finishes booting check what block devices are available. Create two 500MB partitions on the second disk (the one you have created earlier) and format them using the ext4 filesystem. Mount both partitions and create 100MB files on each of them.
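
Inside the guest, the steps might look roughly like this (a sketch; it assumes the additional disk shows up as /dev/sdb and that two 500MB partitions, sdb1 and sdb2, are created interactively with fdisk):

root@guest:~# lsblk
root@guest:~# fdisk /dev/sdb
root@guest:~# mkfs.ext4 /dev/sdb1
root@guest:~# mkfs.ext4 /dev/sdb2
root@guest:~# mkdir -p /mnt/part1 /mnt/part2
root@guest:~# mount /dev/sdb1 /mnt/part1
root@guest:~# mount /dev/sdb2 /mnt/part2
root@guest:~# dd if=/dev/zero of=/mnt/part1/file bs=1M count=100
root@guest:~# dd if=/dev/zero of=/mnt/part2/file bs=1M count=100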

Inspect the size of the disk image on the host system and then stop the virtual machine.

Creating a disk image based on a base image

The copy-on-write feature of the qcow2 disk format allows reusing a base disk image in multiple virtual machines without overwriting the contents of the base file. This means that we can create a template file and then run multiple virtual machines without copying the template for each one of them.

For this task we aim to create two virtual machines from the same debian-12.qcow2 image. Before being able to do this, we must first create a disk image based on debian-12.qcow2 for each of the virtual machines.

student@lab-virt-host:~/work$ qemu-img create -f qcow2 -b debian-12.qcow2 sda-vm1.qcow2
Formatting 'sda-vm1.qcow2', fmt=qcow2 size=8589934592 backing_file=debian-12.qcow2 cluster_size=65536 lazy_refcounts=off refcount_bits=16
student@lab-virt-host:~/work$ du -sh sda-vm1.qcow2
196K sda-vm1.qcow2

Create an additional disk image for the second virtual machine, called sda-vm2.qcow2. Start both virtual machines using the newly created disk images as their only attached disk.
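
One way to do this (a sketch; the VNC displays are arbitrary, chosen so the two consoles do not collide, and newer qemu-img versions may also require -F qcow2 to state the backing file format explicitly):

student@lab-virt-host:~/work$ qemu-img create -f qcow2 -b debian-12.qcow2 sda-vm2.qcow2
student@lab-virt-host:~/work$ sudo kvm -hda sda-vm1.qcow2 -m 512 -smp 2 -vnc :1 --daemonize
student@lab-virt-host:~/work$ sudo kvm -hda sda-vm2.qcow2 -m 512 -smp 2 -vnc :2 --daemonize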

Write data in the virtual disk

Create a 50MB file in the first virtual machine and inspect the size of disks created for the two virtual machines, as well as the size of the base image. What has changed?
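
One way to perform the check (a sketch; the file path inside the guest is arbitrary):

root@guest-vm1:~# dd if=/dev/zero of=/root/testfile bs=1M count=50
student@lab-virt-host:~/work$ du -sh sda-vm1.qcow2 sda-vm2.qcow2 debian-12.qcow2

Because of the copy-on-write layering, only the overlay image of the first virtual machine should grow; the base image and the second overlay remain unchanged.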

You can stop the virtual machines after inspecting the changes to the disks.

Converting between virtual disk formats

qemu-img also allows converting between various virtual machine disk formats. We may need to convert a qcow2 image to the VMDK format (the default format used by VMware) or to the VDI format (the default format used by VirtualBox), without going through the installation process again. We can use the convert subcommand to achieve this:

student@lab-virt-host:~/work$ qemu-img convert -O vdi debian-12.qcow2 debian-12.vdi

We can then inspect the image, both before and after conversion, using qemu-img info.

student@lab-virt-host:~/work$ qemu-img info debian-12.qcow2
image: debian-12.qcow2
file format: qcow2
virtual size: 8 GiB (8589934592 bytes)
disk size: 1 GiB
cluster_size: 65536
Snapshot list:
ID TAG VM SIZE DATE VM CLOCK
1 new 0 B 20XX-02-30 00:55:00 00:00:00.000
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false
student@lab-virt-host:~/work$ qemu-img info debian-12.vdi
image: debian-12.vdi
file format: vdi
virtual size: 8 GiB (8589934592 bytes)
disk size: 1.16 GiB
cluster_size: 1048576

OpenStack

This lab's tasks will be performed in the faculty's OpenStack cloud. We will create, modify and delete different cloud objects (instances, networks, subnets).

To interact with OpenStack, we will use the official OpenStack client. The client is already installed on fep.grid.pub.ro.

Resource names

All resources that you create must contain your username. Replace user.name with your own username in the following tasks. If the resource name is not specified, or is generic (e.g. vm1), append your username to it (e.g. user.name-vm1).

Authentication

All operations performed in OpenStack require authentication. As such, before using the OpenStack client, we must provide the necessary authentication parameters. This is usually done using an OpenStack RC file that defines some variables to set up a certain environment inside the shell.

OpenStack RC

To obtain your OpenStack RC file from the Horizon dashboard, go to Project → API Access, click on the Download OpenStack RC File dropdown and select OpenStack RC File.

Copy the RC file to fep

You must copy the configuration file to your home on fep.grid.pub.ro.

OpenStack RC format

The OpenStack RC file is a script file that defines and exports various shell variables that will be used by the OpenStack client. The parameters can also be passed as command line arguments, but since most of them will be the same for all commands, using exported variables is more convenient.

The command line arguments take precedence over the corresponding environment variables. The arguments usually have a similar format to their environment counterparts, but written in lowercase and with underscores (_) replaced by dashes (-) - e.g. OS_USERNAME / --os-username.
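
As an illustration, the two forms below are equivalent (a sketch; scgc_prj is a placeholder project name, and the remaining authentication variables are assumed to be already exported):

[user.name@fep8 ~]$ openstack --os-username user.name --os-project-name scgc_prj server list
[user.name@fep8 ~]$ export OS_USERNAME="user.name"
[user.name@fep8 ~]$ export OS_PROJECT_NAME="scgc_prj"
[user.name@fep8 ~]$ openstack server list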

When working with multiple projects, it is usually enough to replace the OS_PROJECT_ID and OS_PROJECT_NAME variables, while other parameters can remain the same.

Authenticating using your password

If you inspect the RC file you will notice that the default configuration reads your password as input and sets it as an environment variable. While this approach may work, storing passwords as environment variables is usually discouraged. You could opt to not use an authentication variable (the password), but if a valid authentication token is not defined, the OpenStack client would instead prompt for authentication every time it runs, which would greatly decrease ease of use.

Token-based authentication

Besides password-based authentication, OpenStack supports other authentication plugins. We will use token-based authentication, which uses tokens to only grant access to some OpenStack services. Furthermore, the token has an expiration date attached to it, which reduces the impact of a potential information leak, but also means that the token must be periodically renewed.

To use token-based authentication, you must update the RC file according to the following patch (remove the lines starting with - and add the lines starting with +, without the leading + symbol):

 # In addition to the owning entity (tenant), OpenStack stores the entity
 # performing the action as the **user**.
 export OS_USERNAME="user.name"
-# With Keystone you pass the keystone password.
-echo "Please enter your OpenStack Password for project $OS_PROJECT_NAME as user $OS_USERNAME: "
-read -sr OS_PASSWORD_INPUT
-export OS_PASSWORD=$OS_PASSWORD_INPUT
+unset OS_TOKEN
+export OS_TOKEN=$(openstack token issue --os-auth-type=password -f value -c id)
+export OS_AUTH_TYPE="token"
+unset OS_USER_DOMAIN_NAME
 # If your configuration has multiple regions, we set that information here.
 # OS_REGION_NAME is optional and only valid in certain environments.
 export OS_REGION_NAME="RegionOne"

The changes above update the authentication method - we only use password authentication to generate a new token (using openstack token issue) and set the OS_TOKEN variable to be the token returned by the command (in column id). After the token has been retrieved, we can set the authentication method to token using the OS_AUTH_TYPE variable, so subsequent commands will use the token to authenticate.

Note that we must first undefine the variable that defined the old token since the OpenStack client throws an error if one is defined when using password authentication. The OS_USER_DOMAIN_NAME must also be undefined since it is not compatible with token authentication.

After updating the RC file, source it to execute the commands inside it as if they were manually run inside the current shell. This will make the shell define and export the OpenStack variables, so child processes will inherit them. You will be asked for your password when the token issuing command runs:

[user.name@fep8 ~]$ source scgc_prj-openrc.sh
Password:

After entering your password, if no error is shown, everything should be set as expected and you will now be able to run OpenStack commands. For example, list the catalog of installed services using openstack catalog list:

[user.name@fep8 ~]$ openstack catalog list
+-------------+----------------+---------------------------------------------+
| Name | Type | Endpoints |
+-------------+----------------+---------------------------------------------+
| placement | placement | RegionOne |
| | | admin: https://cloud.grid.pub.ro:8780 |
| | | RegionOne |
| | | internal: https://cloud.grid.pub.ro:8780 |
| | | RegionOne |
| | | public: https://cloud.grid.pub.ro:8780 |
| | | |
| neutron | network | RegionOne |
| | | admin: https://cloud.grid.pub.ro:9696 |
| | | RegionOne |
| | | public: https://cloud.grid.pub.ro:9696 |
| | | RegionOne |
| | | internal: https://cloud.grid.pub.ro:9696 |
[...]
+-------------+----------------+---------------------------------------------+

Token management

Generate a new authentication token to inspect its format using the following command:

[user.name@fep8 ~]$ openstack token issue

Inspect the other options that the command accepts by appending the -h parameter.

After you finish inspecting the tokens, you can revoke them using the following command (the ID of the token to revoke must be given as a positional parameter):

[user.name@fep8 ~]$ openstack token revoke gAAAAA[...]

Resource management

OpenStack can manage various resources (e.g. virtual machines, networks, virtual machine disk images) and allow users to configure complex networks of systems with custom functionality.

Listing resources

To be able to boot a virtual machine instance, we must know the following parameters (objects):

  • image: the name or ID of the image used to boot the instance;
  • flavor: the name or ID of the flavor. The flavor defines the amount of resources reserved for the virtual machine (e.g. CPU, RAM, disk space);
  • key pair: the public SSH key to inject into the instance during the first boot;
  • network: which virtual network(s) the virtual machine will be connected to;
  • security-group: which security group (set of filter rules) to apply to the instance's networking.

For each of the above parameters we must inspect the list of resources that are available in our OpenStack tenant.

Resource UUID vs name

Most commands in OpenStack accept both a resource's name and its unique ID (UUID). In this lab we will prefer the UUID, which is also the recommended approach in automation scripts: an object's ID is guaranteed to be unique and can never change, while names can be modified (and are not always unique).
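
For example, a script might resolve a name to its ID once and then use only the ID (a sketch, using the image that appears later in this lab):

[user.name@fep8 ~]$ IMAGE_ID=$(openstack image list --name "Ubuntu 16.04 Xenial" -f value -c ID)
[user.name@fep8 ~]$ openstack image show "$IMAGE_ID"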

Images

Images are managed by the Glance service. We can list them using the following command:

[user.name@fep8 ~]$ openstack image list
+--------------------------------------+------------------------------------------+--------+
| ID | Name | Status |
+--------------------------------------+------------------------------------------+--------+
[...]
| xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx11 | SCGC Template | active |
[...]
| xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx12 | Ubuntu 16.04 Xenial | active |
[...]
+--------------------------------------+------------------------------------------+--------+

To boot the instance, we will use the Ubuntu 16.04 Xenial image. Use its specific ID, as shown in the output of the openstack image list command. We can get more information about this image using the openstack image show command:

[user.name@fep8 ~]$ openstack image show xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx12
+------------------+------------------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------------------+
| checksum | xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx |
| container_format | bare |
| created_at | 20XX-02-30T00:00:00Z |
| disk_format | qcow2 |
| file | /v2/images/xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx12/file |
| id | xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx12 |
| min_disk | 0 |
| min_ram | 0 |
| name | Ubuntu 16.04 Xenial |
| owner | xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx |
| properties | nested='false', os_distro='nested', os_hash_algo='sha512', [...] |
| protected | False |
| schema | /v2/schemas/image |
| size | 313982976 |
| status | active |
| tags | |
| updated_at | 20XX-02-30T00:00:00Z |
| virtual_size | 2361393152 |
| visibility | public |
+------------------+------------------------------------------------------------------+

Flavors

Flavors are managed by the Nova service (the compute service). We will list the available flavors using openstack flavor list:

[user.name@fep8 ~]$ openstack flavor list
+--------------------------------------+----------------+------+------+-----------+-------+-----------+
| ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+----------------+------+------+-----------+-------+-----------+
| xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx21 | m1.tiny | 512 | 8 | 0 | 1 | True |
| xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx22 | m1.xlarge | 4096 | 24 | 0 | 4 | True |
| xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx23 | m1.medium | 1536 | 16 | 0 | 1 | True |
| xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx24 | m1.large | 4096 | 16 | 0 | 2 | True |
[...]
+--------------------------------------+----------------+------+------+-----------+-------+-----------+

Let's find more information about the m1.tiny flavor, which has the ID of xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx21, using openstack flavor show:

[user.name@fep8 ~]$ openstack flavor show xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx21
+----------------------------+--------------------------------------+
| Field | Value |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| access_project_ids | None |
| description | None |
| disk | 8 |
| id | xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx21 |
| name | m1.tiny |
| os-flavor-access:is_public | True |
| properties | type='gp' |
| ram | 512 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 1 |
+----------------------------+--------------------------------------+

Key pairs

SSH key pairs are also managed by the Nova service. To list available resources, we will use the following command:

[user.name@fep8 ~]$ openstack keypair list
+------+-------------------------------------------------+------+
| Name | Fingerprint | Type |
+------+-------------------------------------------------+------+
| fep | xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx | ssh |
+------+-------------------------------------------------+------+
Visible resources

OpenStack resources are scoped at multiple levels. Despite working in a shared project with other users, you are only able to see your own SSH keys in the output of keypair list. The same is true for other resource types, although the scoping rules differ (e.g. you can see your project's private images, images shared with the project, and public images, but not another project's private images).

You can use the openstack keypair show command to get more details on the resource:

[user.name@fep8 ~]$ openstack keypair show fep
+-------------+-------------------------------------------------+
| Field | Value |
+-------------+-------------------------------------------------+
| created_at | 20XX-02-30T00:00:00.000000 |
| fingerprint | xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx |
| id | fep |
| is_deleted | False |
| name | fep |
| private_key | None |
| type | ssh |
| user_id | user.name |
+-------------+-------------------------------------------------+

Networks

Networks are managed by the Neutron service. We will use the openstack net list command to list all available networks:

[user.name@fep8 ~]$ openstack net list
+--------------------------------------+----------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+----------+--------------------------------------+
| xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx31 | vlan9 | xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx35 |
| xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx32 | demo-net | xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx36 |
| xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx33 | Net224 | xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx37 |
| xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx34 | Net240 | xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx38 |
+--------------------------------------+----------+--------------------------------------+

Let's see the available details about:

  • the vlan9 network, using openstack net show;
  • its associated subnet, using openstack subnet show.
[user.name@fep8 ~]$ openstack net show xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx31
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | nova |
| created_at | 20XX-02-30T00:00:00Z |
| description | |
| dns_domain | None |
| id | xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx31 |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| is_default | False |
| is_vlan_transparent | None |
| mtu | 1500 |
| name | vlan9 |
| port_security_enabled | True |
| project_id | xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx |
| provider:network_type | None |
| provider:physical_network | None |
| provider:segmentation_id | None |
| qos_policy_id | None |
| revision_number | 6 |
| router:external | External |
| segments | None |
| shared | True |
| status | ACTIVE |
| subnets | xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx35 |
| tags | |
| updated_at | 20XX-02-30T00:00:00Z |
+---------------------------+--------------------------------------+
[user.name@fep8 ~]$ openstack subnet show xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx35
+----------------------+--------------------------------------+
| Field | Value |
+----------------------+--------------------------------------+
| allocation_pools | 10.9.0.100-10.9.255.254 |
| cidr | 10.9.0.0/16 |
| created_at | 20XX-02-30T00:00:01Z |
| description | |
| dns_nameservers | 1.1.1.1 |
| dns_publish_fixed_ip | None |
| enable_dhcp | True |
| gateway_ip | 10.9.0.1 |
| host_routes | |
| id | xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx35 |
| ip_version | 4 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| name | vlan9-subnet |
| network_id | xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx31 |
| project_id | xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx |
| revision_number | 2 |
| segment_id | None |
| service_types | |
| subnetpool_id | None |
| tags | |
| updated_at | 20XX-02-30T00:00:02Z |
+----------------------+--------------------------------------+

Security groups

Security groups are managed by the Neutron service. We will use the following command to list them:

[user.name@fep8 ~]$ openstack security group list
+--------------------------------------+----------+------------------------+----------------------------------+------+
| ID | Name | Description | Project | Tags |
+--------------------------------------+----------+------------------------+----------------------------------+------+
| xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx41 | security | | xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx | [] |
| xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx42 | default | Default security group | xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx | [] |
+--------------------------------------+----------+------------------------+----------------------------------+------+

For a verbose description of the security group we can run openstack security group show followed by the ID of the group we want to inspect:

[user.name@fep8 ~]$ openstack security group show xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx42
+-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| created_at | 20XX-02-30T00:00:00Z |
| description | Default security group |
| id | xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx42 |
| name | default |
| project_id | xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx |
| revision_number | 7 |
| rules | created_at='20XX-02-30T00:00:20Z', direction='egress', ethertype='IPv6', id='xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx51', standard_attr_id='4801', updated_at='20XX-02-30T00:00:22Z' |
[...]
+-----------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

Booting an instance

Finally, now that we have all the required information, we can start a new instance. We will use:

  • image: Ubuntu 16.04 Xenial;
  • flavor: m1.tiny;
  • key pair: your own key pair;
  • network: vlan9;
  • security group: default;
  • name: user.name-vm.

We will run the following command to create the instance:

[user.name@fep8 ~]$ openstack server create --flavor m1.tiny --image xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx12 --nic net-id=xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx31 --security-group default --key-name fep user.name-vm
+-----------------------------+------------------------------------------------------------+
| Field | Value |
+-----------------------------+------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-STS:power_state | NOSTATE |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | |
| adminPass | AbcdEfghIJkl |
| config_drive | |
| created | 20XX-02-30T00:01:00Z |
| flavor | m1.tiny (xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx21) |
| hostId | |
| id | xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx71 |
| image | Ubuntu 16.04 Xenial (xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx12) |
| key_name | fep |
| name | user.name-vm |
| progress | 0 |
| project_id | xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx |
| properties | |
| security_groups | name='xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx42' |
| status | BUILD |
| updated | 20XX-02-30T00:01:00Z |
| user_id | user.name |
| volumes_attached | |
+-----------------------------+------------------------------------------------------------+
Observe the virtual machine's state

Follow the state of the booted instance in the Horizon web interface.

Make a note of the ID of the instance, since we will be using it later.

Instance lifecycle

In this section we will perform various operations related to the lifecycle of an instance.

Query

We can use the openstack server list command to list all virtual machine instances:

[user.name@fep8 ~]$ openstack server list
+--------------------------------------+--------------+--------+------------------+---------------------+-----------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+--------------+--------+------------------+---------------------+-----------+
| xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx71 | user.name-vm | ACTIVE | vlan9=10.9.3.125 | Ubuntu 16.04 Xenial | m1.tiny |
+--------------------------------------+--------------+--------+------------------+---------------------+-----------+

Use the openstack server show command to get details about the running instance:

[user.name@fep8 ~]$ openstack server show xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx71
+-----------------------------+------------------------------------------------------------+
| Field | Value |
+-----------------------------+------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | NCIT |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 20XX-02-30T00:01:29.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | vlan9=10.9.3.125 |
| config_drive | |
| created | 20XX-02-30T00:01:00Z |
| flavor | m1.tiny (xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx21) |
| hostId | xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx |
| id | xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx71 |
| image | Ubuntu 16.04 Xenial (xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx12) |
| key_name | fep |
| name | user.name-vm |
| progress | 0 |
| project_id | xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx |
| properties | |
| security_groups | name='default' |
| status | ACTIVE |
| updated | 20XX-02-30T00:01:29Z |
| user_id | user.name |
| volumes_attached | |
+-----------------------------+------------------------------------------------------------+
Test the connectivity

Test the connectivity to the virtual machine. Connect using SSH as the ubuntu user.
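
For example, using the address reported by openstack server list above:

[user.name@fep8 ~]$ ssh ubuntu@10.9.3.125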

Stop the instance

To stop the instance without deleting it, we can use the openstack server stop command. This is equivalent to shutting the instance down.

[user.name@fep8 ~]$ openstack server stop xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx71
[user.name@fep8 ~]$ openstack server list
+--------------------------------------+--------------+---------+------------------+---------------------+-----------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+--------------+---------+------------------+---------------------+-----------+
| xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx71 | user.name-vm | SHUTOFF | vlan9=10.9.3.125 | Ubuntu 16.04 Xenial | m1.tiny |
+--------------------------------------+--------------+---------+------------------+---------------------+-----------+

Starting an instance

After being stopped, an instance can be started using the openstack server start command:

[user.name@fep8 ~]$ openstack server start xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx71
Verify successful start

After starting the instance:

  • access Horizon and verify that the instance has started;
  • connect using SSH to check if it is still reachable.

Terminating an instance

Terminate the instance using the openstack server delete command:

[user.name@fep8 ~]$ openstack server delete xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx71
[user.name@fep8 ~]$ openstack server list
+----+------+--------+----------+-------+--------+
| ID | Name | Status | Networks | Image | Flavor |
+----+------+--------+----------+-------+--------+
+----+------+--------+----------+-------+--------+

Initial configuration

OpenStack provides a mechanism for configuring an instance at first boot. This is called a day-0 configuration and is implemented by the cloud-init module.

To use this functionality you must first create an initialization script (on fep) that will be injected into the instance and run when it boots for the first time:

[user.name@fep8 ~]$ cat day0.txt
#!/bin/bash
echo test > /tmp/test.txt

Afterwards, boot an instance using openstack server create and inject the script as user data in the virtual machine. You can use the same parameters as before.

tip

Read the documentation of the server create subcommand and find how you can inject the configuration script as user data.
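
If you want to check your approach, one possible invocation is sketched below (it assumes the same image and network IDs as in the previous sections):

[user.name@fep8 ~]$ openstack server create --flavor m1.tiny --image xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx12 --nic net-id=xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx31 --security-group default --key-name fep --user-data day0.txt user.name-vm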

Verify that the script ran

After the instance finishes booting, log in to the virtual machine and confirm that the /tmp/test.txt file has been created.

Any script, no matter how complex, can be injected in the instance using this mechanism.

Clean up

Terminate the instance before advancing to the next task.

Networking

We want to create a topology of 2 virtual machines (a client and a server), connected through a private network. Each virtual machine must also have a management connection in the vlan9 network:

+--------+          user.name-network           +--------+
| client |--------------------------------------| server |
+--------+            172.16.X.0/24             +--------+
    |                                                |
    | vlan9                                          | vlan9
    |                                                |
  • vlan9 already exists and is a provider (physical) network;
  • user.name-network will have to be created and will be a self-service network (user defined, only visible inside our own project).

Creating the network

Create the network using the following command:

[user.name@fep8 ~]$ openstack net create user.name-network
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 20XX-02-30T00:02:00Z |
| description | |
| dns_domain | None |
| id | xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx81 |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| is_default | False |
| is_vlan_transparent | None |
| mtu | 1450 |
| name | user.name-network |
| port_security_enabled | True |
| project_id | xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx |
| provider:network_type | None |
| provider:physical_network | None |
| provider:segmentation_id | None |
| qos_policy_id | None |
| revision_number | 1 |
| router:external | Internal |
| segments | None |
| shared | False |
| status | ACTIVE |
| subnets | |
| tags | |
| updated_at | 20XX-02-30T00:02:00Z |
+---------------------------+--------------------------------------+
Verify successful creation

Verify that the network has been successfully created using:

  • Horizon, access Project → Network → Networks;
  • the openstack net show command.

Creating the subnet

The next step is to create a subnet for the user.name-network. We will use the openstack subnet create command and:

  • 172.16.X.0/24 as the subnet prefix;
  • user.name-subnet for name;
  • no gateway (the virtual machines will have a gateway set through vlan9).
[user.name@fep8 ~]$ openstack subnet create user.name-subnet --network user.name-network --subnet-range 172.16.X.0/24
+----------------------+--------------------------------------+
| Field | Value |
+----------------------+--------------------------------------+
| allocation_pools | 172.16.X.2-172.16.X.254 |
| cidr | 172.16.X.0/24 |
| created_at | 20XX-02-30T00:03:00Z |
| description | |
| dns_nameservers | |
| dns_publish_fixed_ip | None |
| enable_dhcp | True |
| gateway_ip | 172.16.X.1 |
| host_routes | |
| id | xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx91 |
| ip_version | 4 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| name | user.name-subnet |
| network_id | xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx81 |
| project_id | xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx |
| revision_number | 0 |
| segment_id | None |
| service_types | |
| subnetpool_id | None |
| tags | |
| updated_at | 20XX-02-30T00:03:00Z |
+----------------------+--------------------------------------+
Verify successful creation

Verify that the subnet has been successfully created using:

  • Horizon, access Project → Network → Networks → user.name-network → Subnets;
  • the openstack subnet show command.

Boot the instances

Boot two m1.tiny instances based on Ubuntu 16.04 Xenial that are connected to both vlan9 and the newly created network.

tip

The --nic parameter can be specified multiple times.
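
A possible invocation for the first instance is sketched below (it assumes the vlan9 and user.name-network IDs listed in the previous sections; the second instance is created the same way, with a different name):

[user.name@fep8 ~]$ openstack server create --flavor m1.tiny --image xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx12 --nic net-id=xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx31 --nic net-id=xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx81 --security-group default --key-name fep user.name-vm-1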

Test connectivity

Connect to both virtual machines using SSH and request IPs through DHCP for the second network interface:

ubuntu@user.name-vm-1:~$ sudo dhclient ens4
ubuntu@user.name-vm-2:~$ sudo dhclient ens4

Verify that each instance gets the correct IP address and that you can send packets from one instance to the other through the private network.

Automatic configuration

Delete the instances and recreate them. This time, instead of manually logging in on the instances and running dhclient, do this via cloud-init.
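
A minimal user-data script for this purpose might look like the following (a sketch; it assumes the second interface is named ens4, as above):

[user.name@fep8 ~]$ cat day0-dhcp.txt
#!/bin/bash
dhclient ens4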

Clean up

Delete both virtual machines, the subnet and the network that you have previously created.

tip

Use the openstack server delete and openstack net delete commands to delete the resources.
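
For example (a sketch, using the resource names from this section; instances can also be deleted by ID):

[user.name@fep8 ~]$ openstack server delete user.name-vm-1 user.name-vm-2
[user.name@fep8 ~]$ openstack subnet delete user.name-subnet
[user.name@fep8 ~]$ openstack net delete user.name-network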

Orchestration

OpenStack allows defining complex architectures composed of multiple cloud objects in a single operation, through a mechanism called orchestration. To use orchestration we need an additional object, called a stack. The service that handles orchestration in OpenStack is called Heat.

Creating a stack

We will define a new stack that deploys three Ubuntu virtual machines at the same time. For this, go to Project → Orchestration → Stacks and click on Launch Stack. The stack will be called user.name-stack.

For Template source upload a file with the following content (substitute the parameters for name, image and key_name accordingly):

heat_template_version: 2013-05-23

resources:
  vm1:
    type: OS::Nova::Server
    properties:
      name: user.name-vm1
      image: xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx12
      flavor: m1.tiny
      key_name: fep
      networks:
        - network: vlan9

  vm2:
    type: OS::Nova::Server
    properties:
      name: user.name-vm2
      image: xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx12
      flavor: m1.tiny
      key_name: fep
      networks:
        - network: vlan9

  vm3:
    type: OS::Nova::Server
    properties:
      name: user.name-vm3
      image: xxxxxxxx-yyyy-zzzz-tttt-xxxxxxxxxx12
      flavor: m1.tiny
      key_name: fep
      networks:
        - network: vlan9
Stack password

You do not need to enter your LDAP account's password in the field with the label Password for user "user.name". The meaning of the stack parameters is presented in the Launch and manage stacks page.

Inspect the stack

After the stack is created:

  • verify that the three instances have been launched;
  • click on the stack name and inspect the associated resources;
  • suspend / resume the stack and see what happens to the instances;
  • delete the stack.

Initial configuration

With the template above as a base, create a new one that will also provision the instances with an initial configuration:

  • each instance should have apache2 installed;
  • the index.html file in /var/www/html should contain This is <VM name>.
tip

Review the software configuration page for configuration examples.
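
The provisioning script that each instance runs at first boot could look roughly like this (a sketch; it would be embedded in the template through each server's user_data property, with the name adjusted per instance):

#!/bin/bash
apt update
apt install -y apache2
echo "This is user.name-vm1" > /var/www/html/index.html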

Revoke your authentication token

Before disconnecting, revoke your authentication token:

[user.name@fep8 ~]$ openstack token revoke $OS_TOKEN