KVM in Terraform
After the last post on using KVM, I wanted a better way to manage entire sets of VMs at once while testing out new ideas. A Makefile is okay, and starting a few VMs with virsh would work, but I wanted a fully automated way to bring up and tear down VMs whenever needed. Follow along to learn how. I will be using the provider documentation found here. The code for this post can be found in my repository here.
Provider
The provider we are using is libvirt. The following code configures Terraform to use it.
terraform {
  required_providers {
    libvirt = {
      source = "dmacvicar/libvirt"
    }
  }
}

# Configure the Libvirt provider
provider "libvirt" {
  uri = "qemu:///system"
}
The cool thing here is that the uri parameter supports a few different ways to connect to the libvirt daemon; you can even connect over SSH. I will be using a local libvirt installation, but please experiment with remote installations. You can find docs on this here.
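For example, a remote hypervisor can be reached with a qemu+ssh URI. The user and host below are placeholders for illustration; you would use this in place of the local provider block (or give it an alias).

# Hypothetical remote connection over SSH (user and host are placeholders)
provider "libvirt" {
  uri = "qemu+ssh://dan@kvm-host/system"
}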
Volume Setup
The first thing we need is a volume to boot from. We can create this using the libvirt_volume resource.
resource "libvirt_volume" "ubuntu_focal_server" {
name = "ubuntu_focal_server"
source = "/tmp/ubuntu.iso"
}
resource "libvirt_volume" "test_ubuntu" {
name = "test_ubuntu.qcow2"
size = 10000000000
}
Here we are using the Ubuntu 20.04 server ISO and creating a blank 10 GB (10,000,000,000 byte) drive to install it onto. We will use both volumes in our domain next.
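Once applied, you can confirm the volumes exist outside of Terraform with virsh. This assumes the provider put them in the default storage pool, which is where volumes land when no pool is specified.

# List the volumes the provider created (run after terraform apply)
virsh vol-list --pool default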
VM Creation
Finally, let's create the VM. To do this we define a libvirt_domain resource. The following will create the VM with the volumes we made earlier.
resource "libvirt_domain" "default" {
name = "testvm-ubuntu"
vcpu = 2
memory = 2048
running = false
disk {
volume_id = libvirt_volume.ubuntu_focal_server.id
}
disk {
volume_id = libvirt_volume.test_ubuntu.id
}
network_interface {
bridge = "virbr0"
}
}
Here we set 2 vCPUs with 2 GB of memory. The running parameter determines whether the VM is powered on; we set it to false so that we can test creation first. The disk blocks reference the IDs of the volumes we created earlier. Notice that we are using the bridge from the last post, which will allow us to SSH into the VM if required.
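As an aside, since the original goal was managing whole sets of VMs, Terraform's count meta-argument can stamp out several domains at once. The sketch below is only illustrative (the resource names are made up) and is not part of the single-VM config that continues below.

# Sketch: three identical test VMs, each with its own blank disk
resource "libvirt_volume" "test_set" {
  count = 3
  name  = "test_set_${count.index}.qcow2"
  size  = 10000000000
}

resource "libvirt_domain" "test_set" {
  count   = 3
  name    = "testvm-set-${count.index}"
  vcpu    = 2
  memory  = 2048
  running = false

  disk {
    volume_id = libvirt_volume.test_set[count.index].id
  }

  network_interface {
    bridge = "virbr0"
  }
}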
Putting it Together
Putting all of the code together, we get this file.
terraform {
  required_providers {
    libvirt = {
      source = "dmacvicar/libvirt"
    }
  }
}

# Configure the Libvirt provider
provider "libvirt" {
  uri = "qemu:///system"
}

resource "libvirt_volume" "ubuntu_focal_server" {
  name   = "ubuntu_focal_server"
  source = "/tmp/ubuntu.iso"
}

resource "libvirt_volume" "test_ubuntu" {
  name = "test_ubuntu.qcow2"
  size = 10000000000
}

resource "libvirt_domain" "default" {
  name    = "testvm-ubuntu1"
  vcpu    = 2
  memory  = 2048
  running = false

  disk {
    volume_id = libvirt_volume.ubuntu_focal_server.id
  }

  disk {
    volume_id = libvirt_volume.test_ubuntu.id
  }

  network_interface {
    bridge = "virbr0"
  }
}
Now run terraform to create the VM.
terraform init
terraform apply
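Later, when you are done with the VM, the same config also handles teardown: terraform destroy removes the domain and the volumes it created.

terraform destroy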
Checking Creation
Let's check that the VM was created. Using virsh you can see the VMs that are currently shut off.
dan@ubuntu:~$ virsh list --all
 Id   Name             State
------------------------------
 -    testvm-ubuntu1   shut off   <-- here it is
Start the VM
To start the VM, set running = true and rerun terraform apply.
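If editing the file each time feels clunky, one option (a small sketch, not from the original config) is to drive running from a variable and pass the value on the command line.

variable "running" {
  type    = bool
  default = false
}

# Inside the libvirt_domain resource, use:
#   running = var.running
#
# Then power the VM on or off from the CLI:
#   terraform apply -var="running=true"
#   terraform apply -var="running=false"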
Now recheck to see if it’s running.
dan@ubuntu:~$ virsh list --all
 Id   Name             State
------------------------------
 1    testvm-ubuntu1   running
Connecting to the VM
To connect to the VM, you can run virt-viewer and select the VM you want to connect to. You can install the OS from here as well.
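For example, assuming the VM name from above, virt-viewer can attach directly to the domain's console:

virt-viewer --connect qemu:///system testvm-ubuntu1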
Final Notes
During the install you can set a static IP to make SSH easier. Make sure to remove the ISO image afterwards so that the VM boots from the new installation. You can now use this installed Ubuntu disk as a base image for new volumes, so a full install won't be required for more 20.04 servers.
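As a rough sketch of that last point, the provider can clone a volume from an existing one via base_volume_id. The resource name here is made up, and it assumes the installed test_ubuntu volume is the base you want to reuse.

# Sketch: clone a new disk from the installed Ubuntu volume instead of reinstalling
resource "libvirt_volume" "ubuntu_clone" {
  name           = "ubuntu_clone.qcow2"
  base_volume_id = libvirt_volume.test_ubuntu.id
}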
Issues While Writing
I ran into a couple of issues with this. The first was a permissions error when creating a domain. This was fixed by setting security_driver = "none" in /etc/libvirt/qemu.conf and then restarting the daemon:
systemctl restart libvirtd
Another thing I had to do was use virsh to stop and start the VMs.
# force the VM off (this does not delete it)
virsh destroy vm-name
# start the VM
virsh start vm-name