Infrastructure as Code (IaC) is a very useful way to define and maintain your infrastructure. It lets you describe everything in a file and have Terraform manage the creation and updating of those virtual machines. I will be using Proxmox for the VMs today.

Requirements

  • A working Proxmox environment
  • terraform installed on another computer to manage the Proxmox VMs (a quick version check is shown after this list)
  • A VM template in Proxmox to create VMs from
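
If you are not sure whether terraform is already installed on the machine you will manage Proxmox from, a quick check confirms it:

terraform version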

The provider we will be using supports creating VMs from ISO files, but I haven't been able to get that working, so we will use a template instead.

Disclaimer: I use the root user for Proxmox in this post. That's definitely not best practice, and I will write another post on creating a dedicated user for this.

Terraform Provider: https://registry.terraform.io/providers/Telmate/proxmox/latest/docs
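
Side note on credentials: in the examples below I hard-code pm_password to keep things short, which goes along with the root-user disclaimer above. If you would rather keep the password out of the file, a plain Terraform variable works. This is only a sketch (the variable name is my own, and sensitive variables need Terraform 0.14 or newer):

# declare the password as a sensitive input variable
variable "pm_password" {
  description = "Proxmox API password"
  type = string
  sensitive = true
}

# then reference it in the provider block instead of a literal string
provider "proxmox" {
  pm_api_url = "https://192.168.50.10:8006/api2/json"
  pm_user = "root@pam"
  pm_password = var.pm_password
}

Supply the value at plan/apply time with -var "pm_password=..." or a TF_VAR_pm_password environment variable.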

Create VM

This is the Terraform file you need. Please substitute your own values where necessary.

terraform {
  required_providers {
    proxmox = {
      source = "Telmate/proxmox"
      version = "2.9.3"
    }
  }
}
provider "proxmox" {
  pm_api_url = "https://192.168.50.10:8006/api2/json"
  pm_user = "root@pam"
  pm_password = "passwordhere"
}
resource "proxmox_vm_qemu" "ubuntu-pve" {
  name = "ubuntu-pve"
  target_node = "pve01"
  clone = "ubuntu-test"
  os_type = "ubuntu"
  cores = 4
  memory = 2048
}
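
Before you can plan anything, Terraform needs to download the provider plugin. Run this once from the directory containing the file above (I am assuming it is saved as something like main.tf):

terraform init

This pulls down the Telmate provider and creates a .terraform directory (and, on recent Terraform versions, a dependency lock file) next to the configuration.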

This file will connect to the Proxmox API and create a new VM named ubuntu-pve from the ubuntu-test template. Here is the plan.

dan@ubuntu-test:~/scratch$ terraform plan

Terraform used the selected providers to generate the following
execution plan. Resource actions are indicated with the following
symbols:
  + create

Terraform will perform the following actions:

  # proxmox_vm_qemu.ubuntu-pve will be created
  + resource "proxmox_vm_qemu" "ubuntu-pve" {
      + additional_wait           = 0
      + agent                     = 0
      + balloon                   = 0
      + bios                      = "seabios"
      + boot                      = "c"
      + bootdisk                  = (known after apply)
      + clone                     = "ubuntu-test"
      + clone_wait                = 0
      + cores                     = 4
      + cpu                       = "host"
      + default_ipv4_address      = (known after apply)
      + define_connection_info    = true
      + force_create              = false
      + full_clone                = true
      + guest_agent_ready_timeout = 100
      + hotplug                   = "network,disk,usb"
      + id                        = (known after apply)
      + kvm                       = true
      + memory                    = 2048
      + name                      = "ubuntu-pve"
      + nameserver                = (known after apply)
      + numa                      = false
      + onboot                    = false
      + oncreate                  = true
      + os_type                   = "ubuntu"
      + preprovision              = true
      + reboot_required           = (known after apply)
      + scsihw                    = (known after apply)
      + searchdomain              = (known after apply)
      + sockets                   = 1
      + ssh_host                  = (known after apply)
      + ssh_port                  = (known after apply)
      + tablet                    = true
      + target_node               = "pve01"
      + unused_disk               = (known after apply)
      + vcpus                     = 0
      + vlan                      = -1
      + vmid                      = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.
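
If the plan looks right, applying it is the next step. Terraform shows the same diff again and asks for confirmation before creating the VM:

terraform apply

Add -auto-approve to skip the prompt, and terraform destroy will remove the VM again when you are finished testing.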

There are many options you can add to the resource block to customize your new VM.
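
For example, disk and network settings look roughly like this with the Telmate provider. Treat it as a sketch: the disk and network block arguments are taken from the 2.9.x provider docs, and the local-lvm storage and vmbr0 bridge names are assumptions you should swap for your own.

resource "proxmox_vm_qemu" "ubuntu-pve" {
  name = "ubuntu-pve"
  target_node = "pve01"
  clone = "ubuntu-test"
  os_type = "ubuntu"
  cores = 4
  memory = 2048

  # assumed storage and bridge names - substitute your own
  disk {
    type = "scsi"
    storage = "local-lvm"
    size = "20G"
  }

  network {
    model = "virtio"
    bridge = "vmbr0"
  }
}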

Cluster Config

To specify more than one VM you can use multiple resource blocks:

terraform {
  required_providers {
    proxmox = {
      source = "Telmate/proxmox"
      version = "2.9.3"
    }
  }
}
provider "proxmox" {
  pm_api_url = "https://192.168.50.10:8006/api2/json"
  pm_user = "root@pam"
  pm_password = "passwordhere"
}
resource "proxmox_vm_qemu" "ubuntu-pve" {
  name = "ubuntu-pve"
  target_node = "pve01"
  clone = "ubuntu-test"
  os_type = "ubuntu"
  cores = 4
  memory = 2048
}
resource "proxmox_vm_qemu" "ubuntu-test1" {
  name = "ubuntu-test1"
  target_node = "pve01"
  clone = "ubuntu-test"
  os_type = "ubuntu"
  cores = 1
  memory = 512
}
resource "proxmox_vm_qemu" "build-server" {
  name = "build-server"
  target_node = "pve01"
  clone = "ubuntu-test"
  os_type = "ubuntu"
  cores = 8
  memory = 8096
}

You can specify as many VMs as you need. Keep in mind that each VM created this way will still need to be configured individually (hostname, IP settings, packages, and so on).
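
If the VMs are mostly identical, you can also stamp them out from a single resource block instead of copy-pasting. This is a sketch using count, which is core Terraform rather than anything provider-specific:

resource "proxmox_vm_qemu" "ubuntu-test" {
  count = 3
  name = "ubuntu-test${count.index + 1}"
  target_node = "pve01"
  clone = "ubuntu-test"
  os_type = "ubuntu"
  cores = 1
  memory = 512
}

That creates ubuntu-test1 through ubuntu-test3. VMs that genuinely differ, like the build-server above, are easier to keep as their own resource blocks.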

Cloud-Init

Cloud-init allows you to provision settings like SSH keys and IP configuration. To set up a cloud image template, run the following from the Proxmox server:

# from https://pve.proxmox.com/wiki/Cloud-Init_Support
# download the image
wget https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img

# create a new VM
qm create 9000 --name "ubuntu-1804-cloudinit-template" --memory 2048 --net0 virtio,bridge=vmbr0 --net1 virtio,bridge=vmbr1

# import the downloaded disk to local-lvm storage
qm importdisk 9000 bionic-server-cloudimg-amd64.img local-lvm

# finally attach the new disk to the VM as scsi drive
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0

# configure the cloud init CD drive
qm set 9000 --ide2 local-lvm:cloudinit

# set boot to scsi0 only
qm set 9000 --boot c --bootdisk scsi0

# set the display to serial. some cloud init images rely on this
qm set 9000 --serial0 socket --vga serial0

# set this VM as a template
qm template 9000
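
To confirm the template looks right before pointing Terraform at it, you can dump its configuration (VM ID 9000 matches the commands above):

qm config 9000

You should see the imported disk on scsi0 and the cloudinit drive on ide2, matching the qm set commands above.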

Here is the cloud-init configuration for two servers with my SSH keys.

terraform {
  required_providers {
    proxmox = {
      source = "Telmate/proxmox"
      version = "2.9.3"
    }
  }
}
provider "proxmox" {
  pm_api_url = "https://192.168.50.11:8006/api2/json"
  pm_user = "root@pam"
  pm_password = "passwordhere"
}
resource "proxmox_vm_qemu" "jira" {
  name = "jira.local"
  desc = "Jira Server"
  target_node = "pve01"

  clone = "ubuntu-1804-cloudinit-template"

  cores = 1
  sockets = 1
  memory = 2048

  ciuser = "root"
  ipconfig0 = "ip=192.168.50.200/24"
  ipconfig1 = "ip=10.0.2.15/24,gw=10.0.2.2"

  sshkeys = <<EOF
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDbayIeOdJpD+zIaj3/dyayTaqfiJ0N2n7olmTEB9E3SV2JWDUdVELUVrC+FGpTW3J/ZsAo/b1ABprh/rDKuHLMsWgqhKAhTa2I5lI9ea1IxzWinsC6Qp+domXeDj58XLnKhBxL7TOmNLOeEMY4KedMH+z4uQcOVbhoaGcPBL1RGdZVJnBPxYRRRrEoM2+7Ea+4PJnYX6xJeFHP9FRNm/Li0cNToZgirus7V/khoqZiTVPrq2xX791gIRCHK4Ex2rBUbDG0KtJPVZEibAEMXyDA8Vma66axqxe+5ihO/1YwHWwhBK4Gy5af1MNwnKIpl5P/ysn94nT49OUC/RHW5Qr+DGA7tpwT6Hjz9TBtxXQyiXean8ISuPXb3y3r8cSppWoQ06ExS1KlwRqLsYHVCEYz6glOBwtQwxlzXSB1H3e0OLybisjgioFGAzgZn4J03FBcBLyaEE3vVJ2r+8LCUppLmsIvBtWDJZ9/GVVZPPC7xB1sHPYEPo8WtdKGks3LVWk= dan@ubuntu
EOF
}
resource "proxmox_vm_qemu" "confluence" {
  name = "confluence.local"
  desc = "Confluence Server"
  target_node = "pve01"

  clone = "ubuntu-1804-cloudinit-template"

  cores = 1
  sockets = 1
  memory = 2048

  ciuser = "root"
  ipconfig0 = "ip=192.168.50.201/24"
  ipconfig1 = "ip=10.0.2.15/24,gw=10.0.2.2"

  sshkeys = <<EOF
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDbayIeOdJpD+zIaj3/dyayTaqfiJ0N2n7olmTEB9E3SV2JWDUdVELUVrC+FGpTW3J/ZsAo/b1ABprh/rDKuHLMsWgqhKAhTa2I5lI9ea1IxzWinsC6Qp+domXeDj58XLnKhBxL7TOmNLOeEMY4KedMH+z4uQcOVbhoaGcPBL1RGdZVJnBPxYRRRrEoM2+7Ea+4PJnYX6xJeFHP9FRNm/Li0cNToZgirus7V/khoqZiTVPrq2xX791gIRCHK4Ex2rBUbDG0KtJPVZEibAEMXyDA8Vma66axqxe+5ihO/1YwHWwhBK4Gy5af1MNwnKIpl5P/ysn94nT49OUC/RHW5Qr+DGA7tpwT6Hjz9TBtxXQyiXean8ISuPXb3y3r8cSppWoQ06ExS1KlwRqLsYHVCEYz6glOBwtQwxlzXSB1H3e0OLybisjgioFGAzgZn4J03FBcBLyaEE3vVJ2r+8LCUppLmsIvBtWDJZ9/GVVZPPC7xB1sHPYEPo8WtdKGks3LVWk= dan@ubuntu
EOF
}
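
After terraform apply finishes, the two VMs should come up with the static addresses from ipconfig0 and the SSH key injected by cloud-init, so you should be able to log straight in (assuming your workstation holds the matching private key and can reach the 192.168.50.0/24 network):

terraform apply

# hostnames and IPs come from the config above
ssh root@192.168.50.200   # jira.local
ssh root@192.168.50.201   # confluence.local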