
Thursday, December 23, 2021

Terraform for dummies part 5: Terraform deployment On-premises (KVM)

Intro

For a long time, Terraform was associated with deploying resources in the cloud. But what many people don't know is that Terraform has long had private and community-based providers that work perfectly well in non-cloud environments. Today, we will discover how to deploy a compute vm on a KVM host. Not only that, but we will also do it on top of VirtualBox in a nested virtualization environment. As always, I will provide the vagrant build to let you launch the lab for a front-row experience. It is indeed the cheapest way to use Terraform on-prem on your laptop.

Terraform on-prem


The libvirt provider is a community project built by Duncan Mac-Vicar. There is no real difference between using Terraform on cloud platforms and using it with the libvirt provider. In this lab I had to enable nested virtualization in VirtualBox to make the demo easier to run. The resulting hypervisor is qemu-kvm, a non bare-metal KVM environment also known as a type 2 hypervisor (virtual hardware emulation).
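
If you want to confirm which virtualization type is actually available inside the VirtualBox guest, here is a quick check I like to run (standard coreutils/libvirt commands; this is my own sanity check, not part of the lab steps):

    --- if /dev/kvm is missing, libvirt can only do plain qemu (software) emulation
    [root@localhost ~]# ls -l /dev/kvm 2>/dev/null || echo "no /dev/kvm => qemu emulation only"
    --- list the domain types libvirt supports on this host; in this nested setup only 'qemu' should show up
    [root@localhost ~]# virsh capabilities | grep -o "domain type='[a-z]*'" | sort -u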


How to get started


No need to subscribe to a cloud free tier with a credit card to play with Terraform. You can start this lab right now on your laptop with my vagrant build. The environment comes with all the modules & packages needed to deploy vms using Terraform.

Lab Content:
- KVM
- KCLI (wrapper tool for managing vms)
- Terraform 1.0
- Libvirt terraform provider
- Terraform configuration samples to get started (ubuntu.tf, kvm-compute.tf)

GitHub repo: https://github.com/brokedba/KVM-on-virtualbox

  • Clone the repo
  • C:\Users\brokedba> git clone https://github.com/brokedba/KVM-on-virtualbox.git
    C:\Users\brokedba> cd KVM-on-virtualbox

  • Start the vm (make sure you have 2 cores and 4GB of RAM to spare before the launch)
  • C:\Users\*\KVM-on-virtualbox> vagrant up
    C:\Users\*\KVM-on-virtualbox> vagrant ssh ---- access to KVM host

    Now you have a new virtual machine shipped with KVM and Terraform, which is all we need to complete the lab.
    Note: the Terraform files are located under /root/projects/terraform/


What you should know 

  • Libvirt provider in Terraform registry 


    Up until Terraform 0.12, HashiCorp didn't officially recognize this libvirt provider, but you could still run config files as long as the plugin sat in a local plugin folder (e.g. /root/.terraform.d/plugins/).
    Starting with version 0.13, Terraform enforces Explicit Provider Source Locations. As a result, you'll need a few tweaks to make the provider run. Everything is documented in GitHub issues 1 & 2, but I'll summarize it below.


    The steps to run the libvirt provider with Terraform v1.0 (already done in my build):

    - Download the binary (current version: 0.6.12). For my part, I used an older Fedora build (0.6.2)

    [root@localhost]# wget URL
    [root@localhost]# tar xvf terraform-provider-libvirt-**.tar.gz

    - Add the plugin in a local registry

    [root@localhost]# mkdir -p ~/.local/share/terraform/plugins/registry.terraform.io/dmacvicar/libvirt/0.6.2/linux_amd64
    [root@localhost]# mv terraform-provider-libvirt ~/.local/share/terraform/plugins/registry.terraform.io/dmacvicar/libvirt/0.6.2/linux_amd64

    - Add the code block below to your configuration (e.g. libvirt.tf) to map libvirt references to the actual provider

    [root@localhost]# vi libvirt.tf
    ...
    terraform {
      required_version = ">= 0.13"
      required_providers {
        libvirt = {
          source  = "dmacvicar/libvirt"
          version = "0.6.2"
        }
      }
    }
    ... rest of the config

    - Initialize by running terraform init, which will detect the libvirt plugin in the local registry and add it

    [root@localhost]# terraform init

     Initializing the backend...
     Initializing provider plugins...
    - Finding dmacvicar/libvirt versions matching "0.6.2"...
    - Installing dmacvicar/libvirt v0.6.2...
    - Installed dmacvicar/libvirt v0.6.2 (unauthenticated)
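
    Optionally, you can double-check that Terraform resolved the provider from the local filesystem mirror
    (a sanity check I use; paths may differ slightly depending on your Terraform version):

    [root@localhost]# terraform providers     ---- lists the providers required by the configuration and their source
    [root@localhost]# ls ~/.local/share/terraform/plugins/registry.terraform.io/dmacvicar/libvirt/0.6.2/linux_amd64/
    terraform-provider-libvirt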

   

Terraform deployment  

  • Deploy basic ubuntu vm 


    Let's first provision a simple Ubuntu vm in our KVM environment. Since we're in nested virtualization mode, we're using the hardware-emulated hypervisor "qemu", and this requires a small hack: setting a special environment variable. I'll
    explain why further down; just bear with me for now.

[root@localhost]# export TERRAFORM_LIBVIRT_TEST_DOMAIN_TYPE="qemu"
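
Note that this is a plain environment variable, so it only lives in the current shell. If you come back later in a new session without re-exporting it, the apply will fail with the "could not find capabilities for domaintype=kvm" error discussed at the end of this post. My own habit (not required by the provider) is to persist it:

[root@localhost]# echo 'export TERRAFORM_LIBVIRT_TEST_DOMAIN_TYPE="qemu"' >> ~/.bashrc
[root@localhost]# source ~/.bashrc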

  • Next, let's look at the resources declared in our configuration file, ubuntu.tf
  • [root@/*/ubuntu/]# ls /root/projects/terraform/ubuntu/
    .. ubuntu.tf ---- you can download it from the repo or read the content below

    [root@/*/ubuntu/]# vi ubuntu.tf

    provider "libvirt" {
    uri = "qemu:///system"}
    terraform {
      required_providers {
        libvirt = {
          source  = "dmacvicar/libvirt"
          version = "0.6.2"
        }
      }
    } ## 1. --------> Section that declares the provider in Terraform registry

    # 2. ----> We fetch the smallest ubuntu image from the cloud image repo
    resource "libvirt_volume" "ubuntu-disk" {
    name   = "ubuntu-qcow2"
    pool   = "default" ## ---> This should be same as your disk pool name
    source = https://cloud-images.ubuntu.com/releases/xenial/release/ubuntu-16.04-server-cloudimg-amd64-disk1.img
    format = "qcow2"
    }

    # 3. -----> Create the compute vm
    resource "libvirt_domain" "ubuntu-vm" {
    name   = "ubuntu-vm"
    memory = "512"
    vcpu   = 1

    network_interface {
       network_name = "
    default" ## ---> This should be the same as your network name
      }

    console { # ----> define a console for the domain.
       type        = "pty"
       target_port = "0"
       target_type = "serial" }

    disk {   volume_id = libvirt_volume.ubuntu-disk.id } # ----> map/attach the disk
    graphics { ## ---> graphics settings
       type        = "spice"
       listen_type = "address"
       autoport    = "true"}
    }
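
    Before initializing, you can optionally let Terraform normalize the file layout and catch obvious syntax
    mistakes. Both commands below are standard Terraform subcommands (terraform validate needs the provider,
    so run it after terraform init):

    [root@/*/ubuntu/]# terraform fmt        ---- rewrites the .tf files in the current directory with canonical indentation
    [root@/*/ubuntu/]# terraform validate   ---- checks the configuration for errors (run after terraform init)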


  • Run terraform init to initialize the setup and fetch the called providers in the tf file, like we did earlier.
  • [root@localhost]# terraform init

  • Run terraform plan
  • [root@localhost]# terraform plan
    Terraform will perform the following actions:

      # libvirt_domain.ubuntu-vm will be created
      + resource "libvirt_domain" "ubuntu-vm" {
          + arch        = (known after apply)
          + disk        = [
              + {
                  + block_device = null
                  + file         = null
                  + scsi         = null
                  + url          = null
                  + volume_id    = (known after apply)
                  + wwn          = null
                },
            ]
          + emulator    = (known after apply)
          + fw_cfg_name = "opt/com.coreos/config"
          + id          = (known after apply)
          + machine     = (known after apply)
          + memory      = 512
          + name        = "ubuntu-vm"
          + qemu_agent  = false
          + running     = true
          + vcpu        = 1

          + console {
              + source_host    = "127.0.0.1"
              + source_service = "0"
              + target_port    = "0"
              + target_type    = "serial"
              + type           = "pty"
            }

          + graphics {
              + autoport       = true
              + listen_address = "127.0.0.1"
              + listen_type    = "address"
              + type           = "spice"
            }

          + network_interface {
              + addresses    = (known after apply)
              + hostname     = (known after apply)
              + mac          = (known after apply)
              + network_id   = (known after apply)
              + network_name = "default"
            }
        }

      # libvirt_volume.ubuntu-disk will be created
      + resource "libvirt_volume" "ubuntu-disk" {
          + format = "qcow2"
          + id     = (known after apply)
          + name   = "ubuntu-qcow2"
          + pool   = "default"
          + size   = (known after apply)

          + source = "https://cloud-images.ubuntu.com/releases/xenial/release/ubuntu-16.04-server-cloudimg-amd64-disk1.img"
        }

    Plan: 2 to add, 0 to change, 0 to destroy.

  • Run terraform apply to deploy the vm which was declared in the plan command output 
  • [root@localhost]# terraform apply -auto-approve
    Plan: 2 to add, 0 to change, 0 to destroy.
    libvirt_volume.ubuntu-disk: Creating...
    libvirt_volume.ubuntu-disk: Creation complete after 17s [id=/u01/guest_images/ubuntu-qcow2]
    libvirt_domain.ubuntu-vm: Creating...
    libvirt_domain.ubuntu-vm: Creation complete after 0s [id=29735a37-ef91-4c26-b194-05887b1fb264]

    Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

  • Wait a little bit, then run the kcli command or virsh list
  • [root@localhost ubuntu]# kcli list vm
    +-----------+--------+----------------+--------+------+---------+
    |    Name   | Status |      Ips       | Source | Plan | Profile |
    +-----------+--------+----------------+--------+------+---------+
    | ubuntu-vm |   up   | 192.168.122.74 |        |      |         |
    +-----------+--------+----------------+--------+------+---------+
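
    If you prefer the native libvirt tooling, the same information is available through virsh (both are standard
    virsh subcommands):

    [root@localhost ubuntu]# virsh list --all            ---- ubuntu-vm should show as running
    [root@localhost ubuntu]# virsh domifaddr ubuntu-vm   ---- prints the interface, MAC address and DHCP lease (IP)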

  • Cool, we have a vm with an IP address, but you still can't log in to it: cloud images don't ship with root passwords. So let's destroy it now and jump into our second example.
  • [root@localhost]# terraform destroy -auto-approve
    Destroy complete! Resources: 2 destroyed.

     

  • Deploy a vm with CloudInit 

Just as with cloud vms, we can call startup scripts to do anything we want during bootstrap.
I chose CentOS for this example, where the cloud-init bootstrap actions are:

- Set a new password to root user

- Add an SSH key to root user

- Change the hostname 

  • Create a cloud-init config file (please be careful with the indentation). It can also be downloaded here: cloud_init.cfg.

# cd ~/projects/terraform
[root@~/projects/terraform]# cat cloud_init.cfg
#cloud-config
disable_root: 0
users:
  - name: root
    ssh-authorized-keys: ### –> add a public SSH key
      - ${file("~/.ssh/id_rsa.pub")}
ssh_pwauth: True
chpasswd: ### –> change the password
  list: |
     root:unix1234
  expire: False

runcmd:
  - hostnamectl set-hostname terracentos
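
If the YAML in cloud_init.cfg is malformed, the vm will still boot but none of the customizations will be applied, so a quick syntax check before the apply can save a destroy/apply cycle. Here is the one-liner I use, assuming a Python with PyYAML is available on the host (any YAML linter works just as well):

[root@~/projects/terraform]# python -c "import yaml; yaml.safe_load(open('cloud_init.cfg')); print('cloud_init.cfg: YAML OK')"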

  • I will only show the part where cloud-init is involved, but you can read the full content here: kvm-compute.tf

# cd ~/projects/terraform
[root@~/projects/terraform]# cat kvm_compute.tf
provider "libvirt" {

resource "libvirt_volume" "centos7-qcow2" {


## 1. ----> Instantiate cloudinit as a media drive to add our startup tasks
resource "libvirt_cloudinit_disk" "commoninit" {
name           = "commoninit.iso"
pool           = "default" ## ---> This should be same as your disk pool name
user_data      = data.template_file.user_data.rendered
}
## 2. ----> Data source converting the cloudinit file into a userdata format
data "template_file" "user_data" { template = file("${path.module}/cloud_init.cfg")}


resource "libvirt_domain" "centovm" {

  name   = "centovm"
  memory = "1024"
  vcpu   = 1

cloudinit = libvirt_cloudinit_disk.commoninit.id ## 3. ----> map CloudInit

...---> Rest of the usual domain declaration
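
For reference, the omitted centos7-qcow2 volume follows the same pattern as the Ubuntu disk from the first example. A minimal sketch would look like the block below (the CentOS 7 GenericCloud image URL is my assumption here; use whichever CentOS 7 cloud image you prefer):

resource "libvirt_volume" "centos7-qcow2" {
  name   = "centos7.qcow2"
  pool   = "default" ## ---> same as your disk pool name
  source = "https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2"
  format = "qcow2"
}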

  • We can now run terraform init then plan (don't forget to set the TERRAFORM_LIBVIRT_TEST_DOMAIN_TYPE variable)

    [root@~/projects/terraform]# terraform init

    [root@~/projects/terraform]# terraform plan
    ... Other resources declaration
    # libvirt_cloudinit_disk.commoninit will be created
    + resource "libvirt_cloudinit_disk" "commoninit" {
      
    + id        = (known after apply)
        + name      = "commoninit.iso"
        + pool      = "default"
        + user_data = <<-EOT
              #cloud-config
              disable_root: 0
              users:
                - name: root
                  ssh-authorized-keys:
                    - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQ** root@localhost.localdomain

              ssh_pwauth: True
              chpasswd:
                list: |
                   root:unix1234
                expire: False

              runcmd:
                - hostnamectl set-hostname terracentos
          EOT

      }
    ... Remaining declaration

  • Run the Apply

    [root@~/projects/terraform]# terraform apply -auto-approve
    Plan: 3 to add, 0 to change, 0 to destroy.
    libvirt_cloudinit_disk.commoninit: Creation complete after 1m22s [id=/u01/guest_images/commoninit.iso;61c50cfc-**]
    ...

    Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

  • Wait a little bit after completion and run the kcli command to confirm an IP was allocated.

    [root@~/projects/terraform]# kcli list vm
    +---------+--------+----------------+--------+------+---------+
    |   Name  | Status |      Ips       | Source | Plan | Profile |
    +---------+--------+----------------+--------+------+---------+
    | centovm |   up   | 192.168.122.68 |        |      |         |
    +---------+--------+----------------+--------+------+---------+

  • Log in to the vm using SSH and password authentication

    -- 1. SSH
    [root@~/projects/terraform]# ssh -i ~/.ssh/id_rsa root@192.168.122.68

    Warning: Permanently added '192.168.122.68' (RSA) to the list of known hosts.

    [root@terracentos ~]# cat /etc/centos-release
    CentOS Linux release 7.8.2003 (Core)

    -- 2. Password

    [root@~/projects/terraform]# virsh console centovm
    Connected to domain centovm
    Escape character is ^]
    CentOS Linux 7 (Core)
    Kernel 3.10.0-1127.el7.x86_64 on an x86_64

    terracentos login: root
    Password:
    [root@terracentos ~]#
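
    Once logged in, you can quickly confirm that the three cloud-init actions took effect (standard commands,
    shown here as a suggestion rather than part of the original walkthrough):

    [root@terracentos ~]# hostname                     ---- should return terracentos
    [root@terracentos ~]# cat ~/.ssh/authorized_keys   ---- should contain the injected public key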

And there you go: your local Terraform-built vm was customized at startup with a simple config file, just like the ones on AWS ;).

Undocumented QEMU tip on Terraform

    • I can now explain why we needed to set the environment variable to "qemu" in order to get the deployment working. In fact, the vm will never start without this trick. Let's see why:

      • Nested virtualization doesn't seem to support the kvm domain type.
      • This issue is similar to what OpenStack and minikube hit when they use KVM inside VirtualBox: "could not find capabilities for domaintype=kvm"
      • I needed to make the libvirt provider choose qemu instead of kvm during provisioning.
      • With the help of @titogarrido, we found out that the logic inside its code ("domain_def.go")
        implied kvm was the only supported virtualization, but checked a mysterious variable first.
      • We finally found the workaround: setting that variable to qemu, an old hack from the days when the authors were testing on Travis CI.
    I asked them to replace that variable with an attribute inside the Terraform code, but the bug is still there; see more in my issue.

    --- Workaround for non BareMetal hosts (nested)
    export TERRAFORM_LIBVIRT_TEST_DOMAIN_TYPE="qemu"

    --- Below Go check happens where qemu is selected (domain_def.go)

    if v := os.Getenv("TERRAFORM_LIBVIRT_TEST_DOMAIN_TYPE"); v != "" {
        domainDef.Type = v
    } else {
        domainDef.Type = "kvm"
    }
     

Conclusion

  • We have seen in this lab just how easy it is to deploy resources using terraform on-premises
         (no more credit card needed :) )
  • I am very happy to make this little contribution via my vagrant build shipped with KVM on VirtualBox
  • I hope you’ll give this lab a try as it’s super easy and fun !! :)
  • I would like to thank @titogarrido, who agreed to dig deeper with me so we could find the bug.
  • I love pair-programming tools like tmate, which let us collaborate live while he was in Brazil and I was in Canada.
  • If you want to know more about my vagrant build, check my previous blog post.
  • Learn more about libvirt provider usage in the official Terraform registry: dmacvicar/libvirt
  • Thank you for reading !

    Thursday, December 2, 2021

    KVM lab inside a VirtualBox vm (Nested virtualization) using vagrant

    Intro

    It has been a while since I wanted to blog about nested virtualization, but the good news is I now have a vagrant build to share with you so you can vagrant it up :). KVM started to interest me as soon as I learned that Oracle allows hard partitioning on KVM boxes when installed on top of Oracle Linux. I instantly thought of the benefit for customers struggling with Oracle licensing under VMware farms. Anyway, today Oracle Cloud and the engineered systems rely heavily on KVM, and the old OVM is slowly being replaced by the OLVM manager, which orchestrates vm administration on top of KVM hosts.

    Nested virtualization

    KVM inside VirtualBox


    So how do you make a hypervisor (KVM) aware of the host hardware when it is itself installed under another hypervisor layer (VirtualBox)? Well, this is now possible thanks to the nested virtualization feature available in recent versions of VirtualBox, and it is very simple to enable, even after your vm has been provisioned. More on how to enable it in my previous blog post.
    Please note that it's actually qemu-kvm that's available with nested virtualization here, which is a type 2 hypervisor (virtual hardware emulation).
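
    For reference, if you build the VirtualBox vm yourself instead of using my vagrant box, nested virtualization
    is toggled per vm with VBoxManage (VirtualBox 6.1+; the vm name below is a placeholder and the vm must be
    powered off):

    C:\> VBoxManage modifyvm "my-kvm-host" --nested-hw-virt on
    C:\> VBoxManage showvminfo "my-kvm-host" | findstr /i nested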


    How to get started


    Can't afford a dedicated host for KVM? I've got you covered: you can start exploring KVM right now on your laptop with my vagrant build. Sharing is good, but live labs are even better.

    GitHub repo: https://github.com/brokedba/KVM-on-virtualbox



    KVM Tools

      Virsh

      A command-line interface that can be used to create, destroy, stop, start, and edit virtual machines, and to configure the virtual environment (virtual networks, etc.). >> see Cheatsheet

      Virt-install

      Although this tool is the reference for vm creation, it never managed to create qemu vms on my nested vm, which is why I won't cover it today.

      KCLI

      This is a wonderful CLI tool created by karmab that interacts with the libvirt API to manage KVM environments, from configuring hosts to managing guest vms. It also works with other virtualization providers (KubeVirt, oVirt, OpenStack, VMware vSphere, GCP and AWS) and can easily deploy and customize vms from cloud images. See the details in the official GitHub repository.
      I will do some examples using KCLI in this post since it already ships with my nested vagrant build.

      Virt-manager /Ovirt/OLVM

      These three are GUI-based management tools for VM guests.

      Virt-viewer

      A utility to display the graphical console of a virtual machine.


      Examples

      Virsh: create and build a default storage pool (already done in my nested vagrant vm) and describe the host

      # mkdir /u01/guest_images
      # virsh pool-define-as default dir - - - - "/u01/guest_images"
      # virsh pool-build default

      # virsh pool-list --all
      Name        State      Autostart
      ------------ --------- -------------
       default      active      yes

      [root@localhost ~]# virsh pool-info default
      Name:           default
      UUID:           2b273ed0-e666-4c52-a383-c47a03727fc1
      State:          running
      Persistent:     yes
      Autostart:      yes
      Capacity:       49.97 GiB
      Allocation:     32.21 MiB
      Available:      49.94 GiB

      [root@localhost ~]# virsh nodeinfo ---> the host is actually my virtualbox vm
      CPU model:           x86_64
      CPU(s):              2
      CPU frequency:       2592 MHz
      CPU socket(s):       1
      Core(s) per socket:  2
      Thread(s) per core:  1
      NUMA cell(s):        1
      Memory size:         3761104 KiB

      ---- vm
      # virsh list --all  (list all guest vms including shutdown vms)
      # virsh dominfo db2  (describe a “db2” vm)
      # virsh edit db2   (edit “db2” vm attributes) 

      --- Reboot Shutdown start VMs
      # virsh start my-vm   
      # virsh reboot my-vm  
      # virsh shutdown my-vm


      CREATE A VM INSIDE A NESTED VIRTUAL MACHINE

      • I will be using KCLI in my example, where I will:

        • Create a default storage pool and configure it (already done in my vagrant vm)

          # kcli create pool -p /u01/guest_images default
          # kcli list pool
          +--------------+-------------------------+
          | Pool         |        Path             |
          +--------------+-------------------------+
          | default      | /u01/guest_images       |
          +--------------+-------------------------+

          Since kcli runs inside a docker container, we need to update the kcli alias so it mounts the pool path:

          # alias kcli='docker run --net host -it --rm --security-opt label=disable -v /root/.kcli:/root/.kcli -v /root/.ssh:/root/.ssh -v /u01/guest_images:/u01/guest_images -v /var/run/libvirt:/var/run/libvirt -v $PWD:/workdir quay.io/karmab/kcli'
        • Create a default network (already done in my vagrant vm)

          # kcli create network  -c 192.168.122.0/24 default
          # kcli list network
          +---------+--------+------------------+------+---------+------+
          | Network |  Type  |       Cidr       | Dhcp |  Domain | Mode |
          +---------+--------+------------------+------+---------+------+
          | default | routed | 192.168.122.0/24 | True | default | nat  |
          +---------+--------+------------------+------+---------+------+
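
          The same network can also be inspected with plain virsh if you want to see the underlying libvirt
          definition (standard virsh subcommands):

          # virsh net-list --all        ---- the 'default' network should be active and set to autostart
          # virsh net-dumpxml default   ---- shows the bridge name, NAT mode and the 192.168.122.0/24 DHCP range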



        KCLI makes it very easy to download an image from the cloud repository as shown in below example

        • Download Ubuntu 18.04 (bionic) from the Ubuntu cloud image repository

          # kcli download image ubuntu1804  -p default
          Using url https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img...

          # kcli list image
          +----------------------------------------------------+
          | Images                                             |
          +----------------------------------------------------+
          | /u01/guest_images/bionic-server-cloudimg-amd64.img |
          +----------------------------------------------------+

        • You can also use curl if you have a specific image you want to download
        • # curl -sL image-URL -o /Pool/Path/image.img


    • Create a vm

    • Once the image is in the storage pool, you only have to run the kcli create command as below (see syntax)

      # kcli create vm ubuntuvm -i ubuntu1804 -P network=default -P virttype=qemu -P memory=512 -P numcpus=1

      Deploying vm ubuntuvm from profile ubuntu1804...
      ubuntuvm created on local

      # kcli list vm
      +----------+--------+---------------+------------------------+-------+------------+
      |   Name   | Status |      Ips      |         Source         |  Plan |  Profile   |
      +----------+--------+---------------+------------------------+-------+------------+
      | ubuntuvm |   up   | 192.168.122.5 | bionic-server*md64.img | kvirt | ubuntu1804 |
      +----------+--------+---------------+------------------------+-------+------------+

      Syntax:
      usage : kcli create vm [-h] [-p PROFILE] [--console] [-c COUNT] [-i IMAGE]
                            [--profilefile PROFILEFILE] [-P PARAM]
                            [--paramfile PARAMFILE] [-s] [-w]
                            [VMNAME]


    • Login to the vm

      • The IP address takes some time to get assigned, but once it's there, just log in using ssh. kcli injects your default SSH key (~/.ssh/id_rsa.pub) into the vm at creation time.
      • # kcli ssh ubuntuvm
        Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 4.15.0-163-generic x86_64)
        ubuntu@ubuntuvm:~$ uname -a
        Linux ubuntuvm 4.15.0-163-generic #171-Ubuntu SMP Fri Nov 5 11:55:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

      • You can also log in using root password, but for this you’ll have to set it during the vm creation via Cloud-init
      • # kcli create vm ubuntuvm -i ubuntu1804 -P network=default -P virttype=qemu \
        -P cmds=['echo root:unix1234 | chpasswd']

        # virsh console ubuntuvm
        Connected to domain ubuntuvm
        ubuntuvm login: root
        Password:

        Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 4.15.0-163-generic x86_64)
        root@ubuntuvm:~#



    KCLI  tips

      • kcli configuration lives in the ~/.kcli directory, which you need to create manually (already done in my vagrant build). It will contain:

        • config.yml: generic configuration where you declare clients.
        • profiles.yml: stores your profiles, where you combine things like memory, numcpus and any other supported parameters into named profiles to create vms from.
      For example, you could create the same vm described earlier by storing the vm specs in profiles.yml:

      --- excerpt from ~/.kcli/profiles.yml
      local_ubuntu1804:
        image: bionic-server-cloudimg-amd64.img
        numcpus: 1
        memory: 512
        nets:
        - default
        pool: default
        cmds:
        - echo root:unix1234 | chpasswd

      Then call the named profile using the -i argument during the creation of the vm

      # kcli create vm ubuntuvm -i local_ubuntu1804
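
      In my experience, individual profile values can still be overridden at creation time with -P, just like in
      the earlier example (treat the precedence as an assumption and double-check with kcli info vm if it matters):

      # kcli create vm ubuntuvm2 -i local_ubuntu1804 -P memory=1024
      # kcli info vm ubuntuvm2      ---- shows which memory/cpu values were actually applied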

       

    Conclusion

  • I'm very glad to finally share this with you, especially since it includes my vagrant build that lets you try
         KVM on VirtualBox yourself.
  • Keep in mind that the more resources you allocate to your host/root vm, the more you can spin up.
  • My default vagrant build allocates 2 vCPUs and 4GB of RAM, but you can tweak the values in the Vagrantfile.
  • Please give the KCLI project a star on GitHub, as its creator helped me a lot and deserves a huge shout-out.