Thursday, December 2, 2021

KVM lab inside a VirtualBox vm (Nested virtualization) using vagrant

Intro

It has been a while since I wanted to blog about nested virtualization, and the good news is that I now have a vagrant build to share with you, so you can vagrant it up :). KVM started to interest me as soon as I learned that Oracle allows hard partitioning on KVM boxes when they run on top of Oracle Linux. I instantly thought of the benefit for customers who were struggling with Oracle licensing under VMware farms. Anyway, today Oracle Cloud and the engineered systems rely heavily on KVM, and the old OVM is slowly being replaced by the OLVM manager, which orchestrates vm administration on top of KVM hosts.

Nested virtualization

KVM inside VirtualBox

So how do you make a hypervisor (KVM) aware of the host hardware when it is itself installed under another hypervisor layer (VirtualBox)? Well, this is now possible thanks to the nested virtualization feature available in recent versions of VirtualBox, and it is very simple to enable, even after your vm has been provisioned. More on how to enable it in my previous blog.
Please note that it's actually qemu-kvm that runs under nested virtualization here, which behaves as a type 2 hypervisor (virtual hardware emulation).
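Before building anything on top, it helps to confirm that the nested layer actually reached your guest. Here is a minimal check to run inside the VirtualBox vm (the cpu flags and /dev/kvm device are standard Linux/KVM; the vm name in the commented VBoxManage line is a placeholder):

```shell
# Inside the VirtualBox guest: the cpu must expose the vmx (Intel) or
# svm (AMD) flag for KVM acceleration to be possible.
if grep -qE 'vmx|svm' /proc/cpuinfo; then
  echo "virtualization flags present"
else
  echo "no vmx/svm flags: enable nested virtualization on the VirtualBox side"
fi

# /dev/kvm must exist for qemu-kvm to use hardware acceleration.
[ -e /dev/kvm ] && echo "/dev/kvm is available" || echo "/dev/kvm is missing"

# On the VirtualBox host itself, the feature can also be toggled per vm
# from the command line (VirtualBox 6.1+); "my-kvm-lab" is a placeholder:
#   VBoxManage modifyvm "my-kvm-lab" --nested-hw-virt on
```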

How to get started

Can’t afford a dedicated host for KVM? I’ve got you covered: you can start exploring KVM right now on your laptop with my vagrant build. Sharing is good, but live labs are even better.

GitHub repo

KVM Tools


    Virsh — a command line interface that can be used to create, destroy, stop, start and edit virtual machines, and to configure the virtual environment (virtual networks, etc.). >> see Cheatsheet


    Virt-install — although this tool is the reference in terms of vm creation, it has never succeeded in creating qemu vms on my vm, which is why I won’t talk about it today.


    KCLI — a wonderful CLI tool created by karmab that interacts with the libvirt API to manage KVM environments, from configuring the host to managing the guest vms. It even interacts with other virtualization providers (KubeVirt, oVirt, OpenStack, VMware vSphere, GCP and AWS) and can easily deploy and customize VMs from cloud images. See details in the official GitHub repository.
    I will do some examples using KCLI in this post, since it is already shipped with my nested vagrant build.

    Virt-manager / oVirt / OLVM

    These three are GUI-based management tools for managing VM guests.


    Virt-viewer — a utility to display the graphical console of a virtual machine.


    Virsh : Create and build a default storage pool (already done in my nested vagrant vm) and describe the host

    # mkdir /u01/guest_images
    # virsh pool-define-as default dir - - - - "/u01/guest_images"
    # virsh pool-build default
    # virsh pool-start default
    # virsh pool-autostart default

    # virsh pool-list --all
    Name        State      Autostart
    ------------ --------- -------------
     default      active      yes

    [root@localhost ~]# virsh pool-info default
    Name:           default
    UUID:           2b273ed0-e666-4c52-a383-c47a03727fc1
    State:          running
    Persistent:     yes
    Autostart:      yes
    Capacity:       49.97 GiB
    Allocation:     32.21 MiB
    Available:      49.94 GiB

    [root@localhost ~]# virsh nodeinfo ---> the host is actually my virtualbox vm
    CPU model:           x86_64
    CPU(s):              2
    CPU frequency:       2592 MHz
    CPU socket(s):       1
    Core(s) per socket:  2
    Thread(s) per core:  1
    NUMA cell(s):        1
    Memory size:         3761104 KiB

    --- List and describe vms
    # virsh list --all   (list all guest vms, including shut-down vms)
    # virsh dominfo db2  (describe the “db2” vm)
    # virsh edit db2     (edit the “db2” vm attributes)

    --- Reboot Shutdown start VMs
    # virsh start my-vm   
    # virsh reboot my-vm  
    # virsh shutdown my-vm
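The lifecycle commands above compose nicely into scripts. As a small sketch, the snippet below gracefully shuts down every running guest in one go; `virsh list --name` prints one running domain per line, which makes it easy to loop over, and the guard lets it exit cleanly on machines where libvirt is not installed:

```shell
# Gracefully shut down all running guests (sketch; a production script
# would also wait and fall back to 'virsh destroy' for stuck vms).
if command -v virsh >/dev/null 2>&1; then
  for vm in $(virsh list --name); do
    echo "shutting down $vm"
    virsh shutdown "$vm"
  done
else
  echo "virsh not found, nothing to do"
fi
```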


    • I will be using KCLI in the examples below, where I will:

      • Create a default storage pool and configure it (already done in my vagrant vm)

        # kcli create pool -p /u01/guest_images default
        # kcli list pool
        | Pool         |        Path             |
        | default      | /u01/guest_images       |

        Since kcli runs from a docker container, the pool path must be mounted inside it; update the kcli alias accordingly:

        # alias kcli='docker run --net host -it --rm --security-opt label=disable -v /root/.kcli:/root/.kcli -v /root/.ssh:/root/.ssh -v /u01/guest_images:/u01/guest_images -v /var/run/libvirt:/var/run/libvirt -v $PWD:/workdir quay.io/karmab/kcli'
      • Create a default network (already done in my vagrant vm)

        # kcli create network -c <CIDR> default
        # kcli list network
        | Network |  Type  | Cidr | Dhcp | Domain  | Mode |
        | default | routed |      | True | default | nat  |

      KCLI makes it very easy to download an image from a cloud repository, as shown in the example below.

      • Download Ubuntu 18.04 from the ubuntu cloud image repository

        # kcli download image ubuntu1804  -p default
        Using url

        # kcli list image
        | Images                                             |
        | /u01/guest_images/bionic-server-cloudimg-amd64.img |

      • You can also use curl if you have a specific image you want to download
      • # curl -sL image-URL -o /Pool/Path/image.img
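As a self-contained illustration of that curl pattern, the sketch below uses a stand-in file:// URL and a scratch directory instead of a real cloud image URL and the /u01/guest_images pool, so it can run anywhere:

```shell
# Simulate a remote image with a local file (stand-in for the real URL).
mkdir -p /tmp/guest_images
echo "pretend-image-content" > /tmp/source.img

# In practice IMAGE_URL would be the http(s) cloud image URL and the
# target would live under your storage pool path.
IMAGE_URL="file:///tmp/source.img"
curl -sL "$IMAGE_URL" -o /tmp/guest_images/image.img

ls -l /tmp/guest_images/image.img
```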

  • Create a vm

  • Once the image is in the storage pool, you only have to run the kcli create command as below (see syntax)

    # kcli create vm ubuntuvm -i ubuntu1804 -P network=default -P virttype=qemu -P memory=512 -P numcpus=1

    Deploying vm ubuntuvm from profile ubuntu1804...
    ubuntuvm created on local

    # kcli list vm
    |   Name   | Status | Ips |              Source              | Plan  |  Profile   |
    | ubuntuvm |   up   |     | bionic-server-cloudimg-amd64.img | kvirt | ubuntu1804 |

    usage : kcli create vm [-h] [-p PROFILE] [--console] [-c COUNT] [-i IMAGE]
                          [--profilefile PROFILEFILE] [-P PARAM]
                          [--paramfile PARAMFILE] [-s] [-w]

  • Login to the vm

    • The IP address will take some time before it’s assigned, but once it is, just log in using ssh. kcli creates the vm based on the default ssh key (~/.ssh/id_rsa).
    • # kcli ssh ubuntuvm
      Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 4.15.0-163-generic x86_64)
      ubuntu@ubuntuvm:~$ uname -a
      Linux ubuntuvm 4.15.0-163-generic #171-Ubuntu SMP Fri Nov 5 11:55:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

    • You can also log in using the root password, but for that you’ll have to set it during vm creation via cloud-init
    • # kcli create vm ubuntuvm -i ubuntu1804 -P network=default -P virttype=qemu \
      -P cmds=['echo root:unix1234 | chpasswd']

      # virsh console ubuntuvm    (exit the console with Ctrl+])
      Connected to domain ubuntuvm
      ubuntuvm login: root

      Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 4.15.0-163-generic x86_64)

KCLI tips

    • kcli configuration lives in the ~/.kcli directory, which you need to create manually (already done in my vagrant build). It contains:

      • config.yml: generic configuration where you declare clients.
      • profiles.yml: stores your profiles, where you combine things like memory, numcpus and all other supported parameters into named profiles to create vms from.
    For example, you could create the same vm described earlier by storing its specs in profiles.yml:

    --- excerpt from ~/.kcli/profiles.yml
    local_ubuntu1804:
      image: bionic-server-cloudimg-amd64.img
      numcpus: 1
      memory: 512
      pool: default
      nets:
        - default
      cmds:
        - echo root:unix1234 | chpasswd

    Then call the named profile using the -p argument when creating the vm:

    # kcli create vm ubuntuvm -p local_ubuntu1804



  • I’m very glad to finally share this with you, especially since it includes my vagrant build that you can try yourself and use to play with KVM from VirtualBox.
  • Keep in mind that the more resources you allocate to your host/root vm, the more stuff you can spin up.
  • My vagrant build defaults to 2 vCPUs and 4GB of RAM, but you can tweak the values in the Vagrantfile.
  • Please give the KCLI project a star on GitHub, as its creator has helped me a lot and deserves a huge shout-out.
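As a hypothetical illustration of tweaking the Vagrantfile values: the lines below are assumptions about what the file contains (check your actual build), copied here into a scratch file so the sed edit can be shown safely before you would run vagrant up:

```shell
# Stand-in for the resource lines in the real Vagrantfile (assumed names).
cat > /tmp/Vagrantfile.demo <<'EOF'
  vb.cpus   = 2
  vb.memory = "4096"
EOF

# Double the cpu and memory allocations in place.
sed -i 's/vb.cpus   = 2/vb.cpus   = 4/; s/"4096"/"8192"/' /tmp/Vagrantfile.demo

cat /tmp/Vagrantfile.demo
```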