Monday, October 11, 2021

Terraform for dummies part 3: Launch a vm with a static website on Azure

Intro

In this tutorial we will provision a VM on Azure using the azurerm Terraform provider, reproducing the same deployment I completed on AWS and Oracle Cloud. As usual, we won’t just deploy an instance but also configure an nginx website linked to its public IP. I’ll end this post with some notes on Azure/AWS differences.
I used the azurerm_linux_virtual_machine resource instead of the classic VM resource because Azure decided it was better to split Windows and Linux VM deployments into separate resources (provider 2.0), which is unlike anything seen elsewhere.
 
Note: I have done the same task with my AZ CLI based bash scripts (available on GitHub), which was very helpful to understand the logic behind the components needed during VM creation (more details: deploy-webserver-vm-using-azure-cli).

Here’s a direct link to the GitHub repo for this lab: terraform-examples/terraform-provider-azure

Content:
I. Terraform setup
II. Clone the repository
III. Provider setup
IV. Partial deployment
V. Full deployment
Tips & Conclusion

Overview and Concepts

Topology

The following illustration shows the layers involved between your workstation and the Azure cloud infrastructure while running the Terraform actions, along with the instance attributes we will be provisioning.


Before starting this tutorial, besides describing my GitHub repo, I’ll briefly go over some principles.

  • Infrastructure as Code manages and provisions cloud resources using declarative code (e.g. Terraform) and definition files, avoiding interactive configuration. Terraform is an immutable orchestrator that creates and deletes all resources in the proper sequence. Each cloud vendor has what we call a provider, which Terraform uses to convert the declarative text into API calls reaching the cloud infrastructure layer.


  • Terraform Files
    - Can be a single file or split into multiple .tf or .tf.json files; any other file extension is ignored.
    - Files are merged in alphabetical order, but resource definition order doesn't matter (subfolders are not read).
    - Common configurations have three types of .tf files plus a state file:
      1- main.tf: the Terraform declaration code (configuration). The file name can be anything you choose
      2- variables.tf: resource variables needed for the deploy
      3- outputs.tf: displays the resources' details at the end of the deploy
      4- terraform.tfstate: keeps track of the state of the stack (resources) after each terraform apply run

  • Terraform resource declaration syntax looks like this:

    Component "Provider_Resource_type" "MyResource_Name" {
      Attribute1 = value ..
      Attribute2 = value ..
    }
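    For example, a hypothetical resource group declaration (illustrative names, not taken from the lab repo) would be:

    ```hcl
    # "azurerm_resource_group" is the provider resource type and "rg" is the
    # local name used to reference this resource elsewhere in the code.
    resource "azurerm_resource_group" "rg" {
      name     = "Demo-group"   # illustrative value
      location = "eastus"
    }
    ```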

  • Where do I find a good Azure deployment sample?
  • The easiest way is to create/locate a resource from the console and then use Terraform's import function to generate each of the related components in HCL format (vnet, instance, subnet, etc.) based on their IDs.

    Example for a vnet >>
    1- Create a shell resource declaration for the vnet in a file called vnet.tf
    2- Get the ID of the vnet resource from your Azure portal
    3- Run terraform import, then run terraform show to extract the vnet's full declaration from Azure into the same file (vnet.tf)
    4- Now you can remove the id attribute along with all the non-required attributes (do that for each resource)

    1- # vi vnet.tf
       provider "azurerm" {
         features {}
       }
       resource "azurerm_virtual_network" "terra_vnet" {
       }
    2- # terraform import azurerm_virtual_network.terra_vnet /subscriptions/00*/resourceGroups/Mygroup/providers/Microsoft.Network/virtualNetworks/terra_vnet
    3- # terraform show -no-color > vnet.tf

    Note:
    If you want to import all the existing resources in your account in bulk (not one by one), there are tools like py-az2tf or Terraformer, which can import both code and state from your Azure account automatically.

    Terraform lab content: I have deliberately split this lab in two:

    • Vnet deployment: to grasp the basics of a single-resource deployment.
    • Instance deployment: includes the instance provisioning configured as a web server (includes the above vnet).


    I. Terraform setup

       Although I’m on Windows, I tried the lab using a WSL (Ubuntu) terminal (but the same applies to Mac).

       Azure authentication

      To authenticate with your Azure account, Terraform only needs you to log in using the az CLI. This can be done by running the az CLI commands described here >> az-cli installation

       Assumptions

      - I will assume that the below authentication option is present on your workstation:
    • AZ CLI default profile configured with your Azure credentials. Refer to my blog post for more details
    • $ az account show
      EnvironmentName    HomeTenantId        IsDefault    Name          State
      -----------------  ------------------  -----------  ------------  -------
      AzureCloud         00000000-00000000…  True         BrokeDba Lab  Enabled
      - I’ll also assume the presence of an SSH key pair to attach to your VM instance. If not, here is a command to generate a PEM-based key pair.
      $  ssh-keygen -P "" -t rsa -b 2048 -m pem -f ~/id_rsa_az
      Generating public/private rsa key pair.
      Your identification has been saved in /home/brokedba/id_rsa_az.
      Your public key has been saved in /home/brokedba/id_rsa_az.pub.


    II. Clone the repository

      • Clone the lab repository and move into the Azure provider directory (adjust the URL if you use your own fork):
        $ git clone https://github.com/brokedba/terraform-examples.git
        $ cd terraform-examples/terraform-provider-azure

    III. Provider setup

    1. INSTALL AND SETUP THE AZURE PROVIDER

      • cd into terraform-provider-azure/create-vnet where our configuration resides (i.e. the vnet)
        $ cd /mnt/c/Users/brokedba/azure/terraform-examples/terraform-provider-azure/create-vnet 
      • The Azure provider plugin will be automatically installed by running "terraform init".
      • $ terraform init
          Initializing the backend...
        
          Initializing provider plugins...
          - Finding latest version of hashicorp/azurerm...
          - Installing hashicorp/azurerm v2.80.0...
          * Installed hashicorp/azurerm v2.80.0 (signed by HashiCorp)
        Terraform has been successfully initialized!

        $ terraform --version
          Terraform v1.0.3
          on linux_amd64
          + provider registry.terraform.io/hashicorp/azurerm v2.80.0
      • Let's see what's in the create-vnet directory. Here, only *.tf files matter.
      • $ tree
          .
          |-- outputs.tf        ---> displays resources detail after the deploy
          |-- variables.tf      ---> Resource variables needed for the deploy   
          |-- vnet.tf           ---> Our vnet terraform declaration 
        

      IV. Partial Deployment

        DEPLOY A SIMPLE VNET

          • Once the authentication is set up and the provider installed, we can run the terraform plan command to create an execution plan (a quick dry run to check the desired state/actions). Specify the prefix for your resource names.
            $ terraform plan
            var.prefix
              The prefix used for all resources in this example
              Enter a value: Demo

            Terraform used selected providers to generate the following execution plan.
            Resource actions are indicated with the following symbols:
              + create
            ------------------------------------------------------------------------
            Terraform will perform the following actions:

            # azurerm_network_security_group.terra_nsg will be created
            + resource "azurerm_network_security_group" "terra_nsg" {
                ..
                + security_rule = [
                    + destination_port_ranges = ["22", "3389", "443", "80"]
                    + direction               = "Inbound"
                    + name                    = "Inbound HTTP access"
              }

            # azurerm_resource_group.rg will be created
            + resource "azurerm_resource_group" "rg" {..}

            # azurerm_subnet.terra_sub will be created
            + resource "azurerm_subnet" "terra_sub" {
                ..
                + address_prefixes = ["192.168.10.0/24"]
              }

            # azurerm_subnet_network_security_group_association.nsg_sub will be created
            + resource "azurerm_subnet_network_security_group_association" "nsg_sub" {..}

            # azurerm_virtual_network.terra_vnet will be created
            + resource "azurerm_virtual_network" "terra_vnet" {
                ...
                + address_space = ["192.168.0.0/16"]
                ...}

            Plan: 5 to add, 0 to change, 0 to destroy.

            - The output being too verbose, I deliberately kept only the relevant attributes of the vnet resource plan
          • Next, we can run "terraform apply" to provision the resources that make up our vnet (listed in the plan)
          • $ terraform apply -auto-approve
            azurerm_resource_group.rg: Creating...
            azurerm_virtual_network.terra_vnet: Creating...
            azurerm_network_security_group.terra_nsg: Creating...
            azurerm_subnet.terra_sub: Creating...
            ...
            Apply complete! Resources: 5 added, 0 changed, 0 destroyed.

            Outputs:
            Subnet_CIDR = 192.168.10.0/24
            Subnet_Name = internal
            vnet_CIDR = 192.168.0.0/16
            vnet_Name = Terravnet
            vnet_dedicated_security_group_Name = "Demo-nsg"
            vnet_dedicated_security_ingress_rules = toset([
              {…
                "access" = "Allow"
                "description" = "RDP-HTTP-HTTPS ingress trafic"
                "destination_port_ranges" = toset([
                  "22",
                  "3389",
                  "443",
                  "80",
                ])

          Observations:

          - The deploy started by loading the resource variables in variables.tf, which allowed the execution of vnet.tf
          - Finally, Terraform fetched the attributes of the created resources listed in outputs.tf

          Note: We’ll now destroy the vnet, as the next instance deploy contains the same vnet specs.

            $ terraform destroy -auto-approve
            
            Destroy complete! Resources: 5 destroyed.
            


        V. Full deployment (Instance)

        1. OVERVIEW

          • After our small intro to vnet creation, let's launch a VM and configure nginx on it in one command.
          • First we need to switch to the second directory terraform-provider-azure/launch-instance/.
            Here's the content:
          • $ tree ./terraform-provider-azure/launch-instance
            .
            |-- cloud-init                   --> subfolder
            |   |-- centos_userdata.txt      --> script to configure a web server + modify the index
            |   |-- sles_userdata.txt        --> for SUSE
            |   |-- ubto_userdata.txt        --> for Ubuntu
            |   |-- el_userdata.txt          --> for Enterprise Linux distros
            |   `-- Win_userdata.ps1         --> for Windows
            |-- compute.tf        ---> instance-related terraform configuration
            |-- outputs.tf        ---> displays the resources' details at the end of the deploy
            |-- variables.tf      ---> resource variables needed for the deploy
            `-- vnet.tf           ---> the same vnet we deployed earlier

            Note: As you can see, we have two additional files and one subfolder. compute.tf is where the compute instance and all its attributes are declared. All the other .tf files come from my vnet example, with some additions to variables.tf and outputs.tf.

          • Cloud-init is a cloud instance initialization method that executes tasks upon instance startup; the script is provided through the user_data entry of the Azure VM resource definition (see below). I have created cloud-init files for 6 OSes (CentOS, Ubuntu, Windows, RHEL, OL, SUSE)
            ...
            variable "user_data" { default = "./cloud-init/centos_userdata.txt" }

            $ vi compute.tf
            resource "azurerm_linux_virtual_machine" "terravm" {
            ...
            custom_data    = base64encode("${file(var.user_data)}")
            ...
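            For reference, what base64encode(file(var.user_data)) does to the cloud-init script can be reproduced in shell (sketch using a throwaway file, not the lab's actual userdata):

            ```shell
            # Write a tiny stand-in userdata file, then base64-encode it the way
            # Terraform's base64encode(file(...)) does before sending it to Azure.
            printf 'echo hello from cloud-init\n' > /tmp/userdata.txt
            base64 -w0 /tmp/userdata.txt
            ```

            Azure decodes this payload and hands it to cloud-init at first boot.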
          • In my lab, I used cloud-init to install nginx and write an HTML page that replaces the home page at startup.
          • Make sure your public SSH key is in your home directory, or just modify the path below (see variables.tf)
          • $ vi compute.tf
            resource "azurerm_linux_virtual_machine" "terravm" {
            ..
            admin_ssh_key {
              username   = var.os_publisher[var.OS].admin
              public_key = file("~/id_rsa_az.pub")  ## Change me
            }
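            Putting the fragments above together, a minimal azurerm_linux_virtual_machine declaration looks roughly like this (resource names are taken from the plan output; the os_disk settings and image sku are assumptions, so check compute.tf in the repo for the real values):

            ```hcl
            # Sketch of the VM resource; values marked "assumed" are illustrative.
            resource "azurerm_linux_virtual_machine" "terravm" {
              name                  = "TerraDemo-vm"
              resource_group_name   = azurerm_resource_group.rg.name
              location              = azurerm_resource_group.rg.location
              size                  = "Standard_B1s"
              admin_username        = "centos"
              network_interface_ids = [azurerm_network_interface.Terranic.id]

              admin_ssh_key {
                username   = "centos"
                public_key = file("~/id_rsa_az.pub")   # change me
              }

              os_disk {                                # assumed values
                caching              = "ReadWrite"
                storage_account_type = "Standard_LRS"
              }

              source_image_reference {
                publisher = "OpenLogic"
                offer     = "CentOS"
                sku       = "7_9"                      # assumed sku
                version   = "latest"
              }

              # cloud-init script passed at boot
              custom_data = base64encode(file(var.user_data))
            }
            ```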

        2. LAUNCH THE INSTANCE

          • Once in the "launch-instance" directory, you can run the plan command to validate the 9 resources required to launch our VM instance. The output has been truncated to reduce verbosity.
          • $ terraform plan
            Terraform used selected providers to generate the following execution plan.
            Resource actions are indicated with the following symbols:
              + create
            ------------------------------------------------------------------------
            Terraform will perform the following actions:
            ...
            # Vnet declaration (see previous vnet deploy)
            ...
            # azurerm_linux_virtual_machine.terravm will be created
            + resource "azurerm_linux_virtual_machine" "terravm" {
                + ...
                + admin_username = "centos"
                + location       = "eastus"
                + size           = "Standard_B1s"
                + name           = "TerraDemo-vm"
                + private_ip     = "192.168.10.51"
                + admin_ssh_key {
                    + public_key = <<-EOT ssh-rsa AAAABxxx…*
                      EOT
                    + username   = "centos"
                  }
                + custom_data = (sensitive value)
                + os_disk {…}
                + source_image_reference {
                    + offer     = "CentOS"
                    + publisher = "OpenLogic"
                    + version   = "latest"
                  }
              }
            # azurerm_network_interface.Terranic will be created
            # azurerm_network_interface_security_group_association.terra_assoc_pubip_nsg will be created
            # azurerm_network_security_group.terra_nsg will be created
            # azurerm_public_ip.terrapubip will be created
              ...

            Plan: 9 to add, 0 to change, 0 to destroy.
          • Now let’s launch our CentOS 7 VM using terraform apply (I left a map of different OS IDs in variables.tf you can choose from)
            $ terraform apply -auto-approve
            ...
            azurerm_resource_group.rg: Creating...
            azurerm_virtual_network.terra_vnet: Creating...
            azurerm_network_security_group.terra_nsg: Creating...
            azurerm_subnet.terra_sub: Creating...
            azurerm_subnet_network_security_group_association.nsg_sub: Creating...
            azurerm_network_interface.Terranic: Creating...
            azurerm_linux_virtual_machine.terravm: Creating...
            ...
            Apply complete! Resources: 9 added, 0 changed, 0 destroyed.

            Outputs: ...
            Subnet_Name = "internal"
            vnet_CIDR = 192.168.0.0/16
            Subnet_CIDR = 192.168.10.0/24
            vnet_dedicated_security_ingress_rules = toset([
              {…
                "access" = "Allow"
                "description" = "RDP-HTTP-HTTPS ingress trafic"
                "destination_port_ranges" = toset([
                  "22",
                  "3389",
                  "443",
                  "80",
                ])
            ]
            SSH_Connection = ssh connection to instance TerraCompute ==> sudo ssh -i ~/id_rsa_az centos@’’ <---- IP not displayed due to a bug in the az provider
            private_ip = "192.168.10.4"
            public_ip = ""

            • Once the instance is provisioned, just copy the public IP address (i.e. 52.191.26.102) into Chrome and voila!
            • Although a bug in azurerm_linux_virtual_machine doesn’t show the public IP, just go to the console and copy-paste the IP into your browser
            • You can also tear down this configuration by simply running terraform destroy from the same directory

            Tips

            • You can fetch any of the attributes specified in outputs.tf using the terraform output command, i.e.:
            • $ terraform output SSH_Connection
              ssh connection to instance TerraCompute ==> sudo ssh -i ~/id_rsa_az centos@ ’’
            • Terraform console:
              Although Terraform is a declarative language, there are still myriads of functions you can use to process strings/numbers/lists/mappings, etc. There is an excellent all-in-one script with examples of most Terraform functions >> here
            • I added cloud-init files for different distros you can play with by adapting var.user_data & var.OS 
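            To give a taste of those functions, here are a few evaluated in HCL; the values in the comments are what terraform console would return (the local names are mine):

            ```hcl
            locals {
              shout  = upper("demo")                               # "DEMO"
              web_ip = cidrhost("192.168.10.0/24", 4)              # "192.168.10.4"
              rules  = formatlist("port %s is open", ["22", "80"]) # ["port 22 is open", "port 80 is open"]
            }
            ```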


            Differences between Azure/AWS & things I wish azure kept simple

            • Network: In Azure, every subnet is a public subnet; as soon as you associate a public IP with a VM’s VNIC, you'll magically have internet access. An internet gateway is not needed here because system routes take care of that.
            • CIDR: the allowed range in Azure is slightly larger than in AWS (/8 to /29, versus /16 to /28 in AWS).
            • ID: Azure doesn’t provide regular alphanumeric IDs for its resources but a sort of path-based identification (see the subnet ID below)
              /subscriptions/xx/resourceGroups/my_group/providers/Microsoft.Network/virtualNetworks/MY-VNET/subnets/My_SUBNET

            • Naming: case-insensitive but unique; a resource group can’t have 2 resources of the same type with the same name
            • Bummer: I wish Azure had kept one Terraform resource to create VMs instead of having one for Linux and one for Windows since v2.0. This implies extra blocks/modules if we want to do both in one stack instead of leveraging loops. The classic resource is not handy for Linux either.
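            To illustrate the extra blocks this split implies, a common workaround (a sketch under my own naming, not code from the lab repo) is a conditional count on each resource type:

            ```hcl
            # Only one of the two resources gets created, driven by a boolean variable.
            variable "is_windows" { default = false }

            resource "azurerm_linux_virtual_machine" "vm" {
              count = var.is_windows ? 0 : 1
              # ... Linux-specific attributes (admin_ssh_key, etc.)
            }

            resource "azurerm_windows_virtual_machine" "vm" {
              count = var.is_windows ? 1 : 0
              # ... Windows-specific attributes (admin_password, winrm_listener, etc.)
            }
            ```

            The two addresses remain distinct (azurerm_linux_virtual_machine.vm vs azurerm_windows_virtual_machine.vm), so both blocks can share the same local name.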

               CONCLUSION

            • We have demonstrated in this tutorial how to quickly deploy a web server instance using Terraform in Azure and leverage cloud-init to configure the VM during bootstrap.
            • Remember that all the attributes used in this exercise can be modified in the variables.tf file.
            • The route table and internet gateway settings in our code were replaced by the public IP and VNICs
            • I found Azure VM spin-up time quite slow compared to AWS or OCI; not sure if it’s a regional thing
            • Improvement: I will look to improve the compute.tf code to allow provisioning both Windows & Linux resource types, and validate that userdata works for Windows too, unlike with the az CLI.
              Another improvement can be made in the display of the security rules using formatlist.
              Stay tuned!

        Thank you for reading!

        Monday, September 20, 2021

        Vagrant tips: How to automatically adjust the Time Zone of your vagrant box


        Intro

        I was playing with my Linux vagrant boxes lately and realized I was always spinning up third-party boxes that had a totally different time zone (Europe/UK mostly). Up until now I never bothered, as I have used a bunch of those from Vagrant Cloud for years. I would sometimes notice it and think of using a shell command to fix it during the bootstrap, but I never had the time. However, I was heavily debugging some logs this week and the wrong time zone started to get annoying, so I decided to look for a permanent fix.

        I. Vagrant Time zone plugin to the rescue 

        • Solution

          There is no need to add a random shell script to the shell provisioning area of your Vagrantfile. Teemu Matilainen has you covered with his sweet Ruby plugin called vagrant-timezone, which does just that.
               



        • How does it work
          Install the plugin:
        • C:\Users\brokedba> vagrant plugin install vagrant-timezone
          Installing the 'vagrant-timezone' plugin. This can take a few minutes...
          Fetching vagrant-timezone-1.3.0.gem
          Installed the plugin 'vagrant-timezone (1.3.0)'!


            Configuration

          To configure the time zone for all Vagrant VMs, add the following to $HOME/.vagrant.d/Vagrantfile, or to a project-specific Vagrantfile (see the Windows example below)

          C:\Users\brokedba\.vagrant.d>vagrant init
          --- Add the IF block below in the master or each build’s Vagrantfile

          Vagrant.configure("2") do |config|
            if Vagrant.has_plugin?("vagrant-timezone")
              config.timezone.value = "America/Toronto"
            end
            # ... other stuff
          end

          You can of course choose your own Time Zone from this TZ database list   


          - The configuration is applied on vagrant up and vagrant reload actions.
          Note that no services are restarted automatically, so they may keep using the old time zone information.
           

          - This plugin requires Vagrant 1.2 or newer.


        II. Usage and prerequisites




        III. Compatibility

        • Linux guests.
          • Arch
          • CoreOS
          • Debian (and derivatives)
          • Gentoo
          • RedHat (and derivatives)
        • BSD guests:

          • FreeBSD
          • NetBSD
          • OpenBSD
          • OS X
        • Windows






        Conclusion:


        This will hopefully encourage you to use and enjoy vagrant boxes more often, without being stuck with the original time zone.

        Thanks for reading

        Monday, September 6, 2021

        Google SDK (CLI for GCP) installation and few CLI examples


        Intro

        Google, like most cloud providers today, offers a simple cloud shell solution with all the required tools to connect to their platform securely using APIs. However, if you still want to have it on your laptop along with other development tools, you can always install the Google Cloud SDK (especially if it’s for educational purposes).

        Cloud SDK includes the gcloud, gsutil, and bq command-line tools, plus a few components that aren’t installed by default. gcloud is the main command line used to manage cloud resources and enable services.


        Requirement


        Whether on windows or Linux, the basic installation and use of Cloud SDK will require 2 elements:


          Note: To access the GCP APIs using a specific language (like C++, ruby etc), you can download the Cloud Client Libraries.

        I. Cloud SDK Installation

        • Windows

          1- Download and execute the following Cloud SDK installer (current version: 355)
          2- Follow the on-screen instructions (the installer is also used to upgrade existing installations).
            

          3- Run the version command to confirm that Cloud SDK was installed correctly.
               

          C:\Users\brokedba> gcloud --version
          Google Cloud SDK 355.0.0
          bq   2.0.71
          core 2021.08.27
          gsutil 4.67
          
          C:\Users\brokedba> where gcloud
          > C:\Program Files (x86)\Cloud SDK\google-cloud-sdk\bin\gcloud
          > C:\Program Files (x86)\Cloud SDK\google-cloud-sdk\bin\gcloud.cmd


        • Note: The installation can also be done through PowerShell in a one-liner command (the gcloud, bq, and gsutil commands can run from either Command Prompt or PowerShell).
          PS C:\Users\brokedba> (New-Object Net.WebClient).DownloadFile("https://dl.google.com/dl/cloudsdk/channels/rapid/GoogleCloudSDKInstaller.exe", "$env:Temp\GoogleCloudSDKInstaller.exe")

        • Linux
          There is either an all-in-one install using packages, or an interactive shell script. Let’s start with the script.
        • brokedba~$ curl -sL https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-355.0.0-linux-x86_64.tar.gz| sudo tar -xz && sudo bash ./google-cloud-sdk/install.sh

          -- Workflow
          Modify profile to update your $PATH and enable shell command
          completion?

          Do you want to continue (Y/n)?  y

          The Google Cloud SDK installer will now prompt you to update an rc file to bring the Google Cloud CLIs into your environment.

          Enter a path to an rc file to update, or leave blank to use
          [/home/brokedba/.bashrc]:

          brokedba~$ gcloud --version
          Google Cloud SDK 355.0.0
          bq 2.0.71
          core 2021.08.27
          gsutil 4.67


          Ubuntu
          Option A

          We can use apt-get and install it as a package:

          1. Add the Cloud SDK distribution URI as a package source

          brokedba~$ echo "deb https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list

          2. Import the GCP public key 

          brokedba~$ curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

          3. Update and install the Cloud SDK

          brokedba~$ sudo apt-get update && sudo apt-get install google-cloud-sdk

          Option B

          If you are fine with just the core components (gcloud, gsutil, bq, kubectl, anthoscli, ..) you can install the snap package, which also handles auto-updates.

          brokedba~$ snap install google-cloud-sdk --classic


          ► RedHat, Fedora, CentOS, Oracle Linux


          # RHEL/OL/CENTOS (7,8+), Fedora 24+
          # – Create a DNF repo with the Cloud SDK information
          [@localhost]$ sudo tee -a /etc/yum.repos.d/google-cloud-sdk.repo << EOM
          name=Google Cloud SDK
          baseurl=https://packages.cloud.google.com/yum/repos/cloud-sdk-el8-x86_64
          enabled=1
          gpgcheck=1
          repo_gpgcheck=0
          gpgkey=
          https://packages.cloud.google.com/yum/doc/yum-key.gpg
          https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
          EOM

          # Install CLOUD SDK rpm package
          [r@localhost]# sudo yum install google-cloud-sdk

        II. Initialize gcloud


        Once your GCP Free Tier account is created and Cloud SDK installed, all you need to do is run the gcloud init command to:

        1- Authorize Cloud SDK to access the GCP platform using your user account
        2- Set up a new configuration including proper parameters like the current project and the default GCE region/zone, etc.




        If you don’t want the browser to auto-launch for authorization, you can use --console-only or --no-launch-browser


          • The interactive workflow will ask you to open the displayed link in a browser after entering your user credentials


          • When you click allow, a code will be provided which you will paste into your terminal to complete the authorization
          • Once authenticated, you will be asked to create a project if none exists in your account. The project_id is globally unique.
          • Part 2
            Enter verification code: 4/1AX4XfWhnJLpVgMtjxxxx..
            You are logged in as: [bdba@gmail.com].
            This account has no projects. Would you like to create one? (Y/n)?  y
            Enter a Project ID. Note that a Project ID CANNOT be changed later.
            Project IDs must be 6-30 characters in length and start with a lowercase letter. brokedba2000
            Waiting for [operations/cp.9218677272527086685] to finish...done.
            Your current project has been set to: [brokedba2000].

          • If you get the error “Callers must accept Terms of Service” while creating the project, make sure you accepted the terms in the console.

          • You can now verify your default configuration after the initialization
            $ gcloud config list
            [compute]
            region = us-east1
            zone = us-east1-b
            [core]
            account =
            bdba@gmail.com
            disable_usage_reporting = True
            project = brokedba2000
            Your active configuration is: [default]


          III. Test your first API request


          Command structure: gcloud commands are composed of the below components

          gcloud <--global flags> [service|product] <group|area> <command> <--flags> <parameters>

          group may be
          access-approval | access-context-manager | active-directory | ai | ai-platform | anthos | api-gateway | apigee | app | artifacts | asset | assured | auth | bigtable | billing | builds | cloud-shell | components | composer | compute | config | container | data-catalog | database-migration | dataflow | dataproc | datastore | debug | deployment-manager | dns | domains | emulators | endpoints | essential-contacts | eventarc| filestore | firebase | firestore | functions | game | healthcare | iam | iap | identity | iot | kms | logging | memcache | metastore | ml | ml-engine | monitoring | network-management | network-security | notebooks | org-policies | organizations | policy-intelligence | policy-troubleshoot | privateca| projects | pubsub | recaptcha | recommender | redis | resource-manager | resource-settings | run | scc | scheduler | secrets | service-directory | services | source | spanner | sql | tasks | topic | workflows | workspace-add-ons
          command may be
          cheat-sheet | docker | feedback | help | info | init | survey | version

          Optional flags: --account | --billing-project | --configuration | --project | --flatten | --format | --filter | --quiet | --flags-file ..

          Topics: 
          `gcloud topic` provides supplementary help for topics not directly associated with individual commands.

          $ gcloud topic [TOPIC_NAME]
          Available commands for gcloud topic:
            accessibility        Reference for `Accessibility` features.
            arg-files            Supplementary help for arg-files to be used with *gcloud firebase test*.
            cli-trees            CLI trees supplementary help.
            client-certificate   Client certificate authorization supplementary help.
            command-conventions  gcloud command conventions supplementary help.
            configurations       Supplementary help for named configurations.
            datetimes            Date/time input format supplementary help.
            escaping             List/dictionary-type argument escaping supplementary help.
            filters              Resource filters supplementary help.
            flags-file           --flags-file=YAML_FILE supplementary help.
            formats              Resource formats supplementary help.
            gcloudignore         Reference for `.gcloudignore` files.
            projections          Resource projections supplementary help.
            resource-keys        Resource keys supplementary help.
            startup              Supplementary help for gcloud startup options.
            uninstall            Supplementary help for uninstalling Cloud SDK.
        • Result-related flags:

          1- “--format”: formats gcloud output into JSON, YAML, table, raw value, or CSV, including projections.
          2- “--filter”: picks the rows to return in the output, in combination with --format.
          Example > list projects that were created after Jan 1st 2021 and only show 3 specific columns
        • $ gcloud projects list --format="table(projectNumber,projectId,createTime)"     --filter="createTime>2021-01-01"
          PROJECT_NUMBER  PROJECT_ID      CREATE_TIME
          260799562386    brokedba2000  2021-09-06T22:57:41.421Z
        • Command versions
          gcloud has different release tracks for its sets of commands: “alpha” and “beta”. Alpha means the feature is typically not ready for production and might still be actively developed. Beta, on the other hand, is normally a completed feature that is being tested for production readiness.
        • Examples 

          There are a few requests that you can run to practice with gcloud. The commands below are good examples to start with.

        • List GCP regions in the US by selecting 3 fields in a tabular format and filtering the content on a specific pattern “us-”

          $ gcloud compute regions list  --format="table[box](Name,CPUS,status)"    --filter="name~us-"

          +-----------------------------+
          ¦     NAME    ¦ CPUS ¦ STATUS ¦
          ¦-----------------------------¦
          ¦ us-central1 ¦ 0/8  ¦ UP     ¦
          ¦ us-east1    ¦ 0/8  ¦ UP     ¦
          ¦ us-east4    ¦ 0/8  ¦ UP     ¦
          ¦ us-west1    ¦ 0/8  ¦ UP     ¦
          ¦ us-west2    ¦ 0/8  ¦ UP     ¦
          ¦ us-west3    ¦ 0/8  ¦ UP     ¦
          ¦ us-west4    ¦ 0/8  ¦ UP     ¦
          +-----------------------------+

        • Create a new project and assign it to the current configuration (note that project IDs must be lowercase)
        • $ gcloud projects create my-new-project --name="MY new LAB"  --labels=type=lab
          $ gcloud config set project my-new-project
          -- Check project
          $ gcloud compute project-info describe --project my-new-project
        • Create and list Current vms in the current project :

          $ gcloud compute instances create myvm2 --machine-type=f1-micro --image-family debian-10 --image-project debian-cloud

          $ gcloud compute instances list  --filter="zone~ us-east1 OR -machineType:f1-micro"
          NAME   ZONE        MACHINE_TYPE  PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP    STATUS
          myvm   us-east1-b  f1-micro                   10.142.0.2   34.139.111.13  RUNNING

        • Create and list a bucket in google storage :
        • $ gsutil mb -l us-east1 gs://omarlittle
          $ gsutil ls gs://omarlittle/**
          ..
        • Note: You can also display help on popular commands within a service or group/area, e.g.:
        • $ gcloud help compute instances create

          NAME
              gcloud compute instances create - create Compute Engine virtual machine
             instances


            Enable APIs or install components
         
          Not all APIs are enabled by default, and not all Cloud SDK components are installed by default; manual enabling is necessary.
          -- APIs
          $ gcloud services list available
          $ gcloud services enable  compute.googleapis.com

          -- components
          $ gcloud components list
          $ gcloud components update
          $ gcloud components install COMPONENT_ID


        Conclusion:


        In this tutorial we learned how to install and configure Cloud SDK. We also described the command syntax and tried a few requests using gcloud and gsutil. Feel free to consult the gcloud Command Reference for more details and examples of gcloud requests.

        Thanks for reading