
Sunday, October 17, 2021

Terraform for dummies part 4: Launch a VM with a static website on GCP

Intro

After AWS, Oracle Cloud, and Azure, GCP is the 4th cloud platform in our Terraform tutorial series. We will describe what it takes to authenticate and provision a compute engine instance using the GCP Terraform provider. The instance will also serve an nginx website linked to its public IP. If you want to know about the differences GCP brings in terms of networking, it's covered in a separate post on my blog.
 
Note: GCP Terraform provider authentication was painful to get a hold of and counter-intuitive compared to other cloud platforms. I wasted a lot of time just trying to figure out whether I could avoid hardcoding the project ID.

Here's a direct link to the GitHub repo for this lab => terraform-examples/terraform-provider-gcp

Content:
I. Terraform setup
II. Clone the repository
III. Provider setup
IV. Partial deployment
V. Full deployment
Tips & Conclusion

Overview and Concepts

Topology

The following illustration shows the layers involved between your workstation and the GCP cloud when running Terraform actions, along with the instance attributes we will be provisioning.

[Topology diagram]

Before starting the tutorial proper, besides describing my GitHub repo, I'll briefly go over a few principles.

  • Terraform files
    - Can be a single file or split into multiple .tf or .tf.json files; any other file extension is ignored.
    - Files are merged in alphabetical order, but resource definition order doesn't matter (subfolders are not read).
    - Common configurations have 3 types of tf files and a state file:
      1- main.tf: the Terraform declaration code (configuration). The file name can be anything you choose.
      2- variables.tf: resource variables needed for the deploy.
      3- outputs.tf: displays the resources' details at the end of the deploy.
      4- terraform.tfstate: keeps track of the state of the stack (resources) after each terraform apply run.

  • Terraform resource declaration syntax looks like this:
    Component "Provider_Resource_type" "MyResource_Name" {
        Attribute1 = value ..
        Attribute2 = value ..
    }

  • Where do I find a good GCP deployment sample?
  • The easiest way is to create/locate an instance from the console and then use terraform import to generate each of the related components in HCL format (VPC, instance, subnet, etc.) based on their id.

    Example for a VPC >>
    1- Create a shell resource declaration for the VPC in a file called vpc.tf
    2- Get the id of the VPC resource from your GCP portal
    3- Run terraform import, then run terraform show to extract the full VPC declaration from GCP into the same file (vpc.tf)
    4- Now you can remove the id attribute along with all the non-required attributes, keeping only what's needed to create a VPC resource (do that for each resource)
    1- # vi vpc.tf
       provider "google" {
         project = "PROJECT_ID"   # the google provider has no "features" block; set a project here or via GOOGLE_PROJECT
       }
       resource "google_compute_network" "terra_vpc" {
       }
    2- # terraform import google_compute_network.terra_vpc {{project}}/{{name}}
    3- # terraform show -no-color > vpc.tf

    Note:
    If you want to import all the existing resources in your account in bulk, terraformer can help import both code and state from your GCP account automatically (see the sketch below).
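    A bulk import could look roughly like the following; the flag names come from terraformer's documentation, and the resource list, project, and region are placeholders to adapt to your own account (check terraformer's README for the exact resource names it supports):

    # hypothetical terraformer run: adjust --resources/--projects/--regions to your setup
    $ terraformer import google --resources=networks,subnetworks,firewall,instances \
        --projects=brokedba2000 --regions=us-east1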

    Terraform lab content: I purposely split this lab in two for more clarity.

    • VPC deployment: to grasp the basics of a single resource deployment.
    • Instance deployment: includes the instance provisioning configured as a web server (includes the above VPC).


    I. Terraform setup

         I tried this lab using a WSL (Ubuntu) terminal on Windows, but the same applies to Mac.

       GCP authentication (least user friendly)

      To authenticate to GCP with Terraform you will need gcloud, a service account credentials key file, and the project ID.

       Prerequisites

      Using dedicated service accounts to authenticate with GCP is the recommended practice (not user accounts or API keys).
    • gcloud authentication configured with your GCP credentials. Refer to my blog post for more details.
    • $ gcloud auth login --activate

      $ gcloud config list --format='table(account,project)'
      ACCOUNT  PROJECT
      -------------- -------------
      bdba@gmail.com  brokedba2000
      Service account: either create a service account with the "owner" role in the console, or run the CLI commands below.
      1 -- Create the service account
      $ gcloud iam service-accounts create terraform-sa --display-name="Terra_Service"
      $ gcloud iam service-accounts list --filter="email~terraform" --format='value(email)'

      2 -- Bind it to a project and add the owner role
      $ gcloud projects add-iam-policy-binding PROJECT_ID --member="serviceAccount:SA_EMAIL" --role="roles/owner"

      3 -- Generate the key file for the service account (see the environment variable sketch after this list)
      $ gcloud iam service-accounts keys create ~/gcp-key.json --iam-account=SA_EMAIL
      - I'll also assume the presence of an ssh key pair to attach to your VM instance. If you don't have one, here is a command to generate a PEM-based key pair (the rest of the lab references ~/id_rsa_gcp):
      $ ssh-keygen -P "" -t rsa -b 2048 -m pem -f ~/id_rsa_gcp
      Generating public/private rsa key pair.
      Your identification has been saved in /home/brokedba/id_rsa_gcp.
      Your public key has been saved in /home/brokedba/id_rsa_gcp.pub.
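      Once the key file exists, one way to avoid sprinkling the credentials and project ID across the configuration is to export them as environment variables that the google provider reads. The variable names below are the standard ones; adapt the values to your own key path and project:

      $ export GOOGLE_APPLICATION_CREDENTIALS=~/gcp-key.json   # service account key generated above
      $ export GOOGLE_PROJECT=brokedba2000                     # default project picked up by the provider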


    II. Clone the repository
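
    Assuming the repository layout used in the rest of this lab, cloning it locally would look like the following (the URL is inferred from the repo name at the top of the post, so double-check it on GitHub):

    $ git clone https://github.com/brokedba/terraform-examples.git
    $ cd terraform-examples/terraform-provider-gcp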



    III. Provider setup

    1. INSTALL AND SETUP THE GCP PROVIDER

      • cd into terraform-provider-gcp/create-vpc where our configuration resides (i.e. the VPC)
        $ cd /brokedba/gcp/terraform-examples/terraform-provider-gcp/create-vpc 
      • The GCP provider plugin will be automatically installed by running "terraform init".
      • $ terraform init
          Initializing the backend...
        
          Initializing provider plugins...
          - Finding latest version of hashicorp/google...
          - Installing hashicorp/google v3.88.0...
          * Installed hashicorp/google v3.88.0 (signed by HashiCorp)
        Terraform has been successfully initialized!
        $ terraform --version
        Terraform v1.0.3
        on linux_amd64
        + provider registry.terraform.io/hashicorp/google v3.88.0
      • Let's see what's in the create-vpc directory. Here, only the *.tf files matter (a provider version-pinning sketch follows the listing).
      • $ tree
          .
          |-- outputs.tf        ---> displays resources detail after the deploy
          |-- variables.tf      ---> Resource variables needed for the deploy   
          |-- vpc.tf            ---> Our vpc terraform declaration 
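
        Since terraform init grabbed the latest google provider (v3.88.0 at the time), pinning the version avoids surprises on future runs. Below is a minimal sketch of what such a block could look like; it is not a copy of the repo's files, and var.project / var.region are illustrative variable names:

        terraform {
          required_providers {
            google = {
              source  = "hashicorp/google"
              version = "~> 3.88"          # pin to the series used in this lab
            }
          }
        }

        provider "google" {
          project = var.project            # or rely on the GOOGLE_PROJECT environment variable
          region  = var.region
        }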
        

      IV. Partial Deployment

        DEPLOY A SIMPLE VPC

          • Once the authentication is set up and the provider installed, we can run the terraform plan command to create an execution plan (a quick dry run to check the desired end state).
            $ terraform plan
            var.prefix
              The prefix used for all resources in this example
              Enter a value: Demo

            Terraform used selected providers to generate the following execution plan.
            Resource actions are indicated with the following symbols: + create
            ------------------------------------------------------------------------
            Terraform will perform the following actions:

            # google_compute_network.terra_vpc will be created
            + resource "google_compute_network" "terra_vpc" {..}

            # google_compute_firewall.web-server will be created
            + resource "google_compute_firewall" "web-server" {..
                + name = "allow-http-rule"
                + allow {
                    + ports    = ["80", "22", "443", "3389"]
                    + protocol = "tcp"
                  }
            ...
            # google_compute_subnetwork.terra_sub will be created
            + resource "google_compute_subnetwork" "terra_sub" {..
                + ip_cidr_range = "192.168.10.0/24"
            ...
              }
            Plan: 3 to add, 0 to change, 0 to destroy.
            - The output being too verbose, I deliberately kept only the relevant attributes for the VPC resource plan (a reconstructed sketch of the matching configuration follows).
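            For reference, the three resources in the plan roughly correspond to a configuration like the one below. This is a sketch reconstructed from the plan and outputs shown in this post, not a copy of the repo's vpc.tf, so attribute details may differ:

            resource "google_compute_network" "terra_vpc" {
              name                    = "terra-vpc"
              auto_create_subnetworks = false          # we manage the subnet ourselves
            }

            resource "google_compute_subnetwork" "terra_sub" {
              name          = "terra-sub"
              ip_cidr_range = "192.168.10.0/24"
              network       = google_compute_network.terra_vpc.id
            }

            resource "google_compute_firewall" "web-server" {
              name    = "allow-http-rule"
              network = google_compute_network.terra_vpc.id

              allow {
                protocol = "tcp"
                ports    = ["22", "80", "443", "3389"]
              }
              source_ranges = ["0.0.0.0/0"]            # open ingress for the lab only
            }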
          • Next, we can run "terraform apply" to provision the resources needed to create our VPC (listed in the plan).
          • $ terraform apply -auto-approve
            google_compute_network.terra_vpc: Creating...
            google_compute_firewall.web-server: Creating...
            google_compute_subnetwork.terra_sub: Creating...
            ...
            Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

            Outputs:
            project = "brokedba2000"

          Observations:

          - The deploy started by loading the resource variables in variables.tf, which allowed the execution of vpc.tf.
          - Finally, Terraform fetched the attributes of the created resources listed in outputs.tf.

          Note: We'll now destroy the VPC, as the next instance deploy contains the same VPC specs.

            $ terraform destroy -auto-approve
            
            Destroy complete! Resources: 3 destroyed.
            


        V. Full deployment (Instance)

        1. OVERVIEW

          • After our small intro to VPC creation, let's launch a VM and configure nginx on it in one command.
          • First we need to switch to the second directory, terraform-provider-gcp/launch-instance/.
            Here's the content:
          • $ tree ./terraform-provider-gcp/launch-instance
            .
            |-- cloud-init                   --> subfolder with the startup scripts
            |   |-- centos_userdata.txt      --> script to configure a web server and the web homepage (CentOS)
            |   |-- el_userdata.txt          --> for Enterprise Linux distros
            |   |-- sles_userdata.txt        --> for SUSE
            |   `-- ubto_userdata.txt        --> for Ubuntu
            |-- compute.tf                   ---> compute engine instance terraform configuration
            |-- outputs.tf                   ---> displays the resources detail at the end of the deploy
            |-- variables.tf                 ---> resource variables needed for the deploy
            `-- vpc.tf                       ---> same vpc we deployed earlier

            Note: As you can see, we have two additional files and one subfolder. compute.tf is where the compute instance and all its attributes are declared. All the other .tf files come from my VPC example, with some additions to variables.tf and outputs.tf.

          • Cloud-init is a cloud instance initialization method that executes scripts upon instance startup; see the metadata entry of the VM instance definition below (startup-script). There are 5 OS scripts (CentOS, Ubuntu, Windows, RHEL, SUSE); Windows was not tested.
            ...
            variable "user_data" { default = "./cloud-init/centos_userdata.txt" }

            $ vi compute.tf
            resource "google_compute_instance" "terravm" {
              metadata = {
                startup-script = file(var.user_data)
            ...
          • In my lab, I used cloud-init to install nginx and write an HTML page that replaces the default homepage at startup (a rough sketch of such a script is shown below).
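            The sketch below is not the repo's centos_userdata.txt, just an illustration of what such a startup script typically does, assuming a CentOS 7 image where nginx comes from the EPEL repository:

            #!/bin/bash
            # install nginx and drop a custom homepage (illustrative only)
            yum install -y epel-release
            yum install -y nginx
            echo "<h1>Terraform + GCP + cloud-init says hello</h1>" > /usr/share/nginx/html/index.html
            systemctl enable --now nginx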
          • Make sure your public ssh key is in your home directory, or just modify the path below (see variables.tf).
          • $ vi compute.tf
            resource "google_compute_instance" "terravm" {
              metadata = {
                ssh-keys = "${var.admin}:${file("~/id_rsa_gcp.pub")}" ## Change me
            ...

        2. LAUNCH THE INSTANCE

          • Once in the "launch-instance" directory, you can run the plan command to validate the 5 resources required to launch our VM instance. The output has been truncated to reduce verbosity.
          • $ terraform plan
              ------------------------------------------------------------------------
              Terraform will perform the following actions:

              ... # VPC declaration (see previous VPC deploy)
              ...
            # google_compute_instance.terra_instance will be created
            + resource "google_compute_instance" "terra_instance" {
                + ...
                + hostname     = "terrahost"
                + machine_type = "e2-micro"
                + name         = "terravm"
                + tags         = ["web-server"]
                + boot_disk {
                    + initialize_params {
                        + image = "centos-cloud/centos-7"
                ...
                + network_interface {
                    ...
                    + network_ip = "192.168.10.51"
                  }
                + metadata = {
                    + "ssh-keys"       = <<-EOT ssh-rsa AAAABxxx…* EOT
                    + "startup-script" = <<-EOT ... EOT
                  }
            # google_compute_address.internal_reserved_subnet_ip will be created
            + resource "google_compute_address" "internal_reserved_subnet_ip" {
                ...
              }
            ...
            Plan: 5 to add, 0 to change, 0 to destroy.
          • Now let's launch our CentOS 7 VM using terraform apply (I left a map of different OS images in variables.tf you can choose from).
            $ terraform apply -auto-approve
            ...
            google_compute_network.terra_vpc: Creating...
            google_compute_firewall.web-server: Creating...
            google_compute_subnetwork.terra_sub: Creating...
            google_compute_address.internal_reserved_subnet_ip: Creating...
            google_compute_instance.terra_instance: Creating...
            ...
            Apply complete! Resources: 5 added, 0 changed, 0 destroyed.

            Outputs:
            vpc_name = "terra-vpc"
            Subnet_Name = "terra-sub"
            Subnet_CIDR = "192.168.10.0/24"
            fire_wall_rules = toset([
              {
                "description" = "RDP-HTTP-HTTPS ingress traffic"
                "ports" = tolist([
                  "80",
                  "443",
                  "3389",
                  ...
                ])
                ...
              },
            ])
            hostname = "terrahost.brokedba.com"
            project = "brokedba2000"
            private_ip = "192.168.10.51"
            public_ip = "35.227.81.2"
            SSH_Connection = "ssh connection to instance TerraCompute ==> sudo ssh -i ~/id_rsa_gcp centos@35.227.81.2"


            • Once the instance is provisioned, just copy the public IP address (i.e. 35.227.81.2) into Chrome and voila! A quick check from the terminal works too (see below).
            • You can also tear down this configuration by simply running terraform destroy from the same directory.
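
            For instance, assuming the public IP from the outputs above, a hypothetical check from the shell could be:

            $ curl -s http://35.227.81.2 | head -5     # should return the nginx page written by cloud-init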

            Tips

            • You can fetch any of the attributes specified in outputs.tf using the terraform output command, i.e.:
            • $ terraform output SSH_Connection
              ssh connection to instance TerraCompute ==> sudo ssh -i ~/id_rsa_gcp centos@'public_IP'
            • Terraform console:
              Although Terraform is a declarative language, there are still myriads of functions you can use to process strings/numbers/lists/mappings, etc. There is an excellent all-in-one script with examples of most Terraform functions >> here. (A quick console session is shown after these tips.)
            • I added cloud-init files for different distros that you can play with by adapting var.user_data & var.OS.
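
            As an illustration of the console, here is a short hypothetical session using built-in functions (the cidrhost call reproduces the private IP reserved earlier in this lab):

            $ terraform console
            > cidrhost("192.168.10.0/24", 51)
            "192.168.10.51"
            > upper(format("%s-%s", "terra", "vpc"))
            "TERRA-VPC"
            > exit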

             


               CONCLUSION

            • We have demonstrated in this tutorial how to quickly deploy a web server instance on GCP using Terraform, and how to leverage cloud-init (startup-script) to configure the VM during bootstrap.
            • We had to hardcode the project ID although it's embedded in the credentials (key file), which makes it tedious and rigid.
            • Remember that all the attributes used in this exercise can be modified in the variables.tf file.
            • A route table and internet gateway didn't need to be created.
            • Improvement: validate that the startup script works for Windows too.
              Another improvement would be a cleaner display of the security rules using formatlist.
              Stay tuned.

        Thank you for reading!

        Tuesday, September 7, 2021

        Google SDK (CLI for GCP) installation and a few CLI examples


        Intro

        Google, like most cloud providers today, offers a simple Cloud Shell solution with all the required tools to connect to their platform securely using APIs. However, if you still want to have it on your laptop along with other development tools, you can always install the Google Cloud SDK (especially if it's for educational purposes).

        Cloud SDK includes the gcloud, gsutil, and bq command-line tools, plus a few components that aren't installed by default. gcloud is the main command line used to manage cloud resources and enable services.

        Requirement


        Whether on Windows or Linux, the basic installation and use of Cloud SDK will require 2 elements:

          • A GCP account (the Free Tier is enough for this lab)
          • A supported Python interpreter (bundled with the Windows installer; required for the Linux install)

          Note: To access the GCP APIs using a specific language (like C++, ruby etc), you can download the Cloud Client Libraries.

        I. Cloud SDK Installation

        • Windows

          1- Download and execute the Cloud SDK installer (current version: 355).
          2- Follow the on-screen instructions (the installer is also used to upgrade existing installations).
            

          3- Run the version command to confirm that Cloud SDK was installed correctly.
               

          C:\Users\brokedba> gcloud --version
          Google Cloud SDK 355.0.0
          bq   2.0.71
          core 2021.08.27
          gsutil 4.67
          
          C:\Users\brokedba> where gcloud
          C:\Program Files (x86)\Cloud SDK\google-cloud-sdk\bin\gcloud
          C:\Program Files (x86)\Cloud SDK\google-cloud-sdk\bin\gcloud.cmd


        • Note: The installation can also be done through PowerShell by downloading and then launching the installer (gcloud, bq, and gsutil can then run from either Command Prompt or PowerShell).
          PS C:\Users\brokedba> (New-Object Net.WebClient).DownloadFile("https://dl.google.com/dl/cloudsdk/channels/rapid/GoogleCloudSDKInstaller.exe", "$env:Temp\GoogleCloudSDKInstaller.exe")
          PS C:\Users\brokedba> & $env:Temp\GoogleCloudSDKInstaller.exe

        • Linux
          There is either an all-in-one install using packages or an interactive shell script. Let's start with the script:
        • brokedba~$ curl -sL https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-355.0.0-linux-x86_64.tar.gz| sudo tar -xz && sudo bash ./google-cloud-sdk/install.sh

          # Or more recent approach
          $ curl https://sdk.cloud.google.com | bash

          -- Workflow
          Modify profile to update your $PATH and enable shell command
          completion?

          Do you want to continue (Y/n)?  y

          The Google Cloud SDK installer will now prompt you to update an rc file to bring the Google Cloud CLIs into your environment.

          Enter a path to an rc file to update, or leave blank to use
          [/home/brokedba/.bashrc]:

          brokedba~$ gcloud --version
          Google Cloud SDK 355.0.0
          bq 2.0.71
          core 2021.08.27
          gsutil 4.67


          Ubuntu
          Option A

          We can use apt-get and install it as a package:

          1. Add the Cloud SDK distribution URI as a package source

          brokedba~$ echo "deb https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list

          2. Import the GCP public key 

          brokedba~$ curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

          3. Update and install the Cloud SDK

          brokedba~$ sudo apt-get update && sudo apt-get install google-cloud-sdk

          Option B

          If you are fine with just the core components (gcloud, gsutil, bq, kubectl, anthoscli, ..) you can install the snap package, which also handles auto-updates.

          brokedba~$ snap install google-cloud-sdk --classic


          ► RHEL, Fedora, CentOS, Oracle Linux


          # RHEL/OL/CentOS (7, 8+), Fedora 24+
          # -- Create a yum/dnf repo with the Cloud SDK information
          [@localhost]$ sudo tee -a /etc/yum.repos.d/google-cloud-sdk.repo << EOM
          name=Google Cloud SDK
          baseurl=https://packages.cloud.google.com/yum/repos/cloud-sdk-el8-x86_64
          enabled=1
          gpgcheck=1
          repo_gpgcheck=0
          gpgkey=
          https://packages.cloud.google.com/yum/doc/yum-key.gpg
          https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
          EOM

          # Install the Cloud SDK rpm package
          [@localhost]$ sudo yum install google-cloud-sdk

        II. Initialize gcloud


        Once your GCP Free Tier account is created and Cloud SDK installed, all you need is to run the gcloud init command to:

        1- Authorize Cloud SDK to access the GCP platform using your user account
        2- Set a new configuration, including parameters like the current project and the default GCE region/zone, etc.




        If you don't want the browser to auto-launch for authorization, you can use --console-only or --no-launch-browser.


          • The interactive workflow will ask you to open the displayed link in a browser and enter your user credentials.


          • When you click Allow, a code will be provided, which you will paste into your terminal to complete the authorization.
          • Once authenticated, you will be asked to create a project if none exists in your account. The project_id is globally unique.
            Enter verification code: 4/1AX4XfWhnJLpVgMtjxxxx..
            You are logged in as: [bdba@gmail.com].
            This account has no projects. Would you like to create one? (Y/n)?  y
            Enter a Project ID. Note that a Project ID CANNOT be changed later.
            Project IDs must be 6-30 characters in length and start with a lowercase letter. brokedba2000
            Waiting for [operations/cp.9218677272527086685] to finish...done.
            Your current project has been set to: [brokedba2000].

          • If you get the error "Callers must accept Terms of Service" while creating the project, make sure you accepted the terms in the console.

          • You can now verify your default configuration after the initialization (and adjust the defaults later, as shown below):
            $ gcloud config list
            [compute]
            region = us-east1
            zone = us-east1-b
            [core]
            account = bdba@gmail.com
            disable_usage_reporting = True
            project = brokedba2000
            Your active configuration is: [default]
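
            If you skipped the region/zone questions during gcloud init, the same defaults can be set afterwards; the values below simply mirror the configuration shown above:

            $ gcloud config set compute/region us-east1
            $ gcloud config set compute/zone us-east1-b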

           

          III. Test your first API request


          Command structure: a gcloud request is based on the components below (a concrete example follows the flag list)

          gcloud <--global flags> [service|product] <group|area> <command> <--flags> <parameters>

          group may be:
          access-approval | access-context-manager | active-directory | ai | ai-platform | anthos | api-gateway | apigee | app | artifacts | asset | assured | auth | bigtable | billing | builds | cloud-shell | components | composer | compute | config | container | data-catalog | database-migration | dataflow | dataproc | datastore | debug | deployment-manager | dns | domains | emulators | endpoints | essential-contacts | eventarc| filestore | firebase | firestore | functions | game | healthcare | iam | iap | identity | iot | kms | logging | memcache | metastore | ml | ml-engine | monitoring | network-management | network-security | notebooks | org-policies | organizations | policy-intelligence | policy-troubleshoot | privateca| projects | pubsub | recaptcha | recommender | redis | resource-manager | resource-settings | run | scc | scheduler | secrets | service-directory | services | source | spanner | sql | tasks | topic | workflows | workspace-add-ons
          command may be: cheat-sheet | docker | feedback | help | info | init | survey | version
          Optional flags: --account | --billing-project | --configuration | --project |
          --flatten | --format | --filter | --quiet | --flags-file ..
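
          To make the structure concrete, here is one request broken down against that pattern (the flags shown are standard gcloud result flags):

          # group=compute, area=instances, command=list, flags=--format/--filter
          $ gcloud compute instances list --format=json --filter="status=RUNNING"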

          Topics: 
          `gcloud topic` provides supplementary help for topics not directly associated with individual commands.

          $ gcloud topic [TOPIC_NAME]
          Available topics:
            accessibility        Reference for `Accessibility` features.
            arg-files            Supplementary help for arg-files to be used with *gcloud firebase test*.
            cli-trees            CLI trees supplementary help.
            client-certificate   Client certificate authorization supplementary help.
            command-conventions  gcloud command conventions supplementary help.
            configurations       Supplementary help for named configurations.
            datetimes            Date/time input format supplementary help.
            escaping             List/dictionary-type argument escaping supplementary help.
            filters              Resource filters supplementary help.
            flags-file           --flags-file=YAML_FILE supplementary help.
            formats              Resource formats supplementary help.
            gcloudignore         Reference for `.gcloudignore` files.
            projections          Resource projections supplementary help.
            resource-keys        Resource keys supplementary help.
            startup              Supplementary help for gcloud startup options.
            uninstall            Supplementary help for uninstalling Cloud SDK.
        • Result-related flags:

          1- "--format": formats gcloud output into json, yaml, table, value, or csv, including projections.
          2- "--filter": lets you pick the rows to return in the output, in combination with --format.
          Example > list projects that were created after Jan 1st 2021 and only show 3 specific columns
        • $ gcloud projects list --format="table(projectNumber,projectId,createTime)"     --filter="createTime>2021-01-01"
          PROJECT_NUMBER  PROJECT_ID      CREATE_TIME
          260799562386    brokedba2000  2021-09-06T22:57:41.421Z
        • Command versions
          gcloud has different release tracks for its set of commands: "alpha" and "beta". Alpha means the feature is typically not ready for production and might still be actively developed. Beta, on the other hand, is normally a completed feature that is being tested to become production ready (see the example right below).
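
          For instance, beta commands ship as a separate component and are invoked with the track name on the command line (illustrative example; the exact command set depends on your SDK version):

          $ gcloud components install beta        # one-time install of the beta commands
          $ gcloud beta compute addresses list    # same command family, beta track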
        • Examples 

          There are a few requests you can run to practice with gcloud. The commands below are good examples to start with.

        • List GCP regions in the US by selecting 3 fields in a tabular format and filtering the content on a specific pattern "us-":

          $ gcloud compute regions list  --format="table[box](Name,CPUS,status)"    --filter="name~us-"

          +-------------+------+--------+
          ¦     NAME    ¦ CPUS ¦ STATUS ¦
          +-------------+------+--------+
          ¦ us-central1 ¦ 0/8  ¦ UP     ¦
          ¦ us-east1    ¦ 0/8  ¦ UP     ¦
          ¦ us-east4    ¦ 0/8  ¦ UP     ¦
          ¦ us-west1    ¦ 0/8  ¦ UP     ¦
          ¦ us-west2    ¦ 0/8  ¦ UP     ¦
          ¦ us-west3    ¦ 0/8  ¦ UP     ¦
          ¦ us-west4    ¦ 0/8  ¦ UP     ¦
          +-------------+------+--------+

        • Create a new project and assign it to the current configuration (project IDs must be lowercase):
        • $ gcloud projects create my-new-project --name="MY new LAB"  --labels=type=lab
          $ gcloud config set project my-new-project
          -- Check project
          $ gcloud compute project-info describe --project my-new-project
        • Create and list VMs in the current project:

          $ gcloud compute instances create myvm2 --machine-type=f1-micro --image-family debian-10 --image-project debian-cloud

          $ gcloud compute instances list --filter="zone~us-east1 OR -machineType:f1-micro"
          NAME   ZONE        MACHINE_TYPE  PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP    STATUS
          myvm   us-east1-b  f1-micro                   10.142.0.2   34.139.111.13  RUNNING

        • Create and list a bucket in google storage :
        • $ gsutil mb -l us-east1 gs://omarlittle
          $ gsutil ls gs://omarlittle/**
          ..
        • Note: You can also display help on popular commands within a service or group/area, e.g.:
        • $ gcloud help compute instances create

          NAME
              gcloud compute instances create - create Compute Engine virtual machine
             instances


            Enable APIs or install components
         
          Not all APIs are enabled by default, and not all Cloud SDK components are installed by default, so manual enabling/installation is sometimes necessary.
          -- APIs
          $ gcloud services list available
          $ gcloud services enable  compute.googleapis.com

          -- components
          $ gcloud components list
          $ gcloud components update
          $ gcloud components install COMPONENT_ID


        Conclusion:


        In this tutorial we learned how to install and configure Cloud SDK. We also described the command syntax and tried a few requests using gcloud and gsutil. Feel free to consult the gcloud Command Reference for more details and examples of gcloud requests.

        Thanks for reading