One of the fundamentals of cloud administration in AWS is knowing how to create a custom Virtual Private Cloud (VPC) that enables the launch of AWS resources such as EC2 instances into a virtual network. What gets deployed within a VPC varies across use cases, but VPCs are generally used as a foundation for the majority of AWS infrastructure components and services.

Automate for Scalability

In the current world of cloud computing, gone are the days of hand-building infrastructure components manually. Without automation, provisioning and configuration tasks not only take days (or weeks… or months…) but are prone to errors and inconsistencies. All of the plumbing that makes up and supports an application stack can, and should, be provisioned and configured using automation tools and infrastructure as code (IaC), especially when deploying applications in the public cloud. Automation makes it possible to scale and to track deployed resources.

Automation tools also make it possible to build a self-service catalog that integrates with platforms like ServiceNow. A self-service catalog can be designed within these platforms to orchestrate multiple automation tools, like Terraform and Ansible, for provisioning and configuration management of cloud resources. It automates workflows and approvals, enabling organizations to improve the customer experience, accelerate service delivery, and reduce operational costs.

For this guide, we will create a custom VPC and deploy two EC2 VMs using Terraform. Terraform is an open-source IaC tool that allows cloud architects to define components and their dependencies using relatively simple declarative configuration files. It supports provisioning, modifying, and decommissioning all cloud resources through a simple CLI workflow (write, plan, apply). The open-source version is free to install and use; at larger scale, the subscription versions – Terraform Cloud and Terraform Enterprise – can be used to manage deployments across projects and teams and to integrate with other platforms.

Let’s get to the code! All code for this example can be found on my GitHub repo at: The code is broken into three different modules:
  • Networking (define the VPC and all of its components)
  • SSH-Key (dynamically create an SSH-key pair for connecting to VMs)
  • EC2 (deploy a VM in the public subnet, and deploy another VM in a private subnet)
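A root configuration ties these modules together. The sketch below is illustrative only; the module paths, variable names, and output names are assumptions, not the exact code from the repo:

```hcl
# Root main.tf -- illustrative sketch; module paths and variable
# names are assumptions, not the exact code from the repo.
provider "aws" {
  region = "us-east-1" # pick your region
}

module "networking" {
  source   = "./networking"
  vpc_cidr = "10.0.0.0/16"
}

module "ssh_key" {
  source    = "./ssh-key"
  namespace = "demo"
}

module "ec2" {
  source            = "./ec2"
  key_name          = module.ssh_key.key_name
  public_subnet_id  = module.networking.public_subnet_ids[0]
  private_subnet_id = module.networking.private_subnet_ids[0]
}
```

Wiring module outputs into other modules' inputs is how Terraform learns the dependency order: the key pair and subnets are created before the instances that reference them.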

Module 1 – Networking

  What this code will do:
  • Create a custom VPC
  • Define VPC name
  • Create an Internet Gateway and a NAT gateway
  • Define CIDR blocks
  • Deploy two public subnets, across two different AZs
  • Deploy two private subnets, across two different AZs
  • Create two security groups (one for public, and one for private access)
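The bullets above map to a handful of resource blocks. This is a trimmed sketch with assumed names, showing one AZ; the real module would also define route tables and associations, the second pair of subnets, and the security groups:

```hcl
# Networking module sketch (assumed resource names and CIDRs).
resource "aws_vpc" "this" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  tags                 = { Name = "custom-vpc" }
}

# Internet Gateway gives the public subnets a route to the internet.
resource "aws_internet_gateway" "this" {
  vpc_id = aws_vpc.this.id
}

resource "aws_subnet" "public_a" {
  vpc_id                  = aws_vpc.this.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = true
}

resource "aws_subnet" "private_a" {
  vpc_id            = aws_vpc.this.id
  cidr_block        = "10.0.101.0/24"
  availability_zone = "us-east-1a"
}

# The NAT gateway lives in a public subnet and lets private-subnet
# instances reach out to the internet without being reachable from it.
resource "aws_eip" "nat" {
  domain = "vpc"
}

resource "aws_nat_gateway" "this" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public_a.id
}
```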

Module 2 – SSH-Key

  What this code will do:
  • Dynamically create an SSH Key pair that will be associated with the EC2 instances
  • This SSH Key will be created dynamically, and be deleted along with all the other resources provisioned with Terraform.
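One common way to do this (names here are assumptions) is to generate the key with the `tls` provider, register the public half with AWS as a key pair, and write the private half to a local `.pem` file:

```hcl
# SSH-key module sketch (assumed names). The key is generated at
# apply time and destroyed with everything else on `terraform destroy`.
variable "namespace" {
  type    = string
  default = "demo"
}

resource "tls_private_key" "this" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "this" {
  key_name   = "${var.namespace}-key"
  public_key = tls_private_key.this.public_key_openssh
}

# Write the private key to disk so it can be used with `ssh -i`.
resource "local_file" "pem" {
  filename        = "${var.namespace}-key.pem"
  content         = tls_private_key.this.private_key_pem
  file_permission = "0400"
}
```

Note that with this approach the private key ends up in the Terraform state file, which is fine for a demo but something to be aware of in production.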

Module 3 – EC2

  What this code will do:
  • Create a t2.micro Amazon Linux VM in the PUBLIC subnet for use as a bastion/gateway host.
    • Terraform will copy the SSH key from your local system to the VM and apply appropriate file permissions to it.
    • This key will be used for connections to instances in the private subnet.
  • Create a t2.micro Amazon Linux VM in the PRIVATE subnet.
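A sketch of how this module might look (variable and resource names are assumptions), including the `file` provisioner that copies the key onto the bastion:

```hcl
# EC2 module sketch (assumed names).
variable "namespace" {}
variable "key_name" {}
variable "public_subnet_id" {}
variable "private_subnet_id" {}
variable "public_sg_id" {}
variable "private_sg_id" {}

# Look up the latest Amazon Linux 2 AMI instead of hard-coding an ID.
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]
  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

resource "aws_instance" "bastion" {
  ami                    = data.aws_ami.amazon_linux.id
  instance_type          = "t2.micro"
  subnet_id              = var.public_subnet_id
  key_name               = var.key_name
  vpc_security_group_ids = [var.public_sg_id]

  # Copy the private key to the bastion so it can reach the
  # private-subnet instance, then lock down its permissions.
  provisioner "file" {
    source      = "${var.namespace}-key.pem"
    destination = "/home/ec2-user/${var.namespace}-key.pem"
    connection {
      type        = "ssh"
      user        = "ec2-user"
      private_key = file("${var.namespace}-key.pem")
      host        = self.public_ip
    }
  }

  provisioner "remote-exec" {
    inline = ["chmod 400 /home/ec2-user/${var.namespace}-key.pem"]
    connection {
      type        = "ssh"
      user        = "ec2-user"
      private_key = file("${var.namespace}-key.pem")
      host        = self.public_ip
    }
  }
}

resource "aws_instance" "private" {
  ami                    = data.aws_ami.amazon_linux.id
  instance_type          = "t2.micro"
  subnet_id              = var.private_subnet_id
  key_name               = var.key_name
  vpc_security_group_ids = [var.private_sg_id]
}
```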

Note: In order to follow this tutorial, you will need Terraform and the AWS CLI installed and configured. To get started, clone this GitHub repo to your local system and run the following commands: “terraform init”
  • This will initialize the working directory containing the Terraform configuration, downloading the required modules and plugins from HashiCorp.

“terraform apply”

  • This will first show an execution plan and report the resources to be deployed in AWS (23 resources in this example).
  • Once you confirm by typing “yes,” Terraform will begin provisioning the VPC, EC2 instances, and the SSH-key pair in AWS.
Once Terraform has completed provisioning resources, it will output strings you can copy and paste to the command line in order to connect to your EC2 instances. I defined these output values as a convenience.

  Outputs:
  private_connection_string = ssh -i <namespace>-key.pem ec2-user@<private IP address>
  public_connection_string = ssh -i <namespace>-key.pem ec2-user@<public IP address>
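Connection strings like these come from Terraform `output` blocks that interpolate the key file name and the instance IP attributes. A sketch, assuming the resource names from the EC2 module are `bastion` and `private`:

```hcl
# Output sketch (assumed resource names); builds ready-to-paste
# SSH commands from the key file and instance IPs.
output "public_connection_string" {
  value = "ssh -i ${var.namespace}-key.pem ec2-user@${aws_instance.bastion.public_ip}"
}

output "private_connection_string" {
  value = "ssh -i ${var.namespace}-key.pem ec2-user@${aws_instance.private.private_ip}"
}
```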

Now you can connect to the public EC2 instance using the public connection string, and once you are logged in to that VM, you can connect to the private EC2 instance with the private connection string.


(connecting to EC2 in public subnet from local host)



(connecting to EC2 in private subnet from bastion host)

To see all the components provisioned with Terraform, log into the AWS web console, and click the VPC and EC2 dashboards (make sure you are in the correct AWS region).


Delete Components of VPC

  Imagine building all 23 of these AWS resources manually, and then later needing to modify a resource in the VPC and having to untangle the dependencies between those resources by hand. An environment may also only be needed briefly for dev and test, so how do you go about deleting all of these resources when they are no longer needed? How many times have you received an unexpected bill from AWS for resources you forgot to delete? To avoid that, let’s automate the process and delete the EC2 instances and all the components that make up the newly created VPC with one command: “terraform destroy”
  • The “terraform destroy” command is used to destroy the Terraform-managed infrastructure. This will ask for confirmation before destroying.
  • Once you confirm by typing “yes,” Terraform will delete all of the 23 AWS resources it created earlier. (Note: This will only destroy resources provisioned from the current project, nothing else.)

Of course, you can use Terraform to deploy more complex infrastructure, such as load balancers, auto scaling groups, and Kubernetes clusters, but the intention of this article was to demonstrate how a simple tool like Terraform can be used to deploy and keep track of the components of a VPC, including EC2 instances, in a public cloud.

Automation at Scale

  At scale, Terraform is part of a larger automation workflow and provides additional functionality that isn’t covered here, such as keeping track of the state of deployed resources. Terraform code can be managed and deployed the same way application code is deployed, through DevOps practices and automated CI/CD pipelines using tools like ServiceNow.    

(Example automation workflow with Terraform and ServiceNow using AHEAD’s Automation Hub solution.)

With the right tools in place, infrastructure can be 100% defined in code, and change management can be performed through version control tools such as GitHub. In public clouds, VMs can be treated as ephemeral resources, just like containers, and can be versioned, deployed, and released like application code in an agile framework, such as part of a CI/CD pipeline. Security should also be integrated and automated throughout the stack. Adopting these practices will help you and your team achieve the operational model needed for deploying cloud-native applications and achieving cloud scalability. For more resources and best practices on scaling cloud adoption, download our complimentary Cloud @Scale Playbook.

Contributing Author: Leon Levy
