I'm in the middle of an international move, which means that it's kind of slow to connect to the VPS I had been using, since it's hosted in DigitalOcean's Singapore region. I have a couple of Chef recipes to help me automate things, so moving my servers over to a region closer to me shouldn't be too painful, but even just installing rbenv and getting Chef up and running can be a bit of a pain. What better time to teach myself how to use Packer and Terraform?
So this is the agenda for today:
- Use Packer to set up user accounts and install OpenResty (nginx) on a snapshot, using Chef as the provisioner
- Create a Terraform configuration that uses the snapshot created above to spin up a Droplet
Packer
First, make a directory to contain your Packer configuration files and enter it:
crimson@dixneuf 14:07 ~/ $ mkdir packer
crimson@dixneuf 14:07 ~/ $ cd packer
Next, create a JSON file to hold the configuration settings. You can name it whatever you like, but it's best to pick something memorable so you'll remember what it is later. I called mine bustermachine.json (because I like to name my VPSes after Top wo Nerae 2, why not? ¯\_(ツ)_/¯). This is the basic configuration:
{
"builders": [{
"type": "digitalocean",
"api_token": "YOUR-API-TOKEN",
"region": "fra1",
"size": "512mb",
"image": "centos-7-2-x64",
"droplet_name": "bustermachine",
"snapshot_name": "bustermachine-img-{{timestamp}}"
}]
}
This will create a snapshot based on the CentOS 7.2 image in the Frankfurt 1 region. The Droplet size is set to 512MB, but the size can be scaled up when creating a new Droplet from this image, so building on the smallest size can't hurt.
The snapshot name must be unique and is what will appear in the DigitalOcean console, so it's a good idea to set it to something memorable + a timestamp. You can find additional configuration options in Packer's official documentation.
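Before moving on, it's worth running packer validate to catch syntax errors; on a well-formed template it prints a one-line confirmation (the prompt below just follows my shell's format):

crimson@dixneuf 14:10 ~/packer $ packer validate bustermachine.json
Template validated successfully.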
The configuration above will build an empty server with nothing running, so let's do some provisioning with chef-solo:
"provisioners": [{
"type": "chef-solo",
"cookbook_paths": ["cookbooks"],
"data_bags_path": "data_bags",
"run_list": [ "recipe[local-accounts]", "recipe[nginx]" ]
}]
The cookbook_paths and data_bags_path settings are relative to the working directory (our ~/packer folder), but you can also define an absolute path to an existing Chef repository on your local machine. What recipes you want to run is up to you; I'm just going to run one that sets up a user account and one that installs OpenResty (nginx).
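For reference, the working directory at this point looks roughly like this (the cookbook names match my run_list; yours will differ):

packer/
├── bustermachine.json
├── cookbooks/
│   ├── local-accounts/
│   └── nginx/
└── data_bags/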
OK. Let's build it.
crimson@dixneuf 14:14 ~/packer $ packer build bustermachine.json
digitalocean output will be in this color.
==> digitalocean: Creating temporary ssh key for droplet...
==> digitalocean: Creating droplet...
==> digitalocean: Waiting for droplet to become active...
==> digitalocean: Waiting for SSH to become available...
==> digitalocean: Connected to SSH!
==> digitalocean: Provisioning with chef-solo
digitalocean: Installing Chef...
digitalocean: % Total % Received % Xferd Average Speed Time Time Time Current
digitalocean: Dload Upload Total Spent Left Speed
digitalocean: 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
sudo: sorry, you must have a tty to run sudo
digitalocean: 25 20058 25 5000 0 0 4172 0 0:00:04 0:00:01 0:00:03 4177
digitalocean: curl: (23) Failed writing body (1855 != 2759)
==> digitalocean: Destroying droplet...
==> digitalocean: Deleting temporary ssh key...
Build 'digitalocean' errored: Error installing Chef: Install script exited with non-zero exit status 1
==> Some builds didn't complete successfully and had errors:
--> digitalocean: Error installing Chef: Install script exited with non-zero exit status 1
==> Builds finished but no artifacts were created.
Uh oh. Since I'm getting the message sudo: sorry, you must have a tty to run sudo, it looks like CentOS's default sudo settings (the requiretty option) are getting in the way of Chef's installation. We can work around this by setting "ssh_pty": true in the builder portion of our Packer configuration.
With that added, the full configuration file now looks like this:
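{
  "builders": [{
    "type": "digitalocean",
    "api_token": "YOUR-API-TOKEN",
    "region": "fra1",
    "size": "512mb",
    "image": "centos-7-2-x64",
    "ssh_pty": true,
    "droplet_name": "bustermachine",
    "snapshot_name": "bustermachine-img-{{timestamp}}"
  }],
  "provisioners": [{
    "type": "chef-solo",
    "cookbook_paths": ["cookbooks"],
    "data_bags_path": "data_bags",
    "run_list": [ "recipe[local-accounts]", "recipe[nginx]" ]
  }]
}

Now that we've got that taken care of, let's try building again: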
crimson@dixneuf 14:20 ~/digitalocean/packer $ packer build bustermachine.json
digitalocean output will be in this color.
==> digitalocean: Creating temporary ssh key for droplet...
==> digitalocean: Creating droplet...
==> digitalocean: Waiting for droplet to become active...
==> digitalocean: Waiting for SSH to become available...
==> digitalocean: Connected to SSH!
==> digitalocean: Provisioning with chef-solo
digitalocean: Installing Chef...
....
digitalocean: Running handlers:
digitalocean: Running handlers complete
digitalocean: Chef Client finished, 14/14 resources updated in 31 seconds
==> digitalocean: Gracefully shutting down droplet...
==> digitalocean: Creating snapshot: bustermachine-img-1476642546
==> digitalocean: Waiting for snapshot to complete...
==> digitalocean: Error waiting for snapshot to complete:
Timeout while waiting to for droplet to become 'active'
==> digitalocean: Destroying droplet...
==> digitalocean: Deleting temporary ssh key...
Build 'digitalocean' errored: Error waiting for snapshot to complete:
Timeout while waiting to for droplet to become 'active'
==> Some builds didn't complete successfully and had errors:
--> digitalocean: Error waiting for snapshot to complete:
Timeout while waiting to for droplet to become 'active'
==> Builds finished but no artifacts were created.
OK, this time there was a timeout... but a look at Packer's GitHub issues shows that this problem is a known bug that will be fixed in the next version of Packer (I'm on version 0.10.2). Even though Packer timed out, the DigitalOcean console shows that our snapshot was created fine.
Since we've successfully created our first snapshot, let's move on to creating a Droplet in Terraform.
Terraform
First, make a directory to hold our configuration files; it's also where we'll run all our terraform commands from:
crimson@dixneuf 14:07 ~/ $ mkdir terraform
crimson@dixneuf 14:07 ~/ $ cd terraform
Before we continue, though, the next step requires the ID of the snapshot we just created, which is different from the name that appears in the console. We can use the DigitalOcean API to look it up:
curl -X GET -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR-API-TOKEN" \
  "https://api.digitalocean.com/v2/snapshots"
{
  "snapshots": [
    {
      "id": 20321411,
      "name": "bustermachine-img-1476642546",
      "regions": ["fra1"],
      "created_at": "2016-10-16T18:31:29Z",
      "resource_id": 29342757,
      "resource_type": "droplet",
      "min_disk_size": 20,
      "size_gigabytes": 1.35
    }
  ],
  "links": {},
  "meta": { "total": 1 }
}
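As an aside, if you have jq installed, you can trim the response down to just the fields you care about (the filter here is just one way to do it):

curl -s -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR-API-TOKEN" \
  "https://api.digitalocean.com/v2/snapshots" | jq '.snapshots[] | {id, name}'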
In my case it's id: 20321411, so we'll have to use that ID in our Terraform config file. Let's make that file now and name it config.tf:
variable "do_token" {} # Configure the DigitalOcean Provider provider "digitalocean" { token = "${var.do_token}" }
This first part just lets Terraform know that we intend to use DigitalOcean as our provider, but we have to pass it our API token. We can do this in one of two ways: create a file called terraform.tfvars to hold our variables, or pass the variable on the command line when we call terraform plan later on (terraform plan -var 'do_token=foo'). I recommend checking out the documentation.
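If you go the terraform.tfvars route, the file only needs the one variable we declared above:

do_token = "YOUR-API-TOKEN"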
Next we need to define the resources we intend to create. Here's my config for creating a Droplet named vingtsept using the snapshot ID I obtained earlier (20321411) in the image definition:
# Create a web server
resource "digitalocean_droplet" "vingtsept" {
  image    = "20321411"
  name     = "vingtsept"
  region   = "fra1"
  size     = "1gb"
  ssh_keys = [4055393]
}
Note that, although our snapshot was created from a 512MB Droplet, we can create a larger 1GB Droplet from it (the reverse, making a smaller Droplet from a bigger snapshot, is not possible).
I already had an SSH key registered on DigitalOcean, so I set its ID directly (also obtainable via the DigitalOcean API), but if you need to upload a new one, you can use Terraform to do that too:
resource "digitalocean_ssh_key" "default" { name = "dixneuf" public_key = "${file("/Users/crimson/.ssh/id_rsa.pub")}" }
Now that we've finished creating our config file, let's try running terraform plan to check our configuration:
crimson@dixneuf 15:28 ~/terraform $ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but
will not be persisted to local or remote state storage.
The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed. Cyan entries are data sources to be read.
Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.
+ digitalocean_droplet.vingtsept
image: "20321411"
ipv4_address: ""
ipv4_address_private: ""
ipv6_address: ""
ipv6_address_private: ""
locked: ""
name: "vingtsept"
region: "fra1"
size: "1gb"
ssh_keys.#: "1"
ssh_keys.0: "4055393"
status: ""
Plan: 1 to add, 0 to change, 0 to destroy.
Looks OK, so let's create the Droplet already.
crimson@dixneuf 15:28 ~/terraform $ terraform apply
digitalocean_droplet.vingtsept: Creating...
image: "" => "20321411"
ipv4_address: "" => ""
ipv4_address_private: "" => ""
ipv6_address: "" => ""
ipv6_address_private: "" => ""
locked: "" => ""
name: "" => "vingtsept"
region: "" => "fra1"
size: "" => "1gb"
ssh_keys.#: "" => "1"
ssh_keys.0: "" => "4055393"
status: "" => ""
digitalocean_droplet.vingtsept: Still creating... (10s elapsed)
digitalocean_droplet.vingtsept: Still creating... (20s elapsed)
digitalocean_droplet.vingtsept: Still creating... (30s elapsed)
digitalocean_droplet.vingtsept: Still creating... (40s elapsed)
digitalocean_droplet.vingtsept: Creation complete
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.
State path: terraform.tfstate
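To grab the new Droplet's IP address for SSH, terraform show will print the saved state, including the ipv4_address attribute; grepping for it is a quick way to pull out just that line (the prompt is mine, output omitted):

crimson@dixneuf 15:32 ~/terraform $ terraform show | grep ipv4_address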
And there it is. The Droplet is up and running. Makes for incredibly painless server setup if you ask me!