2016/10/17

Using Packer and Terraform on Digital Ocean

I'm in the middle of an international move, which means it's kind of slow to connect to the VPS I had been using, since it's based in Digital Ocean's Singapore region. I have a couple of Chef recipes to help me automate, so moving my servers over to a region closer to me shouldn't be too painful, but even just installing rbenv and getting Chef up and running can be a bit of a pain. What better time to teach myself how to use Packer and Terraform?

So this is the agenda for today:

  • Use Packer to set up users and install OpenResty (nginx) on a snapshot, using Chef as the provisioner
  • Create a Terraform configuration that uses the snapshot created above to start up a Droplet

Packer

First, make a directory to contain your packer configuration files and enter the directory:


crimson@dixneuf 14:07 ~/ $ mkdir packer 
crimson@dixneuf 14:07 ~/ $ cd packer 


Next, create a JSON file to hold the configuration settings. You can name it whatever you like, so it's best to pick something memorable so you'll know what it is later. I called mine bustermachine.json (because I like to name my VPS after Top wo Nerae 2, why not? ¯\_(ツ)_/¯). This is the basic configuration:


{
  "builders": [{
    "type": "digitalocean",
    "api_token": "YOUR-API-TOKEN",
    "region": "fra1", 
    "size": "512mb",
    "image": "centos-7-2-x64",
    "droplet_name": "bustermachine",
    "snapshot_name": "bustermachine-img-{{timestamp}}"
  }]
}


This will create a snapshot using the CentOS 7.2 image in the Frankfurt 1 region. The Droplet size is set at 512MB, but the size can be scaled up when creating a new Droplet from this image, so starting with the smallest size can't hurt.
The snapshot name must be unique and is what will appear in the DigitalOcean console, so it's a good idea to set it to something memorable + a timestamp. You can find additional configuration options in Packer's official documentation.

The configuration above will build an empty server with nothing running, so let's do some provisioning with chef-solo:


  "provisioners": [{
    "type": "chef-solo",
    "cookbook_paths": ["cookbooks"],
    "data_bags_path": "data_bags",
    "run_list": [ "recipe[local-accounts]", "recipe[nginx]" ]
  }]


The cookbook_paths and data_bags_path are relative to the working directory (our ~/packer folder), but you can also define an absolute path to an existing Chef repository on your local machine. What recipes you want to run is up to you, but I'm just going to run one that sets up a user account and one that installs OpenResty.
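
For reference, this assumes a directory layout along these lines (the cookbook and data bag names are just guesses based on my run list, so adjust for your own recipes):


crimson@dixneuf 14:12 ~/packer $ tree -L 2
.
├── bustermachine.json
├── cookbooks
│   ├── local-accounts
│   └── nginx
└── data_bags
    └── users
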

OK. Let's build it.


crimson@dixneuf 14:14 ~/packer  $ packer build bustermachine.json
digitalocean output will be in this color.

==> digitalocean: Creating temporary ssh key for droplet...
==> digitalocean: Creating droplet...
==> digitalocean: Waiting for droplet to become active...
==> digitalocean: Waiting for SSH to become available...
==> digitalocean: Connected to SSH!
==> digitalocean: Provisioning with chef-solo
    digitalocean: Installing Chef...
    digitalocean: % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
    digitalocean: Dload  Upload   Total   Spent    Left  Speed
    digitalocean: 0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
    sudo: sorry, you must have a tty to run sudo
    digitalocean:  25 20058   25  5000    0     0   4172      0  0:00:04  0:00:01  0:00:03  4177
    digitalocean: curl: (23) Failed writing body (1855 != 2759)
==> digitalocean: Destroying droplet...
==> digitalocean: Deleting temporary ssh key...
Build 'digitalocean' errored: Error installing Chef: Install script exited with non-zero exit status 1

==> Some builds didn't complete successfully and had errors:
--> digitalocean: Error installing Chef: Install script exited with non-zero exit status 1

==> Builds finished but no artifacts were created.


Uh oh. Since I'm getting the message sudo: sorry, you must have a tty to run sudo, it looks like CentOS's default requiretty setting in /etc/sudoers is getting in the way of Chef's installation. We can work around this by defining "ssh_pty": true in the builder portion of our Packer configuration.

With "ssh_pty": true added, the full configuration now looks like this:
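
{
  "builders": [{
    "type": "digitalocean",
    "api_token": "YOUR-API-TOKEN",
    "region": "fra1",
    "size": "512mb",
    "image": "centos-7-2-x64",
    "droplet_name": "bustermachine",
    "ssh_pty": true,
    "snapshot_name": "bustermachine-img-{{timestamp}}"
  }],
  "provisioners": [{
    "type": "chef-solo",
    "cookbook_paths": ["cookbooks"],
    "data_bags_path": "data_bags",
    "run_list": [ "recipe[local-accounts]", "recipe[nginx]" ]
  }]
}


Now that we've got that taken care of, let's try building again: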


crimson@dixneuf 14:20 ~/digitalocean/packer  $ packer build bustermachine.json
digitalocean output will be in this color.

==> digitalocean: Creating temporary ssh key for droplet...
==> digitalocean: Creating droplet...
==> digitalocean: Waiting for droplet to become active...
==> digitalocean: Waiting for SSH to become available...
==> digitalocean: Connected to SSH!
==> digitalocean: Provisioning with chef-solo
    digitalocean: Installing Chef...

....

    digitalocean: Running handlers:
    digitalocean: Running handlers complete
    digitalocean: Chef Client finished, 14/14 resources updated in 31 seconds
==> digitalocean: Gracefully shutting down droplet...
==> digitalocean: Creating snapshot: bustermachine-img-1476642546
==> digitalocean: Waiting for snapshot to complete...
==> digitalocean: Error waiting for snapshot to complete: 
Timeout while waiting to for droplet to become 'active'
==> digitalocean: Destroying droplet...
==> digitalocean: Deleting temporary ssh key...
Build 'digitalocean' errored: Error waiting for snapshot to complete: 
Timeout while waiting to for droplet to become 'active'

==> Some builds didn't complete successfully and had errors:
--> digitalocean: Error waiting for snapshot to complete: 
Timeout while waiting to for droplet to become 'active'

==> Builds finished but no artifacts were created.


OK, this time there was a timeout... but a look at Packer's GitHub issues shows that this problem is a known bug that will be fixed in the next version of Packer (I'm on version 0.10.2). Even though Packer timed out, the Digital Ocean console shows that our snapshot was created fine.

Since we've successfully created our first snapshot, let's move on to creating a Droplet in Terraform.

Terraform

First, make a directory to hold our configuration files; this is also where we'll run all our terraform commands:


crimson@dixneuf 14:07 ~/ $ mkdir terraform
crimson@dixneuf 14:07 ~/ $ cd terraform


Before we continue, though, the next step requires the ID of the snapshot we just created, which is different from the name that appears in the console. We can use the Digital Ocean API to look it up:


curl -X GET -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR-API-TOKEN" \
  "https://api.digitalocean.com/v2/snapshots"
{
  "snapshots": [
    {
      "id": 20321411,
      "name": "bustermachine-img-1476642546",
      "regions": ["fra1"],
      "created_at": "2016-10-16T18:31:29Z",
      "resource_id": 29342757,
      "resource_type": "droplet",
      "min_disk_size": 20,
      "size_gigabytes": 1.35
    }
  ],
  "links": {},
  "meta": { "total": 1 }
}


In my case the ID is 20321411, so we'll use that in our Terraform config file. Let's make that file now and name it config.tf:

variable "do_token" {}

# Configure the DigitalOcean Provider
provider "digitalocean" {
    token = "${var.do_token}"
}

This first part just lets Terraform know that we intend to use Digital Ocean as our provider, but we have to pass it our API token. We can do this in one of two ways: create a file called terraform.tfvars to contain our variables, or pass the variable on the command line when we call terraform plan later on (terraform plan -var 'do_token=foo'). I recommend checking out the documentation.
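
If you go the tfvars route, the file is just key = value pairs (and since it holds your token, it's best kept out of version control):

# terraform.tfvars
do_token = "YOUR-API-TOKEN"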
Next we need to define the resources we intend to create. Here's my config for creating a Droplet named vingtsept using the snapshot ID I obtained earlier (20321411) in the image definition:

# Create a web server
resource "digitalocean_droplet" "vingtsept" {
    image = "20321411"
    name = "vingtsept"
    region = "fra1"
    size = "1gb"
    ssh_keys = [4055393]
}

Note that, although our snapshot was created from a 512MB Droplet, we can create a larger 1GB Droplet from it (but making a smaller Droplet from a larger snapshot is not possible).
I already had an SSH key registered on Digital Ocean, so I set the SSH key ID (also obtainable via the Digital Ocean API), but if you need to upload a new one, you can use Terraform to do that too:

resource "digitalocean_ssh_key" "default" {
    name = "dixneuf"
    public_key = "${file("/Users/crimson/.ssh/id_rsa.pub")}"
}
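
If you create the key this way, you can reference it from the Droplet resource instead of hard-coding the ID, since the resource exports its id attribute. Inside the droplet resource that would look something like:

    ssh_keys = ["${digitalocean_ssh_key.default.id}"]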

Now that we've finished creating our config file, let's try running terraform plan to check our configuration:


crimson@dixneuf 15:28 ~/terraform  $ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but
will not be persisted to local or remote state storage.

The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed. Cyan entries are data sources to be read.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

+ digitalocean_droplet.vingtsept
    image:                "20321411"
    ipv4_address:         ""
    ipv4_address_private: ""
    ipv6_address:         ""
    ipv6_address_private: ""
    locked:               ""
    name:                 "vingtsept"
    region:               "fra1"
    size:                 "1gb"
    ssh_keys.#:           "1"
    ssh_keys.0:           "4055393"
    status:               ""

Plan: 1 to add, 0 to change, 0 to destroy.


Looks ok, so let's create the Droplet already.


crimson@dixneuf 15:28 ~/terraform  $ terraform apply
digitalocean_droplet.vingtsept: Creating...
  image:                "" => "20321411"
  ipv4_address:         "" => ""
  ipv4_address_private: "" => ""
  ipv6_address:         "" => ""
  ipv6_address_private: "" => ""
  locked:               "" => ""
  name:                 "" => "vingtsept"
  region:               "" => "fra1"
  size:                 "" => "1gb"
  ssh_keys.#:           "" => "1"
  ssh_keys.0:           "" => "4055393"
  status:               "" => ""
digitalocean_droplet.vingtsept: Still creating... (10s elapsed)
digitalocean_droplet.vingtsept: Still creating... (20s elapsed)
digitalocean_droplet.vingtsept: Still creating... (30s elapsed)
digitalocean_droplet.vingtsept: Still creating... (40s elapsed)
digitalocean_droplet.vingtsept: Creation complete

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate


And there it is. The Droplet is up and running. Makes for incredibly painless server setup if you ask me!

2016/10/05

Minna no Go Gengo: A Summary / Review in English (chapter 2)

It took me way longer than I expected to sit down and write this, but now that I have a bit of downtime before my next flight, I'd like to continue my summary of Minna no Go Gengo.

How to build multi-platform tools for your workplace
Author: @mattn

This chapter encourages readers to build multi-platform tools (Windows, Mac, Linux, etc.) to support the different devices coworkers use, and gives some guidelines on how this can be done effectively in Go. Here is a breakdown of what each section covers:

2.1 Why build internal tools in Go

Go makes it possible to statically build a runnable module for various OSes, so there is no need to ask users to install the Go runtime on their machines. Thanks to this, there is no worry that a different runtime implementation on a different OS will behave differently. Distributing a single binary file is all that is needed to let others use a Go program. Both of these things make Go a really good choice for internal tooling.

2.2 Implicit rules to follow

Rule one is to use path/filepath, not the path package, to interact with the filesystem. These two packages might be confusing to new users of Go: while path/filepath is pretty self-explanatory, path is a package meant for resolving relative paths in an HTTP or FTP context. Because the path package does not recognize "\" as a path separator even on Windows, accessing a URL like http://localhost:8080/data/..\main.go on a web server that uses the path package to locate static files could expose the raw contents of other files on the filesystem.
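
Here's a minimal sketch of the difference (the second output is what you'd see on Windows; on Linux, filepath behaves like path):

package main

import (
	"fmt"
	"path"
	"path/filepath"
)

func main() {
	// path only knows "/" as a separator, so this Windows-style
	// traversal is NOT collapsed by Clean:
	fmt.Println(path.Clean(`/data/..\main.go`)) // /data/..\main.go

	// path/filepath uses the running OS's separator; on Windows,
	// Clean recognizes `..\` and resolves it away:
	fmt.Println(filepath.Clean(`data\..\main.go`)) // main.go (on Windows)

	// filepath.Join builds paths portably on any OS:
	fmt.Println(filepath.Join("data", "css", "style.css"))
}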

Rule 2 is to use defer to clean up resources. This is pretty well documented elsewhere, so I don't think I really need to elaborate.

The next recommendation is of particular concern to anyone who deals with Japanese or other languages containing multibyte characters. Anyone interacting with programs that use the ANSI API on Windows to produce output will have to use an appropriate encoding package like golang.org/x/text/encoding/japanese to convert the input from ShiftJIS to UTF-8.
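
As a minimal sketch, wrapping a ShiftJIS source (here, stdin) with a decoder from that package looks like this:

package main

import (
	"io"
	"os"

	"golang.org/x/text/encoding/japanese"
	"golang.org/x/text/transform"
)

func main() {
	// Everything read through this reader is transparently
	// converted from ShiftJIS to UTF-8.
	decoded := transform.NewReader(os.Stdin, japanese.ShiftJIS.NewDecoder())
	io.Copy(os.Stdout, decoded)
}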

2.3 Using TUI on Windows

Linux-based text-based user interfaces use a lot of escape sequences, many of which don't display properly on Windows. In Go you can use a library called termbox to make writing multi-platform TUI applications easier. Another recommendation is one of the author's own tools, go-colorable, which can help produce coloured text in log output, etc.
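
A sketch of go-colorable in its simplest form (wrapping stdout so ANSI colour codes render on the Windows console too):

package main

import (
	"fmt"

	"github.com/mattn/go-colorable"
)

func main() {
	// NewColorableStdout returns a writer that translates ANSI
	// escape sequences into Windows console API calls.
	out := colorable.NewColorableStdout()
	fmt.Fprintln(out, "\x1b[32mthis prints in green\x1b[0m")
}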

2.4 Handling OS Specific Processes

Use runtime.GOOS to determine the OS from within a program.
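
In its simplest form:

package main

import (
	"fmt"
	"runtime"
)

func main() {
	// runtime.GOOS is a compile-time constant like "windows",
	// "darwin" or "linux".
	switch runtime.GOOS {
	case "windows":
		fmt.Println("hello from Windows")
	case "darwin":
		fmt.Println("hello from macOS")
	default:
		fmt.Println("hello from", runtime.GOOS)
	}
}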

This section also covers build constraints, but this topic is already well covered in English in the documentation, so I won't go into detail here.

2.5 Rely on existing tools instead of trying too hard

While it is technically possible to daemonize processes in Go by using the syscall package to call fork(2), the multithreaded nature of Go makes this tricky, so it is generally recommended to use external tools to handle daemonizing a Go program. On Linux, for example, check out daemonize, supervisord and upstart; on Windows, check out nssm.

On Unix a regular user can't listen on port 80 or 443, so a lot of Unix servers are configured to start as root and use setuid(2) to drop privileges. However, it's not recommended to use setuid(2) in Go because it only affects the current thread. Instead, use nginx or another server to reverse proxy requests from 80 or 443 to another port that Go can listen on.
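
On the Go side that just means listening on an unprivileged port; this sketch assumes nginx is proxying :80 to :8080:

package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello from behind the proxy")
	})
	// No root needed for ports above 1024.
	log.Fatal(http.ListenAndServe(":8080", nil))
}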

2.6 Go likes its single binaries

Go makes deployment as easy as placing a single binary file on a server, but larger programs like web applications sometimes need templates, pictures and other files. Try using go-bindata to pack static files into the binary as assets so that you don't have to sacrifice ease of deployment.
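
A sketch of the typical workflow (the templates/ directory and file name here are just for illustration): generate a Go source file from your assets, then read them back through the generated Asset function.

$ go-bindata -o bindata.go templates/...

// main.go, in the same package as the generated bindata.go:
package main

import (
	"fmt"
	"log"
)

func main() {
	// Asset is generated by go-bindata for each packed file.
	data, err := Asset("templates/index.html")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(data))
}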

2.7 Making Windows applications

This section covers how to control whether or not your Go program opens a command prompt window, using -ldflags="-H windowsgui" with go build, and also how to link resource files (like the application's icon) using an entry such as IDI_MYAPP ICON "myapp.ico".

It also mentions some recommended packages for building multi-platform compatible GUIs.

2.8 Configuration files

The first part of this section covers different file formats like INI, JSON, YAML and TOML, along with their strengths and weaknesses.

Aside from file format, file location on each platform can also be a source of confusion when configuring applications. On UNIX systems the convention was originally to place each file in the home directory, like $HOME/.myapp, but more recently the XDG Base Directory Specification recommends that config files be placed under $HOME/.config/.

Similarly on Windows it's no problem if you use %USERPROFILE%\.config\, but the author mentioned he often places config files under %APPDATA%\my-app\.
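
Putting both conventions together, a per-OS lookup might look something like this sketch (the app name myapp and the XDG fallback order are my own assumptions):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"runtime"
)

// configDir picks a conventional config location for each platform.
func configDir() string {
	if runtime.GOOS == "windows" {
		return filepath.Join(os.Getenv("APPDATA"), "myapp")
	}
	if xdg := os.Getenv("XDG_CONFIG_HOME"); xdg != "" {
		return filepath.Join(xdg, "myapp")
	}
	return filepath.Join(os.Getenv("HOME"), ".config", "myapp")
}

func main() {
	fmt.Println(configDir())
}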


Well, that's the gist of it. I haven't really built software for Windows before, mostly because it seemed like too much trouble, but this chapter sure made it look like Go is making that whole process much easier for those of us who are used to developing for Linux.

For anyone who missed my (much briefer) summary of chapter 1, you can find it here.