Terraform, VPC, and why you want a tfstate file per env

Hey kids!  If you’ve been following along at home, you may have seen my earlier posts on getting started with terraform and figuring out what AWS network topology to use.  You can think of this one as like if those two posts got drunk and hooked up and had a bastard hell child.

Some context: our terraform config had been pretty stable for a few weeks.  After I got it set up, I hardly ever needed to touch it.  This was an explicit goal of mine.  (I have strong feelings about delegation of authority and not using your orchestration layer for configuration, but that’s for another day.)

And then one day I decided to test drive Aurora in staging, and everything exploded.

Trigger warning: rants and scary stories about computers ahead.  The first half of this is basically a post mortem, plus some tips I learned about debugging terraform outages.  The second half is about why you should care about multiple state files, how to set up and manage multiple state files, and the migration process.

First, the nightmare

This started when I made a simple tf module for Aurora, spun it up in staging, assigned a CNAME, turned on multi-AZ support for RDS.  I was really just fussing around with minor crap in staging, like you do, so I can’t even positively identify which change triggered it.  But around 11 pm Thursday night the *entire fucking world* blew up.  Any terraform command I tried to run would just crash and dump a giant crash.log.

Terraform crashed! This is always indicative of a bug within Terraform.

So I start debugging, right?  I start backing out changes bit by bit.  I removed the Aurora module completely.  I backed out several hours’ worth of changes.  It became clear that the tfstate file must be “poisoned” in some way, so I disabled remote state storage and started performing surgery on the local tfstate file, extracting all the Aurora references and anything else I could think of, just trying to get tf plan to run without crashing.

What was especially crazy-making was the fact that I could apply *any* of the modules or resources to *any* of the environments independently.  For example:

 $ for n in modules/* ; do terraform plan -target=module.staging_$(basename $n) ; done

… would totally work!  But “terraform plan” in the top level directory took a giant dump.

I stabbed at this a bunch of different ways.  I scrolled through a bunch of 100k line crash logs and grepped around for things like “ERROR”.  I straced the terraform runs, I deconstructed tens of thousands of lines of tfstates.  I spent far too much time investigating the resources that were reporting “errors” at the end of the run, which — spoiler alert — are a total red herring.  Stuff like this —

 module.staging_vpc.aws_eip.nat_eip.0: Refreshing state... (ID: eipalloc-bd6987da)
 Error refreshing state: 34 error(s) occurred:

 * aws_s3_bucket.hound-terraform-state: unexpected EOF
 * aws_rds_cluster_instance.aurora_instances.0: unexpected EOF
 * aws_s3_bucket.hound-deploy-artifacts: unexpected EOF
 * aws_route53_record.aurora_rds: connection is shut down

It’s all totally irrelevant, disregard.

It’s like 5 am by this point, which is why I feel only slightly less mortified that @stack72 had to gently point out that all you have to do is find the “panic” in the crash log, because terraform is written in Go, so OF COURSE THERE IS A PANIC buried somewhere in all that dependency graph spew.  God, I felt like such an idiot.
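For posterity, since it would have saved me six hours: the crash log is mostly dependency graph spew, and the part you actually want is the Go panic plus the stack trace right after it.  Something as dumb as

 $ grep -n -A 30 'panic:' crash.log | less

is usually enough to tell you which resource or provider blew up.  (The -A 30 is arbitrary; grab however much of the stack trace you want.)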

The painful recovery

I spent several hours trying to figure out how to recover gracefully by isolating and removing whatever was poisoned in my state.  Unfortunately, some of the techniques I used to try and systematically validate individual components or try to narrow down the scope of the problem ended up making things much, much worse.

For example: applying individual modules (“terraform apply -target={module}”) can be extremely problematic.  I haven’t been able to get an exact repro yet, but it seems to play out something like this: if you’re applying a module that depends on other resources that get dynamically generated and passed into it, but you aren’t applying the modules that do that work in the same run, terraform will sometimes just create them again.

Like, a whole duplicate set of all your DNS zones with the same domain names but different zone ids, or a duplicate set of VPCs with the same VPC names and subnets, routes, etc.  And you only find out when it gets to the point of trying to create one of those rare resources where AWS actually enforces unique names, like autoscaling groups.

And yes, I do run tf plan.  Religiously.  But when you’re already in a bad state and you’re trying to do unhealthy things with mutated or corrupt tfstate files … shit happens.  And thus, by the time this was all over:

I ended up deleting my entire infrastructure by hand, three times.

I’m literally talking about clicking select-all-delete in the console on fifty different pages, writing grungy shell scripts to cycle through and delete every single AWS resource, purging local cache files, tracking down (via the CLI) resources that exist but don’t show up in the console, etc.

Of course, every time I purged and started from scratch, it had to create a new hosted zone for the root domain, so I had to go back and update my registrar with the new nameservers, because each AWS zone ID is associated with a different set of nameservers.  Meanwhile our email, website, API, etc. were all unresolvable.

If we were in production, this would have been one of the worst outages of my career, and that’s … saying a lot.

So … Hi!  I learned a bunch of things from this.  And I am *SO GLAD* I got to learn them before we have any customers!

The lessons learned

(in order of least important to most important)

1. Beware of accidental duplicates.

It actually really pisses me off how easily you can just nonchalantly and unwarily create a duplicate VPC or Route53 zone with the same name, same delegation, same subnets, etc.  Like, why would anyone ever WANT that behavior?  I blame AWS for this, not Hashicorp, but jesus christ.

So that’s going on my “Wrapper Script TODO” list: literally check AWS for any existing VPC or Route53 zone with the same name, and bail on dupes in the plan phase.
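Roughly what I have in mind, sketched with the AWS CLI (the $VPC_NAME / $ZONE_NAME variables are whatever the wrapper already knows about the env):

 # does a VPC with this Name tag already exist?
 $ aws ec2 describe-vpcs \
     --filters "Name=tag:Name,Values=$VPC_NAME" \
     --query 'Vpcs[].VpcId' --output text

 # does a Route53 zone for this domain already exist?
 # (note the trailing dot: Route53 stores zone names fully qualified)
 $ aws route53 list-hosted-zones-by-name --dns-name "$ZONE_NAME" \
     --query "HostedZones[?Name=='${ZONE_NAME}.'].Id" --output text

If either of those prints anything at all, refuse to run the plan.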

(And by the way — this outage was hardly the first time I’ve run into this.  I’ve had tf unexpectedly spin up duplicate VPCs many, many, many times.  It is *not* because I am running apply from the wrong directory; I’ve checked for any telltale stray .terraforms.  I usually don’t even notice for a while, so it’s hard to figure out what exactly I did to cause it, but it definitely seems related to applying modules or subsets of a full run.  Anyway, I am literally setting up a monitoring check for duplicate VPC names, which is ridiculous, but whatever, maybe it will help me track this down.)

(Also, I really wish there was a $TF_ROOT.)

2. Tag your TF-owned resources, and have a kill script.

This is one of many great tips from @knuckolls that I am totally stealing.  Every taggable resource that’s managed by terraform, give it a tag like “Terraform: true”, and write some stupid boto script that will just run in a loop until it has successfully terminated everything with that tag (plus maybe an env tag).  (You probably want to explicitly exclude some resources, like your root DNS zone id, S3 buckets, and data backups.)

But if you get into a state where you *can’t* run terraform destroy, but you *could* spin up from a clean slate, you’re gonna want this prepped.  And I have now been in that situation at least four or five times not counting this one.  Next time it happens, I’ll be ready.
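Something like this with boto3.  This is just a sketch that handles EC2 instances; RDS, ELBs, and friends would get their own loops, and the tag names are only the convention from above:

import time

import boto3

ENV = "staging"   # or read it from argv
ec2 = boto3.resource("ec2")

def doomed_instances():
    # every instance tagged as terraform-owned in this env, in any live state
    return list(ec2.instances.filter(Filters=[
        {"Name": "tag:Terraform", "Values": ["true"]},
        {"Name": "tag:Environment", "Values": [ENV]},
        {"Name": "instance-state-name",
         "Values": ["pending", "running", "stopping", "stopped"]},
    ]))

while True:
    instances = doomed_instances()
    if not instances:
        break
    for i in instances:
        print("terminating", i.id)
        i.terminate()
    time.sleep(15)   # keep looping until everything is actually gone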

Which brings me to my last and most important point — actually the whole reason I’m even writing this blog post.  (she says sheepishly, 1000 words later.)

3. Use separate state files for each environment.

Sorry, let me try this again in a font that better reflects my feelings on the subject:

USE SEPARATE STATE FILES.

FOR EVERY ENVIRONMENT.

This is about limiting your blast radius.  Remember: this whole thing started when I made some simple changes to staging.

To STAGING.

But all my environments shared a state file, so when something bad happened to that state file they all got equally fucked.

If you can’t safely test your changes in isolation away from prod, you don’t have infrastructure as code.

Look, you all know how I feel about terraform by now.  I love composable infra, terraform is the leader in the space, I love the energy of the community and the responsiveness of the tech leads.  I am hitching my wagon to this horse.

It is still as green as the motherfucking Shire and you should assume that every change you make could destroy the world.  So your job as a responsible engineer is to add guard rails, build a clear promotion path for validating changesets into production, and limit the scope of the world it is capable of destroying.  This means separate state files.

So Monday I sat down and spent like 10 hours pounding out a version of what this could look like.  There aren’t many best practices to refer to, and I’m not claiming my practices are the bestest, I’m just saying I built this thing and it makes sense to me and I feel a lot better about my infra now.  I look forward to seeing how it breaks and what kinds of future problems it causes for me.


HOWTO: Migrating to multiple state files

Previous config layout, single state file

If you want to see some filesystem ascii art about how everything was laid out pre-Monday, here you go.

[terraform-layout-old: ascii diagram of the old filesystem layout]

Basically: we used s3 remote storage, in a bucket with versioning turned on.  There was a top-level .tf file for each environment {production,dev,staging}, top-level .tf files for env-independent resources {iam,s3}, and everything else in a module.

Each env.tf file would call out to modules to build its VPC, public/private subnets, IGW, NAT gateway, security groups, public/private route53 subdomains, an auto-scaling group for each service (including launch config and internal or external ELB), bastion hosts, external DNS, resource tags, and so forth.
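For a flavor of what one of those env.tf files looked like (module names and inputs here are invented for illustration, not our real config):

module "staging_vpc" {
  source          = "./modules/aws_vpc"
  env             = "staging"
  cidr            = "10.2.0.0/16"
  public_subnets  = "10.2.0.0/20,10.2.16.0/20,10.2.32.0/20"
  private_subnets = "10.2.64.0/20,10.2.80.0/20,10.2.96.0/20"
}

module "staging_api_asg" {
  source = "./modules/service_asg"
  env    = "staging"
  vpc_id = "${module.staging_vpc.vpc_id}"
}

And so on, for every module, for every environment, all sharing one state file.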

New config layout, with state file per environment

In the new world there’s one directory per environment and one base directory, each of which has its own remote state file.  (You source the init.sh file to initialize a new environment; after that it just works, as long as you run tf commands from that directory.)  “Base” has a few resources that don’t correspond to any environment — s3 buckets, certain IAM roles and policies, the root route53 zone.

Here’s an ascii representation of the new filesystem layout: [terraform-layout-current]

All env subdirectories have a symlink to ../variables.tf and ../initialize.tf.  Variables.tf declares the default variables that are shared by all environments — things like

$var.region, $var.domain, $var.tf_s3_bucket

Initialize.tf contains empty variable declarations for the variables that will be populated in each env’s .tfvars file, things like

$var.cidr, $var.public_subnets, $var.env, $var.subdomain_int_name
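Concretely, the split looks something like this (all the values here are placeholders):

# variables.tf -- shared defaults, symlinked into every env directory
variable "region"       { default = "us-east-1" }
variable "domain"       { default = "example.com" }
variable "tf_s3_bucket" { default = "hound-terraform-state" }

# initialize.tf -- empty declarations, filled in by each env's tfvars
variable "env"                {}
variable "cidr"               {}
variable "public_subnets"     {}
variable "subdomain_int_name" {}

# staging.tfvars
env                = "staging"
cidr               = "10.2.0.0/16"
public_subnets     = "10.2.0.0/20,10.2.16.0/20"
subdomain_int_name = "staging-internal"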

Other than that, each environment just invokes the same set of modules the same way they did before.

The thing that makes all this possible?  Is this little sweetheart, terraform_remote_state:

resource "terraform_remote_state" "master_state" {
  backend = "s3"
  config {
    bucket = "${var.tf_s3_bucket}"
    region = "${var.region}"
    key = "${var.master_state_file}"
  }
}

It was not at all apparent to me from the docs that you could not only store your remote state, but also query values from it.  So I can set up my root DNS zones in the base environment, and then ask for the zone identifiers in every other module after that.

module "subdomain_dns" {
  source = "../modules/subdomain_dns"
  root_public_zone_id = "${terraform_remote_state.master_state.output.route53_public_zone}"
  root_private_zone_id = "${terraform_remote_state.master_state.output.route53_internal_zone}"
  subdomain_int = "${var.subdomain_int_name}"
  subdomain_ext = "${var.subdomain_ext_name}"
}

How. Fucking. Magic. is that.

SERIOUSLY.  This feels so much cleaner and better.  I removed hundreds of lines of code in the refactor.

(I hear Atlas doesn’t support symlinks, which is unfortunate, because I am already in love with this model.  If I couldn’t use symlinks, I would probably use a Makefile that copied those files into each env subdir and constructed the tf commands.)

Rolling it out

Switching from a single statefile to multiple state files was by far the trickiest part of the refactor.  First, I built a new dev environment from scratch just to prove that it would work.

Second, I did a “terraform destroy -target=module.staging”, then recreated it from the env_staging directory by running “./init.sh ; terraform plan -var-file=./staging.tfvars”.  Super easy, worked on the first try.
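For the curious, init.sh is basically just the “terraform remote config” incantation pointed at that env’s own key in the state bucket.  This is a sketch; the key and region are whatever that env uses:

#!/bin/bash
set -e
# point this directory's state at its own key in the shared bucket
terraform remote config \
  -backend=s3 \
  -backend-config="bucket=hound-terraform-state" \
  -backend-config="key=staging/terraform.tfstate" \
  -backend-config="region=us-east-1"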

For production and base, though, I decided to try doing a live migration from the shared state file to separate state files without any production impact.  This was mostly for my own entertainment and to prove that it could be done.  And it WAS doable, and I did it, but it was preeeettty delicate work and took about as long as literally everything else combined (~5 hours?).  Soooo much careful massaging of tfstate files.

(Incidentally, if you ever have a syntax error in a 35k line JSON file and you want to find what line it’s on, I highly recommend http://jsonlint.com.  Or perhaps just reconsider your life choices.)
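If you’d rather not paste 35k lines of tfstate into a web form, python’s built-in json module will also tell you where the problem is:

 $ python -m json.tool terraform.tfstate > /dev/null

It prints the line and column of the first syntax error it hits.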

Stuff I still don’t like

There’s too much copypasta between environments, even with modules.  Honestly, if I could pass around maps and interpolate $var.env into every resource name, I could get rid of _so_ much code.  But Paul Hinze says that’s a bad idea that would make the graph less predictable, and he’s smarter than me, so I believe him.

TODO

There is lots left to do around safety, tooling, sanity checks, and helping people not accidentally clobber things.  I haven’t bothered making it safe for multiple engineers, because right now it’s just me.

This is already super long so I’m gonna wrap it up.  I still have a grab bag of tf tips and tricks and “things that took me hours/days to figure out that lots of people don’t seem to know either”, so I’ll probably try and dump those out too before I move on to the next thing.

Hope this is helpful for some of you.  Love to hear your feedback, or any other creative ways that y’all have gotten around the single-statefile hazard!

P.S. Me after the refactor was done =>


AWS Networking, Environments and You

Last week we hit an important milestone in the life of our baby startup: a functional production environment, with real data flowing from ingestion to storage to serving queries out!  (Of course everything promptly exploded, but whatever.)

This got me started thinking about how to safely stage and promote terraform changes, in order to isolate the blast radius of certain destructive changes.  I tweeted some random thing about it,

… and two surprisingly fun things happened.  I learned some new things about AWS that I didn’t know (after YEARS of using it).  And hilariously, I learned that I and lots of other people still believe things about AWS that aren’t actually true anymore.

That’s why I decided to write this.  I’m not gonna get super religious on you, but I want to recap the current set of common and/or best practices for managing environments, network segmentation, and security automation in AWS, and I also want to remind you that this post will be out of date as soon as I publish it, because things are changing pretty crazy fast.

Multiple Environments: Why Should I Care?

An environment exists to provide resource isolation.  You might want this for security reasons, or testability, or performance, or just so you can give your developers a place to fuck around and not hurt anything.

Maybe you run a bunch of similar production environments so you can give your customers security guarantees or avoid messy performance cotenancy stuff.  Then you need a template for stamping out lots of these environments, and you need to be *really* sure they can’t leak into each other.

Or maybe you are more concerned about the vast oceans of things you need to care about beyond whatever unit tests or functional tests are running on your laptop.  Like: capacity planning, load testing, validating storage changes against production workloads, exercising failovers, etc.  For any scary changes you have, you need a production-like env to practice them in.

Bottom line: If you can’t spin up a full copy of your infra and test it, you don’t actually have “infrastructure as code”.  You just have … some code, and duct tape.

https://twitter.com/phinze/status/710227204349624321

The basics are simple:

  • Non-production environments must be walled off from production as strongly as possible.  You should NEVER be able to accidentally connect to a prod db from staging (or from one prod env to another).
  • Production and non-production environments (or all of your prod envs) should share as much of the same tooling and code paths as possible.  Like, some amount asymptotically approaching 100%.  Any gaps there will inevitably, eventually bite you in the ass.

Managing Multiple Environments in AWS

There are baaaasically three patterns that people use to manage multiple environments in AWS these days:

  1. One AWS billing account and one flat network (VPC or Classic), with isolation performed by routes or security groups.
  2. Many AWS accounts with consolidated billing.  Each environment is a separate account (often maps to one acct per customer).
  3. One AWS billing account and many VPCs, where each environment ~= its own VPC.

Let’s start old school with a flat network.

1:  One Account, One Flattish Network

This is what basically everyone did before VPC.  (And ummm let’s be honest, lots of us kept it up for a while because GOD networking is such a pain.)

In EC2 Classic the best you got was security groups.  And — unlike VPC security groups — you couldn’t stack them, or change the security groups on a running instance without destroying it, and there was a crazy low hard cap on the (# of secgroup rules * # of secgroups).  You could kind of gently “suggest” environments with things like DNS subdomains and config management, and sometimes you would see people literally just roll over and resort to $ENV variables.

Most people either a) gave up and just had a flat network, or b) this happened.

At Parse we did a bunch of complicated security groups plus chef environments that let us spin up staging clusters and the occasional horrible silo’d production stack for exceptional customer requirements.  Awful.

VPC has made this better.  Even if you’re still using a flat network model.  You can now manage your route tables, stack security groups and IAM rules and reapply to existing nodes without destroying the node or dropping your internet connections.  You can define private network subnets with NAT gateways, etc.

https://twitter.com/sudarkoff/status/710264656367996928

Some people tell me they are using a single VPC with network ACLs to separate environments for ease of use, because you can reuse security groups across environments.  Honestly this seems a bit more like a bug than a feature to me, because a) you give up isolation and b) I don’t see how that helps you have a versioned, tested stack.

https://twitter.com/tomtheguvnor/status/710226892633333762

Ok, moving on to the opposite end of the spectrum, the crazy kids who are doing tens (or TENS OF THOUSANDS) of linked accounts.

2:  One AWS Account Per Environment

A surprising number of people adopted this model in the bad old EC2 Classic days, not because they necessarily wanted it but because they needed a stronger security model and looser resource caps.  This is why AWS released Consolidated Billing way back in 2010.

I actually learned a lot this week about the multi-account model!  Like that you can create IAM roles that span accounts, or that you can share AMIs between accounts.  This model is complicated, but there are some real benefits.  Like real, actual, hardcore security/perf isolation.  And you will run into fewer resource limits than if you jam everything into a single account, and revoking/managing IAM creds is clearer.

Security nerds love this model, but it’s not clear that …. literally anyone else does.

Some things that make me despise it without even trying it:

  • AWS billing is ALREADY a horrendous hairball coughed up by the world’s ugliest cat, I can’t even imagine trying to wrangle multiple accounts.
  • It’s more expensive, and you incur extra billing overhead.
  • Having to explicitly list any resource you want to share between accounts just makes me want to tear my hair out strand by strand while screaming on a street corner.
  • The account creation API still has manual steps, like getting certs/keypairs.
  • You can’t make bulk changes across accounts, and AWS doesn’t like you having thousands of linked accounts.  It also limits your flexibility with Reserved Instances.

Here is a pretty reasonable blog post laying out some of the benefits though, and as you can see, there are plenty of other crazy people who like it.  Mostly security nerds.

3:  One AWS Account, One VPC Per Environment

I have saved the best for last.  I think this is the best model, and the one I am adopting.  You spin up a production VPC, a staging VPC, dev VPC, Travis-CI VPC.  EVERYBODY GETS A VPC!#@!

One of those things that everybody seems to “know” but isn’t true is that you can’t have lots of VPCs.  Yes, it is capped at 5 by default, and many people have stories about how they couldn’t get it raised, and that used to be true.  But the hard cap is now 200, not 10, so VPC awayyyyyyyy my pretties!

Here’s another reason to love VPC <-> env mapping: orchestration is finally coming around to the party.  Even recently people were still trying to make chef-metal a thing, or developing their own coordination software from scratch with Boto, or just using the console and diffing and committing to git.

Dude, stop.  We are past the point where this is even a debate: you should default to using Terraform or CloudFormation for the bones of your infrastructure, the things that rarely change.  And once you’ve done that, you’re most of the way to a reusable, testable stack.

Most of the cotenancy problems that account-per-env solved are a lot less compelling to me now that VPCs exist.

VPCs are here to help you think about infrastructure like regular old code.  Lots of VPCs are approximately as easy to manage as one VPC.  Unlike lots of accounts, which are there to give you headaches and one-offs and custom scripts and pointy-clicky shit and complicated horrible things to work around.

VPCs have some caveats of their own.  Like, the biggest block you can assign to a VPC is a /16.  If you’re using 4 availability zones and public subnets + NATted private subnets, that’s 65,536 addresses split across eight subnets, or only ~8k IPs per subnet/AZ pair.  Shrug.

You can peer VPCs and reference security groups across the peering connection, but not across regions (yet).  Also, if you’re a terraform user, be aware that it handles VPC peering fine but doesn’t handle multiple accounts very well.

Lots of people seem to have had issues with security group limits per VPC, even though the documented limit is 500 and can supposedly be raised by request.  I’m … not sure what to think of that.  I *feel* like if you’re building a thing with > 500 security groups on a single VPC, you’re probably doing something wrong.

Test my code you piece of shit I dare you

Here’s the thing that got me excited about this from the start, though: the ability to do things like test terraform modules on a staging VPC from a branch before promoting the clean changes to master.  If you plan on doing things like running bleeding-edge software in production *cough* you need allllll the guard rails and test coverage you can possibly scare up.  VPCs help you get this in a massive way.

Super quick example: say you’re adding a NAT gateway to your staging cluster.  You would use the remote git source with your branch’s changes:

[code language=”javascript”]
// staging
module "aws_vpc" {
  source = "git::ssh://git@github.com/houndsh/infra.git//terraform/modules/aws_vpc?ref=charity.adding-nat-gw"
  env    = "${var.env}"
}

And then once you’ve validated the change, you simply merge your branch to master and run terraform plan/apply to production.

[code language=”javascript”]
// production
module "aws_vpc" {
  source = "github.com/houndsh/infra/terraform/modules/aws_vpc"
  env    = "${var.env}"
}
[/code]

And for GOD’S SAKE USE DIFFERENT STATE FILES FOR EACH VPC / ENVIRONMENT okayyyyyy but that is a different rant, not an AWS rant, so let’s move along.

In Conclusion

There are legit reasons to use all three of these models, and infinite variations upon them, and your use case is not my use case, blah blah.

But moving from one VPC to multiple VPCs is really not a big deal.  I know a lot of us bear scars, but it is nothing like the horror that was trying to move from EC2 Classic to VPC.

https://twitter.com/mistofvongola/status/710239882912747524

VPC has a steeper learning curve than Classic, but it is sooooo worth it.  Every day I’m rollin up on some new toy you get with VPC (ICMP for ELBs! composable IAM host roles! new VPC NAT gateway!!!).  The rate at which they’re releasing new shit for VPC is staggering, I can barely keep up.

Alright, said I wasn’t gonna get religious but I totally lied.

VPC is the future and it is awesome, and unless you have some VERY SPECIFIC AND CONVINCING reasons to do otherwise, you should be spinning up a VPC per environment with orchestration and prob doing it from CI on every code commit, almost like it’s just like, you know, code.

That you need to test.

Cause it is.

(thank you to everybody who chatted with me and taught me things and gave awesome feedback!!  knuckle tatts credit to @mosheroperandi)