This feels like as good a time as any to note that I am not as much of an extremist as people seem to think I am when it comes to Friday deploys, or deploy freezes in general.
(Sometimes I wonder why people think I’m such an extremist, and then I remember that I did write a post about murdering puppies. Ok, ok. Point taken.)
Take this recent thread from LinkedIn, where Michael Davis posted an endorsement of my Puppies article along with his own thoughts on holiday code freezes, followed by a number of smart, thoughtful comments on why this isn’t actually attainable for everyone. Payam Azadi talks about an “icing” and “defrosting” period where you ease into and out of deploy freezes (never heard of this, I like it!), and a few other highly knowledgeable folks chime in with their own war stories and cautionary tales.
It’s a great thread, with lots of great points. I recommend reading it. I agree with all of them!!
If you can’t move swiftly with confidence, you need workarounds
For the record, I do not believe that everyone should get rid of deploy freezes, on Fridays or otherwise.
If you do not have the ability to move swiftly with confidence, then deploy freezes before a holiday or a big event, or hell, even weekends, are probably the sensible thing to do. In practice, “moving swiftly with confidence” means you can generally find problems in your new code before your customers do, which comes down to the quality and usability of your observability tooling and your ability to explore high-cardinality dimensions in real time (which most teams do not have).
If you can’t do the “right” thing, you find a workaround. This is what we do, as engineers and operators.
Deploy freezes are a hack, not a virtue
Look, you know your systems better than I do. If you say you need to freeze deploys, I believe you.
Honestly, I feel like I’ve always been fairly pragmatic about this. The one thing that does get my knickers in a twist is when people adopt a holier-than-thou posture towards their Friday deploy freezes. Like they’re doing it because they Care About People and it’s the Right Thing To Do and some sort of grand moral gesture. Dude, it’s a fucking hack. Just admit it.
It’s the best you can do with the hand you’ve been dealt, and there’s no shame in that! That is ALL I’m saying. Don’t pat yourself on the back; act a little sheepish instead, and I am so with you.
I think we can have nice things
I think there’s a lot of wisdom in saying “hey, it’s the holidays, this is not the time to be rushing new shit out the door absent some specific forcing function, alright?”
My favorite time of year to be at work (back when I worked in an office) was always the holidays. It was so quiet and peaceful, the place was empty, my calendar was clear, and I could switch gears and work on completely different things, out of the critical line of fire. I feel like I often peaked creatively during those last few weeks of the year.
I believe we can have the best of both worlds: a yearly period of peace and stability with a relatively low rate of change, without the high-stakes peril of locks and freezes and terrifying January recoveries.
How? Two things.
Don’t freeze deploys. Freeze merges.
To a developer, ideally, the act of merging changes back to main and having those changes deployed to production should feel like a single atomic action: the faster the better, the less variance the better. You merge, it goes right out. If you don’t want it to go out, you’d better not merge.
The worst of both worlds is when you let devs keep merging diffs, checking items off their todo lists, closing out tasks, for days or weeks. All these changes build up like a snowdrift over a pile of grenades. You aren’t going to find the grenades til you plow into the snowdrift on January 5th, and then you’ll find them with your face. Congrats!
If you want to freeze deploys, freeze merges. Let people work on other things. I assure you, there is plenty of other valuable work to be done.
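To make that concrete: here is a minimal sketch of one way to do it, assuming your CI system lets you wire a script up as a required status check on pull requests. The dates and messages are made up; swap in your own calendar and tooling.

```python
# A made-up merge-freeze gate: run this as a required status check on pull
# requests. If it exits nonzero, the PR cannot merge, so nothing piles up
# behind a frozen deploy pipeline.
import sys
from datetime import date

FREEZE_START = date(2022, 12, 19)  # hypothetical holiday freeze window
FREEZE_END = date(2023, 1, 2)

def main() -> int:
    today = date.today()
    if FREEZE_START <= today <= FREEZE_END:
        print(f"Merge freeze in effect ({FREEZE_START} through {FREEZE_END}).")
        print("Park this PR and go work on something else until the thaw.")
        return 1
    print("No freeze in effect; merge away.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```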
Don’t freeze deploys unless your goal is to test deploy freezes
The second thing is a corollary. Don’t actually freeze deploys, unless your SREs and on call folks are bored and sitting around together, going “wouldn’t this be a great opportunity to test for memory leaks and other systemic issues that we don’t know about due to the frequency and regularity of our deploys?”
If that’s you, godspeed! Park that deploy engine and sit on the hood, let’s see what happens!
People always remember the outages and instability that we trigger with our actions. We tend to forget about the outages and instability we trigger with our inaction. But if you’re used to deploying every day, or many times a day: first, good for you. Second, I bet you a bottle of whiskey that something’s gonna break if you go for two weeks without deploying.
I bet you the good shit. Top shelf. 🥃
This one is so easy to mitigate, too. Just run the deploy process every day or two, but don’t ship new code out.
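Here is roughly what I mean, as a toy sketch. The helpers are hypothetical stand-ins for whatever your real registry and deploy pipeline look like; the point is that the machinery keeps getting exercised even though no new code ships.

```python
# A toy "exercise the deploy machinery without shipping new code" job, meant
# to run on a schedule during a freeze. The helpers are hypothetical stubs.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("freeze-exerciser")

def get_live_artifact(service: str) -> str:
    """Return the artifact currently running in production (stubbed)."""
    return "web@sha256:abc123"  # placeholder: ask your registry or orchestrator

def deploy(service: str, artifact: str) -> None:
    """Kick off a normal rolling deploy of the given artifact (stubbed)."""
    log.info("deploying %s to %s via the usual pipeline", artifact, service)

def exercise_deploy(service: str) -> None:
    # Same pipeline, same steps, same dashboards -- just no new code.
    deploy(service, get_live_artifact(service))

if __name__ == "__main__":
    exercise_deploy("web")
```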
Alright. Time for me to go fly to my sister’s house. Happy holidays everyone! May your pagers be silent and your bellies be full, and may no one in your family or friend group mention politics this year!
💜💙💚💛🧡❤️💖
charity
Me and Bubba and Miss Pinky Persnickety
P.S. The title is hyperbole! I was frustrated! I felt like people were intentionally misrepresenting my point and my beliefs, so I leaned into it. Please remember that I grew up on a farm and we ended up eating most of our animals. Possibly I am still adjusting to civilized life. Also, I have two cats and I love them very much and have not eaten either of them yet.
Two years ago I wrote a piece in The New Stack about the Future of Ops Careers. Towards the end, I wrote:
The reality is that jack-of-all-trades systems infrastructure jobs are slowly vanishing: the world doesn’t need thousands of people who can expertly tune postfix, SpamAssassin, and ClamAV—the world has Gmail. (…)
Building infrastructure and operational expertise used to be bundled together into a single role. But the industry is now bifurcating along an infrastructure fault line, and the overlap between infrastructure-oriented engineers and operationally-minded engineers is swiftly eroding. Engineers who love this work increasingly have a choice to make. Either you can 1) go deep on infrastructure by joining a company that does infrastructure as a service, or 2) go broad on operability by joining a company to help them do as little infrastructure as possible.
I described the second category as “operations engineering minus the infrastructure,” dedicated to evaluating and assembling a production stack of third-party platform providers, enabling software engineers to self-serve their services and own their own code in production. I said:
Your job will be to aggressively minimize the cycles your org devotes to infrastructure by finding effective ways to outsource or minimize infra labor. Your job is to NOT go deep if there is any workable alternative.
Your job will be to work cross-functionally with all the other software engineering teams, looking for ways to speed up their time to value and helping them own their own code in production.
Your job will be to move past the kludgey old models of “outsourcing” to sophisticated understandings of how and where to leverage abstractions that can radically accelerate development.
That second category I was describing now has a name. We call those teams “platform engineering.”
The fifty-year arc of software careers
In the beginning, there were people who wrote and ran software. At some point, we spun away ops skills from dev skills into two different professions, but that turned out to be a ginormous mistake, so along came DevOps to reunify them. Nowadays, ops as an independent profession is in the process of fading out. Companies are spinning down their ops teams left and right. Engineers who formerly identified as sysadmins or operations have turned into DevOps engineers, and soon there will just be “software people” again. This is the way of things.
Please note that this is NOT the same thing as saying “ops is dead,” or “ops skills are no longer valuable or needed.”[1] Our systems are only getting more complex, more difficult to operate, and simultaneously more critical to life on earth, which means that operational excellence has never been more desperately needed (and if you don’t respect that, 🌈 you deserve to suffer 🌈).
The industry story of the past three to five years has been us trying to figure out how to help software engineers own their own code in production[2], phasing out dedicated ops teams, and aggressively outsourcing as much infrastructure as possible.
As we should. Developer cycles are the scarcest resource in your company, and you want to spend as many of those as possible on your core product: the crown jewel, the code that makes you a business. Money is cheaper than engineering cycles, and teams that are focused on their core business will always outperform teams whose focus is spread across dozens of non-revenue-generating projects. Let someone else build and run all the dependencies and adjacencies.
Before: some engineers wrote code, and some engineers ran code.
Now: all engineers write code, and all engineers run the code they write.
Platform engineering is what stands between you and darkness
When you start talking about putting software engineers on call for their own code, and generally being more involved in production, some percentage of the time you will hear back a guttural wail of despair: “You can’t expect me to know EVERYTHING about EVERYTHING!”
Quite right; we can’t. Platform engineering teams are part of the answer to this perfectly reasonable complaint. It’s not that you’re being asked to do or understand more in toto, but the distribution of labor and responsibility is shifting:
Before: some engineers wrote code, and some engineers ran code.
Now: all engineers write code, and all engineers run the code they write—but we divide the areas of responsibility by layer or function.
The emergence of a minimum viable self-serve tier
In the earliest days of a company, your first few engineers end up bootstrapping an infrastructure by reading AWS docs or blog posts, or asking a friend for recommendations to get started. They might start by setting up a managed container service, or configuring Terraform, and for a while everybody deploys and owns their own code, just as god intended.
But cognitive limits kick in pretty quickly. The maze of APIs and SDKs and components out there is simply bewildering, even for an experienced ops hand. Before long, it becomes someone’s job to make good decisions, pick a suite of compute and storage options that serve the team’s needs, and write some tooling that pulls everything into a coherent whole—which, at a minimum, lets you:
Run tests and generate new artifacts
Deploy artifacts, version them, and roll back
Instrument, monitor, and debug
Store data somewhere, manage schemas and migrations
Adjust capacity as needed
Define and commit all components (and their relationships) as code
Once these are built, it should be trivial for an engineer to come along and spin up a new service using templates and components from existing services. It should be much simpler and easier to use the blessed paths than anything else, and there should be friction if you go off the beaten path.
Congratulations! You’ve just been platformed 🎉. One of the key principles of any developer platform is that it should be easy to do the right things, and hard to do the wrong things.
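To make “templates and components” slightly more concrete, here is a purely illustrative sketch of what a paved-road service definition might look like. Every name, field, and default in it is hypothetical, not a prescription.

```python
# Purely illustrative: a paved-road service definition that platform tooling
# could consume. Every field, default, and function here is hypothetical.
from dataclasses import dataclass, field

@dataclass
class ServiceSpec:
    """Everything the platform needs to stamp out a new service from templates."""
    name: str
    language: str = "go"                  # one of the blessed runtimes
    deploy_strategy: str = "rolling"      # blessed deploy pattern
    datastore: str = "managed-postgres"   # hosted, not hand-rolled
    telemetry: bool = True                # instrumentation on by default
    owners: list[str] = field(default_factory=list)

def scaffold(spec: ServiceSpec) -> None:
    # Real tooling would render repo templates, CI pipelines, infra modules,
    # dashboards, and alerts from this spec; here we just show the intent.
    print(f"scaffolding {spec.name}: {spec.language}, "
          f"{spec.deploy_strategy} deploys, {spec.datastore}, "
          f"owners={spec.owners}")

scaffold(ServiceSpec(name="checkout", owners=["team-payments"]))
```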
The differences between platform engineering and traditional ops
Platform teams are typically staffed by engineers who are comfortable writing software. Not just scripting and automation, but writing tests and doing code reviews. Platform teams also operate much more like product development teams do, with product managers (and occasionally, designers, developer advocates, or UX researchers).
This doesn’t mean that everybody on a platform team has to have originally been a software engineer; in fact, a super common failure condition for platform teams is simply thinking all they need to do is hire software engineers to build developer tools. A strong platform team has an equally deep grounding in operations experience and software development. Individuals who are experts in both areas are fairly rare, but you can pull together a strong, well-rounded team by assembling a mix of SWEs (with some ops experience) and ops or DevOps engineers (with some software experience) and having them learn and grow from each other.
Platform teams are decidedly cloud-native; they actually mostly involve platforms built atop the cloud itself—PaaS, IaaS, everything-aaS, serverless, and so forth.
Ops/DevOps teams are oriented around managing infrastructure, often several generations of infrastructure. Their turf is everything from data centers and bare metal up through virtualization, containers, and the cloud (they aren’t so much cloud-native as cloud-enabled). They measure themselves on things like SLOs and the DORA metrics. You know they’re doing a good job if the system is up/available and users are happy.
Platform teams are oriented around providing a good experience for developers to self-serve and self-manage their code. The more swiftly and easily developers can move, the better your platform team. Operational excellence, in the platform model, is actually more the responsibility of the other engineering teams (and/or an adjacent SRE team) than that of the platform team.
Platform teams typically work higher up the stack than operations, DevOps, or SRE teams do, and they involve a great deal less infrastructure. On the contrary, platform teams are bent on paying other people to run as much shit as possible, preserving their own scarce development cycles for their core product.
Here is a somewhat tongue-in-cheek table of the similarities and differences between the archetypes.
Platform engineers vs. DevOps engineers

|  | Platform Engineer | Ops (or DevOps) Engineer |
| --- | --- | --- |
| % of job spent writing code | > 50% | < 50% |
| Rest of time spent | Gathering product requirements, doing user research, architecture discussions, optimizing internal workflows, researching new tools and developer productivity ideas, reviewing other teams’ diffs for impact, performance tuning, helping other engineers own & scale their code, fixing CI/CD pipelines. | Fixing cron jobs, automating old setup docs, converting PXE/rsync to Chef/Puppet, converting Chef/Puppet to Terraform, converting VMs to containers, deploying software, debugging broken deploys, writing monitoring checks, doing retros, building out new services, pairing with software engineers to understand and debug their code, investigating weird shit, documentation, etc. |
| Responsible for | Enabling internal teams to self-serve their ability to run and own their code in production. Creating standard, reusable components and processes. Defining golden paths. | Infrastructure capacity planning, scaling, performance tuning, upgrading. Reliability and resiliency, SLOs and monitoring/alerting. Delivering quality experience to customers. |
| Builds for | Internal developer teams | Customers |
| Development style | Infrastructure as a product | Infrastructure as code |
| Works with product managers | Yes | No |
| Works with UX researchers or designers | Sometimes | No |
| Dashboards & graphs | Uses APM, observability, tracing. Cares a lot about instrumentation and OpenTelemetry. | Uses metrics, logs, dashboards; monitoring, alerting, and agent/sidecar/blackbox telemetry. |
| What “coding” means to them | Developing new features & services, writing tests. These are (primarily) software people who do systems. | Automation, configuration, DSLs, extending and debugging existing code. These are systems people who do software. |
| Preferred language | Go, Rust | Python, Ruby |
| Time spent in Linux | Hardly any | A lot |
| Succeeds when | Developers can easily choose good defaults, self-serve their infra, and own their own code in production. | Infrastructure is scalable, secure, cost-effective, reliable, and customers are happy. |
| Native terrain | Serverless, *aaS, APIs for everything (cloud-native and above). | Instances, VMs, containers, regions, multi-cloud (everything “below,” but up to and including the cloud). |
| Databases | Uses hosted DBs | Runs their own, blending automation & DBA expertise |
| SSH | No | Yes |
| Shell | REPL | bash/zsh |
| Mantra | “Run Less Software” | “Cattle, Not Pets” |
What about DevOps vs. SRE?
Countless words have been spilled on the difference between DevOps and SRE[3], which I won’t rehash.
Here’s what I’ll say: DevOps, to me, feels like a relevant concept for companies that have a lot of infrastructure to wrangle. Companies that do in fact have dev teams and ops teams, or dev teams and DevOps teams (🙄), tend to have a lot of operational shit to automate, test, and run. They use config management, virtualization, and containers, often managing several generations worth of technology, possibly even down to data centers and bare metal. DevOps is for companies that have some combination of bare metal, VMs, regions, AZs, multi-cloud, networking devices, self-managed databases, etc.
DevOps is capacious. It contains multitudes. DevOps writes code, and DevOps has a fuckload of code to manage.
It is also on its way to becoming irrelevant. We are swiftly entering a post-DevOps world.
SRE, to me, feels different. I associate SRE with very large companies, where they mostly have software engineers owning their own code in production, but maybe still struggle with it a bit. SREs are often embedded within software engineering teams or product groups, and they focus a lot on, well, reliability, as the name suggests.
This means they do less infrastructure jockeying or automating (although they still do some coding). They typically have a lot to say about instrumentation, monitoring and observability, and cross-functional coordination. They run incident response and do blameless retros, and they tend to be experts at scaling.
If a company has both a DevOps team and SRE, typically I expect to see the SRE team more on the frontlines, involved with incidents, telemetry, etc., and DevOps teams more on the backburner, slinging pipes and plumbing.
Observability engineering as a case study
In the same piece I referenced earlier, I also wrote about the role of observability teams. I said they should largely no longer be running their own monitoring and graphing software in-house. Yet there is still a place for observability teams to exist: they remain a critical link between outsourced solutions and internal developer needs.
That team should write libraries, generate examples, and drive standardization, ushering in consistency, predictability, and usability. They should partner with internal teams to evaluate use cases. They should partner with your vendors as roadmap stakeholders. They might also write glue code and helper modules to connect disparate data sources and create cohesive visualizations. Basically, that team becomes an integration point between your organization and the outsourced work.
I originally wrote this about observability, but it could just as easily be used to describe platform engineering as a whole. This is the role—being the bridge between other vendors and your own core software. It’s a very high-leverage place to sit.
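For a flavor of what those libraries and glue code might look like, here is a tiny sketch built on the OpenTelemetry Python API. The wrapper itself and the acme.* attribute names are invented conventions for illustration, not any kind of standard.

```python
# A thin internal instrumentation helper built on the OpenTelemetry Python
# API (opentelemetry-api). The wrapper and the "acme.*" attribute names are
# invented conventions, not part of any standard.
from contextlib import contextmanager

from opentelemetry import trace

_tracer = trace.get_tracer("acme.internal.observability")  # hypothetical name

@contextmanager
def traced(operation: str, team: str, **attrs):
    """Start a span with the org's standard attributes attached."""
    with _tracer.start_as_current_span(operation) as span:
        span.set_attribute("acme.team", team)  # org convention, not a spec
        for key, value in attrs.items():
            span.set_attribute(f"acme.{key}", value)
        yield span

# Usage inside a product service might look like:
# with traced("charge_card", team="payments", customer_tier="enterprise"):
#     ...
```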
Ops is dead, long live ops
I’ve spent a lot of time thinking about this because we’ve had such a hard time nailing down exactly who the Honeycomb customer is. Sometimes our buyer is an ops team buying it for their SWEs, sometimes it’s SREs in the midst of an outage, sometimes it’s a VP or director of engineering, or an architect, or a CTO, or a “full stack” engineering team, or even a product manager. It is hard to form a snappy answer out of that list.
The first couple of questions every new go-to-market candidate asks us are “who is your buyer?” and “how do we help them?” To which I respond with a five-minute ramble where I list every persona above and each of their pain points. Hardly the concrete answer they would like to receive.
As it goes, sociotechnical trends come and go. A year ago, Christine and I were speculating that platform engineering might be on the verge of consolidating the necessary ingredients that make up our ideal buyer:
Writing and shipping code, and needing to understand their own code
Positioned to help other teams with their instrumentation patterns and tooling
Firmly cloud-native+ and untethered to hardware or traditional infrastructure
To my delight, since that conversation, these trends have only accelerated—and I, for one, welcome our new platform engineering overlords to the observability table. ☺️
If you’d like to learn more about platform engineering, we’ll be running a Twitter space on ✨ October 20th ✨ at 12:00 p.m. PT. Come join us! I’ll be there along with two colleagues and we’ll be answering your questions and shedding more light on the topic.
[1] I do hear people saying that, and it used to make me fucking furious, but now I just smugly remind myself how much self-inflicted suffering they are in for. Disrespecting operational expertise is the shortest path to never again sleeping through the night.
[2] It is rather incredible how rapidly this idea has taken off. When we started talking about putting developers on call for their code in 2016, people got seriously angry with us. Before that, the only Twitter mention I could find of putting devs on call was one by (of course) Adrian Cockcroft, but by 2019-2020 it had stopped being controversial and soon became common wisdom.
There exist some wonderful teams out there who have valid, well-thought-through, legitimate reasons for enforcing “NO FRIDAY DEPLOYS” week in and week out, for not hooking CI/CD up to autodeploy, and for not shipping one person’s changes at a time. But those teams are rarer than you might think. Far more often, the reasons sound like these:
“Maybe that works for some teams, like baby startups, but not real software systems that people rely on” (false-uniqueness bias)
“We’ve already invested a ton of engineering hours into building a deployment framework that doesn’t ship especially fast and doesn’t ship one change at a time, but it works, and we don’t want to have to redo everything from scratch.” (surrogation, ostrich effect, irrational escalation, IKEA effect, law of the instrument)
“I get why it’s important, but the most important thing right now is for us to ship all of these features. At some point we’ll have the spare time to fix our deploys.” (hyperbolic discounting)
“Deploys are just inherently scary; there’s nothing that can be done about that. You should do them sparingly, with someone monitoring them closely.” (availability bias, dread aversion, functional fixedness)
“There is nothing anyone could say or do to convince me that blocking Friday deploys is not the best thing I can do as a manager to protect my team’s weekends. I just feel it in my gut. I just know.” (… I got nuthin)
We’re humans. 💜 We leap to conclusions with the wetware we have, which does the best it can based on heuristics that feel objectively true but are ultimately just emotional reactions rooted in past lived experience. And then we retroactively enshrine those goofy gut feelings with the language of noble motive and moral values.
“I tell people not to deploy to production … because I care so deeply about my team and their ability to have a quiet weekend.”
Barf. 🙄 That’s just like saying you tell your kid not to brush his teeth at night, because you care SO DEEPLY about him and his ability to go to bed calm and happy.
Once the retcon engine in your brain gets running, it comes up with all sorts of reasons. Plausible-sounding reasons! But every single item in the list above is materially false.
Deploy myths are never going away for good; they appeal to too many of our cognitive biases. But what if there was one simple thing you could do that would invert many of these cognitive biases and cause people to grapple with the question in a new way? What if you could kickstart a recalculation?
My next post will pick up right here. I’ll tell you all about the One Simple Trick you can do to fix your deploys and set you on the virtuous path of high-performing teams.
Til then, here’s what I’ve previously written on the topic.
Availability bias: The tendency to overestimate the likelihood of events with greater “availability” in memory, which can be influenced by how recent the memories are or how unusual or emotionally charged they may be.
Continued influence effect: The tendency to believe previously learned misinformation even after it has been corrected. Misinformation can still influence inferences one generates after a correction has occurred.
Conservatism bias: The tendency to revise one’s belief insufficiently when presented with new evidence.
Default effect: When given a choice between several options, the tendency to favor the default one.
Dread aversion: Just as losses yield double the emotional impact of gains, dread yields double the emotional impact of savouring.
False-uniqueness bias: The tendency of people to see their projects and themselves as more singular than they actually are.
Functional fixedness: Limits a person to using an object only in the way it is traditionally used.
Hyperbolic discounting: Discounting is the tendency for people to have a stronger preference for more immediate payoffs relative to later payoffs. Hyperbolic discounting leads to choices that are inconsistent over time – people make choices today that their future selves would prefer not to have made, despite using the same reasoning.
IKEA effect: The tendency for people to place a disproportionately high value on objects that they partially assembled themselves, such as furniture from IKEA, regardless of the quality of the end product.
Illusory truth effect: A tendency to believe that a statement is true if it is easier to process, or if it has been stated multiple times, regardless of its actual veracity.
Irrational escalation: The phenomenon where people justify increased investment in a decision, based on the cumulative prior investment, despite new evidence suggesting that the decision was probably wrong. Also known as the sunk cost fallacy.
Law of the instrument: An over-reliance on a familiar tool or methods, ignoring or under-valuing alternative approaches. “If all you have is a hammer, everything looks like a nail.”
Mere exposure effect: The tendency to express undue liking for things merely because of familiarity with them.
Negativity bias: Psychological phenomenon by which humans have a greater recall of unpleasant memories compared with positive memories.
Non-adaptive choice switching: After experiencing a bad outcome with a decision problem, the tendency to avoid the choice previously made when faced with the same decision problem again, even though the choice was optimal.
Omission bias: The tendency to judge harmful actions (commissions) as worse, or less moral, than equally harmful inactions (omissions).
Ostrich effect: Ignoring an obvious (negative) situation.
Plan continuation bias: Failure to recognize that the original plan of action is no longer appropriate for a changing situation or for a situation that is different than anticipated.
Prevention bias: When investing money to protect against risks, decision makers perceive that a dollar spent on prevention buys more security than a dollar spent on timely detection and response, even when investing in either option is equally effective.
Pseudocertainty effect: The tendency to make risk-averse choices if the expected outcome is positive, but make risk-seeking choices to avoid negative outcomes.
Salience bias: The tendency to focus on items that are more prominent or emotionally striking and ignore those that are unremarkable, even though this difference is often irrelevant by objective standards.
Selective perception bias: The tendency for expectations to affect perception.
Status-quo bias: If no special action is taken, the default action that will happen is that the code will go live. You will need an especially compelling reason to override this bias and manually stop the code from going live, as it would by default.
Slow-motion bias: We feel certain that we are more careful and less risky when we slow down. This is precisely the opposite of the real world risk factors for shipping software. Slow is dangerous for software; speed is safety. The more frequently you ship code, the smaller the diffs you ship, the less dangerous each one actually becomes. This is the most powerful and difficult to overcome of all of our biases, because there is no readily available counter-metaphor for us to use. (Riding a bike is the best I’ve come up with. 😔)
Surrogation: Losing sight of the strategic construct that a measure is intended to represent, and subsequently acting as though the measure is the construct of interest.
Time-saving bias: Underestimations of the time that could be saved (or lost) when increasing (or decreasing) from a relatively low speed and overestimations of the time that could be saved (or lost) when increasing (or decreasing) from a relatively high speed.
Zero-risk bias: Preference for reducing a small risk to zero over a greater reduction in a larger risk.
I’ve fallen way behind on my blog posts — my goal was to write one per month, and I haven’t published anything since MAY. Egads. So here I am dipping into the drafts archives! This one was written in April of 2016, when I was noodling over my CraftConf 2016 talk, “DevOps for Developers” (see slides).
So I got to the part in my talk where I’m talking about how to interview and hire software engineers who aren’t going to burn the fucking house down, and realized I could spend a solid hour on that question alone. That’s why I decided to turn it into a blog post instead.
Stop telling ops people to code better, start telling SWEs to ops better
Our industry has gotten very good at pressing operations engineers to get better at writing code, writing tests, and software engineering in general these past few years. Which is great! But we have not been nearly so good at pushing software engineers to level up their systems skills. Which is unfortunate, because it is just as important.
Most systems suffer from the syndrome of running too much software. Tossing more software into the heap causes new problems about as often as it solves existing ones.
We see this play out at companies stacked with good software engineers who have built horrifying spaghetti messes of their infrastructure, and then commence paging themselves to death.
The only way to unwind this is to reset expectations, and make it clear that
you are still responsible for your code after it’s been deployed to production, and
operational excellence is everyone’s job.
Operations is the constellation of tools, practices, policies, habits, and docs around shipping value to users, and every single one of us needs to participate in order to do this swiftly and safely.
Every software engineering interviewing loop should have an ops component.
Nobody interviews candidates for SRE or ops nowadays without asking some coding questions. You don’t have to be the greatest programmer in the world, but you can’t be functionally illiterate. The reverse is less common: asking software engineers basic, stupid questions about the lifecycle of their code, instrumentation best practices, etc.
It’s common practice at lots of companies now to have a software engineer in the loop for hiring SREs to evaluate their coding abilities. It should be just as common to have an ops engineer in the loop for a SWE hire, especially for any SWE who is being considered for a key senior position. Those are the people you most rely on to be mentors and role models for junior hires. All engineers should embrace the ethos of owning their code in production, and nobody should be promoted or hired into a senior role if they don’t.
And yes, that means all engineers! Even your iOS/Android engineers and website developers should be interested in what happens to their code after they hit deploy. They should care about things like instrumentation, and what kind of data they may need later to debug their problems, and how their features may impact other infrastructure components.
You need to balance out your software engineers with engineers who don’t react to every problem by writing more code. You need engineers who write code begrudgingly, as a last resort. You’ll find these priceless gems in ops and SRE.
Ops questions for software engineers
The best questions are broad and start off easy, with plenty of reasonable answers and pathways to explore. Even beginners can give a reasonable answer, while experts can go on talking for hours.
For example: give them the specs for a new feature, and ask them to talk through the infrastructure choices and dependencies to support that feature. Do they ask about things like which languages, databases, and frameworks are already supported by the team? Do they understand what kind of monitoring and observability tools to use? Do they ask about local instrumentation best practices?
Or design a full deployment pipeline together. Probe what they know about generating artifacts, versioning, rollbacks, branching vs master, canarying, rolling restarts, blue/green deploys, etc. How might they design a deploy tool? Talk through the tradeoffs.
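If it helps to picture what you’re probing for, here is the kind of skeleton a candidate might sketch on a whiteboard. Every helper is a hypothetical stub; the interesting part is the shape of the conversation around it.

```python
# Not a real deploy tool -- just the skeleton a candidate might sketch:
# build, canary, watch, then promote or roll back. Every helper here is a
# hypothetical stub standing in for real infrastructure.
def build_and_push(service: str, version: str) -> str:
    print(f"building {service} {version}")
    return f"{service}:{version}"          # immutable, versioned artifact

def deploy_canary(service: str, artifact: str) -> str:
    print(f"sending a small slice of {service} traffic to {artifact}")
    return "canary-1"

def healthy(canary: str, window_minutes: int) -> bool:
    print(f"watching {canary} error rate and latency for {window_minutes}m")
    return True                            # in real life: query your telemetry

def promote(service: str, artifact: str) -> None:
    print(f"rolling {artifact} out to the rest of {service}")

def rollback(service: str, canary: str) -> None:
    print(f"pulling {canary} out of {service} and restoring the old version")

def deploy(service: str, version: str) -> None:
    artifact = build_and_push(service, version)
    canary = deploy_canary(service, artifact)
    if healthy(canary, window_minutes=15):
        promote(service, artifact)
    else:
        rollback(service, canary)

deploy("web", "v1.42.0")
```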
Some other good starting points:
“Tell me about the last time you caused a production outage. What happened, how did you find out, how was it resolved, and what did you learn?”
“What are some of your favorite tools for visibility, instrumentation, and debugging?”
“Latency seems to have doubled over the last 6 hours. Where do you start looking, how do you start debugging?”
And this chestnut: “What happens when you type ‘google.com’ into a web browser?” You would be fucking *astonished* how many senior software engineers don’t know a thing about DNS, HTTP, SSL/TLS, cookies, TCP/IP, routing, load balancers, web servers, proxies, and on and on.
Another question I really like is: “what’s your favorite API (or database, or language) and why?” followed up by “… and what are the worst things about it?” (True love doesn’t mean blind worship.)
Remember, you’re exploring someone’s experience and depth here, not giving them a pass-fail quiz. It’s okay if they don’t know it all. You’re also evaluating their communication skills, which are severely underrated by most people but are actually core technical skills.
Signals to look for
You’re not looking for perfection. You are teasing out signals for things like, how will this person perform on a team where software engineers are expected to own their code? How much do they know about the world outside the code they write themselves? Are they curious, eager, and willing to learn, or fearful, incurious and begrudging?
Do they expect networks to be reliable? Do they expect databases to respond, retries to succeed? Are they offended by the idea of being on call? Are they overly clever or do they look to simplify? (God, I hate clever software engineers 🙃.)
It’s valuable to get a feel for an engineer’s operational chops, but let’s be clear, you’re doing this for one big reason: to set expectations. By making ops questions part of the interview, you’re establishing from the start that you run an org where operations is valued, where ownership is non-optional. This is not an ivory tower where software engineers can merrily git push and go home for the day and let other people handle the fallout.
It can be toxic when you have an engineer who thinks all ops work is toil and operations engineering is lesser-than. It tends to result in operations work being done very poorly. This is your best chance to let those people self-select out.
You know what, I’m actually feeling uncharacteristically optimistic right now. I’m remembering how controversial some of this stuff was when I first wrote it, five years ago in 2016. Nowadays it just sounds obvious. Like table stakes.
every dashboard is a sunk cost
every dashboard is an answer to some long-forgotten question
every dashboard is an invitation to pattern-match the past instead of interrogate the present
every dashboard gives the illusion of correlation
every dashboard dampens your thinking
https://t.co/OIEowa1COa
… which stirred up some Feelings for many people. 🙃 So I would like to explain my opinions in more detail.
Static vs dynamic dashboards
First, let’s define the term. When I say “dashboard”, I mean STATIC dashboards, i.e. collections of metrics-based graphs that you cannot click on to dive deeper or break down or pivot. If your dashboard supports this sort of responsive querying and exploration, where you can click on any graph to drill down and slice and dice the data arbitrarily, then breathe easy — that’s not what I’m talking about. Those are great. (I don’t really consider them dashboards, but I have heard a few people refer to them as “dynamic dashboards”.)
Actually, I’m not even “against” static dashboards. Every company has them, including Honeycomb. They’re great for getting a high level sense of system functioning, and tracking important stats over long intervals. They are a good starting point for investigations. Every company should have a small, tractable number of these which are easily accessible and shared by everyone.
Debugging with dashboards: it’s a trap
What dashboards are NOT good at is debugging, or understanding or describing novel system states.
I can hear some of you now: “But I’ve debugged countless super-hard unknown problems using only static dashboards!” Yes, I’m sure you have. If all you have is a hammer, you CAN use it to drive screws into the wall, but that doesn’t mean it’s the best tool. And it takes an extraordinary amount of knowledge and experience to be able to piece together a narrative that translates low-level system statistics into bugs in your software and back. Most software engineers don’t have that kind of systems experience or intuition…and they shouldn’t have to.
Why are dashboards bad for debugging? Think of it this way: every dashboard is an answer to a question someone asked at some point. Your monitoring system is probably littered with dashboards, thousands and thousands of them, most of whose questions have been long forgotten and many of whose source data streams have long since gone silent.
So you come along trying to investigate something, and what do you do? You start skimming through dashboards, eyes scanning furiously, looking for visual patterns — e.g. any spikes that happened around the same time as your incident. That’s not debugging, that’s pattern-matching. That’s … eyeball racing.
If we did math like we do dashboards
Imagine you’re in a math competition, and you get handed a problem to solve. But instead of pulling out your pencil and solving the equation, step by step, you start hollering out guesses.
“27!”
“19992.41!”
“1/4325!”
That’s what flipping through dashboards feels like to me. You’re riffling through a bunch of graphs that were relevant to some long-ago situation, without context or history, without showing their work. Sometimes you’ll spot the exact scenario, and — huzzah! — the number you shout is correct! But when it comes to unknown scenarios, the odds are not in your favor.
Debugging looks and feels very different from flipping through answers. You ask a question, examine the answer, and ask another question based on the result. (“Which endpoints were erroring? Are all of the requests erroring, or only some? What did they have in common?”, etc.)
You methodically put one foot in front of the other, following the trail of bread crumbs, until the data itself leads you to the answer.
The limitations of metrics and dashboards
Unfortunately, you cannot do that with metrics-based dashboards, because you stripped away the connective tissue of the event back when you wrote the metrics out to disk.
If you happened to notice while skimming through dashboards that your 404 errors spiked at 14:03, and your /payment and /import endpoints started erroring at 14:03, and your database started returning a bunch of MySQL errors shortly after 14:00, you’ll probably assume that they’re all related and leap to find more evidence that confirms it.
But you cannot actually confirm that those events are the same ones, not with your metrics dashboards. You cannot drill down from errors to endpoints to error strings; for that, you’d need a wide structured data blob per request. Those might in fact be two or three separate outages or anomalies happening at the same time, or just the tip of the iceberg of a much larger event, and your hasty assumptions might extend the outage for much longer than was necessary.
With metrics, you tend to find what you’re looking for. You have no way to correlate attributes between requests or ask “what are all of the dimensions these requests have in common?”, or to flip back and forth and look at the request as a trace. Dashboards can be fairly effective at surfacing the causes of problems you’ve seen before (raise your hand if you’ve ever been in an incident review where one of the follow up tasks was, “create a dashboard that will help us find this next time”), but they’re all but useless for novel problems, your unknown-unknowns.
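To put the difference in concrete terms, here is a toy comparison; the field names are invented for illustration.

```python
# The difference in a nutshell. A metric is a number plus a handful of
# low-cardinality tags:
metric = {
    "name": "http_errors_total",
    "value": 1,
    "tags": {"status": "404", "service": "api"},
}

# A wide structured event keeps the whole request's context in one record,
# so you can slice and dice by any combination of fields after the fact:
event = {
    "timestamp": "2021-08-12T14:03:07Z",
    "endpoint": "/payment",
    "status": 404,
    "duration_ms": 2310,
    "db_error": "mysql: too many connections",
    "customer_id": "cus_8149",   # high-cardinality field, fine in an event
    "build_id": "v1.42.0",
    "trace_id": "a9f3c2",
}

# With the event you can ask "which endpoints, builds, and customers do these
# 404s have in common?" -- with the metric alone, that context is already gone.
```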
Other complaints about dashboards:
They tend to show percentiles like the 95th, 99th, 99.9th, 99.99th, etc., which can cover a multitude of sins. You really want a tool that allows you to see MAX and MIN, and heatmap distributions (see the quick illustration after this list).
A lot of dashboards end up getting created that are overly specific to the incident you just had — naming specific hosts, etc — which just creates clutter and toil. This is how your dashboards become that graveyard of past outages.
The most useful approach to dashboards is to maintain a small set of them; cull regularly, and think of them as a list of starter queries for your investigations.
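Here is that percentile point as a quick, self-contained illustration with toy numbers.

```python
# Toy numbers: 995 requests at 20ms and 5 pathological ones at 30 seconds.
import math

latencies_ms = sorted([20] * 995 + [30_000] * 5)

def percentile(sorted_data, pct):
    # nearest-rank percentile on already-sorted data
    rank = math.ceil(pct / 100 * len(sorted_data))
    return sorted_data[rank - 1]

print("p95:", percentile(latencies_ms, 95))   # 20 -- looks great
print("p99:", percentile(latencies_ms, 99))   # 20 -- still looks great
print("max:", max(latencies_ms))              # 30000 -- five users are having a terrible day
```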
“I like to compare the dashboards to the big display in a hospital room: heartbeat, pressure, oxygenation, etc. Those can tell you when a thing is wrong, but the context around the patient chart (and the patient themselves) is what allows interpretation to be effective. If all we have is the display but none of the rest, we’re not getting anywhere close to an accurate picture. The risk with the dashboard is having the metrics but not seeing or knowing about the rest changing.”
In conclusion
Dashboards aren’t universally awful. The overuse of them just encourages sloppy thinking, and static ones make it impossible for you to follow the plot of an outage, or validate your hypotheses. 🤒 There’s too many of them, and not enough shared consensus. (It would help if, like, new dashboards expired within a month if nobody looked at them again.)
If what you have is “nothing”, even shitty dashboards are far better than no dashboards. But shitty dashboards have been the only game in town for far too long. We need more vendors to think about building for queryability, explorability, and the ability to follow a trail of breadcrumbs. Modern systems are going to demand more and more of this approach.
Nothing < Dashboards < a Queryable, Exploratory Interface
If everyone out there who slaps “observability” on their web page also felt the responsibility to add an observability-enabling interface to their tool, one that would let users explore and identify unknown-unknowns, we would all be in a far better place. 🙂
Maybe I need to write a blog post called "On Call For Managers". If you're asking engineers to be on call for their code — and you should — you owe in return:
– enough time to fix what’s broken
– hands to do the work
– closely track how often they are interrupted/woken
– ..etc
There are few engineering topics that provoke as much heated commentary as being on call. Everybody has a strong opinion. So let me say straight up that there are few if any absolutes when it comes to doing this well; context is everything. What’s appropriate for a startup may not suit a larger team. Rules are made to be broken.
That said, I do have some feelings on the matter. Especially when it comes to the compact between engineering and management. Which is simply this:
It is engineering’s responsibility to be on call and own their code. It is management’s responsibility to make sure that on call does not suck. This is a handshake, it goes both ways, and if you do not hold up your end they should quit and leave you.
As for engineers who write code for 24×7 highly available services, it is a core part of their job to support those services in production. (There are plenty of software jobs that do not involve building highly available services, for those who are offended by this.) Tossing it off to ops after tests pass is nothing but a thinly veiled form of engineering classism, and you can’t build high-performing systems by breaking up your feedback loops this way.
Someone needs to be responsible for your services in the off-hours. This cannot be an afterthought; it should play a prominent role in your hiring, team structure, and compensation decisions from the very start. These are decisions that define who you are and what you value as a team.
Some advice on how to organize your on call efforts, in no particular order.
It is easier to keep yourself from falling into an operational pit of doom than it is to claw your way out of one. Make good operational hygiene a priority from the start. Value good, clean, high-level abstractions that allow you to delegate large swaths of your infrastructure and operational burden to third parties who can do it better than you — serverless, AWS, *aaS, etc. Don’t fall into the trap of disrespecting operations engineering labor, it’s the only thing that can save you.
Invest in good release and deploy tooling. Make this part of your engineering roadmap, not something you find in the couch cushions. Get code into production within minutes after merging, and watch how many of your nightmares melt away or never happen.
Invest in good instrumentation and observability. Impress upon your engineers that their job is not done when tests pass; it is not done until they have watched users using their code in production. Promote an ownership mentality over the full software life cycle. This is how dev.to did it.
Construct your feedback loops thoughtfully. Try to alert the person who made the broken change directly. Never send an alert to someone who isn’t fully equipped and empowered to fix it.
When an engineer is on call, they are not responsible for normal project work — period. That time is sacred and devoted to fixing things, building tooling, and creating guard-rails to protect people from themselves. If nothing is on fire, the engineer can take the opportunity to fix whatever has been annoying them. Allow for plenty of agency and room to follow one’s curiosity, wherever it may lead, and being on call can become a special treat.
Closely track how often your team gets alerted. Take ANY out-of-hours-alert seriously, and prioritize the work to fix it. Night time pages are heart attacks, not diabetes.
Consider joining the on call rotation yourself! If nothing else, generously pinch hit and be an eager and enthusiastic backup on the regular.
Reliability work and technical debt are not secondary to product work. Budget them into your roadmap, right alongside your features and fixes. Don’t plan so tightly that you have no flex for the unexpected. Don’t be afraid to push back on product and don’t neglect to sell it to your own bosses. People’s lives are in your hands; this is what you get paid to do.
Consider making after-hours on call fully-elective. Why not? What is keeping you from it? Fix those things. This is how Intercom did it.
Depending on your stage and available resources, consider compensating for it. This doesn’t have to be cash; it could be a Friday off the week after every on call rotation. The more established and better funded your company is, the more strongly you should consider doing this, in order to surface the right incentives up the org chart.
Once you’ve dug yourself out of firefighting mode, invest in SLOs (Service Level Objectives). SLOs and observability are the mature way to get out of reactive mode and plan your engineering work based on tradeoffs and user impact.
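If it helps, the error-budget arithmetic behind an SLO is refreshingly simple. A quick sketch, with example targets:

```python
# Quick error-budget arithmetic over a 30-day window (targets are examples).
minutes_per_month = 30 * 24 * 60  # 43,200

for slo in (0.999, 0.9999):
    budget_minutes = minutes_per_month * (1 - slo)
    print(f"{slo:.2%} SLO -> roughly {budget_minutes:.0f} minutes of "
          "unavailability to spend per month")
# 99.90% SLO -> roughly 43 minutes of unavailability to spend per month
# 99.99% SLO -> roughly 4 minutes of unavailability to spend per month
```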
I believe it is thoroughly possible to construct an on call rotation that is 100% opt-in, a badge of pride and accomplishment, something that brings meaning and mastery to people’s engineering roles and ties them emotionally to their users. I believe that being on call is something that you can genuinely look forward to.
But every single company is a unique complex sociotechnical snowflake. Flipping the script on whether on call is a burden or a blessing will require a unique solution, crafted to meet your specific needs and drawing on your specific history. It will require tinkering. It will take maintenance.
Above all: ✨RAISE YOUR STANDARDS✨ for what you expect from yourselves. Your greatest enemy is how easily you accept the status quo, and then make up excuses for why it is necessarily this way. You can do better. I know you can.
treat every alarm like a heart attack. _fix_ the motherfucker.
i do not care if this causes product development to screech to a halt. amortize it over a slightly longer period of time and it will more than pay for itself. https://t.co/JSck2u86ff
There is lots and lots of prior art out there when it comes to making on call work for you, and you should research it deeply. Watch some talks, read some pieces, talk to some people. But then you’ll have to strike out on your own and try something. Cargo-culting someone else’s solution is always the wrong answer.
Any asshole can write some code; owning and tending complex systems for the long run is the hard part. How you choose to shoulder this burden will be a deep reflection of your values and who you are as a team.
And if your on call experience is mandatory and severely life-impacting, and if you don’t take this dead seriously and fix it ASAP? I hope your team will leave you, and go find a place that truly values their time and sleep.
My company has recently begun pushing for us to build and staff out what I can only describe as “command centers”. They’re picturing graphs, dashboards…people sitting around watching their monitors all day just to find out which apps or teams are having issues. With your experience in monitoring and observability, and your opinions on teams supporting their own applications…do you think this sounds like a bad idea? What are things to watch out for, or some ways this might all go sideways?
— Anonymous
Jesus motherfucking Christ on a stick. Is it 1995 where you work? That’s the only way I can read this plan and have it make any sense.
It’s a giant waste of money and no, it won’t work. This path leads into a death spiral where alarms go off constantly (yet somehow never actually catch the real problems), people burn out, and anyone competent will either a) leave or b) refuse to be on call. Sideways enough for you yet?
Snark aside, there are two foundational flaws with this plan.
1) watching graphs is pointless. You can automate that shit, remember? ✨Computers!✨ Furthermore, this whole monitoring-based approach will only ever help you find the known unknowns, the problems you already know to look for. But most of your actual problems will be unknown unknowns, the ones you don’t know about yet.
2) those people watching the graphs… When something goes wrong, what exactly can they do about it? The answer, unfortunately, is “not much”. The only people who can swiftly diagnose and fix complex systems issues are the people who build and maintain those systems, and those people are busy building and maintaining, not watching graphs.
That extra human layer is worse than useless; it is actively harmful. By insulating developers from the consequences of their actions, you are concealing from them the information they need to understand the consequences of their actions. You are interfering with the most basic of feedback loops and causing it to malfunction.
The best time to find a bug is as soon as possible after writing it, while it’s all fresh in your head. If you let it fester for days, weeks, or months, it will be exponentially more challenging to find and solve. And the best people to find those bugs are the people who wrote them.
Helpful? Hope so. Good luck. And if they implement this anyway — leave. You deserve to work for a company that won’t waste your fucking time.
I work as an engineering manager for a company whose non-technology leadership insists there has to be a way to measure the individual productivity of a software engineer. I have the opposite belief. I don’t believe you can measure the productivity of “professional” careers, or thought workers (e.g., how do you measure the productivity of a doctor, lawyer, or chemist?).
For software engineering in particular, I feel that metrics can be gamed, don’t tell the whole story, or in some cases, are completely arbitrary. Do you measure individual developer productivity? If so, what do you measure, and why do you feel it’s valuable? If you don’t and share similar feelings as mine, how would you recommend I justify that position to non-technology leadership?
Thanks for your time.
Anonymous Engineering Manager
Dear Anon,
Once upon a time I had a job as a sysadmin, 100% remote, where all work was tracked using RT tasks. I soon realized that the owner didn’t have a lot of independent technical judgment, and his main barometer for the caliber of our contributions was the number of tasks we closed each day.
I became a ticket-closing machine. I’d snap up the quick and easy tasks within seconds. I’d pattern match and close in bulk when I found a solution for a group of tasks. I dove deep into the list of stale tickets looking for ones I could close as “did not respond” or “waiting for response”, especially once I realized there was no penalty for closing the same ticket over and over.
My boss worshiped me. I was bored as fuck. Sigh.
I guess what I’m trying to say is, I am fully in your camp. I don’t think you can measure the “productivity” of a creative professional by assigning metrics to their behaviors or process markers, and I think that attempting to derive or impose such metrics can inflict a lot of damage.
In fact, I would say that to the extent you can reduce a job to a set of metrics, that job can be automated away. Metrics are for easy problems — discrete, self-contained, well-understood problems. The more challenging and novel a problem, the less reliable these metrics will be.
Your execs should fucking well know this: how would THEY like to be evaluated based on, like, how many emails they send in a day? Do they believe that would be good for the business? Or would they object that they are tasked with the holistic success of the org, and that their roles are too complex to reduce to a set of metrics without context?
This actually makes my blood boil. It is condescending as fuck for leadership to treat engineers like task-crunching interchangeable cogs. It reveals a deep misunderstanding of how sociotechnical systems are developed and sustained (plus authoritarian tendencies, and usually a big dollop of personal insecurity).
But what is the alternative?
In my experience, the “right” answer, i.e. the best way to run consistently high-performing teams, involves some combination of the following:
Outcome-based management that practices focusing on impact, plus
Team level health metrics, combined with
Engineering ladder and regular lightweight reviews, and
Managers who are well calibrated across the org, and encouraged to interrogate their own biases openly & with curiosity.
The right way to look at performance is at the team level. Individual engineers don’t own or maintain code; teams do. The team is the irreducible unit of ownership. So you need to incentivize people to think about work and spending their time cooperatively, optimizing for what is best for the team.
Some of the hardest and most impactful engineering work will be all but invisible on any set of individual metrics. You want people to trust that their manager will have their backs and value their contributions appropriately at review time, if they simply act in the team’s best interest. You do not want them to waste time gaming the metrics or courting personal political favor.
This is one of the reasons that managers need to be technical — so they can cultivate their own independent judgment, instead of basing reviews on hearsay. Because some resources (i.e. your budget for individual bonuses) are unfortunately zero-sum, and you are always going to rely on the good judgment of your engineering leaders when it comes to evaluating the relative impact of individual contributions.
“I would say that Joe’s contribution this quarter had greater impact than Jane’s. But is that really true? Jane did a LOT of mentoring and other “glue” work, which tends to be under-acknowledged as leadership work, so I just want to make sure I am evaluating this fairly … Does anyone else have a perspective on this? What might I be missing?” — a manager keeping themselves honest in calibrations
I do think every team should be tracking the four DORA metrics — time elapsed between merge and deploy (lead time), frequency of deploys, time to restore service after an outage, and how often a deploy causes a failure — as well as how often someone is paged outside of business hours. These track pretty closely to engineering productivity and efficiency.
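If you want to see how little machinery this takes, here is a minimal sketch (in Go) of the bookkeeping involved. The Deploy type and the sample numbers are invented for illustration, not taken from any particular tool:

```go
package main

import (
	"fmt"
	"time"
)

// Deploy is a hypothetical record of one production deploy.
// None of these names come from a real tool; it's just bookkeeping.
type Deploy struct {
	MergedAt   time.Time
	DeployedAt time.Time
	Failed     bool          // did this deploy cause a user-facing incident?
	Recovery   time.Duration // time to restore service, if it did
}

func doraSummary(deploys []Deploy, window time.Duration) {
	var lead, recovery time.Duration
	failures := 0
	for _, d := range deploys {
		lead += d.DeployedAt.Sub(d.MergedAt)
		if d.Failed {
			failures++
			recovery += d.Recovery
		}
	}
	n := len(deploys)
	if n == 0 {
		fmt.Println("no deploys in window -- which is itself a signal")
		return
	}
	weeks := window.Hours() / (24 * 7)
	fmt.Printf("deploys per week:    %.1f\n", float64(n)/weeks)
	fmt.Printf("avg merge-to-deploy: %s\n", lead/time.Duration(n))
	fmt.Printf("change failure rate: %.0f%%\n", 100*float64(failures)/float64(n))
	if failures > 0 {
		fmt.Printf("avg time to restore: %s\n", recovery/time.Duration(failures))
	}
}

func main() {
	now := time.Now()
	doraSummary([]Deploy{
		{MergedAt: now.Add(-26 * time.Hour), DeployedAt: now.Add(-25 * time.Hour)},
		{MergedAt: now.Add(-3 * time.Hour), DeployedAt: now.Add(-2 * time.Hour),
			Failed: true, Recovery: 20 * time.Minute},
	}, 7*24*time.Hour)
}
```

The point isn’t the code; it’s that these are team-level, system-level numbers, not a scoreboard for individuals.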
But leadership should do its best to be outcome oriented. The harder the problem, the more senior the contributor, the less business anyone has dictating the details of how or why. Make your agreements, then focus on impact.
This is harder on managers, for sure — it’s easier to count the hours someone spends at their desk or how many lines of code they commit than to develop a nuanced understanding of the quality and timbre of an engineer’s contributions to the product, team and the company over time. It is easier to micromanage the details than to negotiate a mutual understanding of what actually matters, commit to doing your part … and then step away, trusting them to fill in the gaps.
But we should expect this; it’s worth it. It is in those gaps where we feel trusted to act that we find joy and autonomy in our labor, where we do our best work as skilled artisans.
I just read this piece, which is basically a very long subtweet about my Friday deploy threads. Go on and read it: I’ll wait.
Here’s the thing. After getting over some of the personal gibes (smug optimism? literally no one has ever accused me of being an optimist, kind sir), you may be expecting me to issue a vigorous rebuttal. But I shan’t. Because we are actually in violent agreement, almost entirely.
I have repeatedly stressed the following points:
I want to make engineers’ lives better, by giving them more uninterrupted weekends and nights of sleep. This is the goal that underpins everything I do.
Anyone who ships code should develop and exercise good engineering judgment about when to deploy, every day of the week
Every team has to make their own determination about which policies and norms are right given their circumstances and risk tolerance
A policy of “no Friday deploys” may be reasonable for now but should be seen as a smell, a sign that your deploys are risky. It is also likely to make things WORSE for you, not better, by causing you to adopt other risky practices (e.g. elongating the interval between merge and deploy, batching changes up in a single deploy)
This has been the most frustrating thing about this conversation: that a) I am not in fact the absolutist y’all are arguing against, and b) MY number one priority is engineers and their work/life balance. Which makes this particularly aggravating:
Lastly there is some strange argument that choosing not to deploy on Friday “Shouldn’t be a source of glee and pride”. That one I haven’t figured out yet, because I have always had a lot of glee and pride in being extremely (overly?) protective of the work/life balance of the engineers who either work for me, or with me. I don’t expect that to change.
Hold up. Did you catch that clever little logic switcheroo? You defined “not deploying on Friday” as being a priori synonymous with “protecting the work/life balance of engineers”. This is how I know you haven’t actually grasped my point, and are arguing against a straw man. My entire point is that the behaviors and practices associated with blocking Friday deploys are in fact hurting your engineers.
I, too, take a lot of glee and pride in being extremely, massively, yes even OVERLY protective of the work/life balance of the engineers who either work for me, or with me.
AND THAT IS WHY WE DEPLOY ON FRIDAYS.
Because it is BETTER for them. Because it is part of a deploy ecosystem which results in them being woken up less and having fewer weekends interrupted overall than if I had blocked deploys on Fridays.
It’s not about Fridays. It’s about having a healthy ecosystem and feedback loop where you trust your deploys, where deploys aren’t a big deal, and they never cause engineers to have to work outside working hours. And part of how you get there is by not artificially carving out a big chunk of the week as undeployable time, because that breaks up your virtuous feedback loop and makes your deploys much more likely to fail in terrible ways.
The other thing that annoys me is when people say, primly, “you can’t guarantee any deploy is safe, but you can guarantee people have plans for the weekend.”
Know what else you can guarantee? That people would like to sleep through the fucking night, even on weeknights.
When I hear people say this all I hear is that they don’t care enough to invest the time to actually fix their shit so it won’t wake people up or interrupt their off time, seven days a week. Enough with the virtue signaling already.
You cannot have it both ways, where you block off a bunch of undeployable time AND you have robust, resilient, swift deploys. Somehow I keep not getting this core point across to a substantial number of very intelligent people. So let me try a different way.
Let’s try telling a story.
A tale of two startups
Here are two case studies.
Company X
Company X is a three-year-old startup. It is a large, fast-growing multi-tenant platform on a large distributed system with spiky traffic, lots of user-submitted data, and a very green database. Company X deploys the API about once per day, and does a global deploy of all services every Tuesday. Deploys often involve some firefighting and a rollback or two, and Tuesdays often involve deploying and reverting all day (sigh).
Pager volume at Company X isn’t the worst, but usually involves getting woken up a couple times a week, and there are deploy-related alerts after maybe a third of deploys, which then need to be triaged to figure out whose diff was the cause.
Company Z
Company Z is a three-year-old startup. It is a large, fast-growing multi-tenant platform on a large distributed system with spiky traffic, lots of user-submitted data, and a very green house-built distributed storage engine. Company Z automatically triggers a deploy within 30 minutes of a merge to master, for all services impacted by that merge. Developers at company Z practice observability-driven deployment, where they instrument all changes, ask “how will I know if this change doesn’t work?” during code review, and have a muscle memory habit of checking to see if their changes are working as intended or not after they merge to master.
Deploys rarely result in the pager going off at Company Z; most problems are caught visually by the engineer and reverted or fixed before any paging alert can fire. Pager volume consists of roughly one alert per week outside of working hours, and no one is woken up more than a couple times per year.
Same damn problem, better damn solutions.
If it wasn’t extremely obvious, these companies are my last two jobs, Parse (company X, from 2012-2016) and Honeycomb (company Z, from 2016-present).
They have a LOT in common. Both are services for developers, both are platforms, both are running highly elastic microservices written in golang, both get lots of spiky traffic and store lots of user-defined data in a young, homebrewed columnar storage engine. They were even built by some of the same people (I built infra for both, and they share four more of the same developers).
At Parse, deploys were run by ops engineers because of how common it was for there to be some firefighting involved. We discouraged people from deploying on Fridays, we locked deploys around holidays and big launches. At Honeycomb, none of these things are true. In fact, we literally can’t remember a time when it was hard to debug a deploy-related change.
What’s the difference between Company X and Company Z?
So: what’s the difference? Why are the two companies so dramatically different in the riskiness of their deploys, and the amount of human toil it takes to keep them up?
I’ve thought about this a lot. It comes down to three main things.
Observability
Observability-driven development
Single merge per deploy
1. Observability.
I think that I’ve been reluctant to hammer this home as much as I ought to, because I’m exquisitely sensitive about sounding like an obnoxious vendor trying to sell you things. 😛 (Which has absolutely been detrimental to my argument.)
When I say observability, I mean in the precise technical definition as I laid out in this piece: with high cardinality, arbitrarily wide structured events, etc. Metrics and other generic telemetry will not give you the ability to do the necessary things, e.g. break down by build id in combination with all your other dimensions to see the world through the lens of your instrumentation. Here, for example, are all the deploys for a particular service last Friday:
Each shaded area is the duration of an individual deploy: you can see the counters for each build id, as the new versions replace the old ones.
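To make the “arbitrarily wide structured events” part concrete, here is roughly the shape of what I mean: one blob per unit of work, with build_id sitting right alongside every other dimension so you can break down by it later. This is a hand-rolled sketch that just prints JSON, not Honeycomb’s SDK or anyone else’s; the field names are made up.

```go
package main

import (
	"encoding/json"
	"os"
	"time"
)

// buildID would normally be stamped in at compile time, e.g.
//   go build -ldflags "-X main.buildID=$(git rev-parse --short HEAD)"
var buildID = "dev"

// emit writes one wide, structured event per unit of work. A real pipeline
// would batch these and ship them somewhere you can slice and dice them;
// printing JSON is enough to show the shape.
func emit(fields map[string]interface{}) {
	fields["build_id"] = buildID
	fields["service"] = "shepherd" // hypothetical service name
	fields["timestamp"] = time.Now().Format(time.RFC3339Nano)
	json.NewEncoder(os.Stdout).Encode(fields)
}

func main() {
	start := time.Now()
	// ... handle a request ...
	emit(map[string]interface{}{
		"endpoint":    "/1/events",
		"customer_id": 4711, // high-cardinality fields are the whole point
		"status":      200,
		"duration_ms": time.Since(start).Milliseconds(),
	})
}
```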
2. Observability-driven development.
This is cultural as well as technical. By this I mean instrumenting a couple steps ahead of yourself as you are developing and shipping code. I mean making a cultural practice of asking each other “how will you know if this is broken?” during code review. I mean always going and looking at your service through the lens of your instrumentation after every diff you ship. Like muscle memory.
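In code, “instrumenting a couple steps ahead of yourself” looks something like the sketch below: when you ship a new code path, you ship the fields that will tell you whether it is working at the same time. The flag, the field names, and the compression functions are all stand-ins for illustration.

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"time"
)

// A sketch of "how will I know if this change is broken?" in code form.
// We pretend to swap in a new compression path behind an env-var flag; every
// request records which path it took and how it behaved, so after the deploy
// you can break down by new_compression and compare the two side by side.

func emit(fields map[string]interface{}) { fmt.Println(fields) } // stand-in for the wide-event sketch above

func compressOld(b []byte) []byte { return b } // stand-ins for the real thing
func compressNew(b []byte) []byte { return b }

func handlePayload(body []byte) []byte {
	ev := map[string]interface{}{"endpoint": "/1/payload", "body_bytes": len(body)}
	defer func() { emit(ev) }()

	useNew, _ := strconv.ParseBool(os.Getenv("USE_NEW_COMPRESSION"))
	ev["new_compression"] = useNew

	start := time.Now()
	var out []byte
	if useNew {
		out = compressNew(body)
	} else {
		out = compressOld(body)
	}
	ev["compress_ms"] = time.Since(start).Milliseconds()
	ev["compressed_bytes"] = len(out)
	return out
}

func main() { handlePayload([]byte("hello, production")) }
```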
3. Single merge per deploy.
The number one thing you can do to make your deploys intelligible, other than observability and instrumentation, is this: deploy one changeset at a time, as swiftly as possible after it is merged to master. NEVER glom multiple changesets into a single deploy — that’s how you get into a state where you aren’t sure which change is at fault, or who to escalate to, or if it’s an intersection of multiple changes, or if you should just start bisecting blindly to try and isolate the source of the problem. THIS is what turns deploys into long, painful marathons.
Headlamps illuminating whatever’s in front of my face: this is the image in my mind when I think about instrumenting my code.
And NEVER wait hours or days to deploy after the change is merged. As a developer, you know full well how this goes. After you merge to master one of two things will happen. Either:
you promptly pull up a window to watch your changes roll out, checking on your instrumentation to see if it’s doing what you intended it to or if anything looks weird, OR
you close the project and open a new one.
When you switch to a new project, your brain starts rapidly evicting all the rich context about what you had intended to do and overwriting it with all the new details about the new project.
Whereas if you ship that changeset right after merging, you can WATCH it roll out. And 80-90% of all problems can be, should be caught right here, before your users ever notice — before alerts can fire off and page you. If you have the ability to break down by build id, you can zoom in on any errors that arise, see exactly which dimensions all the errors have in common and how they differ from the healthy requests, and see exactly what the context is for any erroring requests.
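And if “break down by build id” sounds abstract, here is a toy sketch of the query side: group the errors in those wide events by build_id and the offending deploy tends to jump right out. Real tooling does this interactively over enormous volumes of events; this only shows the shape of the operation.

```go
package main

import "fmt"

// Event is the same wide, per-request blob from the earlier sketches,
// flattened into a struct for brevity.
type Event struct {
	BuildID  string
	Endpoint string
	Status   int
}

// errorRateByBuild groups events by build id and computes each build's error
// rate: the toy version of "break down by build_id and look at the errors".
func errorRateByBuild(events []Event) map[string]float64 {
	total := map[string]int{}
	errs := map[string]int{}
	for _, e := range events {
		total[e.BuildID]++
		if e.Status >= 500 {
			errs[e.BuildID]++
		}
	}
	rates := map[string]float64{}
	for build, n := range total {
		rates[build] = float64(errs[build]) / float64(n)
	}
	return rates
}

func main() {
	events := []Event{
		{"a1b2c3", "/1/events", 200}, {"a1b2c3", "/1/events", 200},
		{"d4e5f6", "/1/events", 200}, {"d4e5f6", "/1/events", 500},
	}
	for build, rate := range errorRateByBuild(events) {
		fmt.Printf("build %s: %.0f%% errors\n", build, rate*100)
	}
}
```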
Healthy feedback loops == healthy systems.
That tight, short feedback loop of build/ship/observe is the beating heart of a healthy, observable distributed system that can be run and maintained by human beings, without it sucking your life force or ruining your sleep schedule or will to live.
Most engineers have never worked on a system like this. Most engineers have no idea what a yawning chasm exists between a healthy, tractable system and where they are now. Most engineers have no idea what a difference observability can make. Most engineers are far more familiar with spending 40-50% of their week fumbling around in the dark, trying to figure out where in the system the problem they’re trying to fix actually lives, and what kind of context they need to reproduce it.
Most engineers are dealing with systems where they blindly shipped bugs with no observability, and reports about those bugs started to trickle in over the next hours, days, weeks, months, or years. Most engineers are dealing with systems that are obfuscated and obscure, systems which are tangled heaps of bugs and poorly understood behavior, compounding year upon year.
That’s why it doesn’t seem like such a big deal to you to break up that tight, short feedback loop. That’s why it doesn’t fill you with horror to think of merging on Friday morning and deploying on Monday. That’s why it doesn’t appall you to clump together all the changes that happen to get merged between Friday and Monday and push them out in a single deploy.
It just doesn’t seem that much worse than what you normally deal with. You think this raging trash fire is, unfortunately … normal.
How realistic is this, though, really?
Maybe you’re rolling your eyes at me now. “Sure, Charity, that’s nice for you, on your brand new shiny system. Ours has years of technical debt; it’s unrealistic to hold us to the same standard.”
Yeah, I know. It is much harder to dig yourself out of a hole than it is to not create a hole in the first place. No doubt about that.
Harder, yes. But not impossible.
I have done it.
Parse in 2013 was a trash fire. It woke us up every night, we spent a lot of time stabbing around in the dark after every deploy. But after we got acquired by Facebook, after we started shipping some data sets into Scuba, after (in retrospect, I can say) we had event-level observability for our systems, we were able to start paying down that debt and fixing our deploy systems.
We started hooking up that virtuous feedback loop, step by step.
We reworked our CI/CD system so that it built a new artifact after every single merge.
We put developers at the steering wheel so they could push their own changes out.
We got better at instrumentation, and we made a habit of going to look at it during or after each deploy.
We hooked up the pager so it would alert the person who merged the last diff, if an alert was generated within an hour after that service was deployed.
We started finding bugs quicker and quicker, and paying down the tech debt we had amassed from shipping code without observability/visibility for many years.
Developers got in the habit of shipping their own changes, and watching them as they rolled out, and finding/fixing their bugs immediately.
It took some time. But after a year of this, our formerly flaky, obscure, mysterious, massively multi-tenant service that was going down every day and wreaking havoc on our sleep schedules was tamed. Deploys were swift and drama-free. We stopped blocking deploys on Fridays, holidays, or any other days, because we realized our systems were more stable when we always shipped consistently and quickly.
Allow me to repeat. Our systems were more stable when we always shipped right after the changes were merged. Our systems were less stable when we carved out times to pause deployments. This was not common wisdom at the time, so it surprised me; yet I found it to be true over and over and over again.
This is literally why I started Honeycomb.
When I was leaving Facebook, I suddenly realized that this meant going back to the Dark Ages in terms of tooling. I had become so accustomed to having the Parse+Scuba tooling and being able to iteratively explore and ask any question without having to predict it in advance. I couldn’t fathom giving it up.
The idea of going back to a world without observability, a world where one deployed and then stared anxiously at dashboards — it was unthinkable. It was like I was being asked to give up my five senses for production — like I was going to be blind, deaf, dumb, without taste or touch.
Look, I agree with nearly everything in the author’s piece. I could have written that piece myself five years ago.
But since then, I’ve learned that systems can be better. They MUST be better. Our systems are getting so rapidly more complex, they are outstripping our ability to understand and manage them using the past generation of tools. If we don’t change our ways, it will chew up another generation of engineering lives, sleep schedules, relationships.
Observability isn’t the whole story. But it’s certainly where it starts. If you can’t see where you’re going, you can’t go very far.
Get you some observability.
And then raise your standards for how systems should feel, and how much of your human life they should consume. Do better.
Because I couldn’t agree with that other post more: it really is all about people and their real lives.
Listen, if you can swing a four day work week, more power to you (most of us can’t). Any day you aren’t merging code to master, you have no need to deploy either. It’s not about Fridays; it’s about the swift, virtuous feedback loop.
And nobody should be shamed for what they need to do to survive, given the state of their systems today.
But things aren’t gonna get better unless you see clearly how you are contributing to your present pain. And congratulating ourselves for blocking Friday deploys is like congratulating ourselves for swatting ourselves in the face with the flyswatter. It’s a gross hack.
Maybe you had a good reason. Sure. But I’m telling you, if you truly do care about people and their work/life balance: we can do a lot better.
I made a vow this year to post one blog post a month, then I didn’t post anything at all from May to September. I have some catching up to do. 😑 I’ve also been meaning to transcribe some of the twitter rants that I end up linking back to into blog posts, so if there’s anything you especially want me to write about, tell me now while I’m in repentance mode.
This is one request I happened to make a note of because I can’t believe I haven’t already written it up! I’ve been saying the same thing over and over in talks and on twitter for years, but apparently never a blog post.
The question is: what is the proper role of alerting in the modern era of distributed systems? Has it changed? What are the updated best practices for alerting?
@mipsytipsy I've seen your thoughts on dashboards vs searching but haven't seen many thoughts from you on Alerting. Let me know if I've missed a blog somewhere on that! 🙂
It’s a great question. I want to wax philosophically about some stuff, but first let me briefly outline the way to modernize your alerting best practices:
implement observability
implement SLOs and/or end-to-end checks that traverse key code paths and correlate to user-impacting events
create a secondary channel (tasks, ticketing system, whatever) for “things that on call should look at soon, but are not impacting users yet” which does not page anyone, but which on call is expected to look at (at least) first thing in the morning, last thing in the evening, and midday
move as many paging alerts as possible to the secondary channel, by engineering your services to auto-remediate or run in degraded mode until they can be patched up
wake people up only for SLOs and health checks that correlate to user-impacting events
Or, in an even shorter formulation: delete all your paging alerts, then page only on end-to-end alerts that mean users are in pain. Rely on your debugging tools for debugging, and reserve paging for actual user pain.
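Here is a minimal sketch of what that routing logic might look like. The Alert type, the burn-rate threshold, and the channel names are all invented; the only point is that nothing pages a human unless users are in pain, and everything else lands in the queue that on call reviews a few times a day.

```go
package main

import "fmt"

// Alert is a hypothetical alert as it comes out of your checks.
type Alert struct {
	Name          string
	UserImpacting bool    // does an end-to-end check or SLO say users are in pain?
	BurnRate      float64 // how fast the error budget is burning (1.0 = exactly on budget)
}

// route decides where an alert goes: the only thing that pages a human is
// user pain plus a budget burning fast enough to matter; everything else goes
// to the secondary channel that on call reviews morning, midday, and evening.
func route(a Alert) string {
	if a.UserImpacting && a.BurnRate > 2.0 {
		return "page"
	}
	return "ticket"
}

func main() {
	for _, a := range []Alert{
		{Name: "checkout SLO burning fast", UserImpacting: true, BurnRate: 6.3},
		{Name: "replica lag on shard 7", UserImpacting: false},
		{Name: "latency SLO slightly warm", UserImpacting: true, BurnRate: 1.2},
	} {
		fmt.Printf("%-30s -> %s\n", a.Name, route(a))
	}
}
```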
To understand why I advocate deleting all your paging alerts, and when it’s safe to delete them, first we need to understand why we have accumulated so many crappy paging alerts over the years.
Monoliths, LAMP stacks, and death by pagebomb
Here, let’s crib a couple of slides from one of my talks on observability, which lay out the characteristics of older monolithic LAMP-stack style systems and the best practices for running them.
The sad truth is that when all you have is time series aggregates and traditional monitoring dashboards, you aren’t really debugging with science so much as you are relying on your gut and a handful of dashboards, using intuition and scraps of data to try and reconstruct an impossibly complex system state.
This works ok, as long as you have a relatively limited set of failure scenarios that happen over and over again. You can just pattern match from past failures to current data, and most of the time your intuition can bridge the gap correctly. Every time there’s an outage, you post mortem the incident, figure out what happened, build a dashboard “to help us find the problem immediately next time”, create a detailed runbook for how to respond to it, and (often) configure a paging alert to detect that scenario.
Over time you build up a rich library of these responses. So most of the time when you get paged you get a cluster of pages that actually serves to help you debug what’s happening. For example, at Parse, if the error graph had a particular shape I immediately knew it was a redis outage. Or, if I got paged about a high % of app servers all timing out in a short period of time, I could be almost certain the problem was due to mysql connections. And so forth.
Things fall apart; the pagebomb cannot stand
However, this model falls apart fast with distributed systems. There are just too many failures. Failure is constant, continuous, eternal. Failure stops being interesting. It has to stop being interesting, or you will die.
Instead of a limited set of recurring error conditions, you have an infinitely long list of things that almost never happen …. except that one time they do. If you invest your time into runbooks and monitoring checks, it’s wasted time if that edge case never happens again.
Frankly, any time you get paged about a distributed system, it should be a genuinely new failure that requires your full creative attention. You shouldn’t just be checking your phone, going “oh THAT again”, and flipping through a runbook.
Oh damn this talk looks baller. 😍 "Failure is important, but it is no longer interesting" — @this_hits_home… Netflix once again shining the light on where the rest of us need to get to over the next 3-5 years. 🙌🏅🎬 https://t.co/OY40Y0BTSa
And thus you should actually have drastically fewer paging alerts than you used to.
A better way: observability and SLOs.
Instead of paging alerts for every specific failure scenario, the technically correct answer is to define your SLOs (service level objectives) and page only on those, i.e. when you are going to run out of budget ahead of schedule. But most people aren’t yet operating at this level of sophistication. (SLOs sound easy, but are unbelievably challenging to do well; many great teams have tried and failed. This is why we have built an SLO feature into Honeycomb that does the heavy lifting for you. Currently alpha testing with users.)
If you haven’t yet caught the SLO religion, the alternate answer is that “you should only page on high level end-to-end alerts, the ones which traverse the code paths that make you money and correspond to user pain”. Alert on the three golden signals: request rate, latency, and errors, and make sure to traverse every shard and/or storage type in your critical path.
That’s it. Don’t alert on the state of individual storage instances, or replication, or anything that isn’t user-visible.
(To be clear: by “alert” I mean “paging humans at any time of day or night”. You might reasonably choose to page people during normal work hours, but during sleepy hours most errors should be routed to a non-paging address. Only wake people up for actual user-visible problems.)
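For the SLO-curious, the arithmetic behind “page when you are going to run out of budget ahead of schedule” is not exotic. A minimal sketch, assuming a simple availability SLO over a rolling window (the 2x paging threshold is a judgment call, not gospel):

```go
package main

import "fmt"

// budgetBurn returns how fast you are burning error budget relative to plan.
// target is the SLO (e.g. 0.999); good and total are event counts over the
// recent window being evaluated. A burn rate of 1.0 means "exactly on
// budget"; much above that and the budget runs out early, so someone pages.
func budgetBurn(good, total, target float64) float64 {
	if total == 0 {
		return 0
	}
	observedErrRate := (total - good) / total
	allowedErrRate := 1 - target // this is the error budget
	return observedErrRate / allowedErrRate
}

func main() {
	// 99.9% SLO; in the last hour, 4,000 of 1,000,000 requests failed.
	burn := budgetBurn(996000, 1000000, 0.999)
	fmt.Printf("burn rate: %.1fx\n", burn) // 4.0x: on pace to blow the budget early
	if burn > 2.0 {
		fmt.Println("-> page a human")
	}
}
```

Real SLO alerting usually evaluates several windows and burn rates at once, but the core arithmetic is just this.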
Here’s the thing. The reason we had all those paging alerts was because we depended on them to understand our systems.
Once you make the shift to observability, once you have rich instrumentation and the ability to swiftly zoom in from high level “there might be a problem” to identifying specifically what the errors have in common, or the source of the problem — you no longer need to lean on that scattershot bunch of pagebombs to understand your systems. You should be able to confidently ask any question of your systems, understand any system state — even if you have never encountered it before.
With observability, you debug by systematically following the trail of crumbs back to their source, whatever that is. Those paging alerts were a crutch, and now you don’t need them anymore.
Everyone is on call && on call doesn’t suck.
I often talk about how modern systems require software ownership. The person who is writing the software, who has the original intent in their head, needs to shepherd that code out into production and watch real users use it. You can’t chop that up into multiple roles, dev and ops. You just can’t. Software engineers working on highly available systems need to be on call for their code.
But the flip side of this responsibility belongs to management. If you’re asking everyone to be on call, it is your sworn duty to make sure that on call does not suck. People shouldn’t have to plan their lives around being on call. People shouldn’t have to expect to be woken up on a regular basis. Every paging alert out of hours should be as serious as a heart attack, and this means allocating real engineering resources to keeping tech debt down and noise levels low.
And the way you get there is first invest in observability, then delete all your paging alerts and start over from scratch.
The company is growing like crazy, your engineering team keeps rising to the challenge, and you are ferociously proud of them. But some cracks are beginning to show, and frankly you’re a little worried. You have always advocated for engineers to have broad latitude in technical decisions, including choosing languages and tools. This autonomy and culture of ownership is part of how you have successfully hired and retained top talent despite the siren song of the Faceboogles.
But recently you saw something terrifying that you cannot unsee: your company is using all the languages, all the environments, all the databases, all the build tools. Shit!!! Your ops team is in full revolt and you can’t really blame them. It’s grown into an unsupportable nightmare and something MUST be done, but you don’t know what or how — let alone how to solve it while retaining the autonomy and personal agency that you all value so highly.
I hear a version of this everywhere I’ve gone for the past year or two. It’s crazy how often I’ve seen it. I’ve been meaning to write my answer up for ages, and here it (finally) is.
First of all: you aren’t alone. This is extremely common among high-performing teams, so congratulations. Really!
There actually seems to be a direct link between giving engineers lots of leeway to own their technical decisions and a team’s ability to hire and retain top-tier talent, particularly senior talent. Everything is a tradeoff, obviously, but accepting somewhat more chaos in exchange for a stronger sense of individual ownership is usually the right trade, and leads to higher-performing teams in the long run.
Second, there is actually already a well-trod path out of this hole to a better place, and it doesn’t involve sacrificing developer agency. It’s fairly simple! Just five short steps, which I will describe to you now.
How to build a golden path and reverse software sprawl
1. Assemble a small council of trusted senior engineers.
2. Task them with creating a recommended list of default components for developers to use when building out new services. This will be your Golden Path, the path of convergence (and the path of least resistance).
3. Tell all your engineers that going forward, the Golden Path will be fully supported by the org. Upgrades, patches, security fixes; backups, monitoring, build pipeline; deploy tooling, artifact versioning, development environment, even tier 1 on call support. Pave the path with gold. Nobody HAS to use these components … but if they don’t, they’re on their own. They will have to support it themselves.
4. Work with team leads to draw up an umbrella plan for adopting the Golden Path for their current projects as well as older production services, as much as is reasonable or possible or desirable. Come up with a timeline for the whole eng org to deprecate as many other tools as possible. Allocate real engineering time to the effort. Hell, make a party out of it!
5. After the cutoff date (and once things have stabilized), establish a regular process for reviewing and incorporating feedback about the blessed Path and considering any proposed changes, additions or removals.
There you go. That’s it. Easy, right??
(It’s not easy. I never said it was easy, I said it was simple. 👼🏼)
Your engineers are currently used to picking the best tool for the job by optimizing locally. What data store has a data model that is easiest for them to fit to their needs? Which language is fastest for I/O throughput? What are they already proficient in? What you need to do is start building your muscles for optimizing globally. Not in isolation of other considerations, but in conjunction with them. It will always be a balancing act between optimizing locally for the problem at hand and optimizing globally for operability and general sanity.
(Oh, incidentally, requiring an engineer to write up a proposal any time they want to use a non-standard component, and then defend their case while the council grills them in person — this will be nothing but good for them, guaran-fucking-teed.)
Let’s go into a bit more detail on each of the five points. But quick disclaimer: this is not a prescription. I don’t know your system, your team, your cultural land mines or technical interdependencies or anything else about your situation. I am just telling stories here.
1. Assemble your council
Three is a good number for a council. More than that gets unwieldy, and may have trouble reaching consensus. Less than three and you run into SPOFs. You never want to have a single person making unilateral decisions because a) the decision-making process will be weaker, b) it sets that person up for too much interpersonal friction, and c) it denies your other engineers the opportunity to practice making these kinds of decisions.
Your council members need technical breadth more than depth, and should be widely respected by engineers.
At least one member should have a long history with the company so they know lots of stupid little details about what’s been tried before and why it failed.
At least one member should be deeply versed in practical data and operability concerns.
They should all have enough patience and political skill to drive consensus for their decisions. Absolutely no bombthrowers.
If you’re super lucky, you just tap the three senior technologists who immediately come to mind … your mind and everyone else’s. If you don’t have this kind of automatic consensus, you may want to let teams or orgs nominate their own representative so they feel they have some say.
2. Task the council with defining a Golden Path
Your council cannot vanish for a week and then descend from the mountain lugging lists engraved on stone tablets. The process of discovery and consensus is what validates the result.
The process must include talking to and gathering feedback from your engineers, talking to experts outside the company, talking to teams at other companies who are farther along using that technology, coming up with detailed pro/con lists and reasons for their choices. Maybe sometimes it includes prototyping something or investigating the technical depths … but yeah no mostly it’s just the talking.
You need your council members to have enough political skill to handle these conversations deftly, building support and driving consensus through the process. Everybody doesn’t have to love the outcome, but it shouldn’t be a *surprise* to anyone by the end.
3. Know where you’re going
Your council should create a detailed written plan describing which technologies are going to be supported … and a stab at what “supported” means. (Ask the experts in each component what the best practices are for backups, versioning, dependency management, etc.)
You might start with something like this:
* Backend lang: Go 1.11  ## we will no longer be supporting backend scripting languages
* Frontend lang: ReactJS v 16.5
* Primary db: Aurora v 2.0  ## Yes, we know postgres is "better", but we have many mysql experts and 0 pg experts except the one guy who is going to complain about this. You know who you are.
* Deploy pipeline: github -> jenkins + docker -> S3 -> custom k8s deploy tooling
* Message broker: kafka v 2.10, confluent build
* Mail: SES
* .... etc
Circulate the draft regularly for feedback, especially with eng managers. Some team reorganization will probably be necessary to bear the new weight of your support specifications, and managers will need some lead time to wrangle this.
This is also a great time to reconceive of the way on call works at your company. But I am not going to go into all that here.
4. Set a date, draft a plan: go!
Get approval from leadership to devote a certain amount of time to consolidating your stack and paying down a lump sum of tech debt. It depends on your stage of decay, but a reasonable amount of time might be “25% of engineering time for three months“. Whatever you agree to, make sure it’s enough to make the world demonstrably better for the humans who run it; you don’t want to leave them with a tire fire or you’ll blow your credibility.
The council and team leads should come up with a rough outer estimate for how long it would take to rewrite everything and move the whole stack onto the Golden Path. (It’s probably impossible and/or would take years, but that’s okay.) Next, look for the quick wins or swollen, inflamed pain points.
If you are running two pieces of functionally similar software, like postgres and mysql, can you eliminate one?
If you are managing something yourself that AWS could manage for you (e.g. postfix instead of SES, or kafka instead of kinesis), can you migrate that?
If you are managing anything yourself that is not core to your business value, in fact, you should try to not manage it.
If you are running any services by hand on an AWS instance somewhere, could you try using a managed service instead?
If you are running your own monitoring software, etc … can you not?
If you have multiple versions of a piece of software, can you upgrade or consolidate on one version?
The hardest parts are always going to be the ones around migrating data or rewriting components. Not everything is worth doing or can afford to be done within the time span of your project, and that’s okay.
Next, brainstorm up some carrots. Can you write templates so that anybody who writes a service using your approved library, magically gets monitoring checks without having to configure anything? Can you write a wrapper so they get a bunch of end-to-end tests for free? Anything you can do to delight people or save them time and effort by using your preferred components is worth considering.
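To give a flavor of what those carrots can look like: a tiny shared library that any Golden Path service imports, which hands out a health endpoint and per-request telemetry for free. This is entirely made up; your version would wire in your real monitoring, deploy metadata, and so on. But this is the shape of the thing.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

// NewGoldenMux is the sort of thing an internal tooling team might hand out:
// build your service on it and you get a /healthz endpoint without
// configuring anything. Pure sketch, not a real internal library.
func NewGoldenMux(service string) *http.ServeMux {
	mux := http.NewServeMux()
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "ok: %s\n", service)
	})
	return mux
}

// Instrument wraps a handler with the org's standard per-request telemetry
// (here just a log line; in real life, metrics or wide events).
func Instrument(service string, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		next.ServeHTTP(w, r)
		log.Printf("service=%s path=%s duration_ms=%d",
			service, r.URL.Path, time.Since(start).Milliseconds())
	})
}

func main() {
	mux := NewGoldenMux("example-svc")
	mux.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello")
	})
	log.Fatal(http.ListenAndServe(":8080", Instrument("example-svc", mux)))
}
```

The more of this boring glue the paved road provides, the less reason anyone has to wander off it.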
(By the way, if you don’t have any engineers devoted to internal tooling, you’re probably way overdue at this point.)
Pay down as much debt as you can, but be pragmatic: it’s better to get rid of five small things than one large thing, from a support perspective. Your main goal is to shrink the number of types of software your team has to support, particularly databases.
Do look for ways to make it fun, like … running a competition to see who can move the most tools to AWS in a week, or throwing a hack week party, or giving dorky prizes like trophies that entitle you to put your manager on call instead of you for a day, etc.
5. Make the process sustainable
After your target date has come and gone, you probably want to hold a post mortem retrospective and do lots of listening. (Well — first might I recommend a bubble bath and a bottle of champagne? But then a post mortem.)
Nothing is ever fixed forever. The company’s needs are going to expand and contract, and people will come and go, because change is the only constant. So you need to bake some flex into your system. How are you going to handle the need for changes to the Golden Path? Monthly discussions? An email list? Quarterly meetings with a formal agenda? I’ve seen people do all of these and more, it doesn’t really matter afaict.
Nobody likes a cabal, though, so the original council should gradually rotate out. I recommend replacing one person at a time, one per quarter, and rotating in another senior engineer in their place. This provides continuity while giving others a chance to learn these technical and political skills.
In the end, engineers are still free to use any tool or component at any time, just like before, only now they are solely responsible for it, which puts pressure on them not to do it unless REALLY necessary. So if someone wants to propose adding a new tool to the default golden path, they can always add it themselves and gain some experience in it before bringing it to the council to discuss a formal place for it.
That’s all folks
See, wasn’t that simple?
(It’s never simple.)
I dearly wish more people would write up their experiences with this sort of thing in detail. I think engineering teams are too reluctant to show their warts and struggles to the world — or maybe it’s their executives who are afraid? Dunno.
Regardless, I think it’s actually a highly effective recruiting tool when teams aren’t afraid to share their struggles. The companies that brag about how awesome they are are the ones who come off looking weak and fragile. Whereas you can always trust the ones who are willing to laugh about all the ways they screwed up. Right?
In conclusion, don’t feel like an asshole for insisting on some process here. There should be friction around adding new components to your stack. (Add in haste, repent at leisure, as they say.) Anybody who argues with you probably needs to be exposed to way, way more of the support load for that software. That’s my professional opinion.
Anyway. You win or you die. Good luck with your sprawl.
On twitter this week, @srhtcn noted that “Many incidents happen during or right after release” and asked for advice on ways to fix this.
And he’s right! Rolling out new software is the proximate cause for the overwhelming majority of incidents, at companies of all sizes. Upgrading software is both a necessity and a minor insanity, considering how often it breaks things.
I’m not going to recap the history of continuous integration and delivery, suffice it to say that we now know that smaller and more frequent changes are much safer than larger and less frequent changes.
But it’s still risky. And most issues are still caused by humans and our pesky need for “improvements”. So what can be done?
It’s not ok for software releases to be scary and hazardous
First of all: If releasing is risky for you, you need to fix that. Make this a priority. Track your failures, practice post mortems, evaluate your on call practices and culture. Know if you’re getting better or worse. This is a project that will take weeks if not months until you can be confident in the results.
You have to fix it though, because these things are self-reinforcing. If shipping changes is scary and fraught, people will do it less and it will get even MORE scary and treacherous.
Likewise, if you turn it into a non-cortisol inducing event and set expectations, engineers will ship their code more often in smaller diffs and therefore break the world less.
Fixing deploys isn’t about eliminating errors, it’s about making your pipeline resilient to errors. It’s fundamentally about detecting common failures and recovering from them, without requiring human intervention.
Value your tools more
As a short-term patch, you should run deploys in the mornings or whenever everyone is around and fresh. Then take a hard look at your deploy pipeline.
In too many organizations, deploy code is a technical backwater, an accumulation of crufty scripts and glue code, forked gems and interns’ earnest attempts to hack up Capistrano. It usually gives off a strong whiff of “sloppily evolved from many 2 am patches with no code review”.
This is insane. Deploy software is the most important software you have. Treat it that way: recruit an owner, allocate real time for development and testing, bake in metrics and track them over time.
If it doesn’t have an owner, it will never improve. And you will need to invest in frequent improvements even after you’re over this first hump.
Signal high organizational value by putting one of your best engineers on it.
Recruit help from the design side of the house as well. The “right” thing to do must be the fastest, easiest thing to do, with friendly prompts and good docs. No “shortcuts” for people to reach for at the worst possible time. You need user research and design here.
Track how often deploys fail and why. Managers should pay close attention to this metric, just like the one for people getting interrupted or woken up, and allocate time to fixing things early whenever the trend starts to slip, before it gets bad.
Allocate real time for development, testing, and training — don’t expect the work to get shoved into people’s “spare time” or post mortem cleanup time. Make sure other managers understand the impact of this work and are on board. Make this one of your KPIs.
In other words, make deploy tools a first class citizen of your technical toolset. Make the work prestigious and valued — even aspirational. If you do performance reviews, recognize the impact there.
(Btw, “how we hardened our deploys” is total Velocity-bait (&& other practitioner conferences) as well as being great for recruiting and general visibility in blog post form. People love these stories; there definitely aren’t enough of them.)
Turn software engineers into software owners
The canonical CI/CD advice starts with “ship early, ship often, ship smaller change sets”. That’s great advice: you should definitely do those things. But they are covered plenty elsewhere. What’s software ownership?
Software ownership is the natural end state of DevOps. Software engineers, operations engineers, platform engineers, mobile engineers — everyone who writes code should own the full lifecycle of their software.
Software owners are people who:
Write code
Can deploy and roll back their own code
Are able to debug their own issues in prod (via instrumentation, not ssh)
If you’re lacking any one of those three ingredients, you don’t have ownership.
Why ownership? Because software ownership makes for better engineers, better software, and a better experience for customers. It shortens feedback loops and means the person debugging is usually the person with the most context on what has recently changed.
Some engineers might balk at this, but you’ll be doing them a favor. We are all distributed systems engineers now, and distributed systems require a much higher level of operational literacy. May as well start today.
Fail fast, fix fast
This is about shifting your mindset from one of brittleness and a tight grip, to one of flexibility where failures are no big deal because they happen all the time, don’t impact users, and give everyone lots of practice at detecting and recovering from them.
Here are a few of the best practices you should adopt along with this mindset.
The engineer who writes the code and merges the PR should also run the deploy
Everyone who writes code must be trained in how to deploy, roll back & revert to last known good state (before escalating if necessary). They should also know the basics of instrumentation, feature flagging, and debugging in prod.
After deploying you MUST go verify: are your changes behaving as expected? Does anything else look .. unexpected? You have the most context on what to expect; just two minutes spent verifying that things look reasonable will catch the overwhelming majority of errors before users even notice.
(You need solid observability tooling in order to expect your engineers to do this kind of side-by-side verification: something with high cardinality support (like Honeycomb) that lets you drill down to the individual event level. If your software engineers are flying blind, it limits the amount of ownership you can reasonably expect.)
Make operability a high-value skill set. Never promote someone to “senior engineer” if they can’t deploy and debug their own code.
Everyone who puts software in production needs to understand and feel responsible for the full lifecycle of their code, not just how it works in their IDE.
Baking: it’s not just for cookies
Shipping something to production is a process of incrementally gaining confidence, not a switch you can flip.
You can’t trust code until it’s been in prod a while, until you’ve seen it perform under a wide range of load and concurrency scenarios, in lots of partial failure modes. Only over time can you develop confidence in it not being terrible.
Nothing is production except production. Don’t rely on never failing; expect failure, embrace failure. Practice failure! Build guard rails around your production systems to help you find and fix problems quickly.
The changes you need to make your pipeline more resilient are roughly the same changes you need to safely test in production. These are a few of your guard rails.
Use feature flags to switch new code paths on and off
Build canaries for your deploy process, so you can promote releases gracefully and automatically to larger subsets of your traffic as you gain confidence in them
Create cohorts. Deploy to internal users first, then any free tier, etc., in order of ascending importance. Don’t just ramp by raw percentages (10% to 25% to 50% and then 100%): some changes are about saturating backend resources, and the 50%-to-100% jump will kill you.
Have robots check the health of your software as it rolls out to decide whether to promote the canary. Over time the robot checks will mature and eventually catch a ton of problems and regressions for you.
The quality of code is not knowable before it hits production. You may be able to spot some problems, but you can never guarantee a lack of them. It takes time to bake a new release and gain incremental confidence in new code.
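If it helps to picture the robot-promoted canary described above, here is a skeletal sketch: step through cohorts in ascending order of importance, let the health check compare the canary against the baseline at each step, and roll back the moment it looks worse. Every name and threshold here is invented.

```go
package main

import (
	"errors"
	"fmt"
)

// A skeletal canary-promotion loop: roll out to cohorts in ascending order of
// importance, and let the "robot" health check compare the canary against the
// baseline before each promotion. Every function here is a stand-in for your
// real deploy tooling and observability queries.

var cohorts = []string{"internal", "free-tier", "small-paid", "everyone"}

// In real life these numbers would come from your observability tooling,
// broken down by build id; here they are hard-coded for illustration.
func canaryErrorRate(cohort string) float64   { return 0.002 }
func baselineErrorRate(cohort string) float64 { return 0.003 }

func deployTo(cohort, build string) { fmt.Printf("deploying %s to %s\n", build, cohort) }
func rollback(build string)         { fmt.Printf("rolling back %s\n", build) }

func promote(build string) error {
	for _, cohort := range cohorts {
		deployTo(cohort, build)
		// Let it bake, then let the robot decide whether to keep going.
		if canaryErrorRate(cohort) > 1.5*baselineErrorRate(cohort) {
			rollback(build)
			return errors.New("canary looked worse than baseline in cohort " + cohort)
		}
	}
	return nil
}

func main() {
	if err := promote("a1b2c3"); err != nil {
		fmt.Println("promotion halted:", err)
	}
}
```

Over time the interesting work is in making that health check smarter; the loop itself stays boring, which is exactly what you want.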
In summary.
Get someone to own the deploy software
Value the work
Create a culture of software ownership
LOOK at what you’ve done after you do it
Be suspicious of new versions until they prove themselves
Last week was the West Coast Velocity conference. I had a terrific time — I think it’s the best Velocity I’ve been to yet. I also slipped in quite late, the evening before last, to catch Gareth’s session on DevOps vs SRE.
And it was worth it! Holy crap, this was such a fun barnburner of a talk, with Gareth schizophrenically arguing both for and against the key premise of the talk, which was about “Google Infrastructure for Everyone Else (GIFEE)” and whether SRE is a) the highest, noblest goal that we should all aspire towards, or b) mostly irrelevant to anyone outside the Google confines.
Which Gareth won? Check out the slides and judge for yourself. 🙃
At some point in his talk, though, Gareth tossed out something like “Charity probably already has a blog post on this drafted up somewhere.” And I suddenly remembered: “Fuck! I DO!” It’s been sitting in my Drafts for months, god dammit.
So this is actually a thing I dashed off back in April, after CraftConf. Somebody asked me for my opinion on the internet — always a dangerous proposition — and I went off on a bit of a rant about the differences and similarities between DevOps and SRE, as philosophies and practices.
Time passed and I forgot about it, and then decided it was too stale. I mean who really wants to read a rehash of someone’s tweetstorm from two months ago?
Well Gareth, apparently.
Anyway: enjoy.
SRE vs DevOps: TWO PHILOSOPHIES ENTER, BOTH ARE PHENOMENALLY SUCCESSFUL AND MUTUALLY DUBIOUS OF ONE ANOTHER
So in case you haven’t noticed, Google recently published a book about Site Reliability Engineering: How Google Runs Production Systems. It contains some really terrific wisdom on how to scale both systems and orgs. It contains chapters written by dear friends of mine. It’s a great book, and you should buy it and read it!
It also has some really fucking obnoxious blurbs. Things about how “ONLY GOOGLE COULD HAVE DONE THIS”, and a whiff of snobbery throughout the book, as though they actually believe this (which is far worse if true).
You can’t really blame the poor blurb’ers, but you can certainly look askance at a massive systems engineering org when it seems as though they’ve never heard of DevOps, or considered how it relates to SRE practices, and may even be completely unaware of what the rest of the industry has been up to for the past 10-plus years. It’s just a little weird.
So here, for the record, is what I said about it.
1) a lot of the philosophical volleying between devops / SRE comes down to a failure to recognize the overwhelming power of context.
Google is a great company with lots of terrific engineers, but you can only say they are THE BEST at what they do if you’re defining what they do tautologically, i.e. “they are the best at making Google run.” Etsyans are THE BEST at running Etsy, Chefs are THE BEST at building Chef, because … that’s what they do with their lives.
Context is everything here. People who are THE BEST at Googling often flail and flame out in early startups, and vice versa. People who are THE BEST at early-stage startup engineering are rarely as happy or impactful at large, lumbering, more bureaucratic companies like Google. People who can operate equally well and be equally happy at startups and behemoths are fairly rare.
And large companies tend to get snobby and forget this. They stop hiring for unique strengths and start hiring for lack of weaknesses or “Excellence in Whiteboard Coding Techniques,” and congratulate themselves a lot about being The Best. This becomes harmful when it translates into less innovation, abysmal diversity numbers, and a slow but inexorable drift into dinosaurdom.
2) operations engineering is a specialized skill set *at large scale* or *on hard ops problems*. many -- most? companies don't have those.
Everybody thinks their problems are hard, but to a seasoned engineer, most startup problems are not technically all that hard. They’re tedious, and they are infinite, but anyone can figure this shit out. The hard stuff is the rest of it: the feverish pace, the need to reevaluate and reprioritize and reorient constantly, the total responsibility, the terror and uncertainty of trying to find product/market fit and perform ten jobs at once and personally deliver on your promises to your customers.
At a large company, most of the hardest problems are bureaucratic. You have to come to terms with being a very tiny cog in a very large wheel, where the org has a huge vested interest in literally making you as replicable and replaceable as possible. The pace is excruciatingly slow if you’re used to a startup. The autonomy is … well, did I mention the politics? If you want autonomy, you have to master the politics.
3) the outcomes associated with operations (reliability, scalability, operability) are the responsibility of *everyone* from support to CEO.
Everyone. Operational excellence is everyone’s job. Dude, if you have a candidate come in and they’re a jerk to your office manager or your cleaning person, don’t fucking hire that person because having jerks on your team is an operational risk (not to mention, you know, like moral issues and stuff).
But the more engineering-focused your role is, the more direct your impact will be on operational outcomes.
4) therefore, the more literate you are with operational skills, the more effective and powerful you can be -- esp as a software engineer.
As a software engineer, developing strong ops chops makes you powerful. It makes you better at debugging and instrumentation, building resiliency and observability into your own systems and interdependent systems, and building systems that other people can come along and understand and maintain long after you’re gone.
As an operations engineer, those skills are already your bread and butter. You can increase your power in other ways, like by leveling up at software engineering skills like test coverage and automation, or DBA stuff like query optimization and storage engine internals, or by helping the other teams around you level up on their skills (communication and persuasion are chronically underrecognized as core operations engineering skills).
5) specialization is not a bad thing. specialization is how we scale and do capitalism! the problem is when this becomes compartmentalizing.
This doesn’t mean that everyone can or should be able to do everything. (I can’t even SAY the words “full stack engineer” without rolling my eyes.) Generalists are awesome! But past a certain inflection point, specialization is the only way an org can scale.
It’s the only way you make room for those engineering archetypes who only want to dive deep, or who really really love refactoring, or who will save the world then disappear for weeks. Those engineers can be incredibly valuable as part of a team … but they are most valuable in a large org where you have enough generalists to keep the oars rowing along in the meantime.
6) so: Google SRE has an incredibly powerful set of best practices, that enable them to run the largest site in the world incredibly well.
So, back to Google. They’ve done, ahem, rather well for themselves. Made shitbuckets of money, pushed the boundaries of tech, service hardly ever goes down. They have operational demands that most of us never have seen and never will, and their engineers are definitely to be applauded for doing a lot of hard technical and cultural labor to get there.
Mostly because it comes off a little tone deaf in places. I’m not personally pissed off by the Google SRE book, actually, just a little bemused at how legitimately unaware they seem to be about … anything else that the industry has been doing over the past 10 years, in terms of cultural transformation, turning sysadmins into better engineers, sharing on-call rotations, developing processes around empathy and cross-functionality, engineering best practices, etc.
DevOps for the rest of us
If you try and just apply Google SRE principles to your own org according to their prescriptive model, you’re gonna be in for a really, really bad time.
However, it happens that Jen Davis and Katherine Daniels just published a book called Effective DevOps, which covers a lot of the same ground with a much more varied and inclusive approach. And one of the things they return to over and over again is the power of context, and how one-size-fits-all solutions simply don’t exist, just like unisex OSFA t-shirts are a dirty fucking lie.
Google insularity is … a thing. On the one hand it’s great that they’re opening up a bit! On the other hand it’s a little bit like when somebody barges onto a mailing list and starts spouting without skimming any of the archives. And don’t even get me started on what happens when you hire long, long-term ex-Googlers back into the real world.
So, so many of us have had this experience of hiring ex-Googlers who automatically assume that the way Google does a thing is CORRECT, not just contextually appropriate. Not just right for Google, but right for everyone, always. Which is just obviously untrue. But the reassimilation process can be quite long and exhausting when the Kool-Aid is so strong.
8) DevOps as a philosophy is much more sensitive to context than SRE philosophy, because it grew from a broader collaborative base.
Because yeah, this is a conversation and a transformation that the industry has been having for a long time now. Compared with the SRE manifesto, the DevOps philosophy is much more crowd-sourced, more flexible, and more adaptable to organizations at all stages of development, with all different requirements and key business differentiators, because it’s benefited from loud, mouthy contributors who aren’t all working in the same bubble.
And it’s like Google isn’t even aware this was happening, which is weird.
9) that's it, basically all i'm saying is "all blanket statements are false" including probably this one 🙂 #devops #sre
Orrrrrr, maybe I’m just a wee bit annoyed that I’ve been drawn into this position of having to defend “DevOps”, after many excellent years spent being grumpy about the word and the 10000010101 ways it is used and abused.
(Tell me again about your “DevOps Engineering Team”, I dare you.)
(^^ thanks to @kellan and others who particularly influenced/clarified my thinking around #8, the crowdsourcing of devops)
P.S. I highly encourage you to go read the epic hours-long rant by @matthiasr that kicked off the whole thing. Some of it I definitely endorse and some of it I don’t, but I think we could go drink whiskey and yell about this for a week or two easy breezy <3
Anyway what the fuck do I know, I’ve never worked in the Google lair, so maybe I am just under-equipped to grasp the true glory, majesty and superiority of their achievements over us all.
Or maybe they should go read Katherine and Jen’s book and interact with the “UnGoogled” once in a while. ☺️