First of all, confusion over terminology is understandable, because there are some big players out there actively trying to confuse you! Big Monitoring is indeed actively trying to define observability down to “metrics, logs and traces”. I guess they have been paying attention to the interest heating up around observability, and well… they have metrics, logs, and tracing tools to sell? So they have hopped on the bandwagon with some undeniable zeal.
But metrics, logs and traces are just data types, and data types by themselves have nothing to do with observability. Let me explain the difference, and why I think you should care about this.
“Observability? I do not think it means what you think it means.”
Observability is a borrowed term from mechanical engineering/control theory. It means, paraphrasing: “can you understand what is happening inside the system — can you understand ANY internal state the system may get itself into, simply by asking questions from the outside?” We can apply this concept to software in interesting ways, and we may end up using some data types, but that’s putting the cart before the horse.
It’s a bit like saying that “database replication means structs, longints and elegantly diagrammed English sentences.” Er, no… yes… missing the point much?
This is such a reliable bait and switch that any time you hear someone talking about “metrics, logs and traces”, you can be pretty damn sure there’s no actual observability going on. If there were, they’d be talking about that instead — it’s far more interesting! If there isn’t, they fall back to talking about whatever legacy products they do have, and that typically means, you guessed it: metrics, logs and traces.
Metrics in particular are actually quite hostile to observability. They are usually pre-aggregated, which means you are stuck with whatever questions you defined in advance. And even when they aren’t pre-aggregated, they permanently discard the connective tissue of the request at write time, which destroys your ability to correlate issues across requests, track down any individual request, or drill down into a set of results — FOREVER.
Which doesn’t mean metrics aren’t useful! They are useful for many things! But they are useful for things like static dashboards, trend analysis over time, or monitoring that a dimension stays within defined thresholds. Not observability. (Liz would interrupt here and say that Google’s observability story involves metrics, and that is true — metrics with exemplars. But this type of solution is not available outside Google as far as we know.)
Ditto logs. When I say “logs”, you think: unstructured strings, written out to disk haphazardly during execution; “many” log lines per request; probably 1-5 dimensions of useful data per log line; probably a schema and some predefined indexes for searching. Logs are at their best when you know exactly what to look for — then you can go and find it.
Again, these connotations and assumptions are the opposite of what observability requires. Observability deals in highly structured data only: usually generated by instrumentation deep within the app, generally not buffered to local disk, issued as a single event per request per service, schemaless and indexless (or schema-inferred and auto-indexed), and typically containing hundreds of dimensions per event.
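To make the contrast concrete, here is a rough sketch of what one of those wide structured events might look like for a single request hitting a single service. Every field name here is invented for illustration (real events often carry hundreds of dimensions):

```python
import json
import time

# One wide structured event for one request in one service.
# All field names below are hypothetical, purely for illustration.
event = {
    "timestamp": time.time(),
    "service": "image-export",
    "build_id": "789782",
    "request_id": "req-abc123",
    "trace_id": "trace-xyz789",
    "user_id": 42,
    "user_locale": "de_DE",
    "client_os": "iOS 11",
    "endpoint": "/export",
    "http_method": "POST",
    "status_code": 200,
    "duration_ms": 137.4,
    "db_shard": "shard-7",
    "cache_key": "export:42:de_DE",
    "cache_hit": False,
}

# One event per request per service, shipped as a structured blob:
# no string parsing, and every dimension is queryable later.
print(json.dumps(event, sort_keys=True))
```

Note there is no schema declared anywhere and no index chosen up front; any of those fields can become the pivot for a question you haven’t thought of yet.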
Traces? Now we’re getting closer. Tracing IS a big part of observability, but tracing just means visualizing events in order by time. It certainly isn’t and shouldn’t be a standalone product, that just creates unnecessary friction and distance. Hrmm … so what IS observability again, as applied to the software domain??
As a reminder, observability applied to software systems means having the ability to ask any question of your systems — understand any user’s behavior or subjective experience — without having to predict that question, behavior or experience in advance.
Observability is about unknown-unknowns.
At its core, observability is about these unknown-unknowns.
Plenty of tools are terrific at helping you ask the questions you could predict wanting to ask in advance. That’s the easy part. “What’s the error rate?” “What is the 99th percentile latency for each service?” “How many READ queries are taking longer than 30 seconds?”
- Monitoring tools like DataDog do this — you predefine some checks, then set thresholds that mean ERROR/WARN/OK.
- Logging tools like Splunk will slurp in any stream of log data, then let you index on questions you want to ask efficiently.
- APM tools auto-instrument your code and generate lots of useful graphs and lists like “10 slowest endpoints”.
But if you *can’t* predict all the questions you’ll need to ask in advance, or if you *don’t* know what you’re looking for, then you’re in o11y territory.
- This can happen for infrastructure reasons — microservices, containerization, polyglot storage strategies can result in a combinatorial explosion of components all talking to each other, such that you can’t usefully pre-generate graphs for every combination that can possibly degrade.
- And it can happen — has already happened — to most of us for product reasons, as you’ll know if you’ve ever tried to figure out why a spike of errors was being caused by users on iOS 11 using a particular language pack, but only in three countries, and only when the request hit the image export microservice running build_id 789782, if the user’s last name starts with “MC” and they then click on a particular button that issues a db request using the wrong cache key for that shard.
Gathering the right data, then exploring the data.
Observability starts with gathering the data at the right level of abstraction, organized around the request path, such that you can slice and dice and group and look for patterns and cross-correlations in the requests.
To do this, we need to stop firing off metrics and log lines willy-nilly and be more disciplined. We need to issue one single arbitrarily-wide event per service per request, and it must contain the *full context* of that request: EVERYTHING you know about it, anything you did in it, all the parameters passed into it, etc. Anything that might someday help you find and identify that request.
Then, when the request is poised to exit or error the service, you ship that blob off to your o11y store in one very wide structured event per request per service.
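That discipline can be sketched as a tiny accumulate-then-ship helper. Names here are invented; in practice this lives in middleware or an instrumentation SDK, not a hand-rolled class:

```python
import json
import time

class RequestEvent:
    """Accumulates everything known about one request in one service,
    then emits a single wide event when the request exits or errors.
    A bare-bones stand-in for a real instrumentation library."""

    def __init__(self, service: str, **initial_fields):
        self.fields = {"service": service, "start": time.time(), **initial_fields}

    def add(self, **fields):
        # Call this anywhere in the request path: parameters, user info,
        # feature flags, query timings -- anything that might help later.
        self.fields.update(fields)

    def send(self, sink=print):
        # Ship the whole blob once, as one structured event.
        self.fields["duration_ms"] = (time.time() - self.fields["start"]) * 1000
        sink(json.dumps(self.fields, default=str))

# A hypothetical request handler using it:
ev = RequestEvent("image-export", request_id="req-123", endpoint="/export")
ev.add(user_id=42, locale="fr_CA", db_shard="shard-3")
try:
    ev.add(status_code=200)
finally:
    ev.send()  # one wide structured event per request per service
```

The key property is the `finally`: success or error, exactly one event leaves the service, carrying the full context in one place.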
In order to deliver observability, your tool also needs to support high cardinality and high dimensionality. Briefly, cardinality refers to the number of unique values in a set, and dimensionality means how many adjectives can describe your event. If you want to read more, there are good overviews of the space and of the more technical requirements for observability.
You REQUIRE the ability to chain and filter as many dimensions as you want, with infinitely high cardinality for each one, if you’re going to be able to ask arbitrary questions about your unknown-unknowns. This functionality is table stakes. It is non-negotiable. And you cannot get it from any metrics or logs tool on the market today.
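Given wide events, asking an unanticipated question is just filtering and grouping on dimensions you never declared in advance. Here is a toy in-memory sketch with made-up events (a real o11y store does this over billions of events, but the shape of the question is the same):

```python
from collections import Counter

# Toy wide events; in reality these come back from your o11y store.
events = [
    {"service": "image-export", "build_id": "789782", "client_os": "iOS 11",
     "status_code": 500, "last_name": "McArthur"},
    {"service": "image-export", "build_id": "789782", "client_os": "iOS 11",
     "status_code": 500, "last_name": "McNeil"},
    {"service": "image-export", "build_id": "789781", "client_os": "iOS 11",
     "status_code": 200, "last_name": "Smith"},
    {"service": "billing", "build_id": "442", "client_os": "Android",
     "status_code": 200, "last_name": "McNeil"},
]

# An unknown-unknown question, composed on the fly: which build_ids
# dominate the 500s from iOS 11 users whose last name starts with "Mc"?
errors = [e for e in events
          if e["status_code"] == 500
          and e["client_os"] == "iOS 11"
          and e["last_name"].startswith("Mc")]
print(Counter(e["build_id"] for e in errors))  # Counter({'789782': 2})
```

Notice that `last_name` and `build_id` are both high-cardinality dimensions nobody pre-aggregated or pre-indexed; the question only became askable because the raw events kept them.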
Why this matters.
Alright, this is getting pretty long. Let me tell you why I care so much, and why I want people like you specifically (referring to frontend engineers and folks earlier in their careers) to grok what’s at stake in the observability term wars.
We are way behind where we ought to be as an industry. We are shipping code we don’t understand, to systems we have never understood. Some poor sap is on call for this mess, and it’s killing them, which makes software engineers averse to owning their own code in prod. What a nightmare.
Meanwhile developers readily admit they waste >40% of their day doing bullshit that doesn’t move the business forward. In large part this is because they are flying blind, just stabbing around in the dark.
We all just accept this. We shrug and say well that’s just what it’s like, working on software is just a shit salad with a side of frustration, it’s just the way it is.
But it is fucking not. It is un fucking necessary. If you instrument your code, watch it deploy, then ask “is it doing what I expect, does anything else look weird” as a habit? You can build a system that is both understandable and well-understood. If you can see what you’re doing, and catch errors swiftly, it never has to become a shitty hairball in the first place. That is a choice.
🌟 But observability in the original technical sense is a necessary prerequisite to this better world. 🌟
If you can’t break down by high cardinality dimensions like build ids, unique ids, requests, and function names and variables, if you cannot explore and swiftly skim through new questions on the fly, then you cannot inspect the intersection of (your code + production + users) with the specificity required to associate specific changes with specific behaviors. You can’t look where you are going.
Observability as I define it is like taking off the blindfold and turning on the light before you take a swing at the pinata. It is necessary, although not sufficient alone, to dramatically improve the way you build software. Observability as they define it gets you to … exactly where you already are. Which of these is a good use of a new technical term?
And honestly, it’s the next generation who are best poised to learn the new ways and take advantage of them. Observability is far, far easier than the old ways and workarounds … but only if you don’t have decades of scar tissue and old habits to unlearn.
The less time you’ve spent using monitoring tools and ops workarounds, the easier it will be to embrace a new and better way of building and shipping well-crafted code.
Observability matters. You should care about it. And vendors need to stop trying to confuse people into buying the same old bullshit tools by smooshing them together and slapping on a new label. Exactly how long do they expect to fool people for, anyway?