I’ve written before about how lifting is good practice for writing: It teaches you the value of consistency and incremental progress, and it shows you how little time it really takes to do more than you ever thought you could when you began. I’ve written a number of books, rarely writing more than 1,000 words a day; a couple of weeks ago I broke 300 pounds on the deadlift at the age of 42, when I’d hurt myself more than once on much lower weights in my 30s. If you go up by 5-pound increments long enough, you’ll get there.
Unless, of course, you won’t. Data people love a linear regime, at least those of us who came up before the age of the Rectified Linear Unit. But ultimately, in real systems, figuring out where the regime is linear is at least as much art as science. The ability to hit a word count consistently is a key tool for a long-form writer, but it won’t get the book done on its own; that requires the extremely nonlinear and basically unmeasurable process of editing. The exercise science people are much more systematic and quantitative in how they adapt to the ceiling on linear progress, but ultimately they have to go nonlinear as well. At that level of your practice, you start thinking in cycles, in seasons; to have a season of abundance, you need a season of rest. The number doesn’t lose its meaning — it’s still some kind of measure of strength, of progress — but, when it can’t just go up, you’re forced to consider what it really meant that it was going up in the first place. What were you really after? Did you get it?
This is how people think about their own practices. The pandemic, it seems, has accelerated the quantification of work. When work gets quantified, managers learn to understand it in terms of data exhaust rather than the actual work product. It’s so easy to let the number be the most important thing, when it’s not you it’s measuring.
I worked at a company that calculated a quantitative score for the “operating capacity” of the employees in my division. Massive effort was devoted to “business intelligence” around this: Complex, configurable, high-throughput data pipelines for each team, and bespoke KPI catalogues for tasks, built out by statistical sampling.
The output was widely known to bear an inverted U-shaped relationship to performance: It was high for median performers and low at both extremes. This is because median performers devoted a lot of time to the kind of tasks that were readily measured by the business intelligence instrumentation; poor performers might or might not put a lot of time in, but one way or another they didn’t complete many of those tasks; and the really good performers were put on more complex tasks whose output was hard to measure.
The solution was just to badger the high performers to measure their tasks better. Operating capacity had become too big to fail; there’d been too much investment in data engineering and measurement to just scrap it.
Every so often someone would get a brainwave: Many of the high performers were being put on technical tasks, and some of them were actually writing software. Why not just go to the engineering division and ask them how they measured people’s productivity? Surely they’d have some good ideas.
Of course they fucking didn’t. The engineers got paid way too much for anyone to sap their precious time with any of this shit.
I’m a data scientist; I love quantification, I love a linear regime. But I’m here to tell you, if someone tries to turn the single thing you spend the plurality of your waking hours on into one number? No matter how high it is, they do not value what you do.