Criticisms of What We Owe the Future
Too long, boring, sensational, won't age well, unfocused, etc.
I got a lot out of Will MacAskill’s What We Owe The Future. But I also have some complaints…
It’ll be totally outdated in 2-10 years
MacAskill cautions that knowing how to make a long term impact is hard:
Suppose that a highly educated person in the year 1500 tried to make the longterm future go well. They would be aware of some relevant things, such as the persistence of laws, religions, and political institutions. But many issues wouldn’t occur to them. The ideas that the earth’s habitable life span could be a billion years and that the universe could be so utterly enormous, yet almost entirely uninhabited, would not have been on the table. Crucial conceptual tools for dealing with uncertainty, such as probability theory and expected value, had not yet been developed. They would not have been exposed to the arguments for a moral worldview in which the interests of all people are equal. They wouldn’t have known what they didn’t know.
How do you get around this problem?
Here’s what I would do:
Look at historical examples of people/projects who tried and succeeded to have a long-term impact.
Look at historical examples of people/projects who tried and failed to have a long-term impact.
Copy the successes and don’t do what the failures did.
A lot of the book is doing the first step: looking at historical writers, political leaders, and thinkers who managed to have a long-term impact. But MacAskill seems to simply take these examples as proofs-of-concept of longtermism, then chooses longtermist policies from first principles.
Here are some things MacAskill talks about which aren’t going to date well:
Current events like the Ukraine-Russia conflict and COVID-19.
There are tons of references to young organisations (Metaculus, 80,000 Hours, Open Philanthropy) that will likely be forgotten in 20-40 years.
He wades in on ongoing academic debates like “why did the glyptodonts go extinct?” and “how contingent was the abolition of slavery?”
Most of the ideas being discussed are pretty new and rapidly evolving. Eg, the state of the debate around AI will be totally different in a couple of years. Effective Altruism and Longtermism themselves have little track record of staying power.
Admittedly, MacAskill isn’t trying to write a book that will stay relevant forever. He’s trying to write a book that will be relevant now but have long term second-order effects. MacAskill thinks that the present is a special moment in history where we can have an unusually strong impact on the long term future (more on that below), so in this sense MacAskill sees longtermism as a short-term issue.
Which is logically coherent, but seems really weird to me.
Too long and boring
This is certainly subjective, but most of the middle part of WWOTF was pretty bleak to read. Here’s a particularly bad passage:
The reason the Holocene has been conducive to agriculture is that it is warm, so frost does not destroy the growing season; it has higher carbon dioxide levels, which is good for crop yields; and it is climatically stable. If there were a collapse, we would, due to climate change, probably live in an environment one to three degrees warmer than today’s. But this seems unlikely to make a major difference: generally it is cold and low–carbon dioxide environments that make global agriculture near impossible, not warm and high–carbon dioxide environments.
This is Hegel-level boring. What happened to you, MacAskill? Doing Good Better had such clear and concise prose.
The math is pointless
MacAskill proposes evaluating longterm causes by estimating three values: significance, persistence, and contingency.
Multiplying these quantities together then gives an overall “importance” to the cause area.
It’s hard to see how you could measure or estimate these values even in principle. Especially contingency: you only ever get to see history play out once, so how could you possibly know the probability that history would have gone another way?
But even if you could measure significance, persistence, and contingency, you’ve then got the harder problem of figuring out how the quantities interact with each other.
MacAskill’s thinking about the SPC interactions seems confused. At one point he claims:
When we’re applying the significance, persistence, and contingency framework, we should therefore be thinking about expected significance, expected persistence, and expected contingency.
But then later admits that
Note, however, that E(SPC) does not in general equal E(S)E(P)E(C).
Which means that actually we shouldn’t be thinking about the expected significance, persistence, and contingency. These are more-or-less irrelevant to evaluating E(SPC).
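A toy simulation makes the gap concrete. The distributions below are entirely made up for illustration; the point is just that when the three quantities are correlated (here, more contingent outcomes are also more persistent), E(SPC) can sit nowhere near E(S)E(P)E(C).

```python
import random

random.seed(0)

def sample():
    # Invented toy model: contingency and persistence are
    # deliberately correlated; significance is independent.
    c = random.random()            # contingency, uniform on [0, 1]
    p = 10 + 1000 * c              # persistence, correlated with c
    s = random.uniform(0.5, 1.5)   # significance, independent
    return s, p, c

n = 100_000
draws = [sample() for _ in range(n)]
e_s = sum(s for s, _, _ in draws) / n
e_p = sum(p for _, p, _ in draws) / n
e_c = sum(c for _, _, c in draws) / n
e_spc = sum(s * p * c for s, p, c in draws) / n

print(e_s * e_p * e_c)  # ~255: product of the three expectations
print(e_spc)            # ~338: expectation of the product
```

So even a mild correlation between two of the three quantities pushes the two numbers apart by over 30 percent, which is exactly why knowing the three expectations individually doesn’t get you E(SPC).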
To me this smells of “I have no idea how to evaluate longtermist causes, so I’ll cover up my cluelessness with math.”
Sensationalism
The chance of the end of civilisation this century, whether via extinction or permanent collapse, is far too high for us to be comfortable with. In my view, giving this a probability of at least 1 percent seems reasonable. But even if you think it is only a one-in-a-thousand chance, the risk to humanity this century is still ten times higher than the risk of your dying this year in a car crash. If humanity is like a teenager, then she is one who speeds round blind corners, drunk, without wearing a seat belt.
Some out-the-ass numbers, and a sensational metaphor. No evidence or even strategic choice of priors.
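For reference, here is the implied arithmetic. The one-in-ten-thousand annual car-crash death risk is my own rough ballpark (roughly the US rate), not a figure from the book:

```python
# Book's floor: one-in-a-thousand chance of collapse this century.
collapse_risk = 1 / 1_000
# Rough annual risk of dying in a car crash (my guess, ~US rate).
car_crash_risk = 1 / 10_000

print(collapse_risk / car_crash_risk)  # ≈ 10: the "ten times higher" claim
```

The ratio follows trivially from whatever priors you feed in, which is the problem: the work is all in the inputs, and the inputs are asserted.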
Taking this uncertainty fully into account means that the expected length of stagnation could be very great indeed. Even if you think it’s 90 percent likely that stagnation would last only a couple of centuries and just 10 percent likely that it would last ten thousand years, then the expected length of stagnation is still over a thousand years.
Again, totally made up numbers. Why is this paragraph here? What purpose does it serve?
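For what it’s worth, the arithmetic behind the quote is just a two-outcome expected value over those made-up numbers:

```python
# MacAskill's invented inputs: 90% chance stagnation lasts about
# two centuries, 10% chance it lasts ten thousand years.
expected_stagnation = 0.9 * 200 + 0.1 * 10_000
print(expected_stagnation)  # ≈ 1180 years, i.e. "over a thousand years"
```

The conclusion is driven entirely by the 10% tail guess, which is exactly the number we have no way of estimating.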
Unfocused
When I wrote a distillation of What We Owe The Future, I often found it really hard to figure out what was the overall point for a given chapter or paragraph.
The book has something of a designed-by-committee feel to it. Which makes sense, because it was in fact written by a committee.
I’ve therefore relied heavily on an extensive team of consultants and research assistants. Whenever I’ve stepped outside of moral philosophy, my area of expertise, domain experts have advised me from start to end. This book is therefore not really “mine”: it has been a team effort.
For example, the book has three central metaphors:
To illustrate the claims in this book, I rely on three primary metaphors throughout. The first is of humanity as an imprudent teenager. Most of a teenager’s life is still ahead of them, and their decisions can have lifelong impacts. In choosing how much to study, what career to pursue, or which risks are too risky, they should think not just about short-term thrills but also about the whole course of the life ahead of them.
The second is of history as molten glass. At present, society is still malleable and can be blown into many shapes. But at some point, the glass might cool, set, and become much harder to change. The resulting shape could be beautiful or deformed, or the glass could shatter altogether, depending on what happens while the glass is still hot.
The third metaphor is of the path towards longterm impact as a risky expedition into uncharted terrain. In trying to make the future better, we don’t know exactly what threats we will face or even exactly where we are trying to go; but, nonetheless, we can prepare ourselves. We can scout out the landscape ahead of us, ensure the expedition is well resourced and well coordinated, and, despite uncertainty, guard against those threats we are aware of.
Never before have metaphors been mixed so boldly. Future generations will look back on this analogical concoction and marvel that such a feat was possible at our current levels of technology.
The Chapter on Stagnation Was Weird
The first time I read the chapter on stagnation I thought “Welp, I totally didn’t buy any of that and also I have no idea what that had to do with longtermism.”
On re-reading I started to vaguely understand the argument: MacAskill reckons the technology we have now is existentially risky (eg, dangerous bio-engineering), but that when we get more technology it will become safe again (eg, with an AI surveillance state?), so we want to bypass this period as quickly as possible…?
Clearly I still haven’t really understood what MacAskill’s point was. Honestly, this whole chapter seemed like a trendy topic MacAskill wanted to talk about so he shoe-horned it in.
But anyway. MacAskill argues that there are reasons why we might technologically stagnate. Some of these were demographic extrapolations. Which is weak, but admissible evidence.
But MacAskill also recounts an argument by Robert Gordon that we’ve seen much less technological progress in the last 50 years compared to previous 50-year intervals. In 1880-1920 we got electrification, automobiles, and telephones. In 1920-1970 we got refrigerators and TVs. And from 1970 to 2020 we only got smartphones and microwaves.
But, like, that’s insane. The difference between 1970 and 2020 is that in 2020 I can start a million-dollar business without leaving my bedroom. We don’t need a new version of fridges and microwaves because we live in a digital universe now. It’s like saying “In 2020 to 2070 everyone uploaded their brains to the cloud and became beings of pure energy, but the only new household appliance we got was the plasmic can-opener, so not much progress, really.”
I don’t think we live in a special time
MacAskill argues that we live in a weird moment in history and so may have an outsized impact on the long term future.
Why do we live in a weird moment in time? Because we are experiencing rapid economic growth and we are unusually connected.
We know we are experiencing rapid economic growth because if you draw the economic output on a graph it’s really big recently and really small before that.
This really proves nothing. Maybe economic growth is exponential, hyperbolic, double exponential, or whatever, and we would still see a graph like that without our time being special.
But MacAskill thinks that we must be approaching the end of growth because:
Suppose that future growth slows a little to just 2 percent per year. At such a rate, in ten thousand years the world economy would be 10^86 times larger than it is today—that is, we would produce one hundred trillion trillion trillion trillion trillion trillion trillion times as much output as we do now. But there are less than 10^67 atoms within ten thousand light years of Earth. So if current growth rates continued for just ten millennia more, there would have to be ten million trillion times as much output as our current world produces for every atom that we could, in principle, access. Though of course we can’t be certain, this just doesn’t seem possible.
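His compounding arithmetic does check out, for what it’s worth. A quick sketch, working in orders of magnitude and taking the book’s ~10^67 atom count as given:

```python
import math

# 2% annual growth compounded over ten thousand years, expressed
# as orders of magnitude, versus the book's ~10^67 atoms within
# 10,000 light years of Earth.
growth_log10 = 10_000 * math.log10(1.02)
atoms_log10 = 67

print(round(growth_log10))                # 86: a ~10^86-fold larger economy
print(round(growth_log10) - atoms_log10)  # 19: ~10^19 units of output per atom
```

That 10^19 gap is the “ten million trillion times as much output… for every atom” in the quote. My disagreement is with the premise, not the exponentiation.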
Totally disagree. Consider the economic progress in storing stories. Viking stories were incredibly boring sagas, and carving them in runes took like one boulder per hundred words. Nowadays you only need like a few micrograms of solid-state storage to store, say, all of Adventure Time. The difference in value (for me) is something like 10,000x, and the difference in number of atoms used is something like 10^10, so let’s say a ballpark increase in value-per-atom of 10^14x. So the idea that this could increase by a further 10^16 (or whatever) doesn’t seem so wild.
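In back-of-envelope form, with every input my own guess:

```python
# All made-up ballpark figures, in orders of magnitude.
value_gain_log10 = 4    # Adventure Time vs a runic saga: ~10,000x the value
atom_saving_log10 = 10  # a boulder per hundred words vs micrograms of flash

value_per_atom_log10 = value_gain_log10 + atom_saving_log10
print(value_per_atom_log10)  # 14: ~10^14x more value per atom, already realised
```

If value-per-atom has already climbed fourteen orders of magnitude, capping its future growth at zero seems like the extraordinary claim.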
MacAskill says we’re unusually connected because we used to be in isolated bands of hunter-gatherers, now we’re a globalised society, and in the future we’ll be isolated galactic colonial outposts.
But going even further back we were even more connected as the first single-celled organism. And going forward a little we’ll be even more connected as a digital hivemind. So this argument doesn’t speak to me at all.
The upshot is that I don’t think we’re living in a special time and we should expect to have as much impact on the longterm future as at any other time in history.
Parfit hagiography
Derek Parfit was one of the most creative and influential moral philosophers of the last century, a machine for turning coffee into philosophical insights. He lived almost all of his life in educational institutions, attending Eton on a scholarship before studying history at Oxford, then winning a prize fellowship at All Souls College. All Souls might be the most exclusive research institute in the world; there are no undergraduates and fewer than ten graduate students at any one time. The qualifying tests for the fellowship have been called “the hardest exam in the world”: twelve hours of domain-specific and general questions and prompts such as, “What is a number?” “Can we be forced to be free?” and even “Defend tweeting.” Up until recently, there was a further three-hour exam that simply presented you with a single word, such as “water,” “novelty,” or “reproduction,” and required you to write a full essay on the topic. After receiving the fellowship at age twenty-four, Parfit spent the next forty-three years at All Souls and never completed any of his philosophy degrees.
He was utterly single-minded in his pursuit of improving our moral understanding. In the latter half of his life, he would take every opportunity to save time on anything that wasn’t philosophy: literally running between seminars, wearing the same outfit every day (black trousers and a white shirt), and eating the same easy-to-prepare vegetarian meals (cereal with yogurt and blackberries for breakfast; for dinner, raw carrots, romaine lettuce, celery dipped in peanut butter or hummus, followed by tangerines and apples). He would read philosophy while brushing his teeth. The coffee he drank was instant, filled from the hot water tap so that he didn’t have to wait for the kettle to boil. As New Yorker journalist Larissa MacFarquhar noted in her profile of him, “The driving force behind Parfit’s moral concern was suffering. He couldn’t bear to see someone suffer—even thinking about suffering in the abstract could make him cry.”
This section made me kinda angry.
I never met Parfit and have no idea what he was like personally, but I don’t see any evidence that he made any real sacrifices or did any concrete good.
Parfit was an Eton boy and the son of medical doctors. His brain was well-suited to academics, and he rapidly cruised to an academic fellowship. Society rewarded his eccentricities and he never had to learn how to act like a grown-up. (Tangerines are one of the more time-consuming fruits to eat, and if he could read while brushing his teeth then surely he could read while waiting for the kettle to boil, so the “time saving” narrative doesn’t ring true.)
Having lived his entire life far removed from physical hardship, he conceptualised suffering as some kind of abstract fluid, and that was what he got sad about. He also had a weirdly abstract writing style which gives the reader a hard time for no reason.
Population ethics and personal identity are fun puzzles for philosophy undergrads, but you have to do a lot of work to actually map them onto anything in reality, and I doubt any of Parfit’s ideas have helped anyone do anything other than write an essay.
And I can’t help but notice that MacAskill is also an Oxford ethics professor. So it’s in his direct interest to associate the profession with beneficence and utility.
The whole thing kinda sniffs of propaganda for the Oxbridge class.
Conclusion
Overall, I did not enjoy reading What We Owe The Future. The philosophical thrust of longtermism is really quite simple, and was made more forcefully by Bostrom.
There were bits I liked. I learned a bunch of cool stuff. But I think it would’ve been twice as good if it was half as long.
Three and a half stars.