(Yet another metastasized comment.)
I often say that the free market is the most efficient machine we've ever invented for converting resources into commodities, and that this is awesome for us to the extent that we are consumers of commodities, and awful for us to the extent that we are resources, and the reality is we're a little of both, so we benefit from it and we suffer for it.
I say it a little tongue-in-cheek, of course, with the intent of being a little jarring, because talking about human social systems like markets (and corporations, governments, committees, schools, volunteer fire departments, families, social and economic classes, etc.) as "machines" that we "invent" is not a standard way of talking.
But I mean it quite literally, just the same. Markets are machines. So are schools, and poems, and legal systems, and apologies, and dating customs. Or, perhaps more precisely, they are design patterns that govern the machines our brains construct for communicating and collaborating with other brains.
Of course, it's theoretically possible to build other social machines that perform similar functions with more humanistic goals than markets. For example, there are regulatory systems that are far more humanistic than an unregulated free market and still do a pretty good job of converting resources into commodities, giving us much of the benefit of markets with less of the suffering.
We've implemented some of those over the years as well.
Unfortunately, building those kinds of machines out of human beings is tricky; our brains aren't designed for it and don't really support it. I mean, we defect on Prisoner's Dilemmas, for crying out loud! Cooperating on those is game-theoretical low-hanging fruit, and our brains nevertheless fail at it over and over and over. Another symptom of our inadequacy in this area is that regulatory capture is basically an inescapable property of human-implemented social systems, including (though hardly limited to) markets, especially as they scale up.
Which means that the market regulation algorithms we've invented are inherently unreliable when implemented on human brains.
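To make the Prisoner's Dilemma point concrete, here's a minimal sketch in Python (using the standard textbook payoff numbers, which are my own illustrative choice, not anything specific to this argument) of why the one-shot game keeps landing us in mutual defection even though mutual cooperation leaves everyone better off:

<pre>
# A minimal sketch of the one-shot Prisoner's Dilemma, with standard
# textbook payoffs (illustrative numbers only).
from itertools import product

ACTIONS = ("cooperate", "defect")

# PAYOFF[(my_action, their_action)] = my payoff
PAYOFF = {
    ("cooperate", "cooperate"): 3,   # mutual cooperation: decent for both
    ("cooperate", "defect"):    0,   # sucker's payoff
    ("defect",    "cooperate"): 5,   # temptation to defect
    ("defect",    "defect"):    1,   # mutual defection: bad for both
}

def best_response(their_action):
    """My payoff-maximizing action, holding the other player's action fixed."""
    return max(ACTIONS, key=lambda mine: PAYOFF[(mine, their_action)])

# No matter what the other player does, defecting pays me more...
for theirs in ACTIONS:
    print(f"If they {theirs}, my best response is to {best_response(theirs)}")

# ...even though mutual cooperation beats mutual defection for the pair.
for mine, theirs in product(ACTIONS, repeat=2):
    joint = PAYOFF[(mine, theirs)] + PAYOFF[(theirs, mine)]
    print(f"({mine}, {theirs}): joint payoff = {joint}")
</pre>

Running it shows defection as each player's individually best move in every case, while the highest joint payoff comes from mutual cooperation; that gap between individual incentive and collective benefit is exactly the fruit our brains keep failing to pick.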
Fortunately, that's not our only platform option. I mean, we can't implement efficient, reliable buildings or vehicles or mathematical calculators or recorders out of human beings either, even though we once did, but over the centuries we've discovered we don't have to. We can build other machines to protect us from the elements, to move objects from place to place, to solve mathematical equations, to record events and play them back, etc., and those machines are better at it than we are, and we benefit from their comparative advantage.
It's a good thing.
We're on the cusp of learning to build social-coordination machines on nonhuman platforms, just as we've learned to build calculating machines and recording machines and vehicles and houses on nonhuman platforms. We've already taken some pretty significant steps in that direction, and we will take more of them. We are building our robot overlords even as we speak.
It's a good thing.
Of course, it's a bit of a paradigm shift. I sympathize with that.
I imagine that the idea of a non-human calculating machine was really hard to wrap our minds around at first, too, for similar reasons... if you've grown up in a world where humans are the only kind of system that can know things like "2+2=4," the idea that we can build an inanimate object that does so is profoundly counterintuitive, probably absurd on the face of it, and probably kind of offensive. When Babbage and Lovelace were talking about analytical engines in the 1800s, most people probably thought they were spouting mystical nonsense, and honestly I can't blame them... in that time and place, I suspect I would have thought the same thing.
But of course it turns out that we can build inanimate objects that know "2+2=4" quite handily, and 200 years later nobody thinks that at all strange. It's not simple, but it's not at all mysterious either. And those machines don't know it the same way our brains do, of course (at least not usually, although Hofstadter has done some interesting work with computers that seem to do just that, but that's beside my point here), but they know it in ways that let them perform mathematical calculations just the same.
In the same way, the idea of a non-human social machine -- the idea that inanimate objects can do the same stuff that in our experience is done only by human social systems (corporations, governments, committees, volunteer fire departments, social and economic classes, etc.) -- is really hard to wrap our minds around. It's counterintuitive, absurd on the face of it, and kind of offensive, and in a few generations when we've all grown up with them nobody will think them at all strange.
I think that one might be a little easier to accept, since (pace SCOTUS) most of us don't really think of a government or a corporation as something like us in the first place, even if it's implemented out of humans. But only a little easier, because there are other social machines -- families and socioeconomic classes and workplaces and theatre groups and churches -- that we are strongly invested in thinking of as human constructs, and we will find the idea of replacing those machines with superior inanimate versions intimidating, and alienating, and terrifying.
And of course we won't have to replace them altogether. We still camp out in the wilderness sometimes, even though we have built efficient nonhuman shelter machines. We still travel places with our legs and carry things with our arms and sew things with our hands and remember poetry with our brains and make music with our mouths, and do all kinds of other things that we're not really particularly good at compared to our machines, because it's fun and satisfying and pleasant to do so, and because sometimes it's even more efficient to use our bodies to perform one-off tasks half-assedly on a small scale than to build a machine to do it right.
And moving forward we'll similarly still organize small-scale low-stakes social systems with our brains as well. When a bunch of friends get together for a night out, we might still work out the social dynamics of what we're going to do using our brains rather than a computer, just as we might decide today to walk a mile to the nearest pub rather than drive. It's fun. It's good exercise. It's emotionally satisfying.
But just as it would never seriously occur to us to ship industrial packages using our legs and hands, to rely on the brains of cashiers to calculate and record financial transactions, or to house the population of Manhattan by camping out in the wilderness, the idea of managing actual governments or markets or other serious social structures with our brains will seem absurd. When we care more about the outcome than about how satisfying the process is, we rely on our machines, because they do it right.
And in much the same way, we will come to rely on our social machines for collaboration and coordination.
And we'll screw it up a few times along the way. There will be horrible disasters in the early generations, social structures that are just unbelievably fucked up in ways that are unimaginable to us today, and the suffering will be heartbreaking.
And conservatives will point to that suffering and argue that this is a dangerous path we're on.
And progressives will point to the long and bloody history of the human race and argue that it has always been a dangerous path but this way lies hope.
And we'll stumble along in fits and starts and go down blind alleys and sometimes turn our backs on the whole enterprise, arguing all the while, suffering and dying and loving and inventing and muddling through, just as we always have.
And two centuries from now we'll tell the horror stories about those awful social groups the same way we talk about the Hindenburg today... but at the same time, we'll no more be interested in governing our societies or our families or our churches using human brains than we are interested today in travelling cross-country on horseback. And our descendants will read about how we <i>used</i> to do it, about "democracy" and "free markets" and "homeowners associations" and "town halls" and "mayors" and all this other outmoded stuff, and will be unable to conceive of how that ever seemed like a reasonable way to live.
And they will feel disturbed and disquieted and alienated by their machines, just as we do by our machines today, and that will have all kinds of psychological consequences, and their kids will grow up not knowing how to socialize in their heads because they've always relied on machines, and they will worry about that and find ways to accommodate and alleviate it, and it's right and proper that they do so, but that's another post.