
but...

(Yet another metastasized comment.)

I often say that the free market is the most efficient machine we've ever invented for converting resources into commodities, and that this is awesome for us to the extent that we are consumers of commodities, and awful for us to the extent that we are resources, and the reality is we're a little of both, so we benefit from it and we suffer for it.

I say it a little tongue-in-cheek, of course, with the intent of being a little jarring, because talking about human social systems like markets (and corporations, governments, committees, schools, volunteer fire departments, families, social and economic classes, etc.) as "machines" that we "invent" is not a standard way of talking.

But I mean it quite literally, just the same. Markets are machines. So are schools, and poems, and legal systems, and apologies, and dating customs. Or, perhaps more precisely, they are design patterns that govern the machines our brains construct for communicating and collaborating with other brains.

Of course, it's theoretically possible to build other social machines, with more humanistic goals than markets, which perform similar functions. For example, there are regulatory systems that are far more humanistic than an unregulated free market and that still do a pretty good job of converting resources into commodities, giving us much of the benefit of markets with less of the suffering.

We've implemented some of those over the years as well.

Unfortunately, building those kinds of machines out of human beings is tricky; our brains aren't designed for it and don't really support it. I mean, we defect on Prisoner's Dilemmas, for crying out loud! Cooperating on those is game-theoretical low-hanging fruit, and our brains nevertheless fail at it over and over and over. Another symptom of our inadequacy in this area is that regulatory capture is basically an inescapable property of human-implemented social systems, including (though hardly limited to) markets, especially as they scale up.
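The Prisoner's Dilemma trap mentioned above can be made concrete with a tiny sketch. This uses the conventional textbook payoff values (temptation 5, reward 3, punishment 1, sucker 0); the specific numbers are illustrative, not from the post.

```python
# One-shot Prisoner's Dilemma with conventional payoffs.
PAYOFFS = {
    # (my move, their move) -> (my payoff, their payoff)
    ("C", "C"): (3, 3),  # mutual cooperation: reward
    ("C", "D"): (0, 5),  # I cooperate, they defect: I'm the sucker
    ("D", "C"): (5, 0),  # temptation to defect
    ("D", "D"): (1, 1),  # mutual defection: punishment
}

def best_response(their_move):
    """Return the move that maximizes my payoff against a fixed opponent move."""
    return max("CD", key=lambda mine: PAYOFFS[(mine, their_move)][0])

# Defection is dominant: it beats cooperation whatever the other player does...
assert best_response("C") == "D"
assert best_response("D") == "D"

# ...yet mutual cooperation leaves both players better off than mutual
# defection -- which is exactly the low-hanging fruit our brains fail to pick.
assert PAYOFFS[("C", "C")][0] > PAYOFFS[("D", "D")][0]
```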

Which means that the market regulation algorithms we've invented are inherently unreliable when implemented on human brains.

Fortunately, that's not our only platform option. I mean, we can't implement efficient reliable buildings or vehicles or mathematical-calculators or recorders out of human beings either, even though at one time we used to do so, but over the centuries we've discovered we don't have to. We can build other machines to protect us from the elements, to move objects from place to place, to solve mathematical equations, to record events and play them back, etc., and those machines are better at it than we are, and we benefit from their comparative advantage.

It's a good thing.

We're on the cusp of learning to build social-coordination machines on nonhuman platforms, just as we've learned to build calculating machines and recording machines and vehicles and houses on nonhuman platforms. We've already taken some pretty significant steps in that direction, and we will take more of them. We are building our robot overlords even as we speak.

It's a good thing.

Of course, it's a bit of a paradigm shift. I sympathize with that.

I imagine that the idea of a non-human calculating machine was really hard to wrap our minds around at first, too, for similar reasons... if you've grown up in a world where humans are the only kind of system that can know things like "2+2=4," the idea that we can build an inanimate object that does so is profoundly counterintuitive, probably absurd on the face of it, and probably kind of offensive. When Babbage and Lovelace were talking about analytical engines in the 1800s, most people probably thought they were spouting mystical nonsense, and honestly I can't blame them... in that time and place, I suspect I would have thought the same thing.

But of course it turns out that we can build inanimate objects that know "2+2=4" quite handily, and 200 years later nobody thinks that at all strange. It's not simple, but it's not at all mysterious either. And those machines don't know it the same way our brains do, of course (at least not usually, although Hofstadter has done some interesting work with computers that seem to do just that, but that's beside my point here), but they know it in ways that let them perform mathematical calculations just the same.

In the same way, the idea of a non-human social machine -- the idea that inanimate objects can do the same stuff that in our experience is done only by human social systems (corporations, governments, committees, volunteer fire departments, social and economic classes, etc.) -- is really hard to wrap our minds around. It's counterintuitive, absurd on the face of it, and kind of offensive, and in a few generations when we've all grown up with them nobody will think them at all strange.

I think that one might be a little easier to accept, since (pace SCOTUS) most of us don't really think of a government or a corporation as something like us in the first place, even if it's implemented out of humans. But only a little easier, because there are other social machines -- families and socioeconomic classes and workplaces and theatre groups and churches -- that we are strongly invested in thinking of as human constructs, and we will find the idea of replacing those machines with superior inanimate versions intimidating, and alienating, and terrifying.

And of course we won't have to replace them altogether. We still camp out in the wilderness sometimes, even though we have built efficient nonhuman shelter machines. We still travel places with our legs and carry things with our arms and sew things with our hands and remember poetry with our brains and make music with our mouths, and do all kinds of other things that we're not really particularly good at compared to our machines, because it's fun and satisfying and pleasant to do so, and because sometimes it's even more efficient to use our bodies to perform one-off tasks half-assedly on a small scale rather than build a machine to do it right.

And moving forward we'll similarly still organize small-scale low-stakes social systems with our brains. When a bunch of friends get together for a night out, we might still work out the social dynamics of what we're going to do using our brains rather than a computer, just as we might decide today to walk a mile to the nearest pub rather than drive. It's fun. It's good exercise. It's emotionally satisfying.

But just as it would never seriously occur to us to ship industrial packages using our legs and hands, or rely on the brains of cashiers to calculate and record financial transactions, or to house the population of Manhattan by camping out in the wilderness, the idea of managing actual governments or markets or other serious social structures with our brains will seem absurd. When we care more about the outcome than about how satisfying the process is, we rely on our machines, because they do it right.

And in much the same way, we will come to rely on our social machines for collaboration and coordination.

And we'll screw it up a few times along the way. There will be horrible disasters in the early generations, social structures that are just unbelievably fucked up in ways that are unimaginable to us today, and the suffering will be heartbreaking.
And conservatives will point to that suffering and argue that this is a dangerous path we're on.
And progressives will point to the long and bloody history of the human race and argue that it has always been a dangerous path but this way lies hope.
And we'll stumble along in fits and starts and go down blind alleys and sometimes turn our backs on the whole enterprise, arguing all the while, suffering and dying and loving and inventing and muddling through, just as we always have.

And two centuries from now we'll tell the horror stories about those awful social groups the same way we talk about the Hindenburg today... but at the same time, we'll no more be interested in governing our societies or our families or our churches using human brains than we are interested today in travelling cross-country on horseback. And our descendants will read about how we <i>used</i> to do it, about "democracy" and "free markets" and "homeowners associations" and "town halls" and "mayors" and all this other outmoded stuff, and will be unable to conceive of how that ever seemed like a reasonable way to live.

And they will feel disturbed and disquieted and alienated by their machines, just as we do by our machines today, and that will have all kinds of psychological consequences, and their kids will grow up not knowing how to socialize in their heads because they've always relied on machines, and they will worry about that and find ways to accommodate and alleviate it, and it's right and proper that they do so, but that's another post.

Comments

( 38 comments — Leave a comment )
desireearmfeldt
Jul. 4th, 2014 04:12 pm (UTC)
and their kids will grow up not knowing how to socialize in their heads because they've always relied on machines, and they will worry about that and find ways to accommodate and alleviate it, and it's right and proper that they do so, but that's another post.

Will they? I don't think we do much to address the worry that if electricity or transportation failed tomorrow -- or heck, if the computer controlling the car glitches -- most of us can't do a damn thing to take over the tasks we've handed over to the machines... Which I suppose doesn't mean we don't do things to *alleviate* the worry, primarily not think about it too hard or too long...
dpolicar
Jul. 4th, 2014 04:39 pm (UTC)
(nods) Fair point.

When I wrote that, I was thinking about things like how when I was a kid everyone was handwringing about how what with pocket calculators kids wouldn't learn to do math, and what with digital clocks kids wouldn't learn to tell time.

But of course you're right that there are far more serious scenarios in the same space, and that for the most part we just kind of accommodate ourselves to the fact that this is just the way the world is. Sometimes drummers just explode.
lyonesse
Jul. 4th, 2014 06:58 pm (UTC)
eh. i think a lot of what we do socially, like make friends and choose leaders and perform kindnesses and ostracize people, is wired in deep and unlikely to be outsourceable.

i recommend frans de waal's book "good natured" for examples and discussion of such in nonhuman animals.
dpolicar
Jul. 4th, 2014 07:20 pm (UTC)
I agree that it's wired in deep, and that it's prevalent in nonhuman animals.

I'm not sure how either of those things relates to our ability to "outsource" it (that is, to build artificial systems that improve on our natural ones).

I mean, a lot of our behavior around eating is wired in deep and prevalent in nonhuman animals as well, but that didn't stop agriculture from catching on.
lyonesse
Jul. 4th, 2014 07:51 pm (UTC)
agriculture is a complicated case, but i think that it did not really change our behavior around eating. we were omnivores before and we're omnivores now (overall), and we still seek a varied diet, we still hunger for sweets and fats (arguably not a good idea anymore at all), &c.
dpolicar
Jul. 4th, 2014 08:06 pm (UTC)
Right. And similarly, I expect we will still value the same things in social interaction in the future that we value today.

I just expect that the infrastructure whereby we obtain those things will be transformed into something currently unimaginable, just as it has with food over the last few millennia.
lyonesse
Jul. 4th, 2014 08:19 pm (UTC)
it's hard to argue with "the future will be unimaginable"....

that said, this conversation is a social interaction depending on a quite novel infrastructure. does it count as the kind of thing you're talking about?
dpolicar
Jul. 4th, 2014 08:34 pm (UTC)
(nods) Yes. And in fact, I think the Internet as a social delivery mechanism has changed a lot of our social patterns, mostly for the better, though on a much smaller scale than I'm ultimately anticipating.
lyonesse
Jul. 5th, 2014 01:43 am (UTC)
ok. what scale do you anticipate, besides "unimaginable"? :)
dpolicar
Jul. 5th, 2014 02:16 am (UTC)
A point where most organizations (companies, social clubs, governments) would no more dream of having their organizational decisions (e.g. hiring, firing, setting policy) made by unaided humans than they would having their shipping done that way.
lyonesse
Jul. 5th, 2014 03:13 am (UTC)
huh. i had the impression that, like everybody else, hr and management make extensive use of computerized tools. do you believe otherwise?
dpolicar
Jul. 5th, 2014 03:18 am (UTC)
Of course I don't believe that HR and management don't use computers.
lyonesse
Jul. 5th, 2014 03:20 am (UTC)
indeed. so what do you expect to change about their practices?

(sorry if this is poky. i am very interested in the idea, and would like to understand you well, though i think my assumptions are very different from yours.)
dpolicar
Jul. 5th, 2014 03:59 am (UTC)
It's hard to speculate in detail about technological developments we haven't seen yet; any given detail is likely to be wrong.

But that said...

I expect to get to a place where I can subscribe to email notifications of jobs I might prefer to my current job with confidence that all the notifications I receive will be for jobs that are at least worth looking into, and that my company runs software that routinely evaluates the likelihood that I'll receive such an offer and recommends whether to provide me with incentives to encourage me to stay, and what incentives to offer. (And, conversely, software that routinely evaluates whether I should be fired.)

I expect to get to a place where instead of a human executive using computer programs to see a bunch of data about my company and other companies and then deciding (at least nominally based on that data) whether to lay off 30% of the QA department or to hire three new QA people, a program running on the same computer looks at the same data and projected development output and mid- to long-term corporate goals and makes a QA staffing recommendation.

I expect to get to a place where instead of humans drawing up voting district maps based on various goals they're optimizing for, which mostly aren't ever explicitly articulated, we instead tell a computer program what goals we want our voting districts optimized for and it generates district boundaries. (I expect that one to be enormously controversial, because many of us will want to continue to keep those goals covert.)

I expect to get to a place where legal software can construct a detailed model of a client's legal situation and goals (through a combination of analyzing public data about the client, private data the client makes available, and statistical/aggregate data about people in general) and compare that model to a library of representations of existing laws in order to recommend a legal strategy to achieve those goals.

I expect to get to a place where Google Calendar suggests that I make dinner plans on July 16th with a friend of mine I haven't seen in a while, because both of us have a free evening then and we both like Chinese food and we both want to see each other a few times a year and we're both going to be in Arlington that afternoon and there's a Chinese restaurant in Arlington we would probably both like, and we should meet a mutual friend for pastry afterwards at a coffee shop a few blocks away who can't join us for dinner but would probably want to see us both.
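A toy sketch of the kind of inference that calendar scenario imagines: intersect two friends' free evenings with shared tastes and shared locations to propose a dinner. All names, dates, and data structures here are invented for illustration, not a description of any real product.

```python
from datetime import date

# Hypothetical availability and preference data for two friends.
free_evenings = {
    "me":     {date(2014, 7, 14), date(2014, 7, 16), date(2014, 7, 20)},
    "friend": {date(2014, 7, 16), date(2014, 7, 18)},
}
cuisines = {"me": {"chinese", "thai"}, "friend": {"chinese", "indian"}}
in_arlington_on = {"me": {date(2014, 7, 16)}, "friend": {date(2014, 7, 16)}}

def suggest_dinner(a, b):
    """Propose (day, cuisine) pairs both parties are free for and would enjoy."""
    days = (free_evenings[a] & free_evenings[b]
            & in_arlington_on[a] & in_arlington_on[b])
    food = cuisines[a] & cuisines[b]
    return sorted((d, c) for d in days for c in food)

# With the toy data above, this proposes Chinese on July 16th.
```

The real version would of course need to infer these sets from messy behavioral data rather than have them handed over, which is where the hard part lives.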

I expect to get to a place where 90% of the decisions and interventions currently made by ombudsmen can instead be made by software agents operated by private citizens.

Does any of that help?
firstfrost
Jul. 5th, 2014 03:20 pm (UTC)
I've been watching Google Now on my phone, as Google attempts to take another step into this space, and it's kind of funny, like a toddler deducing simplistic rules about how the world works. Some of it is surprisingly clever (You were moving fast, now you are moving slow, so I am going to save this GPS location as "where you parked your car" and provide navigation as to how to get back to it), and some of it is never what I want (I'm going to start telling you how long it will take you to drive to the place you just searched for, because I assume you want to go there RIGHT NOW).

I am also always disappointed by targeted ads on most web sites. I leave a trail of cookies everywhere I go, and while I am sure companies are devoting tons of money and effort to figuring out what to advertise to me, it still hasn't gotten much beyond "You're female so you probably want diet pills" and "You just bought a lamp, so I assume you want to buy five more lamps immediately".

So... baby steps. :)

dpolicar
Jul. 5th, 2014 04:41 pm (UTC)
Yes, precisely. We want to do this, but we're very bad at it.

That said, we've got pretty decent algorithms for doing it that work well in toy universes, but they require an enormous amount of raw data structured in very particular ways in order to be useful in the real world.

I suspect we're going to make zero progress on this for a long time until someone works out a reliable way to automatically parse and structure available public data in an algorithm-friendly format, at which point everything will change radically in approximately 23 seconds.
sylvanstargazer
Jul. 4th, 2014 09:15 pm (UTC)
To be fair, not all cultures reliably defect on the Prisoner's Dilemma. We have, at times when we weren't market-cyborgs who refuse to gossip, successfully implemented zero-determinant strategies.
muffyjo
Jul. 6th, 2014 11:04 pm (UTC)
I'm not sure I can agree with this. When were humans (and in which culture) ever not gossips? Passing along information about each other is largely what societies are about. Who agrees on what, who thinks what is a good idea and who needs help or is being disruptive are all bits of information we have shared since we started working together in teams larger than two.

And I'm not clear what "market-cyborg" means? Help?
sylvanstargazer
Jul. 7th, 2014 09:06 am (UTC)
By "refuse to gossip" I don't mean that gossip never happens, but that it is extremely discouraged. It is discouraged by libel laws, by disdain and blowback if one "names names", and by the labeling of gossip as a "feminine" communication style in a patriarchal society. People's reputations often do not extend outside of small circles, much smaller than the economic circles in which they run.

The idea behind "market-cyborg" is that markets are a technology we developed for the distribution of goods and services, but we've since normalized and internalized that technology to the point where it seems inevitable. In American undergraduates, in particular, this shapes how they respond to psychological game theory experiments like the Prisoner's Dilemma.
muffyjo
Jul. 8th, 2014 01:15 am (UTC)
Still not understanding. Maybe it's a nuance I'm missing. Here's how I understand it to work:

Prisoner's dilemma:
Scenario 1: Prisoner A does not trust B NOT to talk so tells on Prisoner B. A goes free, B goes to jail (or is otherwise punished).
Scenario 2: Prisoner B does not trust A NOT to talk so tells on Prisoner A. B goes free, A goes to jail (or is otherwise punished).
Scenario 3: Both trust each other. No one goes to jail, there is no evidence.

It is by trusting one another that both people win.

All cultures that I know of default to 1 or 2 and have not, to date, lasted when trying 3. Which is not to say 3 doesn't work, just that we lose trust very easily, get greedy, want power, etc.

Edited at 2014-07-08 01:16 am (UTC)
sylvanstargazer
Jul. 8th, 2014 11:57 am (UTC)
What is your citation for no cultures trying 3? I'm referring to behavioral psychological tests of the Prisoner's Dilemma, which routinely find that people cooperate, probably because, as in most social animals, we evolved beneficial non-"rational" behavior. For example, actual prisoners did very well, cooperating 56% of the time: http://www.sciencedirect.com/science/article/pii/S0167268113001522

And of course, in the indefinitely repeated game defection is no longer optimal, since people may retaliate. That is where "tit for tat" and other zero-determinant strategies come in, where you base your behavior on the other person's reputation, providing an incentive for them to develop a positive reputation, and as a result everyone is better off. That is why gossip and the ability to name names makes group cooperation more efficient.
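That repeated-game point can be sketched in a few lines: "tit for tat" (cooperate first, then mirror the opponent's last move) locks two players into mutual cooperation, far outscoring the mutual-defection rut. The payoff values are the conventional 3/0/5/1, chosen for illustration.

```python
def payoff(mine, theirs):
    """Per-round payoff to 'mine' in the standard Prisoner's Dilemma."""
    return {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}[(mine, theirs)]

def tit_for_tat(history):
    # Cooperate on the first move, then mirror the opponent's last move.
    return "C" if not history else history[-1]

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    """Run an iterated game; each history records the *opponent's* moves."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(hist_a), strategy_b(hist_b)
        score_a += payoff(a, b)
        score_b += payoff(b, a)
        hist_a.append(b)
        hist_b.append(a)
    return score_a, score_b

# Two tit-for-tat players earn 3 points every round (300 over 100 rounds),
# while two unconditional defectors grind out 1 point per round (100).
```

(Strictly, tit for tat is older than the zero-determinant family Press and Dyson described, but it illustrates the same reputation-and-retaliation logic the comment is pointing at.)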

Edited at 2014-07-08 11:58 am (UTC)
muffyjo
Jul. 8th, 2014 12:43 pm (UTC)
You misquote me; I did not say they didn't try. I specifically said they don't last when trying 3. 56% is not a success rate of a culture; it's a success rate of a subgroup. It's slightly more than average. That's a rather pessimistic view on my part.

But I also think you and I may be closer in beliefs because I am usually quite the optimist. And yes, I would not rather play thermonuclear warfare, a nice game of tic-tac-toe is much better. :)

Gossip is a two way street both to gain and to hurt trust. Having been at both ends of it, it has proven unreliable. However, I do think it's the best option we have available and sorting the rumors from truth will hopefully become easier over time. Analyzing data more effectively will become easier. Wikipedia will become more trustWORTHY because the sources of its gossip will become more educated. Things like that.

And now we are back to our robot overlords. :)
dr_tectonic
Jul. 4th, 2014 10:23 pm (UTC)
Likebutton.
dpolicar
Jul. 5th, 2014 04:44 pm (UTC)
Yeah, it's funny how quickly my brain has adapted to the +1 paradigm.

I want it everywhere now, the same way I want everything to be a control surface and all information to hyperlink to related information and all event-managers to support saved checkpoints and undo.

The real world, in particular, is remarkably stubborn about all that.
dr_tectonic
Jul. 5th, 2014 05:31 pm (UTC)
What I really want is grep for the physical world...
dpolicar
Jul. 5th, 2014 05:53 pm (UTC)
Hm.

I'm not sure I know what you mean. Or, rather, all the interpretations of that I can think of simply amount to establishing reliable data-structure representations of the physical world, which I think is something we're well along the path to doing.

Can you give me some use cases?
dr_tectonic
Jul. 5th, 2014 06:21 pm (UTC)
Oh, mostly just "where the hell is Object X", which will be well-supported by the final evolution of the system you describe.

Being able to remotely check how many Object Ys are in the house when I'm off at the store would also be useful.

(As well as being able to check which stores *actually* have an Object Y that I can purchase in stock, as opposed to their inventory system lying about it, which is where we need a lot more work on the 'reliable' element of the system...)
dpolicar
Sep. 25th, 2014 01:05 pm (UTC)
Gotcha. Yeah.

Actually, I suspect that well before this process is completed folks like us will simply be recording our preferences about what is in our pantry/fridge on an ongoing basis, and once a week or so will be automatically reminded to stop at the grocery store on our way home to pick up our order, and we won't ever pay any attention to how many Ys we have in the house.
dr_tectonic
Sep. 25th, 2014 03:21 pm (UTC)
So did you actually respond to this today, or did LJ just manage to delay posting the comment for 10 weeks?

Either way, I can sort of see it, but I'm not totally convinced. I think it may imply more kitchen storage capacity, stability of preference, and foreknowledge of what I'll want this week than I actually have...
dpolicar
Sep. 25th, 2014 03:33 pm (UTC)
I actually did respond to this today, having somehow missed your reply at the time.

I find for me, anyway, there's kind of three categories of groceries: staples I renew when we're out of them, regular items I buy when I think of it, and "oh this looks interesting."

dr_tectonic
Sep. 25th, 2014 03:51 pm (UTC)
My categories are more like: staples, which I renew when we run low (but hopefully before we're out, for things like coffee and toilet paper); things we're in the mood for this week (but not all the time); stuff I need for something particular (but normally don't use); and dinner-makings (which are dependent on both what's on sale and what I already have on hand and need to use up)...
jim_p
Jul. 5th, 2014 12:44 am (UTC)
Have you read James Blish's "Cities In Flight"? Not to be too spoilery, but one of the elements of the society he envisions is the so-called "City Fathers". These are huge installations of reconfigurable computers that are used to coordinate municipal activities...
chocorua
Jul. 5th, 2014 03:08 pm (UTC)
And though it isn't talked about much, Blish's City Fathers have the ability to veto (and depose and execute) the human mayor.
dpolicar
Jul. 5th, 2014 05:53 pm (UTC)
I haven't. *Adds to list.*
Nat Case
Jul. 5th, 2014 02:03 pm (UTC)
We just won't call it "social"
Your comment in the comments thread about agriculture got me thinking... It seems to me that social mechanisms will not be replaced by machine-versions, but will be supplemented by and gradually made redundant by something entirely new. That's how agriculture started: not as a way to say "the heck with this hunting and gathering crap" and replace it, but as a supplementary, additional, new way of doing things, which gradually supplanted hunting and gathering... in part because the population density agriculture permitted also pretty much decimated huntable-and-gatherable supplies within walking distance of settlements.

So I think this is already happening. Social networks and mechanisms used to be about who you could hear and see. As communications technology has allowed us to form and increasingly maintain networks over large distances, our inborn social regulatory tools (e.g. sitting down and having a cuppa and working things out) are being replaced by electronic communities. At first, these were dominated by mass communication and social-order tools, but increasingly they are spreading through things like social media, which are more fluid and better fitted to human minds. And things will continue to evolve. But I've certainly been finding that the kind of opinion-forming, idea-generating stuff that happens when someone I've never actually met (like you, David) is able to place material into this new geography-is-irrelevant social network is incredibly liberating and frankly almost as good as sitting around with whoever happens to be at hand. Almost, but not quite. Because breath is still more powerful to us than text. But we'll be working on that next...
dpolicar
Jul. 5th, 2014 04:56 pm (UTC)
Re: We just won't call it "social"
Oh hey, FB-comment-on-LJ works! Yay for bleeding across social network boundaries.

So, I basically agree with you on all counts.

That said, I also think that the difference between "replaced" and "supplemented" is only meaningful at a level of precision I'm not really ready to operate at.

I mean, if I ask whether automated vehicles replaced natural human locomotion over the last couple of centuries, my answer is "Yes, of course! Also, not at all." But more specifically, if I consider three use cases:
1. Transport 50 pounds over 10 miles.
2. Transport 200 pounds over 100 miles.
3. Transport 5 pounds over 100 feet.

...it seems clear to me that automated vehicles have mostly replaced natural human locomotion for UC1 (though in some contexts like camping and military training humans still do this), have completely replaced it for UC2 (with a few special-case exceptions), and have utterly failed to replace it for UC3 (with a few special-case exceptions).

I expect that in a century if we ask whether automated tools have replaced natural human socialization, the answer will similarly be "Yes, of course! Also, not at all." But trying to make a confident prediction about how that will break down in more detail is beyond my current understanding.
aaminoff
Sep. 25th, 2014 12:28 pm (UTC)
I just posted to cpsparents (Yahoo group for Cambridge Public School parents) once again my proposal that the Family Resource Center can be entirely replaced by a software program written by an MIT undergrad over a summer. The FRC is in charge of Cambridge's School Choice plan, where you submit your top 3 choices of schools you want your kids to go to, and then if you don't get any of them they are randomly assigned to a school across town that does not meet their needs, and they refuse to tell you what position you are on the wait list or how many slots are open at which other schools.

The fact is that I am still (I find) very angry at the City of Cambridge for forcing us to move to Beverly AND pay private school tuition when there was a school that would meet Annelise's needs right in Cambridge.
dpolicar
Sep. 25th, 2014 12:59 pm (UTC)
That sucks... I'm sorry.
