Argument regarding the three freedoms humans gave up for agriculture, cities, and “civilization”

I recently finished reading The Dawn of Everything: A New History of Humanity by anthropologists David Graeber and David Wengrow. I highly recommend the book for its argument about how evidence from recent decades disrupts the common idea that human societies progressed in a straight line from hunting and gathering to agriculture, cities, and “civilization.” The reason I put civilization in quotes has to do with the argument they make regarding the freedoms humans used to have: the freedom to move away, the freedom to disobey, and the freedom to create or transform social relationships.

If we do not have these freedoms today, what went wrong? The argument and the evidence are worth considering.

Three possible responses to the finding that human behavior is complicated

A review of a new book includes a paragraph (the second one excerpted below) that serves as a good reminder for those interested in human behavior:

What happens in brains and bodies at the moment humans engage in violence with other humans? That is the subject of Stanford University neurobiologist and primatologist Robert M. Sapolsky’s Behave: The Biology of Humans at Our Best and Worst. The book is Sapolsky’s magnum opus, not just in length, scope (nearly every aspect of the human condition is considered), and depth (thousands of references document decades of research by Sapolsky and many others) but also in importance as the acclaimed scientist integrates numerous disciplines to explain both our inner demons and our better angels. It is a magnificent culmination of integrative thinking, on par with similar authoritative works, such as Jared Diamond’s Guns, Germs, and Steel and Steven Pinker’s The Better Angels of Our Nature. Its length and detail are daunting, but Sapolsky’s engaging style—honed through decades of writing editorials, review essays, and columns for The Wall Street Journal, as well as popular science books (Why Zebras Don’t Get Ulcers, A Primate’s Memoir)—carries the reader effortlessly from one subject to the next. The work is a monumental contribution to the scientific understanding of human behavior that belongs on every bookshelf and many a course syllabus.

Sapolsky begins with a particular behavioral act, and then works backward to explain it chapter by chapter: one second before, seconds to minutes before, hours to days before, days to months before, and so on back through adolescence, the crib, the womb, and ultimately centuries and millennia in the past, all the way to our evolutionary ancestors and the origin of our moral emotions. He gets deep into the weeds of all the mitigating factors at work at every level of analysis, which is multilayered, not just chronologically but categorically. Or more to the point, uncategorically, for one of Sapolsky’s key insights to understanding human action is that the moment you proffer X as a cause—neurons, neurotransmitters, hormones, brain-specific transcription factors, epigenetic effects, gene transposition during neurogenesis, dopamine D4 receptor gene variants, the prenatal environment, the postnatal environment, teachers, mentors, peers, socioeconomic status, society, culture—it triggers a cascade of links to all such intervening variables. None acts in isolation. Nearly every trait or behavior he considers results in a definitive conclusion, “It’s complicated.”

To adapt sociologist Joel Best’s approach to statistics in Damned Lies and Statistics, I suggest there are three broad approaches to understanding human behavior:

1. The naive. This approach assumes human behavior is simple and explainable. We just need the right key to unlock behavior (whether that key is a religious text, a single scientific cause, or a strongly held personal preference).

2. The cynical. Human behavior is so complicated that we can never understand it. Why bother trying?

3. The critical. As Best suggests, this is an informed approach that knows how to ask the right questions. To the naive reductionist, it might ask whether there are other factors to consider. To the cynical, it might say that just because behavior is really complicated doesn’t mean we can’t find patterns. Causation is often difficult to determine in the natural and social sciences, but this does not mean we cannot identify bundles of factors or recurring processes. The key here is recognizing when people are making reasonable arguments about human behavior: when do their claims go too far, and when are they missing something?

More on the reduced geographic mobility of Americans

A new book from economist Tyler Cowen discusses how the geographic mobility of Americans has declined:

Nowadays, moving from one state to another has dropped 51 percent from its average in the postwar years, and that number has been decreasing for more than 30 years. Black Americans, once especially adventurous, are now especially immobile. A survey of blacks born between 1952 and 1982 found that 69 percent had remained in the same county and 82 percent stayed in the same state where they were born…

One reason people don’t move where the jobs are is because of real-estate prices — which in turn are kept at high levels by regulatory restrictions and NIMBY-ism. In New York City in the 1950s a typical apartment rented for $60 a month, or $530 today if you adjust for inflation. Two researchers found that if you reduced regulations for building new homes in places like New York and San Francisco to the median level, the resulting expanded workforce would increase US GDP by $1.7 trillion. That won’t happen, though: More homes would diminish the property values of existing homeowners.

That locked-in syndrome is a factor in economic stagnation, too: A recent Wells Fargo survey found that white-collar office productivity growth was zero. As the economy was supposedly recovering from the financial crisis, from 2009 to 2014, American median wages fell 4 percent. Men’s median incomes today are actually below 1969 levels. Had we retained our pre-1973 rates of productivity growth, the typical household would earn about $30,000 a year more than it does.

Despite all the hype attached to a few tech companies, far fewer companies are being formed than in the 1980s, and fewer Americans are working for startups. Such new companies are linked with rapid job creation. We’re coming close, Cowen says, to realizing the 1950s cliche (not really true then) of everyone clinging to a job at a handful of huge, soul-crushing companies.
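As a quick aside, the inflation adjustment in the excerpt is simple to reproduce. Here is a minimal sketch, where the CPI index values are rough annual averages I am assuming for illustration (the reviewer’s exact base year and index are not given):

```python
# Reproduce the excerpt's inflation adjustment: multiply the old rent by the
# ratio of price indexes. The CPI values below are approximate annual averages
# assumed for illustration, not figures from the excerpt.
CPI_MID_1950S = 26.8   # rough CPI-U, mid-1950s
CPI_2017 = 245.0       # rough CPI-U around the book's publication

rent_1950s = 60.0      # dollars per month, from the excerpt
rent_today = rent_1950s * (CPI_2017 / CPI_MID_1950S)

print(f"${rent_1950s:.0f} then is roughly ${rent_today:.0f} per month today")
# Lands in the same ballpark as the excerpt's ~$530; the exact figure depends
# on which years and which price index the reviewer actually used.
```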

As I’ve seen a number of stories about declining mobility (see earlier posts here, here, and here), I wonder if the period between the early 1900s and the 1960s was simply unusual. The American economy was doing well (except during the Great Depression and the World Wars), and other factors, including legal segregation in the South, drove mobility. What if more limited mobility is “normal” outside of unusual time periods? Should we expect Americans to be willing to pick up and move just because there may be a job or an opportunity elsewhere? I would guess humans default toward less geographic mobility because moving limits the ability to develop community ties. In fact, it is only in recent centuries that much of the population has even had the opportunity to travel or move large distances from where they were born. Perhaps the real question is what would lead people (whether in the United States or elsewhere) to move significant distances.

“Cities: How Crowded Life is Changing Us”

Here are some insights into how the large concentrations of people in major cities could be changing human beings:

The sheer concentration of people attracted by the urban lifestyle means that cosmopolitan cities like New York are host to people speaking more than 800 different languages – thought to be the highest language density in the world. In London, less than half of the population is made of white Britons – down from 58% a decade ago. Meanwhile, languages around the world are declining at a faster rate than ever – one of the 7,000 global tongues dies every two weeks.

It is having an effect not just culturally, but biologically: urban melting pots are genetically altering humans. The spread of genetic diversity can be traced back to the invention of the bicycle, according to geneticist Steve Jones, which encouraged the intermarriage of people between villages and towns. But the urbanisation occurring now is generating unprecedented mixing. As a result, humans are now more genetically similar than at any time in the last 100,000 years, Jones says.

The genetic and cultural melange does a lot to erode the barriers between races, as well as leading to novel works of art, science and music that draw on many perspectives. And the tight concentration of people in a city also leads to other tolerances and practices, many of which are less common in other human habitats (like the village) or in other species. For example, people in a metropolis are generally freer to practice different religions or none, to be openly gay, for women to work and to voluntarily limit their family size despite – or indeed because of – access to greater resources.

The biggest takeaway from this in my mind is the reminder that the megacities of today are relatively recent in the scale of human history. Outside of the last 150 years or so, only at a few points in human history did even one or two cities reach a million people. Cities have been very influential throughout history, whether in Rome, Constantinople, Baghdad, or elsewhere, but today’s scale and rate of growth are astounding.

I also wonder whether the full effects of these kinds of changes won’t really be known for a couple of hundred years, when we can look back and see how the changes in cities that started in the 1800s altered human life. At the same time, plenty of learned people have noted the changes that started taking place in European life in particular in the late 1700s and early 1800s, from the Enlightenment to the Industrial Revolution. The more I have thought about it, the more I’ve become convinced that sociology’s origins are intimately tied to these changes in urban life.

Argument: Big Data reduces humans to something less than human

One commentator suggests Big Data can’t quite capture what makes humans human:

I have been browsing in the literature on “sentiment analysis,” a branch of digital analytics that—in the words of a scientific paper—“seeks to identify the viewpoint(s) underlying a text span.” This is accomplished by mechanically identifying the words in a proposition that originate in “subjectivity,” and thereby obtaining an accurate understanding of the feelings and the preferences that animate the utterance. This finding can then be tabulated and integrated with similar findings, with millions of them, so that a vast repository of information about inwardness can be created: the Big Data of the Heart. The purpose of this accumulated information is to detect patterns that will enable prediction: a world with uncertainty steadily decreasing to zero, as if that is a dream and not a nightmare. I found a scientific paper that even provided a mathematical model for grief, which it bizarrely defined as “dissatisfaction.” It called its discovery the Good Grief Algorithm.

The mathematization of subjectivity will founder upon the resplendent fact that we are ambiguous beings. We frequently have mixed feelings, and are divided against ourselves. We use different words to communicate similar thoughts, but those words are not synonyms. Though we dream of exactitude and transparency, our meanings are often approximate and obscure. What algorithm will capture “the feel of not to feel it, / when there is none to heal it,” or “half in love with easeful Death”? How will the sentiment analysis of those words advance the comprehension of bleak emotions? (In my safari into sentiment analysis I found some recognition of the problem of ambiguity, but it was treated as merely a technical obstacle.) We are also self-interpreting beings—that is, we deceive ourselves and each other. We even lie. It is true that we make choices, and translate our feelings into actions; but a choice is often a coarse and inadequate translation of a feeling, and a full picture of our inner states cannot always be inferred from it. I have never voted wholeheartedly in a general election.

For the purpose of the outcome of an election, of course, it does not matter that I vote complicatedly. All that matters is that I vote. The same is true of what I buy. A business does not want my heart; it wants my money. Its interest in my heart is owed to its interest in my money. (For business, dissatisfaction is grief.) It will come as no surprise that the most common application of the datafication of subjectivity is to commerce, in which I include politics. Again and again in the scholarly papers on sentiment analysis the examples given are restaurant reviews and movie reviews. This is fine: the study of the consumer is one of capitalism’s oldest techniques. But it is not fine that the consumer is mistaken for the entirety of the person. Mayer-Schönberger and Cukier exult that “datafication is a mental outlook that may penetrate all areas of life.” This is the revolution: the Rotten Tomatoes view of life. “Datafication represents an essential enrichment in human comprehension.” It is this inflated claim that gives offense. It would be more proper to say that datafication represents an essential enrichment in human marketing. But marketing is hardly the supreme or most consequential human activity. Subjectivity is not most fully achieved in shopping. Or is it, in our wired consumerist satyricon?

“With the help of big data,” Mayer-Schönberger and Cukier continue, “we will no longer regard our world as a string of happenings that we explain as natural and social phenomena, but as a universe comprised essentially of information.” An improvement! Can anyone seriously accept that information is the essence of the world? Of our world, perhaps; but we are making this world, and acquiescing in its making. The religion of information is another superstition, another distorting totalism, another counterfeit deliverance. In some ways the technology is transforming us into brilliant fools. In the riot of words and numbers in which we live so smartly and so articulately, in the comprehensively quantified existence in which we presume to believe that eventually we will know everything, in the expanding universe of prediction in which hope and longing will come to seem obsolete and merely ignorant, we are renouncing some of the primary human experiences. We are certainly renouncing the inexpressible. The other day I was listening to Mahler in my library. When I caught sight of the computer on the table, it looked small.

I think there are a couple of possible arguments about the limitations of big data, and Wieseltier is making a particular one. He does not appear to be saying that big data can’t predict or model human complexity. Fans of big data would probably say the biggest issue is that we simply don’t have enough data yet and that we are developing better and better models; in other words, our abilities and data will eventually catch up to the problem of complexity. But I think Wieseltier is arguing something else: he, along with many others, does not want humans to be reduced to information. Even if we had the best models, it is one thing to see people as complex individuals and quite another to say they are simply another piece of information. Doing the latter takes away people’s dignity. Reducing people to data means we stop seeing them as people who can change their minds, be creative, and confound predictions.

It will be interesting to see how this plays out in the coming years. I think this is the same fear many people have about statistics. Particularly in our modern world, where we see ourselves as sovereign individuals, describing statistical trends to people strikes them as reducing their agency and negating their experiences. Of course, this is not what statistics is about, and it is a misperception that more training in statistics could help change. But how we talk about data and its uses might go a long way toward shaping how big data is viewed in the future.
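For readers who have not encountered it, here is a minimal sketch of the kind of lexicon-based “sentiment analysis” Wieseltier describes, where subjective words in a text span are mechanically identified and scored. The word list and weights below are invented purely for illustration; real systems use large, empirically derived lexicons or trained models.

```python
# Toy lexicon-based sentiment scorer: mechanically flag "subjective" words and
# sum their weights. The lexicon and weights are made up for illustration.
SENTIMENT_LEXICON = {
    "love": 2.0, "wonderful": 1.5, "good": 1.0,
    "bad": -1.0, "terrible": -1.5, "hate": -2.0,
}

def score_sentiment(text: str) -> float:
    """Return the summed weight of known subjective words (0.0 = no signal)."""
    words = (w.strip(".,!?") for w in text.lower().split())
    return sum(SENTIMENT_LEXICON.get(w, 0.0) for w in words)

reviews = [
    "I love this restaurant, the food is wonderful",
    "Terrible service, I hate waiting an hour for a bad meal",
    "I am half in love with easeful Death",
]
for review in reviews:
    print(f"{score_sentiment(review):+.1f}  {review}")
```

Even this caricature makes Wieseltier’s point visible: the Keats line comes out as positive because irony, ambiguity, and mixed feelings all collapse into a single number.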

Claim: 90% of information ever created by humans was created in the last two years

An article on big data makes a claim about how much information humans have created in the last two years:

In the last two years, humans have created 90% of all information ever created by our species. If our data output used to be a sprinkler, it is now a firehose that’s only getting stronger, and it is revealing information about our relationships, health, and undiscovered trends in society that are just beginning to be understood.
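Taken at face value, that 90 percent figure implies an astonishing growth rate. Here is a quick back-of-the-envelope check, assuming (my simplification, not the article’s) that cumulative recorded information grows smoothly and exponentially:

```python
import math

# If cumulative data C(t) grows exponentially at rate r, the share created in
# the last `window` years is 1 - exp(-r * window). Solve for r given the claim.
fraction_recent = 0.90  # "90% of all information ever created"
window_years = 2.0      # "in the last two years"

growth_rate = -math.log(1 - fraction_recent) / window_years   # ~1.15 per year
doubling_time_months = 12 * math.log(2) / growth_rate         # ~7.2 months

print(f"implied growth rate: {growth_rate:.2f} per year")
print(f"implied doubling time: about {doubling_time_months:.0f} months")
```

In other words, under that simplifying assumption the claim amounts to saying the total stock of recorded information roughly doubles every seven months, which helps explain the firehose metaphor.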

This is quite a bit of data. But a few points in response:

1. I assume this refers only to recorded data. While there are more people on earth than ever before, humans have been expressive creatures for a long time; most of what they produced was simply never captured or stored.

2. This article could be interpreted by some to mean that we need to pay more attention to online privacy, but I would guess much of this information is volunteered. Think of Facebook: users voluntarily submit information that their friends and Facebook can access. Or blogs: people voluntarily put together content.

3. This claim also suggests we need better ways to sort through and make sense of all this data. How can the average Internet user put all this data together in a meaningful way? We are simply awash in information, and I wonder how many people, particularly younger people, know how to make sense of all that is out there.

4. Of course, having all of this information out there doesn’t necessarily mean it is meaningful or worthwhile.

Thinking about Americans losing the ability to work with their hands

A New York Times essay argues we are losing something as Americans because fewer people can work skillfully with their hands:

“In an earlier generation, we lost our connection to the land, and now we are losing our connection to the machinery we depend on,” says Michael Hout, a sociologist at the University of California, Berkeley. “People who work with their hands,” he went on, “are doing things today that we call service jobs, in restaurants and laundries, or in medical technology and the like.”

That’s one explanation for the decline in traditional craftsmanship. Lack of interest is another. The big money is in fields like finance. Starting in the 1980s, skill in finance grew in stature, and, as depicted in the news media and the movies, became a more appealing source of income…

Craft work has higher status in nations like Germany, which invests in apprenticeship programs for high school students. “Corporations in Germany realized that there was an interest to be served economically and patriotically in building up a skilled labor force at home; we never had that ethos,” says Richard Sennett, a New York University sociologist who has written about the connection of craft and culture…

As for craftsmanship itself, the issue is how to preserve it as a valued skill in the general population. Ms. Milkman, the sociologist, argues that American craftsmanship isn’t disappearing as quickly as some would argue — that it has instead shifted to immigrants. “Pride in craft, it is alive in the immigrant world,” she says.

I don’t doubt that craftsmanship is worthwhile, particularly if one is a homeowner. But I wonder about the larger value of working with one’s hands. Why can’t using a mouse or a controller be considered “working with one’s hands”? Of course, it fits in a literal sense, but the production and skills involved are different. Yet it still requires effort and finesse to use the newest machines effectively. Perhaps we have swapped our traditional toolbox for a “digital toolbox.”

If the world is moving toward an information and service economy, is this necessarily bad? This reminds me of a piece in The Atlantic months ago about a contest where programmers had to try to put together a computer that could converse like a human. Working with tools is not uniquely human but thinking and reasoning might be. Does this make working with our hands less valuable compared to other possible activities?

Sociologist considers “Humanity 2.0”

Steve Fuller, a sociologist who holds the “Auguste Comte chair in social epistemology” in Warwick University’s Department of Sociology, discusses his new book, Humanity 2.0. In my opinion, here is the most interesting part of the interview:

Let’s put it this way: we’ve always been heading towards a pretty strong sense of Humanity 2.0. The history of science and technology, especially in the west, has been about remaking the world in our collective “image and likeness”, to recall the biblical phrase. This means making the world more accessible and usable by us. Consider the history of agriculture, especially animal and plant breeding. Then move to prosthetic devices such as eyeglasses and telescopes.

More recently, and more mundanely, people are voting with their feet to enter Humanity 2.0 with the time they spend in front of computers, as opposed to having direct contact with physical human beings. In all this, it’s not so much that we’ve been losing our humanity but that it’s becoming projected or distributed across things that lack a human body. In any case, Humanity 2.0 is less about the power of new technologies than a state of mind in which we see our lives fulfilled in such things.

Wouldn’t someone like Archimedes describe us as Humanity 3.0 compared to his era?

Yes, Archimedes would probably see us as pretty exotic creatures. He would already be impressed by what we take for granted as Humanity 1.0, since the Greeks generally believed that “humanity” was an elite prospect for ordinary Homo sapiens, requiring the right character and training. Moreover, he would be surprised – if not puzzled – that we appear to think of science and technology as some long-term collective project of self-improvement – “progress” in its strongest sense. While the Greeks gave us many of our fundamental scientific ideas, they did not think of them as a blueprint for upgrading the species. Rather, those ideas were meant either to relieve drudgery or provide high-brow entertainment.

What is considered “normal” for human beings has changed quite a bit over the centuries. This reminds me of something I read months ago about the concept of “normal” in medicine: we tend to focus on the more unusual cases, so we don’t know as much about the possible range of “normal.” When first introduced, many technological changes were not “normal,” but humans adapted. As Fuller suggests, perhaps we need to have a conversation about what is “normal,” how much change we are willing to accept, and how quickly it might be implemented.

Were Archimedes and the Greeks correct to focus more on “character and training” than on scientific progress?

When people talk about these sorts of topics, readers start thinking about things like robots, prosthetics, and computer chip implants and don’t think so much about eyeglasses or common crops. Indeed, the book cover plays off these common stereotypes with its “futuristic” look at a human head. Does this jump to future technology and its potential problems immediately turn some readers off, while a cover that played with “safer” ideas like eyeglasses would attract more readers?