Study suggests cities and farming began more than 40,000 years ago

A recent study suggests cities and farming may have started much earlier than previously thought:

For centuries, archaeologists believed that ancient people couldn’t live in tropical jungles. The environment was simply too harsh and challenging, they thought. As a result, scientists simply didn’t look for clues of ancient civilizations in the tropics. Instead, they turned their attention to the Middle East, where we have ample evidence that hunter-gatherers settled down in farming villages 9,000 years ago during a period dubbed the “Neolithic revolution.” Eventually, these farmers’ offspring built the ziggurats of Mesopotamia and the great pyramids of Egypt. It seemed certain that city life came from these places and spread from there around the world.

But now that story seems increasingly uncertain. In an article published in Nature Plants, Max Planck Institute archaeologist Patrick Roberts and his colleagues explain that cities and farms are far older than we think. Using techniques ranging from genetic sampling of forest ecosystems and isotope analysis of human teeth, to soil analysis and lidar, the researchers have found ample evidence that people at the equator were actively changing the natural world to make it more human-centric.

It all started about 45,000 years ago. At that point, people began burning down vegetation to make room for plant resources and homes. Over the next 35,000 years, the simple practice of burning back forest evolved. People mixed specialized soils for growing plants; they drained swamps for agriculture; they domesticated animals like chickens; and they farmed yam, taro, sweet potato, chili pepper, black pepper, mango, and bananas…

“The tropics demonstrate that where we draw the lines of agriculture and urbanism can be very difficult to determine. Humans were clearly modifying environments and moving even small animals around as early as 20,000 years ago in Melanesia, they were performing the extensive drainage of landscapes at Kuk Swamp to farm yams [and] bananas… From a Middle East/European perspective, there has always been a revolutionary difference (“Neolithic revolution”) between hunter gatherers and farmers, [but] the tropics belie this somewhat.”

Two things strike me:

  1. The article suggests this finding only emerged now because scholars assumed the tropics were not worth examining. This happens more often than researchers want to admit: we explore certain phenomena for certain reasons, and this can blind us to other phenomena or explanations. In a perfect world, there would be enough researchers to cover everything, and research that rules out explanations or shows the absence of a phenomenon would be valued more highly.
  2. That cities and agriculture took a long time to develop does not seem too surprising. The shift to more anchored lives – tied to farming and larger population centers – would have been quite a change. Arguably, the world is still going through this process, with the pace of urbanization increasing tremendously in the last century and nations and cities desperately trying to catch up.

Now that scientists are looking into this matter, hopefully we get a more complete understanding soon.

Good data is foundational to doing good sociological work

I’ve had conversations in recent months with a few colleagues outside the discipline about debates within sociology over the work of ethnographers like Alice Goffman, Matt Desmond, and Sudhir Venkatesh. It is enlightening to hear how outsiders see the disagreements, and it has pushed me to consider more fully how I would explain the issues at hand. What follows is my one-paragraph response to what is at stake:

In the end, what separates the work of sociologists from that of perceptive non-academics or journalists? (An aside: many of my favorite journalists often operate like pop sociologists as they try to explain, not just describe, social phenomena.) To me, it comes down to data and methods. This is why I enjoy teaching both our Statistics course and our Social Research course: undergraduates rarely come into them excited, but the courses are foundational to who sociologists are. What we want is data that is (1) scientific – reliable and valid – and (2) generalizable – allowing us to see patterns across individuals, cases, or settings. I don’t think it is a surprise that the three sociologists under fire above wrote ethnographies, where it is perhaps more difficult to fit the method under a scientific rubric. (I do think it can be done, but it doesn’t always appear that way to outsiders or even to some sociologists.) Sociology is unique both in its methodological pluralism – we do everything from ethnography to historical analysis to statistical models to lab or natural experiments to mass surveys – and in its aim to find causal explanations for phenomena rather than just describe what is happening. Ultimately, if you can’t trust a sociologist’s data, why bother considering their conclusions, and why would you prioritize their explanations over those of an astute person on the street?

Caveats: I know no data is perfect, and sociologists are not in the business of “proving” things; rather, we look for patterns. There is also plenty of disagreement within sociology about these issues. In a perfect world, we would have researchers using different methods to examine the same phenomena and develop a more holistic approach. I also don’t mean to exclude the role of theory in my description above; data has to be interpreted. But if you don’t have good data to start with, the theories are just abstractions.

When software – like Excel – hampers scientific research

Statistical software can be very helpful but it does not automatically guarantee correct analyses:

A team of Australian researchers analyzed nearly 3,600 genetics papers published in a number of leading scientific journals — like Nature, Science and PLoS One. As is common practice in the field, these papers all came with supplementary files containing lists of genes used in the research.

The Australian researchers found that roughly 1 in 5 of these papers included errors in their gene lists that were due to Excel automatically converting gene names to things like calendar dates or random numbers…

Genetics isn’t the only field where a life’s work can potentially be undermined by a spreadsheet error. Harvard economists Carmen Reinhart and Kenneth Rogoff famously made an Excel goof — omitting a few rows of data from a calculation — that caused them to drastically overstate the negative GDP impact of high debt burdens. Researchers in other fields occasionally have to issue retractions after finding Excel errors as well…

For the time being, the only fix for the issue is for researchers and journal editors to remain vigilant when working with their data files. Even better, they could abandon Excel completely in favor of programs and languages that were built for statistical research, like R and Python.

Excel has particular autoformatting issues, but every statistical program has its own way of handling data. Spreadsheets – often formatted with cases in the rows and variables in the columns – do not always read in correctly.
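As an illustration, here is a minimal sketch in Python – the file name is hypothetical and this is not a claim about the original studies’ actual workflows – of reading a gene list so that every column stays plain text rather than being silently reinterpreted:

```python
# Hypothetical example: read a supplementary gene list while keeping every
# column as text, so identifiers such as "SEPT2" are not reinterpreted
# (opened in Excel, "SEPT2" becomes the date 2-Sep).
import pandas as pd

genes = pd.read_csv(
    "gene_list.csv",        # hypothetical supplementary file
    dtype=str,              # treat all columns as text; no type guessing
    keep_default_na=False,  # keep strings like "NA" instead of converting them to missing values
)
print(genes.head())
```

Reading everything as text first and converting specific columns afterward keeps the type decisions explicit and easy to audit.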

Additionally, user error can lead to issues with any sort of statistical software. Different programs may have different quirks, but researchers can do all sorts of weird things, from recoding incorrectly to misreading missing data to misinterpreting results. Data doesn’t analyze itself, and statistical software is just a tool that needs to be used correctly.
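To make that concrete, here is a minimal sketch – the file, column names, and codes are all hypothetical – of declaring missing-data codes and a recode map explicitly rather than trusting a program’s defaults:

```python
# Hypothetical example: spell out missing-value codes and recodes so the
# software's defaults cannot silently change the analysis.
import pandas as pd

df = pd.read_csv(
    "survey.csv",                   # hypothetical data file
    na_values=[-9, -8, "refused"],  # this (assumed) study's missing-value codes
)

# Recode with an explicit map and fail loudly on any unexpected value.
party_map = {1: "Democrat", 2: "Republican", 3: "Independent"}
unexpected = set(df["party_id"].dropna().unique()) - set(party_map)
assert not unexpected, f"Unexpected party_id codes: {unexpected}"
df["party"] = df["party_id"].map(party_map)
```

A few lines like these do not prevent every mistake, but they turn silent assumptions into visible, checkable steps.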

A number of researchers have in recent years called for open data once a paper is published, and this could help others in an academic field spot mistakes. Of course, the best solution is to double-check (at least) the data before review and publication. Yet when you are buried in a quantitative project with dozens of steps of data work and analysis, it can be hard to (1) keep track of everything and (2) closely watch for errors. Perhaps we need independent data review even before publication.

Quick Review: League of Denial

I had a chance this past week to read the book League of Denial and see the PBS documentary by the same name. Some thoughts about the story of the NFL and concussion research (focusing mostly on the book which provides a more detailed narrative):

1. I know some fans are already complaining of “concussion fatigue” but it is hard to think of football the same way after hearing this story. For decades, we have held up players for their toughness and yet it may be ruining their brains.

2. The human story in all of this is quite interesting. This includes some of the former football players who have been driven to the edge by their football-related brain injuries. At the same time, the story among the doctors is also pretty fascinating: the chase for fame, for publications, and for brains to study. Running through the whole book is the question of “who is really doing this research for the right reasons?” Even if the NFL’s research appears to be irrevocably tainted, are the researchers on the other side completely neutral or pure of heart?

3. The whole scientific process is laid out in the book (glossed over more in the documentary)…and I’m not sure how it fares. You have scientists fighting each other to acquire brains. You have peer-reviewed research – supposed to help prevent erroneous findings – that is viewed by many as erroneous from the start. You have scientists fighting for funding, an ongoing battle for all researchers as they must support their work and have their own livelihoods. In the end, consensus seems to be emerging but the book and documentary highlight the messy process it takes to get there.

4. The comparisons of the NFL to Big Tobacco seem compelling: the NFL tried to bury concussion research for a few decades and still doesn’t admit to a long-term impact of concussions on its players. One place where the comparison might break down for the general public (and scientific research could change this in the near future): the worst problems seem to be in long-time NFL players. When exactly does CTE start in the brains of football players? There is some evidence that younger players, in college or high school, might already have CTE, but we need more evidence to be sure. If it is established that kids as young as junior high already have CTE and that CTE comes from regular hits at a young age (not just the big knock-out blows), the link to Big Tobacco might be complete.

5. It is not really part of this story, but I was struck again by how relatively little we know about the brain. Concussion research didn’t really take off until the 1990s, even though football players had been suffering concussions for decades. (One sports area where it had been studied: boxing.) Much of this research is quite new and is a reminder that we humans don’t know as much as we might think.

6. This also provides a big reminder that the NFL is big business. Players seem the most aware of this: they can be cut at any time and an injury outside of their control could end their careers. The league and owners do not come off well here as they try to protect their holdings. The employees – the players – are generally treated badly: paid well if they perform but thrown aside otherwise. This may lead to a “better product” on the field but the human toll is staggering.

7. How opinions about concussions change, among both fans and players, will be fascinating to watch. It will take quite a shift for players to move from the tough-guy image to weighing their futures more carefully. Fans may become more understanding as their favorite players consider what concussions might do to their lives. Will the NFL remain as popular? Hard to say, though I imagine most fans had little problem watching lots of gridiron action this past Saturday and Sunday.

Sociologists = people who look at “boring data compiled during endless research”

If this is how a good portion of the public views what sociologists do, sociologists may be in trouble:

Anthony Campolo is a sociologist by trade, used to looking at boring data compiled during endless research.

Data collection and analysis may not be glamorous but a statement like this suggests sociologists may have some PR issues. Data collection and analysis are often time consuming and even tedious. But, there are reasons for working so hard to get data and do research: so sociologists can make substantiated claims about how the social world works. Without rigorous methods, sociologists would just be settling for interpretation, opinion, or anecdotal evidence. For example, we might be left with stories like that of a homeless man in Austin, Texas who was “testing” which religious groups contributed more money to him. Of course, his one case tells us little to nothing.

Perhaps this opening sentence should look something like this: time spent collecting and analyzing data will pay off in stronger arguments.


Social psychology can move forward by pursuing more replication

Here is an argument that a renewed emphasis on replicating studies will help the field of social psychology move beyond some public issues:

Things aren’t quite as bad as they seem, though. Although Nature’s report was headlined “Disputed results a fresh blow for social psychology,” it scarcely noted that there have been some replications of experiments modelled on Dijksterhuis’s phenomenon. His finding could still turn out to be right, if weaker than first thought. More broadly, social priming is just one thread in the very rich fabric of social psychology. The field will survive, even if social priming turns out to have been overrated or an unfortunate detour.

Even if this one particular line of work is under a shroud, it is important not to lose sight of the fact that many of the old standbys from social psychology have been endlessly replicated, like the Milgram effect—the old study of obedience in which subjects turned up electrical shocks (or what they thought were electrical shocks) all the way to four hundred and fifty volts, apparently causing great pain to their fellow subjects, simply because they’d been asked to do it. Milgram himself replicated the experiment numerous times, in many different populations, with groups of differing backgrounds. It is still robust (in the hands of other researchers) nearly fifty years later. And even today, people are still extending that result; just last week I read about a study in which intrepid experimenters asked whether people might administer electric shocks to robots, under similar circumstances. (Answer: yes.)

More importantly, there is something positive that has come out of the crisis of replicability—something vitally important for all experimental sciences. For years, it was extremely difficult to publish a direct replication, or a failure to replicate an experiment, in a good journal. Throughout my career, and long before it, journals emphasized that new papers have to publish original results; I completely failed to replicate a particular study a few years ago, but at the time didn’t bother to submit it to a journal because I knew few people would be interested. Now, happily, the scientific culture has changed. Since I first mentioned these issues in late December, several leading researchers in psychology have announced major efforts to replicate previous work, and to change the incentives so that scientists can do the right thing without feeling like they are spending time doing something that might not be valued by tenure committees.

The Reproducibility Project, from the Center for Open Science, is now underway, with its first white paper on the psychology and sociology of replication itself. Thanks to Daniel Simons and Bobbie Spellman, the journal Perspectives on Psychological Science is now accepting submissions for a new section of each issue devoted to replicability. The journal Social Psychology is planning a special issue on replications of important results in social psychology, and has already received forty proposals. Other journals in neuroscience and medicine are engaged in similar efforts: my N.Y.U. colleague Todd Gureckis just used Amazon’s Mechanical Turk to replicate a wide range of basic results in cognitive psychology. And just last week, Uri Simonsohn released a paper on coping with the famous file-drawer problem, in which failed studies have historically been underreported.

It would be a good thing if the social sciences were able to be more sure of their findings. Replication could go a long way toward moving the conversation away from headline-grabbing findings based on small Ns and toward more certain results that a broader swath of an academic field can agree on. The goal is to get it right in the long run with evidence about human behaviors and attitudes, not necessarily in the short term.

Even with a renewed emphasis on replication, there might still be some issues:

1. The ability to publish more replication studies would certainly help, but is there enough incentive for researchers, particularly those trying to establish themselves, to pursue replication studies over innovative ideas and areas that gain more attention?

2. What about the number of studies that are conducted with WEIRD (Western, educated, industrialized, rich, and democratic) populations, primarily US undergraduate students? If studies continue to be replicated with skewed samples, is much gained?

Debate over priming effect illustrates need for replication

A review of the literature regarding the priming effect highlights the need in science for replication:

At the same time, psychology has been beset with scandal and doubt. Formerly high-flying researchers like Diederik Stapel, Marc Hauser, and Dirk Smeesters saw their careers implode after allegations that they had cooked their results and managed to slip them past the supposedly watchful eyes of peer reviewers. Psychology isn’t the only field with fakers, but it has its share. Plus there’s the so-called file-drawer problem, that is, the tendency for researchers to publish their singular successes and ignore their multiple failures, making a fluke look like a breakthrough. Fairly or not, social psychologists are perceived to be less rigorous in their methods, generally not replicating their own or one another’s work, instead pressing on toward the next headline-making outcome.

Much of the criticism has been directed at priming. The definitions get dicey here because the term can refer to a range of phenomena, some of which are grounded in decades of solid evidence—like the “anchoring effect,” which happens, for instance, when a store lists a competitor’s inflated price next to its own to make you think you’re getting a bargain. That works. The studies that raise eyebrows are mostly in an area known as behavioral or goal priming, research that demonstrates how subliminal prompts can make you do all manner of crazy things. A warm mug makes you friendlier. The American flag makes you vote Republican. Fast-food logos make you impatient. A small group of skeptical psychologists—let’s call them the Replicators—have been trying to reproduce some of the most popular priming effects in their own labs.

What have they found? Mostly that they can’t get those results. The studies don’t check out. Something is wrong. And because he is undoubtedly the biggest name in the field, the Replicators have paid special attention to John Bargh and the study that started it all.

While some may find this discouraging, it sounds like the scientific process is being followed. A researcher, Bargh, finds something interesting. Others follow up to see if Bargh was right and to try to extend the idea. Debate ensues once a number of studies have been done. Perhaps there is one stage left to finish off in this process: the research community has to look at the accumulated evidence at some point and decide whether the priming effect exists or not. What does the overall weight of the evidence suggest?

For the replication process to work well, a few things need to happen. Researchers need to be willing to repeat the studies of others as well as their own studies. They need to be willing to report both positive and negative findings, regardless of which side of the debate they are on. Journals need to provide space for positive and negative findings. This incremental process will take time and may not lead to big headlines but its steady approach should pay off in the end.