Fewer plane flights, worse weather forecasts, collecting data

The consequences of COVID-19 continue: with fewer commercial airline flights, weather models have less data.

Photo by Ithalu Dominguez on Pexels.com

During their time in the skies, commercial airplanes regularly log a variety of meteorological data, including air temperature, relative humidity, air pressure and wind direction — data that is used to populate weather prediction models…

With less spring meteorological data to work with, forecasting models have produced less accurate predictions, researchers said. Long-term forecasts suffered the most from the lack of meteorological data, according to the latest analysis…

Forecast accuracy suffered the most across the United States, southeast China and Australia, as well as more remote regions like the Sahara Desert, Greenland and Antarctica.

Though Western Europe experienced an 80 to 90 percent drop in flight traffic during the height of the pandemic, weather forecasts in the region remained relatively accurate. Chen suspects the region’s densely-packed network of ground-based weather stations helped forecasters continue to populate models with sufficient amounts of meteorological data.

Models, whether for pandemics or weather, need good input. Better data up front helps researchers adjust models to fit past patterns and predict future outcomes. Without that data, it can be hard to fit models, especially in complex systems like weather.
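As a toy illustration (not a real forecasting model), the link between data volume and model uncertainty can be sketched with the standard error of a fitted trend. All numbers here are invented for illustration:

```python
import math

# Toy illustration: the standard error of a fitted linear trend shrinks
# with the square root of the number of observations, so losing
# observations (fewer aircraft reports) widens the uncertainty around
# any prediction built on that trend.
def slope_std_error(n_obs, noise_sd, x_spread):
    """Approximate standard error of an OLS slope estimate."""
    return noise_sd / (x_spread * math.sqrt(n_obs))

se_normal = slope_std_error(n_obs=1000, noise_sd=2.0, x_spread=5.0)
se_reduced = slope_std_error(n_obs=250, noise_sd=2.0, x_spread=5.0)

# Quartering the data doubles the uncertainty of the estimated trend.
print(se_normal, se_reduced)
```

Quartering the observations doubles the standard error, which is one simple way to see why sparser input degrades a model's predictions.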

As noted above, there are other ways to obtain weather data. Airplanes offered a convenient collection method: thousands of regular flights generated a steady stream of observations. In contrast, constructing ground stations would require more resources in the short term.

Yet any data collector needs to remain flexible. One source of data can disappear, forcing a new approach. A new opportunity might arise that makes switching methods sensible. Or those studying and predicting weather could develop multiple good sources of data that provide options or redundancy amid black swan events.

Few may recognize all of this is happening. Weather forecasts will continue. Behind the scenes, we might even get better weather models in the long run as researchers and meteorologists adjust.

Font sizes, randomly ordered names, and an uncertain Iowa poll

Ahead of the Iowa caucuses yesterday, the Des Moines Register had to cancel its final poll due to problems with administering the survey:

Sources told several news outlets that they figured out the whole problem was due to an issue with font size. Specifically, one operator working at the call center used for the poll enlarged the font size on their computer screen of the script that included candidates’ names and it appears Buttigieg’s name was cut out from the list of options. After every call the list of candidates’ names is reordered randomly so it isn’t clear whether other candidates may have been affected as well but the organizers were not able to figure out whether it was an isolated incident. “We are unable to know how many times this might have happened, because we don’t know how long that monitor was in that setting,” a source told Politico. “Because we do not know for certain—and may not ever be able to know for certain—we don’t have confidence to release the poll.”…

In their official statements announcing the decision to nix the poll, the organizers did not mention the font issue, focusing instead on the need to maintain the integrity of the survey. “Today, a respondent raised an issue with the way the survey was administered, which could have compromised the results of the poll. It appears a candidate’s name was omitted in at least one interview in which the respondent was asked to name their preferred candidate,” Register executive editor Carol Hunter said in a statement. “While this appears to be isolated to one surveyor, we cannot confirm that with certainty. Therefore, the partners made the difficult decision not to move forward with releasing the Iowa Poll.” CNN also issued a statement saying that the decision was made as part of their “aim to uphold the highest standards of survey research.”

This provides some insight into how these polls are conducted. The process can include call centers, randomly ordered names, and a system in place so that the administrators of the poll can feel confident in the results (even as there is always a margin of error). If there is a problem in the system, the reported data may not match the actual opinions of those polled. Will future processes prevent individual callers from changing the font size?
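The per-call randomization described in the report can be sketched in a few lines. The premise that an enlarged font hides one name per call comes from the story; the candidate list, call count, and function names are hypothetical, not the pollster's actual system:

```python
import random

CANDIDATES = ["Biden", "Buttigieg", "Klobuchar", "Sanders", "Warren"]

def script_for_call(candidates, visible_slots):
    """Reshuffle the names for this call, then return only the names
    that fit on screen at the enlarged font size."""
    order = random.sample(candidates, k=len(candidates))
    return order[:visible_slots]

random.seed(42)
calls = 10_000
dropped = {name: 0 for name in CANDIDATES}
for _ in range(calls):
    # Simulate the enlarged font cutting off the last name in the list.
    shown = script_for_call(CANDIDATES, visible_slots=len(CANDIDATES) - 1)
    for name in CANDIDATES:
        if name not in shown:
            dropped[name] += 1

# Because the order is reshuffled every call, each candidate is hidden
# on roughly a fifth of calls, which is why the pollsters could not
# tell which responses, if any, were affected.
print(dropped)
```

The randomization that normally protects against order effects is exactly what made the error impossible to trace after the fact.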

More broadly, a move like this could provide more transparency and ultimately trust regarding political polling. The industry faces a number of challenges. Would revealing this particular issue cause people to wonder how often this happens or reassure them that pollsters are concerned about good data?

At the same time, it appears that the unreported numbers still had an influence:

Indeed, the numbers widely circulating aren’t that different from last month’s edition of the same poll, or some other recent polls. But to other people, both journalists and operatives, milling around the lobby of the Des Moines Marriott Sunday night, the impact had been obvious.

Here are what some reporters told me about how the poll affected their work:

• One reporter for a major newspaper told me they inserted a few paragraphs into a story to anticipate results predicted by the poll.

• A reporter for another major national outlet said they covered an Elizabeth Warren event in part because she looked strong in the secret poll.

• Another outlet had been trying to figure out whether Amy Klobuchar was surging; the poll, which looked similar to other recent polling, steered coverage away from that conclusion.

• “You can’t help it affecting how you’re thinking,” said another reporter.


Surprise! The best suburbs in America are wealthy, educated, and in regions with reasonable costs of living

The Niche 2019 Best Places to Live falls into the same pattern as similar lists: highlighting already well-off communities with a high quality of life. Part of the reason is the methodology:


If this is what Niche, Money, and others want to look for in terms of data and how it is weighted, they are going to consistently churn out lists of similar kinds of communities. The “best” suburbs and small towns in certain regions, those with higher housing prices, will find it hard to make the list. A certain amount of diversity is acceptable, but not too much, and it is tied to social class. In other words, these are lists that might be intended for middle- to upper-class suburbanites who are looking for safe, quiet, and enriching places to live.

So, perhaps instead of calling these the “Best Places to Live,” how about: “Aspirational Places for Middle- to Upper-Class Families”? Or, how about more lists that address hidden gems, communities that wouldn’t make a list like this due to one factor or another but are still great places? Or, how about ones that weight certain factors a lot higher, like “The Best Diverse Suburbs” or “The Best Suburbs for Housing Opportunities”?

Ultimately, these lists tend to reinforce cultural narratives about the places in which Americans most want to live and where the American Dream can be found. No doubt these magazines and sites need to sell copy – there are Americans who want to move to these top suburbs. But, there are also hundreds of other great places to live in the United States that do not always fit the longstanding suburban mold of mostly white, wealthy, educated, and quiet.

Teaching how science and research actually works

As a regular instructor of Statistics and Social Research classes, I took note of this paragraph in a recent profile of Bruno Latour:

Latour believes that if scientists were transparent about how science really functions — as a process in which people, politics, institutions, peer review and so forth all play their parts — they would be in a stronger position to convince people of their claims. Climatologists, he says, must recognize that, as nature’s designated representatives, they have always been political actors, and that they are now combatants in a war whose outcome will have planetary ramifications. We would be in a much better situation, he has told scientists, if they stopped pretending that “the others” — the climate-change deniers — “are the ones engaged in politics and that you are engaged ‘only in science.’ ” In certain respects, new efforts like the March for Science, which has sought to underscore the indispensable role that science plays (or ought to play) in policy decisions, and groups like 314 Action, which are supporting the campaigns of scientists and engineers running for public office, represent an important if belated acknowledgment from today’s scientists that they need, as one of the March’s slogans put it, to step out of the lab and into the streets. (To this Latour might add that the lab has never been truly separate from the streets; that it seems to be is merely a result of scientific culture’s attempt to pass itself off as above the fray.)

Textbooks on Statistics and Social Research say there are right ways and wrong ways to do the work. There are steps to follow, guidelines to adhere to, clear cut answers on how to do the work right. It is all presented in a logical and consistent format.

There are hints that this may not happen all the time. Certain known factors as well as unknown issues can push a researcher off track a bit. But, to do a good job, to do work that is scientifically interesting and acceptable to the scientific community, you would want to stick to the guidelines as much as possible.

This provides a Weberian ideal type of how science should operate. Or, perhaps the opposite ideal type occasionally provides a contrast. The researcher who committed outright fraud. The scholar who stepped way over ethical boundaries.

I see one of my jobs of teaching these classes as providing how these steps work out in actuality. You want to follow those guidelines but here is what can often happen. I regularly talk about the constraints of time and money: researchers often want to answer big questions with ideal data and that does not always happen. You make mistakes, such as in collecting data or analyzing results. You send the manuscript off for review and people offer all sorts of suggestions of how to fix it. The focus of the project and the hypothesis changes, perhaps even multiple times. It takes years to see everything through to publication.

On one hand, students often want the black and white presentation because it offers clear guidelines. If this happens, do this. On the other hand, presenting the cleaner version is an incomplete education into how research works. Students need to know how to respond when the process does not go as planned and know that this does not necessarily mean their work is doomed.

Scientific research is not easy nor is it always clear cut. Coming back to the ideal type concept, perhaps we should present it as we aspire to certain standards and particular matters may be non-negotiable but there are parts of the process, sometimes small and sometimes large, that are more flexible depending on circumstances.

Speculating on why sociology is less relevant to the media and public than economics

In calling for more sociological insight into economics, a journalist who attended the recent ASA meetings in Philadelphia provides two reasons why sociology lags behind economics in public attention:

Economists, you see, put draft versions of their papers online seemingly as soon as they’ve finished typing. Attend their big annual meeting, as I have several times, and virtually every paper discussed is available beforehand for download and perusal. In fact, they’re available even if you don’t go to the meeting. I wrote a column two years ago arguing that this openness had given economists a big leg up over the other social sciences in media attention and political influence, and noting that a few sociologists agreed and were trying to nudge their discipline — which disseminates its research mainly through paywalled academic journals and university-press books — in that direction with a new open repository for papers called SocArxiv. Now that I’ve experienced the ASA annual meeting for the first time, I can report that (1) things haven’t progressed much since 2016, and (2) I have a bit more sympathy for sociologists’ reticence to act like economists, although I continue to think it’s holding them back.

SocArxiv’s collection of open-access papers is growing steadily if not spectacularly, and Sociological Science, an open-access journal founded in 2014, is carving out a respected role as, among other things, a place to quickly publish articles of public interest. “Unions and Nonunion Pay in the United States, 1977-2015” by Patrick Denice of the University of Western Ontario and Jake Rosenfeld of Washington University in St. Louis, for example, was submitted June 12, accepted July 10 and published on Wednesday, the day after it was presented at the ASA meeting. These dissemination tools are used by only a small minority of sociologists, though, and the most sparsely attended session I attended in three-plus days at their annual meeting was the one on “Open Scholarship in Sociology” organized by the University of Maryland’s Philip Cohen, the founder of SocArxiv and one of the discipline’s most prominent social-media voices. This despite the fact that it was great, featuring compelling presentations by Cohen, Sociological Review deputy editor Kim Weeden of Cornell University and higher-education expert Elizabeth Popp Berman of the State University of New York at Albany, and free SocArxiv pens for all.

As I made the rounds of other sessions, I did come to a better understanding of why sociologists might be more reticent than economists to put their drafts online. The ASA welcomes journalists to its annual meeting and says they can attend all sessions where research is presented, but few reporters show up and it’s clear that most of those presenting research don’t consider themselves to be speaking in public. The most dramatic example of this in Philadelphia came about halfway through a presentation involving a particular corporation. The speaker paused, then asked the 50-plus people in the room not to mention the name of said corporation to anybody because she was about to return to an undercover job there. That was a bit ridiculous, given that there were sociologists live-tweeting some of the sessions. But there was something charming and probably healthy about the willingness of the sociologists at the ASA meeting to discuss still-far-from-complete work with their peers. When a paper is presented at an economics conference, many of the discussant’s comments and audience questions are attempts to poke holes in the reasoning or methodology. At the ASA meeting, it was usually, “This is great. Have you thought about adding …?” Also charming and probably healthy was the high number of graduate students presenting research alongside the professors, which you don’t see so much at the economists’ equivalent gathering.

All in all — and I’m sure there are sociological terms to describe this, but I’m not familiar with them — sociology seems more focused on internal cohesion than economics is. This may be partly because it’s what Popp Berman calls a “low-consensus discipline,” with lots of different methodological approaches and greatly varying standards of quality and rigor. Economists can be mean to each other in public yet still present a semi-united face to the world because they use a widely shared set of tools to arrive at answers. Sociologists may feel that they don’t have that luxury.

Disciplinary differences can be mystifying at times.

I wonder about a third possible difference in addition to the two provided: different conceptions in sociology and economics about what constitutes good arguments and data (hinted at above with the idea of “lots of different methodological approaches and greatly varying standards of quality and rigor”). Both disciplines aspire to the idea of social science, where empirical data is used to test hypotheses about how human behavior, usually in collectives, works. But this is tricky to do, as there are numerous pitfalls along the way. For example, accurate measurement is difficult even when a researcher has clearly identified a concept. Additionally, it is my sense that sociologists as a whole may be more open to both qualitative and quantitative data (even with occasional flare-ups between researchers studying the same topic who fall in different methodological camps). With these methodological questions, sociologists may feel they need more time to connect their methods to a convincing causal and scientific argument.

A fourth possible reason behind the differences (also hinted at above with the idea of economists having a “semi-united face” to present): sociology has a reputation as a more left-leaning discipline. Some researchers may prefer to have all their ducks in a row before they expose their work to full public scrutiny. The work of economists is more generally accepted by the public and some leaders while sociology regularly has to work against some backlash. (As an example, see conservative leaders complain about sociology excusing poor behavior when the job of the discipline is to explain human behavior.) Why expose your work to a less welcoming public earlier when you could take a little more time to polish the argument?

Online survey panels in first-world countries versus developing nations

While reading about the opposition Canadians have to self-driving cars, I ran into this explanation from Ipsos about conducting online surveys in countries around the world:


Having online panels is a regular practice among survey organizations. However, I do not recall seeing an explanation like this regarding differences in online panels across countries. The online sample in non-industrialized countries is simply unrepresentative as it reflects “a more ‘connected’ population.” Put another way, the online panel in places like Brazil, China, Russia, and Saudi Arabia reflects the upper class and people who live more like Westerners and not the vast majority of their population. Then, the sample is also smaller in these countries: 500+ rather than 1000+. Finally, it would be interesting to see how much the data needs to be weighted to “best reflect the demographic profile of the adult population.”
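The weighting step mentioned above can be illustrated with a simple post-stratification sketch. The age groups, shares, and survey question below are invented for illustration; Ipsos's actual weighting scheme is more involved:

```python
# Post-stratification sketch: weight each group so the sample's shares
# match the population's. All shares and numbers here are invented.
population = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
sample_shares = {"18-34": 0.50, "35-54": 0.35, "55+": 0.15}  # online panel skews young

# Weight per group: population share divided by sample share.
weights = {g: population[g] / sample_shares[g] for g in population}

# Effect on a made-up survey question answered differently by age group:
support = {"18-34": 0.70, "35-54": 0.50, "55+": 0.30}
raw_estimate = sum(sample_shares[g] * support[g] for g in support)
weighted_estimate = sum(population[g] * support[g] for g in support)

# The heavier the weighting required, the more the final estimate leans
# on a small number of hard-to-reach respondents.
print(weights, raw_estimate, weighted_estimate)
```

In this sketch each older respondent counts for more than two people, which is exactly the kind of heavy adjustment that makes a "connected" online panel in a developing country a shaky stand-in for the full population.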

With all these caveats, is an online panel in a non-industrialized country worth it?

Collecting big data the slow way

One of the interesting side effects of the era of big data is finding out how much information is not actually automatically collected (or is at least not available to the general public or researchers without paying money). A quick example from the work of sociologist Matthew Desmond:

The new data, assembled from about 83 million court records going back to 2000, suggest that the most pervasive problems aren’t necessarily in the most expensive regions. Evictions are accumulating across Michigan and Indiana. And several factors build on one another in Richmond: It’s in the Southeast, where the poverty rates are high and the minimum wage is low; it’s in Virginia, which lacks some tenant rights available in other states; and it’s a city where many poor African-Americans live in low-quality housing with limited means of escaping it.

According to the Eviction Lab, here is how they collected the data:

First, we requested a bulk report of cases directly from courts. These reports included all recorded information related to eviction-related cases. Second, we conducted automated record collection from online portals, via web scraping and text parsing protocols. Third, we partnered with companies that carry out manual collection of records, going directly into the courts and extracting the relevant case information by hand.

In other words, it took a lot of work to put together such a database: various courts, websites, and companies held different pieces of information, and a researcher had to access all of that data and put it together.

Without a researcher or a company or government body explicitly starting to record or collect certain information, a big dataset on that particular topic will not happen. Someone or some institution, typically with resources at its disposal, needs to set a process into motion. And simply having the data is not enough; it needs to be cleaned up so it all works with the other pieces. Again, from the Eviction Lab:

To create the best estimates, all data we obtained underwent a rigorous cleaning protocol. This included formatting the data so that each observation represented a household; cleaning and standardizing the names and addresses; and dropping duplicate cases. The details of this process can be found in the Methodology Report (PDF).
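The quoted cleaning steps (standardizing names and addresses, dropping duplicate cases) can be sketched roughly as below. The records and field names are invented for illustration; the Eviction Lab's actual protocol, described in their Methodology Report, is far more elaborate:

```python
import re

# Raw case records pieced together from courts, scraped portals, and
# manual collection: names and addresses formatted inconsistently, with
# duplicate filings for the same case. Illustrative records only.
records = [
    {"defendant": "  SMITH, JOHN ", "address": "12 Oak St.", "case_id": "A1"},
    {"defendant": "smith, john",    "address": "12 oak st",  "case_id": "A1"},
    {"defendant": "Lee, Ana",       "address": "99 Elm Ave", "case_id": "B7"},
]

def standardize(rec):
    """Normalize whitespace, case, and punctuation so the same
    household shows up the same way across sources."""
    clean = dict(rec)
    clean["defendant"] = re.sub(r"\s+", " ", rec["defendant"]).strip().title()
    clean["address"] = re.sub(r"[.,]", "", rec["address"]).strip().lower()
    return clean

seen, households = set(), []
for rec in map(standardize, records):
    key = (rec["case_id"], rec["defendant"], rec["address"])
    if key not in seen:  # drop duplicate cases
        seen.add(key)
        households.append(rec)

print(len(households))  # the two A1 filings collapse into one household
```

Even this toy version shows why the cleaning matters: without standardization, the two versions of the same filing would be counted as two evictions.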

This all can lead to a fascinating dataset of over 83 million records on an important topic.

We are probably still a ways off from a scenario where this information would automatically become part of a dataset. This data had a definite start and required much work. There are many other areas of social life that require similar efforts before researchers and the public have big data to examine and learn from.

The problem of archiving the Internet may be just the first problem; how do we make causal arguments from its contents?

Archiving the Internet so that it can be understood and studied by later researchers and scholars may be a big problem:

In a new paper, “Stewardship in the ‘Age of Algorithms,’” Clifford Lynch, the director of the Coalition for Networked Information, argues that the paradigm for preserving digital artifacts is not up to the challenge of preserving what happens on social networks.

Over the last 40 years, archivists have begun to gather more digital objects—web pages, PDFs, databases, kinds of software. There is more data about more people than ever before; however, the cultural institutions dedicated to preserving the memory of what it was to be alive in our time, including our hours on the internet, may actually be capturing less usable information than in previous eras…

Nick Seaver of Tufts University, a researcher in the emerging field of “algorithm studies,” wrote a broader summary of the issues with trying to figure out what is happening on the internet. He ticks off the problems of trying to pin down—or in our case, archive—how these web services work. One, they’re always testing out new versions. So there isn’t one Google or one Bing, but “10 million different permutations of Bing.” Two, as a result of that testing and their own internal decision-making, “You can’t log into the same Facebook twice.” It’s constantly changing in big and small ways. Three, the number of inputs and complex interactions between them simply makes these large-scale systems very difficult to understand, even if we have access to outputs and some knowledge of inputs.

In order to study something, you have to measure and document it well. This is an essential first step for many research projects.

But I wonder: even if it can all be documented well, what exactly would it tell us about behaviors and aspirations? Like any “text,” it may be difficult to make causal arguments based on the artifacts of our Internet or social media. They are controlled by a relatively small number of people. Social media is dominated by a relatively small number of users. Many people in society interact with both, but how exactly are their lives changed? The history of the Internet and social media and the forces behind them is one thing; it could be fascinating to see how the birth of the World Wide Web in the early 1990s or AOL or Facebook or Google are all viewed several decades in the future. But it will be much harder to clearly show how all these forces affected the average person. Did they change personalities? Did day-to-day life change in substantial ways? Did political opinions change? Did they disrupt or enhance relationships? What if Twitter dominates the media and the lives of 10% of the American population but has little impact on most lives?

There is a lot here to sort out and a lot of opportunities for good research. At the same time, there are a lot of chances for people to make vague claims and arguments based on correlations and broad patterns that cannot be explicitly linked.

Good data is foundational to doing good sociological work

I’ve had conversations in recent months with a few colleagues outside the discipline about debates within sociology over the work of ethnographers like Alice Goffman, Matt Desmond, and Sudhir Venkatesh. It is enlightening to hear how outsiders see the disagreements and this has pushed me to consider more fully how I would explain the issues at hand. What follows is my one paragraph response to what is at stake:

In the end, what separates the work of sociologists from perceptive non-academics or journalists? (An aside: many of my favorite journalists often operate like pop sociologists as they try to explain and not just describe social phenomena.) To me, it comes down to data and methods. This is why I enjoy teaching both our Statistics course and our Social Research course: undergraduates rarely come into them excited but they are foundational to who sociologists are. What we want to do is have data that is (1) scientific – reliable and valid – and (2) generalizable – allowing us to see patterns across individuals and cases or settings. I don’t think it is a surprise that the three sociologists under fire above wrote ethnographies where it is perhaps more difficult to fit the method under a scientific rubric. (I do think it can be done but it doesn’t always appear that way to outsiders or even some sociologists.) Sociology is unique in both its methodological pluralism – we do everything from ethnography to historical analysis to statistical models to lab or natural experiments to mass surveys – and we aim to find causal explanations for phenomena rather than just describe what is happening. Ultimately, if you can’t trust a sociologist’s data, why bother considering their conclusions or why would you prioritize their explanations over that of an astute person on the street?

Caveats: I know no data is perfect and sociologists are not in the business of “proving” things but rather we look for patterns. There is also plenty of disagreement within sociology about these issues. In a perfect world, we would have researchers using different methods to examine the same phenomena and develop a more holistic approach. I also don’t mean to exclude the role of theory in my description above; data has to be interpreted. But, if you don’t have good data to start with, the theories are abstractions.

Estimating big crowds accurately with weather balloons, bicycles, and counting

The best way to count large crowds – such as in Washington D.C. – may be by using a weather balloon and supplementing that data:

Well, technically, a “tethered aerostat.” Tethered because it is anchored to the ground, and aerostat because it will hold a static altitude in the air. A nine-lens camera is attached to its base, so it can capture the full 360-degree view of the proceedings. It will observe the entire Women’s March…

Their technique involves more than just the weather balloon. While the weather balloon records the events from above, Westergard and his team will bike or walk around the protest site. They’ll take note of how many people are taking cover under structures, like the massive elm trees on the Mall. Sometimes they’ll even lower the aerostat so that it can capture crowds in the shade. “At 400 feet, we’re looking under the trees. At 800 feet, you’re looking at the top of them,” he told me…

Once the data is collected, they return to their headquarters. Three days of work commences. First, they will measure the density of different parts of the crowd. They do this by counting heads in a specific area. “We sit there literally, head by head, going tick-tick-tick-tick-tick” with the images, he told me. “It’s painful, it’s long, but it’s far more accurate than these algorithms.”

Sometimes they outsource this task to Amazon’s Mechanical Turk service to increase their own accuracy: They ask a dozen strangers to count heads in a certain picture without telling them where the picture was taken.

Once they have this density map, they overlay it on a map of the topography. “If you have people surrounding the Washington Monument—which is on a moderately steep hill—and you look out at a crowd, you’re going to see more people because they’re tilted toward you,” he said. The computer model will correct for those kinds of inaccuracies.
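The counting arithmetic described in the excerpt can be sketched roughly as follows: average the Mechanical Turk head counts for a sample image, convert to a density, and extrapolate across zones with different density corrections (shade, hillside tilt). All numbers and zone names are invented for illustration, not DDIS's actual figures:

```python
from statistics import mean

# A dozen Mechanical Turk workers each count heads in the same sample
# image, which covers a known area of ground. Averaging independent
# counts smooths out individual counting errors.
turk_counts = [212, 198, 205, 220, 201, 209, 195, 215, 207, 200, 210, 204]
sample_area_m2 = 100.0
heads_per_m2 = mean(turk_counts) / sample_area_m2

# Density varies across the site, so weight each zone's area by a
# relative-density factor before extrapolating.
zones = {  # zone -> (area in m2, relative density factor)
    "open mall":   (40_000, 1.0),
    "under trees": (8_000, 0.6),   # thinner crowd in the shade
    "hillside":    (6_000, 0.8),   # corrected downward for tilt toward camera
}
estimate = sum(area * factor * heads_per_m2 for area, factor in zones.values())
print(round(estimate))
```

The tilt factor stands in for the topographic correction the article describes: a crowd on a hill angled toward the camera looks denser than it is, so the raw density gets scaled down.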

See earlier posts (such as here and here) about counting crowds.

It is also interesting that this more accurate method is explained by the leader of a private firm: “Curt Westergard…is the president of Digital Design and Imaging Service based in Falls Church, Virginia, and he stressed that his company’s methods were ‘at the very top of the accuracy and ethical side.’” He is working for those who want to hire him, something that could be worthwhile for the article to explore. Are they impartial observers who are doing this work for science? In other words, crowd counting could be influenced by who exactly is doing the counting. Parties who often make the counts – police, local officials, the media – have vested interests. For example, take the case of the rally for the Cubs World Series victory.

Of course, as noted in this article, the numbers themselves are often politicized. What will be the official count accepted by posterity for a Trump inauguration that likely stirs up emotions for everyone?