Trying to fit all the election results on one television screen

I briefly watched a number of election night broadcasts last night. One conclusion I came to: there is way too much data to fit on a television screen. And if you want more of the data, you need the Internet, not television.

The different broadcasts tried similar variations: flipping back and forth between a set of anchors and pundits at desks and analysts at a smart board showing election results from different states and locations. They have done this for enough election nights that the process is pretty established.

While they do this, there is often a lot of data on the screen. This could include: a map of the United States with states shaded; a chyron at the bottom with scrolling news; another panel at the bottom flipping through results from different races; and people talking, sometimes in connection to the data on the screen and sometimes not. If the analyst at the smart board is on the screen, there is another set of maps to consider.

CNN broadcast, November 4, 2020

This is a lot to take in and it might not be enough. The broadcasts try to balance all of the levels of government – from the presidential race to congressional districts – and are flipping back and forth. I appreciated the simpler approach of PBS, which went with a lot less data on the screen, bigger images of the talking heads, and simple summary graphics of the winners.

But, if you want the data, the television broadcast does not cut it. Numerous websites offered single pages where one could monitor all of the major races in real time. Want to keep up on both local and national races? Have two pages open. Want reaction? Add social media in a third window. Use multiple Internet-connected devices including smartphones, tablets, and computers (and maybe Internet-enabled televisions).

Furthermore, web pages give users more control over the data they are seeing. Take the final 2020 election forecast from FiveThirtyEight:

On one page, readers could see multiple presentations of data plus explanations. Want to scroll through in 10 seconds and see the headlines? Fine. Want to spend 5 minutes analyzing the various graphics? That works. Want to click on all the links for the methodology and commentary? A reader could do that too.

The one big advantage television has is that it offers commentary and faces in real time, plus the potential for live coverage from the scene (such as images of gatherings for candidates) and the feeling of being present when major announcements are made. The Internet has approximations of this – lively social media accounts, live blogs – but it is not the same feeling. (Of course, when you have more than ten live election night broadcasts available on your television, the audience will be pretty split there as well.) Elections are not just about data for many; they also include emotions, presence, and the potential for important memories.

Given these differences in media, I did what I am guessing many did last night: I consumed both television and Internet/social media coverage. Neither is perfect for the task. I had to go to sleep eventually. And whoever can figure out how to combine the best elements of both for election nights may do very well for themselves.

When I see “study” in a news story, I (wrongly) assume it is a peer-reviewed analysis

In the last week, I have run into two potentially interesting news stories that cite studies. Yet, when I looked into what kind of studies these were, they were not what I expected.

First, the Chicago Tribune online headline: “Why are Chicagoans moving away during the pandemic? As study suggests outbound migration is spiking, we asked them.” The opening to the story:

Chicago’s population has been on the decline for years, with the metropolitan area suffering some of the greatest losses of any major U.S. city. But new research suggests that the pandemic might be exacerbating the exodus.

For the first time in four years, moving concierge app Updater has helped more people move out of Chicago than to it, the company said. The catch-all moving service estimates that it takes part in one-third of all U.S. moves, providing unique, real-time insight into pandemic-driven trends, said Jenna Weinerman, Updater’s vice president of marketing.

“All these macro conditions — job insecurity, remote work, people wanting to gain more space — are coming together to create these patterns,” Weinerman said.

The Chicago figures are based on approximately 39,000 moves within city limits from March 1 to Sept. 30. Compared to 2019, this year saw more moving activity in general, with an 8% jump in moves into the city — but a 19% increase in the number of people leaving.

The second article involved a study from Cafe Storage with the headline “Average Home Size in the US: New Homes Bigger than 10 Years Ago but Apartments Trail Behind” (also cited in the Chicago Tribune). From the story:

According to the latest available US Census data, the average size of single family homes built in the US was trending upwards from 2010 until 2017, when sizes hit a peak of 2,643 square feet. Since then, single family homes began decreasing in size, with homes built in 2019 averaging 2,611 square feet…

Location matters when it comes to average home size. Some urban hotspots follow the national trend, while others move in the opposite direction. Here’s how single family home and apartment sizes look in the country’s top 20 largest cities, based on Yardi Matrix, Property Shark and Point2Homes data.

As an academic, here is what I expect when I hear the word “study”:

  1. Peer-reviewed work published in an academic outlet.
  2. Rigorous methodology and trusted data sources.

These steps do not guarantee research free from error, but they do impose standards and steps intended to reduce errors.

In both cases, the analysis does not meet those standards. Instead, these studies rely on proprietary data and serve the companies or websites publicizing the findings. This does not necessarily mean the findings are untrue. It does, however, make it much more difficult for journalists or the public to know how the study was conducted, what the findings are, and what it all means.

Use of the term study is related to a larger phenomenon: many organizations, businesses, and individuals have potentially interesting data to contribute to public discussions and policy making. For example, without official data about the number of people moving out of cities, we are left searching for other data sources. How reliable are they? What data is anecdotal and what can be trusted? Why don’t academics and journalists find better data?

If we use the word “study” to refer to any data analysis, we risk making it even harder for people to discern what is a trustworthy study and what is not. Call it an analysis, call it a set of findings. Make clear who conducted the research, how the analysis was conducted, and with what data. (These three steps would be good for any coverage of an academic study.) Help readers and interested parties put the findings in the context of other findings and ongoing conversations. Just do not suggest that this is a study in the same way that other analyses are studies.

Comparing “five myths about the suburbs” in 2011 and 2020

The Washington Post has a new “five myths about the suburbs” that differs from its 2011 piece by the same name (though by a different author). From my 2011 post, here is the older list:

1. Suburbs are white, middle-class enclaves…

2. Suburbs aren’t cool…

3. Suburbs are a product of the free market…

4. Suburbs are politically conservative…

5. Suburbanites don’t care about the environment…

From the 2020 list:

Suburbs are less dense than cities…

All suburbanites own detached houses…

Suburban workers typically commute to downtown jobs…

Today’s suburbs are racially integrated…

E-commerce killed suburban malls.

There is a lot of overlap between these lists, including commentary on class status, who suburban residents are, and what suburban communities are like. There are also differences: the 2011 list discusses the cool factor and the environmental impact of suburbs while the 2020 list highlights retail.

Even with the overlap, it is notable that myths about suburbia are still viable decades after suburban changes have been in motion. This hints that the image of suburbia is persistent and powerful: the single-family suburban home where a nuclear family pursues the American Dream can still be found in both reality and cultural productions. But there is also another, newer side of suburbia that features new kinds of residents, alternative forms of housing, tougher lives and disillusionment in the supposed land of plenty, and changing everyday life. This sounds like complex suburbia: the suburbs are more varied than the typical image.

Furthermore, there are a number of actors interested in researching and discussing the suburbs of today. From books like Confronting Suburban Poverty to Radical Suburbs to videos, there is still plenty to analyze and learn about in a geographic domain that many think is relatively easy to understand. The suburbs may not appear as exciting as other dynamic locations but with a majority of Americans living in suburban settings, what happens in the suburbs has the potential to shape many lives.

A short overview of recent survey questions about Holocaust knowledge in the US

Although this article leads with recent survey results about what Americans know and think about the Holocaust, I’ll start with the summary of earlier surveys and move forward in time to the recent results:

Whether or not the assumptions in the Claims Conference survey are fair, and how to tell, is at the core of a decades long debate over Holocaust knowledge surveys, which are notoriously difficult to design. In 1994, Roper Starch Worldwide, which conducted a poll for the American Jewish Committee, admitted that its widely publicized Holocaust denial question was “flawed.” Initially, it appeared that 1 in 5, or 22 percent, of Americans thought it was possible the Holocaust never happened. But pollsters later determined that the question—“Does it seem possible or does it seem impossible to you that the Nazi extermination of the Jews never happened?”—was confusing and biased the sample. In a subsequent Gallup poll, when asked to explain their views on the Holocaust in their own words, “only about 4 percent [of Americans] have real doubts about the Holocaust; the others are just insecure about their historical knowledge or won’t believe anything they have not experienced themselves,” according to an Associated Press report at the time. More recently, the Anti-Defamation League was criticized for a 2014 worldwide study that asked respondents to rate 11 statements—“People hate Jews because of the way they behave, for example”—as “probably true” or “probably false.” If respondents said “probably true” to six or more of the statements, they were considered to harbor anti-Semitic views, a line that many experts said could not adequately represent real beliefs…

Just two years ago, the Claims Conference released another survey of Americans that found “Two-Thirds of Millennials Don’t Know What Auschwitz Is,” as a Washington Post headline summarized it. The New York Times reported on the numbers at the time as proof that the “Holocaust is fading from memory.” Lest it appear the group is singling out Americans, the Claims Conference also released surveys with “stunning” results from Canada, France, and Austria.

But a deeper look at the Claims Conference data, which was collected by the firm Schoen Cooperman Research, reveals methodological choices that conflate specific terms (the ability to ID Auschwitz) and figures (that 6 million Jews were murdered) about the Holocaust with general knowledge of it, and knowledge with attitudes or beliefs toward Jews and Judaism. This is not to discount the real issues of anti-Semitism in the United States. But it is an important reminder that the Claims Conference, which seeks restitution for the victims of Nazi persecution and also to “ensure that future generations learn the lessons of the Holocaust,” is doing its job: generating data and headlines that it hopes will support its worthy cause.

The new Claims Conference survey is actually divided into two, with one set of data from a 1,000-person national survey and another set from 50 state-by-state surveys of 200 people each. In both iterations, the pollsters aimed to assess Holocaust knowledge according to three foundational criteria: the ability to recognize the term the Holocaust, name a concentration camp, and state the number of Jews murdered. The results weren’t great—fully 12 percent of national survey respondents had not or did not think they had heard the term Holocaust—but some of the questions weren’t necessarily written to help respondents succeed. Only 44 percent were “familiar with Auschwitz,” according to the executive summary of the data, but that statistic was determined by an open-ended question: “Can you name any concentration camps, death camps, or ghettos you have heard of?” This type of active, as opposed to passive, recall is not necessarily indicative of real knowledge. The Claims Conference also emphasized that 36 percent of respondents “believe” 2 million or fewer Jews were killed in the Holocaust (the correct answer is 6 million), but respondents were actually given a multiple-choice question with seven options—25,000, 100,000, 1 million, 2 million, 6 million, 20 million, and “not sure”—four of which were lowball figures. (Six million was by far the most common answer, at 37 percent, followed by “not sure.”)

The first example above has made it into research methods textbooks as a lesson in the importance of how survey questions are worded. The ongoing discussion in this article could illustrate those textbook discussions as well: how questions are asked and how the researchers interpret the results are both very important.

There are other actors in this process who can help or harm the interpretation of the data:

  1. Funders/organizations behind the data. What do they do with the results?
  2. How the media reports the information. Do they accurately represent the data? Do they report on how the data was collected and analyzed?
  3. Does the public understand what the data means? Or, do they solely take their cues from the researchers and/or the media reports?
  4. Other researchers who look at the data. Would they measure the topics in the same way and, if not, what might be gained by alternatives?

These may seem like boring details to many, but going from choosing research topics and developing questions to sharing results with the public and having others interpret them is quite a process. The hope is that all of the actors involved can help get as close as possible to what is actually happening – in this case, accurately measuring and reporting attitudes and beliefs.

Collect better data on whether Chicagoans are leaving the city

Even as there are claims 500,000 New Yorkers have left the city, a new article suggests “some” Chicago residents are leaving. The evidence:

Incidents of widespread looting and soaring homicide figures in Chicago have made national news during an already tumultuous year. As a result, some say residents in affluent neighborhoods downtown, and on the North Side, no longer feel safe in the city’s epicenter and are looking to move away. Aldermen say they see their constituents leaving the city, and it’s a concern echoed by some real estate agents and the head of a sizable property management firm.

It’s still too soon to get an accurate measure of an actual shift in population, and such a change could be driven by a number of factors — from restless residents looking for more spacious homes in the suburbs due to COVID-19, to remote work allowing more employees to live anywhere they please…

The day after looting broke out two weeks ago, a Tribune columnist strolled through Gold Coast and Streeterville. Residents of the swanky Near North Side told him they’d be moving “as soon as we can get out.” Others expressed fear of returning downtown in the future.

Rafael Murillo, a licensed real estate broker at Compass whose primary market is downtown high-rises, said he has seen a trend of city dwellers looking to move to the suburbs sooner than initially planned, due in part to the recent unrest in the city.

Three pieces of data I see in this story: aldermen reporting on actions in their districts; journalists talking to some people; and comments from people in the real estate industry. This is not that different from what is being said in New York City (plus information from moving companies).

The caveat that leads the second paragraph above – we do not have an accurate measure yet – may be correct, but it is difficult to square with the rest of the story, which suggests “some” people are leaving. What we want to know is the size of this trend. Is this a trickle of people in a city that has been losing people or a recent flood? And if the numbers are larger, what exactly are the motivations of people for leaving (being pushed over the edge, fear, housing values, etc.)?

Someone could collect more definitive data. Work with the local utilities to look at usage (or non-usage in units)? Traffic counts? Post office address changes? Triangulate across multiple data sources? If this is indeed a trend, it is an important one to highlight, explain, and discuss. But, without better data, it is hard to know what to make of it.

Remember the suburban voters in 2020

As COVID-19 and police brutality pushed the 2020 presidential election off the front pages for months, recent poll data suggests suburban voters are breaking one way in national polls:

And while Trump has an edge with rural voters, Biden crushes him in the suburbs – which often decide how swing states swing.

Fifty per cent of suburban registered voters told the pollster they planned to vote for Biden, while 36 per cent said they’d vote for Trump.

And in Texas metropolitan areas:

A Quinnipiac University survey released last week found Trump leading Biden by 1 point in Texas. Trump leads by 2.2 points in the RealClearPolitics average.

Texas Republicans are primarily worried about their standing in the suburbs, where women and independents have steadily gravitated away from the GOP since Trump took office.

Republican support has eroded in the areas surrounding Houston, Dallas, Austin and San Antonio, four of the nation’s largest and fastest growing metro areas. Democrats defeated longtime GOP incumbents in Houston and Dallas in 2018.

More background on trying to find a suburban “silent majority”:

The suburbs — not the red, but sparsely populated rural areas of the country most often associated with Trump — are where Trump found the majority of his support in 2016. Yet it was in the suburbs that Democrats built their House majority two years ago in a dramatic midterm repudiation of the Republican president.

Now, Trump’s approach to the violence and unrest that have gripped the nation’s big cities seems calibrated toward winning back those places, in the hopes that voters will recoil at the current images of chaos and looting — as they did in the late 1960s — and look to the White House for stability…

Five months before the general election, according to national polls, the political landscape for Trump is bleak. But there is a clear window of opportunity: Trump remains popular in rural America, and he won the suburbs by 4 percentage points in 2016 — largely on the backs of non-college-educated whites.

There are millions more potential voters where those came from — people who fit in Trump’s demographic sweet spot but did not vote. They live in rural and exurban areas, but also in working class suburbs like Macomb County, outside Detroit. They are who Republicans are referring to when they talk about a new “silent majority” — the kind of potential voters who, even if disgusted by police violence, are not joining in protest.

This probably bears repeating: the American suburbs of today are not solely populated by wealthy, white, conservative voters. This is the era of complex suburbia where different racial and ethnic groups as well as varied social classes live throughout metropolitan regions.

Relatively little media coverage has examined how COVID-19 or police brutality has affected suburbs or how suburbanites feel about all the change. While just over 50% of Americans live in suburbs, coverage has emphasized urban areas. And what do suburbanites think when they see these images of urban life, policing, and protest that they may or may not understand on an experiential or deeper level?

Americans watching more TV during COVID-19

Nielsen reported in 2018 that Americans consume on average over 11 hours of media a day, with over four hours a day of television viewing. Several sources suggest people are watching more TV than ever during COVID-19.

From Comcast:

The average household is putting in an extra workday’s worth of viewing each week – watching 8+ hours more per week than they were in early March, going from approximately 57 hours a week per household to 66 hours…

Since the start of COVID, these distinctions have blurred and weekdays are seeing viewing levels and trends akin to the weekend. As a matter of fact, in the past two weeks, Monday has become a more popular day to watch television than Saturday.

From the Washington Post:

Explosive demand for TV content led almost 16 million people to sign up for Netflix — more than double what the company predicted before the Covid-19 outbreak. The extended time at home also has been a chance for consumers to take new apps out for a spin, including Disney+, Apple TV+, Quibi and Comcast Corp.’s Peacock. Disney+ has added 28 million subscribers since December. Meanwhile, as the recession causes consumers to tighten their budgets, pricey cable-TV bills will be on the chopping block. Already last quarter, the big four pay-TV providers saw an exodus of nearly 2 million customers, with AT&T Inc.’s DirecTV accounting for almost half of those cancellations.

The desire to save money is boosting interest in free streaming-video services, such as Pluto TV and Tubi, that are funded by advertisers. Pluto TV’s growth proved to be the biggest bright spot in ViacomCBS Inc.’s quarterly results, as the cancellation of the NCAA March Madness tournament crushed traditional network ad sales…

From the Denver Post:

Ever since city and state stay-at-home orders abruptly arrived with social distancing in mid-March, Denverites’ TV-viewing plus internet-connected device TV usage (as Nielsen calls it) has jumped up to 20% over comparable periods in the previous weeks.

Local TV stations also have become many viewers’ go-to source for information about the coronavirus and COVID-19,  reversing a trend that saw sharp declines in local news viewership in recent years. In the top 25 markets, local news experienced a 7% viewership lift between early February and the week of March 9. Among people 25-54, the spike was more than 10%, and 20% for people aged 2-17, Nielsen reported.

In total, the biggest weekly viewing increase across the country — when compared with the same period last year — occurred the week of April 6, Nielsen data showed.

Several thoughts on this:

  1. This all makes sense: people are home more and television is one of the top non-work activities for Americans. Even in the age of the Internet, social media, and smartphones, television is a force to be reckoned with.
  2. This adds up to a lot of television on a daily and cumulative basis. For those worried about its effects, when people have more time, they still turn to television.
  3. This is not necessarily all good news for television networks and content creators. Advertising revenues are tough to find and cord-cutting, connected to unemployment and economic uncertainty, is up.
  4. It will be interesting to see what happens with long-term viewing patterns. COVID-19 restrictions could last a while in some places and fear about going out in public could continue even longer. Does this mean TV viewing will be up for a while? If so, is there a way for content creators, advertisers, and others to capitalize on the opportunities? Or might a public campaign push other activities beyond sitting in front of a television or smartphone screen (unlikely, I admit)?

Models are models, not perfect predictions

One academic summarizes how we should read and interpret COVID-19 models:

Every time the White House releases a COVID-19 model, we will be tempted to drown ourselves in endless discussions about the error bars, the clarity around the parameters, the wide range of outcomes, and the applicability of the underlying data. And the media might be tempted to cover those discussions, as this fits their horse-race, he-said-she-said scripts. Let’s not. We should instead look at the calamitous branches of our decision tree and chop them all off, and then chop them off again.

Sometimes, when we succeed in chopping off the end of the pessimistic tail, it looks like we overreacted. A near miss can make a model look false. But that’s not always what happened. It just means we won. And that’s why we model.

Five quick thoughts in response:

  1. I would be tempted to say that the perilous times of COVID-19 lead more people to treat models as certainties, but I have seen this issue plenty of times in more “normal” periods.
  2. It would help if the media had less innumeracy and more knowledge of how science, natural and social, works. I know the media leans towards answers and sure headlines but science is often messier and takes time to reach consensus.
  3. Making models that include social behavior is difficult. This particular phenomenon has both a physical and social component. Viruses act in certain ways. Humans act in somewhat predictable ways. Both can change.
  4. Models involve data and assumptions. Sometimes, the model might fit reality. At other times, models do not fit. Either way, researchers are looking to refine their models so that we better understand how the world works. In this case, perhaps models can become better on the fly as more data comes in and/or certain patterns are established.
  5. Predictions or proof can be difficult to come by with models. The language of “proof” is one we often use in regular conversation but is unrealistic in numerous academic settings. Instead, we might talk about higher or lower likelihoods or provide the best possible estimate along with margins of error (as sketched below).
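
To make that last point concrete, here is a minimal Python sketch (my own illustration, not from the piece) of reporting a result as an estimate with a margin of error rather than as a single certain number; the 52% figure and sample size are made up:

```python
# A minimal sketch of reporting an estimate with a margin of error
# rather than a "proven" number. All figures here are hypothetical.
import math

def margin_of_error(p_hat: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a simple random sample proportion."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Hypothetical example: 52% support among 1,000 respondents.
p_hat, n = 0.52, 1000
moe = margin_of_error(p_hat, n)
print(f"Estimate: {p_hat:.0%} ± {moe:.1%}")  # prints: Estimate: 52% ± 3.1%
```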

A (real) pie chart to effectively illustrate wealth inequality

Pie graphs can be great at showing relative differences between a small number of categories. A recent example of this comes from CBS:

CBS This Morning co-host Tony Dokoupil set up a table at a mall in West Nyack, New York, with a pie that represented $98 trillion of household wealth in the United States. The pie was sliced into 10 pieces and Dokoupil asked people to divide up those pieces onto five plates representing the poorest, the lower middle class, middle class, upper middle class, and wealthiest Americans. No one got it right. And, in fact, no one was even kind of close to estimating the real ratio, which involves giving nine pieces to the top 20 percent of Americans while the upper middle class and the middle class share one piece between the two of them. The lower middle class would effectively get crumbs considering they only have 0.3 percent of the pie. What about the poorest Americans? They wouldn’t get any pie at all, and in fact would get a bill, considering they are, on average, around $6,000 in debt…

To illustrate just how concentrated wealth is in the country, Dokoupil went on to note that if just the top 1 percent are taken into account, they would get four of the nine pieces of pie that go to the wealthiest Americans.

A pie chart sounds like a great device for this situation because of several features of the data and the presentation:

1. There are five categories of social class. Not too many for a pie chart.

2. One of those categories, the top 20 percent of Americans, clearly has a bigger portion of the pie than the other groups. A pie chart is well-suited to show one dominant category compared to the others.

3. Visitors to a shopping mall can easily understand a pie chart. They understand how it works and what it says (particularly with #1 and #2 above).

Together, a pie chart works in ways that other graphs and charts would not.
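
For illustration, here is a minimal matplotlib sketch of such a chart. The five shares are approximations I reconstructed from the segment described above, not official figures: nine of ten pieces to the wealthiest 20 percent, roughly one piece split between the upper middle and middle classes, 0.3 percent for the lower middle class, and nothing for the poorest, whose average net worth is negative (something a pie chart cannot show):

```python
# A sketch of the CBS-style wealth pie chart. Shares are approximations
# reconstructed from the segment described above, not official data.
import matplotlib.pyplot as plt

labels = ["Wealthiest 20%", "Upper middle class", "Middle class",
          "Lower middle class", "Poorest 20%"]
shares = [90.0, 6.0, 3.7, 0.3, 0.0]  # percent of household wealth (approximate)

fig, ax = plt.subplots(figsize=(6, 6))
ax.pie(shares, labels=labels, autopct="%1.1f%%", startangle=90)
ax.set_title("Approximate share of US household wealth ($98 trillion)")
plt.show()
```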

(Side note: it is hard to know whether the use of food in the pie chart helped or hurt the presentation. Do people work better with data when feeling hungry?)

Font sizes, randomly ordered names, and an uncertain Iowa poll

Just ahead of the Iowa caucuses yesterday, the Des Moines Register had to cancel its final poll due to problems with administering the survey:

Sources told several news outlets that they figured out the whole problem was due to an issue with font size. Specifically, one operator working at the call center used for the poll enlarged the font size on their computer screen of the script that included candidates’ names and it appears Buttigieg’s name was cut out from the list of options. After every call the list of candidates’ names is reordered randomly so it isn’t clear whether other candidates may have been affected as well but the organizers were not able to figure out whether it was an isolated incident. “We are unable to know how many times this might have happened, because we don’t know how long that monitor was in that setting,” a source told Politico. “Because we do not know for certain—and may not ever be able to know for certain—we don’t have confidence to release the poll.”…

In their official statements announcing the decision to nix the poll, the organizers did not mention the font issue, focusing instead on the need to maintain the integrity of the survey. “Today, a respondent raised an issue with the way the survey was administered, which could have compromised the results of the poll. It appears a candidate’s name was omitted in at least one interview in which the respondent was asked to name their preferred candidate,” Register executive editor Carol Hunter said in a statement. “While this appears to be isolated to one surveyor, we cannot confirm that with certainty. Therefore, the partners made the difficult decision to not to move forward with releasing the Iowa Poll.” CNN also issued a statement saying that the decision was made as part of their “aim to uphold the highest standards of survey research.”

This provides some insight into how these polls are conducted. The process can include call centers, randomly ordered names, and a system in place so that the administrators of the poll can feel confident in the results (even as there is always a margin of error). If there is a problem in the system, the opinions of those polled may not match what the data says. Will future processes prevent individual callers from changing the font size? The sketch below shows how even a small glitch like this can skew results.
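
Here is a hypothetical Python simulation (not the pollster’s actual system) of the failure mode described above: the candidate list is reshuffled after every call, and in some fraction of interviews the enlarged font cuts off whichever name appears last. The support levels and glitch rate are invented for illustration:

```python
# Hypothetical simulation of a randomized candidate list where an enlarged
# font sometimes hides the last name shown. All numbers are invented.
import random

CANDIDATES = ["Biden", "Buttigieg", "Klobuchar", "Sanders", "Warren"]
TRUE_SUPPORT = {"Biden": 0.22, "Buttigieg": 0.20, "Klobuchar": 0.12,
                "Sanders": 0.26, "Warren": 0.20}

def run_interview(glitched: bool) -> str:
    # The script reshuffles the candidate names after every call.
    shown = random.sample(CANDIDATES, k=len(CANDIDATES))
    if glitched:
        shown = shown[:-1]  # the enlarged font cuts off the last name
    weights = [TRUE_SUPPORT[name] for name in shown]
    return random.choices(shown, weights=weights, k=1)[0]

def poll(n: int, glitch_rate: float) -> dict:
    tallies = {name: 0 for name in CANDIDATES}
    for _ in range(n):
        tallies[run_interview(random.random() < glitch_rate)] += 1
    return {name: round(100 * count / n, 1) for name, count in tallies.items()}

random.seed(0)
print("Clean poll:   ", poll(10_000, glitch_rate=0.0))
print("Glitched poll:", poll(10_000, glitch_rate=0.25))
```

Because the omitted name changes with every shuffle, the bias is spread unpredictably across candidates, which is exactly why the pollsters could not simply correct for the glitch after the fact.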

More broadly, a move like this could provide more transparency and ultimately build trust regarding political polling. The industry faces a number of challenges. Would revealing this particular issue cause people to wonder how often this happens or reassure them that pollsters are concerned about good data?

At the same time, it appears that the unreported numbers still had an influence:

Indeed, the numbers widely circulating aren’t that different from last month’s edition of the same poll, or some other recent polls. But to other people, both journalists and operatives, milling around the lobby of the Des Moines Marriott Sunday night, the impact had been obvious.

Here is what some reporters told me about how the poll affected their work:

• One reporter for a major newspaper told me they inserted a few paragraphs into a story to anticipate results predicted by the poll.

• A reporter for another major national outlet said they covered an Elizabeth Warren event in part because she looked strong in the secret poll.

• Another outlet had been trying to figure out whether Amy Klobuchar was surging; the poll, which looked similar to other recent polling, steered coverage away from that conclusion.

• “You can’t help it affecting how you’re thinking,” said another reporter.
