Census 2020 looking to go online

Reaching younger Americans is part of the reason plans are underway to move parts of the decennial 2020 census online:

Millennials (born from 1981 to 1996) and Generation Z (born after 1996) account for about 35 percent of the approximately 325 million people in the U.S., according to estimates, and census officials say their traditional means of outreach — mail-in questionnaires, landline phone calls and door-to-door surveys — are failing to connect with this significant segment of the population.

The Census Bureau plans to conduct its first-ever online headcount, which it predicts will generate 60 percent of the total responses for 2020…

However, social scientists suggest that millennials and Generation Z could have a hard time appreciating the importance of the census, having grown up amid a distorted media landscape of instant online gratification, “fake news” and a culture of likes on social networks…

Last month, census communications chief Burton Reist was quoted as saying endorsements from celebrities such as LeBron James are being considered. He described a hypothetical situation in which the NBA superstar urges young people during halftime to pull out their cellphones and “answer the census.”

Moving data collection online would seem to offer a lot in terms of lower costs and easier data tabulation. But, as the article suggests, it brings issues of its own, such as cutting through the online clutter and relying on celebrities to pitch the online data collection.

On one hand, this might lead to the conclusion that it is still difficult to use web surveys to collect information on a broad scale. Unless a research company has recruited a relatively representative panel of possible participants, reaching the broader public on a voluntary basis is hard.

On the other hand, perhaps this should be taken as a good sign: the Census Bureau clearly indicates their data collection has to match what people actually use. Going door to door may not be feasible going forward. If people are online or using devices for hours a day, online surveys might be more attractive.

Almost regardless of how this turns out in the 2020 count, it will be an interesting experiment to watch. What will the online response rate be? How will the Census Bureau have to go about advertising online data entry?

Online survey panels in first-world countries versus developing nations

While reading about the opposition Canadians have to self-driving cars, I ran into this explanation from Ipsos about conducting online surveys in countries around the world:

[Ipsos chart: online survey methodology by country, 2019]

Having online panels is a regular practice among survey organizations. However, I do not recall seeing an explanation like this regarding differences in online panels across countries. The online sample in non-industrialized countries is simply unrepresentative as it reflects “a more ‘connected’ population.” Put another way, the online panel in places like Brazil, China, Russia, and Saudi Arabia reflects the upper class and people who live more like Westerners, not the vast majority of the population. The sample is also smaller in these countries: 500+ rather than 1,000+. Finally, it would be interesting to see how much the data needs to be weighted to “best reflect the demographic profile of the adult population.”

With all these caveats, is an online panel in a non-industrialized country worth it?
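Weighting an unrepresentative panel like this can be sketched with a toy cell-weighting calculation. The demographic categories and shares below are invented for illustration; they are not Ipsos figures:

```python
# Hypothetical illustration: cell-based post-stratification weights.
# Each respondent's weight = the population share of their demographic
# cell divided by that cell's share of the sample.
from collections import Counter

# Assumed toy figures: an online panel that skews heavily urban in a
# country whose population is mostly rural (made-up numbers).
population_shares = {"urban": 0.45, "rural": 0.55}
sample = ["urban"] * 400 + ["rural"] * 100  # 500 respondents

sample_counts = Counter(sample)
n = len(sample)
weights = {
    cell: population_shares[cell] / (count / n)
    for cell, count in sample_counts.items()
}

# Over-represented urban respondents are weighted down (< 1),
# under-represented rural respondents are weighted up (> 1).
print(weights)
```

The larger the gap between the panel and the population, the more extreme the weights become, and the more the weighted estimates lean on a handful of respondents.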

NYT: Yes, don’t trust online polls

Although the purpose here may truly be to discredit Donald Trump, here is another argument in the New York Times against online polls:

“Those do a good job of engaging audiences online, and they do a good job of letting you know how other people who have come to the webpage feel about whatever issue,” said Mollyann Brodie, the executive director for public opinion and survey research at the Kaiser Family Foundation. “But they’re not necessarily good at telling you, in general, what people think, because we don’t know who’s come to that website and who’s taken it.”

Professional pollsters use scientific statistical methods to make sure that their small random samples are demographically appropriate to indicate how larger groups of people think. Online polls do nothing of the sort, and are not random, allowing anyone who finds the poll to vote. They are thus open to manipulation from those who would want to stuff the ballot box. Users on Reddit and 4chan directed masses of people to vote for Mr. Trump in the instant-analysis surveys, according to The Daily Dot. Similar efforts were observed on Twitter and other sites.

Even when there is no intentional manipulation, the results are largely a reflection of who is likely to come to a particular site and who would be motivated enough to participate. Intuitively, it’s no surprise that readers of sites like Breitbart News and the Drudge Report would see Mr. Trump as the winner, just as Mrs. Clinton would be more likely to find support on liberal sites…

“In our business, the key is generalizability,” he said, referring to the ability of a sample group to apply to a wider population. “That’s the core of what we do. Typically, it takes a lot of time, and a lot of effort, and a lot of money to do it.”

One helpful solution may be for media outlets to refuse to use any online polls. Journalists often remind the public that such polls don’t mean anything, yet they consistently offer them on their websites and evening news broadcasts. The polls may serve some marketing purpose – perhaps participants feel more engaged, or the polls give outlets some indication of how many people go beyond passively taking in the content – but why confuse people?
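The self-selection problem the article describes can be illustrated with a small simulation. All of the numbers here (the true level of support, the response probabilities) are made up for the sketch:

```python
# Toy simulation (invented numbers): why a self-selected online poll
# can diverge sharply from the population it claims to measure.
import random

random.seed(42)

POPULATION = 100_000
TRUE_SUPPORT = 0.48  # assumed true share supporting candidate A

# Assumption: candidate A's supporters are three times as likely to
# seek out and answer the open online poll as everyone else.
P_RESPOND_A = 0.15
P_RESPOND_OTHER = 0.05

votes_a = votes_other = 0
for _ in range(POPULATION):
    supports_a = random.random() < TRUE_SUPPORT
    p_respond = P_RESPOND_A if supports_a else P_RESPOND_OTHER
    if random.random() < p_respond:
        if supports_a:
            votes_a += 1
        else:
            votes_other += 1

poll_share = votes_a / (votes_a + votes_other)
print(f"True support: {TRUE_SUPPORT:.0%}, online poll shows: {poll_share:.0%}")
```

With these assumed probabilities the open poll reports roughly 70+ percent support for a candidate who actually has under half, with no ballot-stuffing required, only differential motivation to respond.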

What do those post-debate snap polls tell us?

Ed Driscoll comments on the results of some of the post-debate snap polls:

TRUMP WINS MOST IMMEDIATE POLLS: “The newspaper collected screen shots of 19 ‘snap’ polls conducted immediately after the debate, and in 17 of them, most respondents said Trump won the debate, often by a wide margin. It isn’t just Drudge and Breitbart; Trump also got more votes than Clinton in instant polls at Time, Slate, Variety and other liberal outlets. I can’t explain it, other than to say that perhaps it tells us more about how people view Hillary Clinton than about how Donald Trump actually performed.”

Well, certainly one explanation is a repeat of the “Ron Paul Revolution” days of early 2008 – but as with Paul’s quixotic presidential bid, having a large enough group of dedicated zealots to tilt Internet polls does not necessarily translate into sufficient votes at the ballot box where it counts.

It seems safe to say that Trump’s core followers are much more passionate than Hillary’s. We’ll know soon enough if there are a majority of them.

The large issue with these snap polls is that they are unrepresentative. We don’t know who answered them and in what numbers. As suggested here, perhaps Donald Trump has more active followers who take such polls.

At the same time, if there are consistent patterns in non-helpful polls like this, perhaps they can provide insights into concerted online efforts. They may not reveal much about the electorate at large but they could help us understand patterns of partisans. Why is it important to “win” such snap polls? Are there dedicated efforts to win and how are these efforts organized?

Ultimately, does this suggest that snap polls are even worse than being unrepresentative: they are regularly used by particular groups to push a message? Winning in any arena is simply too important to be left to real survey methods…

Sociologist discusses why the BBC’s “class calculator” can help the field of sociology

Check out the BBC’s class calculator and this argument from a sociologist about how the calculator matters for sociology:

As an academic sociologist, this take-up, while exciting, is also disconcerting. I am more used to debating social class with my academic peers than seeing the topic taken up so actively in the public arena, and it has been subject to much biting comment. We are deluged by emails complaining about how the calculator puts you in the wrong class, with the wrong labels. Eminent sociologists such as David Rose are concerned with the quality of the social science lying behind the work (do we really need Bourdieu rather than Weber?). Guy Standing is not convinced about our use of his “precariat” (precarious proletariat) term as the label for the most disadvantaged class that we uncover. There are already numerous spoofs and take-offs of the class model and its measurement. Given this furore, I want to explain what we are trying to achieve sociologically with this project. Is this a model of a new kind of accessible social science? Or is it a worrying case of pandering to media headlines?

We are relaxed about people having fun “placing” themselves and discussing this with family and friends, and arguing with us sociologists along the way. It has led to a wider collective discussion on Twitter and Facebook, which we see as a desirable resource for a public-facing sociology in a digital age. We do need to set the record straight, however. The Class Calculator was designed by the BBC to mimic the more complex model we had developed on the basis of the survey data, and the two should not be conflated. As numerous people have pointed out, changing just one response can shift you between different classes. This would not be possible within the latent class analysis we deployed, where all six measures are simultaneously used to allocate class membership. Actually, this kind of simplification was deliberate, as the measures used in the Class Calculator were chosen precisely to make respondents aware of the most important factors in placing people into classes. But it still poses questions about whether we have been simplistic.

Let me be blunt. The concept of class matters, because we need a way of connecting accentuating economic inequalities to social and cultural differences which permeate our society. Rather than seeing our lifestyles and social networks as somehow separate from economic inequalities, there are overlaps that can work together to produce social advantage and disadvantage. For all its problems, the concept of class remains fundamental to making these connections. Sure, we would all rather not live in a class-divided society. But in reality, the markers of class cannot be doubted. Our model seeks to find a way of making these connections, arguing that occupational measures alone are too blunt a tool for this purpose…

In my view, probably the most important finding from our research is the existence of a distinctive “elite” class. We are so used to turning the telescope on the poor and disadvantaged that sociologists have had little to say about those who are at the apex of British society. Sociological studies of class have no specific place for an elite category. What we have shown is that this very wealthy class is now clearly distinguished from all the other classes in Britain, and the economic differences are huge. That is a powerful and unsettling finding.

It is a simple little survey (it took me a few minutes and this was a little longer than it had to be because I was trying to do some mental conversions from dollars to pounds) but it sounds like it might have some potential for research and reflection.

I wonder how well this might work in an American setting. Compared to the United States, Britain is known for being more conscious of class. In contrast, most Americans would prefer to say they are middle class. So, what would happen if PBS or the New York Times or an equivalent news source ran such a survey? Would it be beneficial in that it could help show people where they really fall in society rather than the middle-class aspirations many claim to have?
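The contrast described in the excerpt, between a calculator where changing one answer can flip your class and a latent class model where all measures jointly determine the assignment, can be sketched roughly as follows. The class names, measures, and numbers are invented stand-ins, not the actual BBC or Great British Class Survey model:

```python
# Hypothetical sketch: a single-measure lookup rule versus a joint rule
# that uses all measures at once (a crude stand-in for latent class
# assignment). Three toy measures stand in for the model's six.
import math

# Invented "class profiles" over (income, savings, social contacts),
# each scored 0-10.
profiles = {
    "elite": (9.0, 9.0, 8.0),
    "established middle": (6.0, 5.0, 6.0),
    "precariat": (2.0, 1.0, 2.0),
}

def single_measure_class(income: float) -> str:
    """Calculator-style rule: one measure alone decides the class."""
    if income >= 8:
        return "elite"
    if income >= 4:
        return "established middle"
    return "precariat"

def joint_class(respondent: tuple) -> str:
    """Joint rule: assign to the nearest profile across all measures."""
    return min(profiles, key=lambda c: math.dist(respondent, profiles[c]))

# A respondent whose income sits just under the single-measure cutoff.
print(single_measure_class(7.9), "|", joint_class((7.9, 5.0, 6.0)))
# Nudging income from 7.9 to 8.0 flips the single-measure class but
# leaves the joint assignment unchanged.
print(single_measure_class(8.0), "|", joint_class((8.0, 5.0, 6.0)))
```

This is the point about robustness: when all measures are used simultaneously, a small change to one answer rarely moves a respondent across class boundaries.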

Census Bureau moving to more online data collection to save money

The US Census Bureau is collecting more information online in order to cut costs:

The Census Bureau already has started offering an Internet option to the 250,000 households it selects every month at random for the American Community Survey. Since becoming available in January, more than half the responses have come in on a secure site that requires codes and PIN numbers.

The bureau expects to use the Internet — plus smart phones and other technologies yet to be invented — for the next decennial census, in 2020.

The increasing reliance on technology is designed to save money. The 2010 Census cost $96 per household, including the American Community Survey that has replaced the old long form. That cost has more than doubled in two decades, up from $70 in 2000 and $39 as recently as 1990…

The Census Bureau spent two years running preliminary experiments in how people responded to American Community Survey questions on the computer screen. Five rounds of testing involved tracking eye movements as people scanned a Web page looking for which answer they wanted to check.

The households selected for the survey still get their first contact the old-fashioned way, with a mailed letter telling them the questionnaire is on its way. Then they receive a letter telling them how to respond over the Internet. If they don’t use that option, they get a 28-page paper form a few weeks later.

It is too bad this may be motivated primarily by money; I would hope it would be motivated more by a desire to collect better data and boost response rates. However, I’m glad they seem to have done a good amount of testing. But the article fails to address one of the biggest issues with web surveys: can this technique be used widely across different groups in the US population, or does it work best with certain groups (usually younger people with more Internet access)? This is directly related to how much money can be saved: what percentage of mailed forms or household visits can be eliminated with new techniques? And I would be interested in hearing more about using smartphones. The Internet may be horribly outdated even today for a certain segment of the population. Imagine a Census 2020 app – used via Google Glass.

Zipcar finds more Millennials would rather give up cars than cell phones, computers

A recent Zipcar survey asked this question: “In your daily routine, losing which piece of technology would have the greatest negative impact on you?” Here are the results with the possible answers of TV, mobile phone, computer, or car.

[Chart omitted: survey responses by generation]

There are some clear differences by generation: cars become more highly regarded as age increases. TV is not rated that highly across all groups, though it is clear these days that TV doesn’t really operate primarily through a big screen on top of a piece of furniture. And mobile phones are, by quite a bit, the most valuable of these technologies to Millennials.

Now what exactly this means for what Millennials will do in the future is unclear. Would they choose a smartphone over a car when they need a better job that is only accessible by car? Will they really change where they live over their lifetimes because they value cars less?

It would be nice to have more information about Zipcar’s web survey. Is it a representative sample? If they don’t say anything about it, it makes me nervous…