Asking residents of Burbank, CA about their thoughts on mansionization

A recent survey in Burbank, California, asked residents about possible mansionization in the city:

A new survey of residents in Burbank, California, is trying to quantify some of this local frustration. Using images of seemingly out-of-place new houses within the city’s older neighborhoods, the online poll tries to get at both the “gut reactions” that city residents have to these “mansionized” houses and their overall willingness to create new laws to control the growth of house size.

Burbank last limited the size of new home construction in 2005, when it reduced the ratio of house square footage to total lot size, from 0.6 to 0.4. But even these new regulations allow for homes far larger than the average size across the city, according to Carol Barrett, the city’s assistant director for planning and transportation. She says the poll is designed to gauge the community’s interest in creating further size restrictions, as well as new guidelines for architectural style and building materials.

“It’s not just an issue that the houses are bigger,” Barrett says. Another important question, she explains, would be: “Is it just a giant box with some precast concrete stuck on for a little decorative design, or does it have a specific architectural character?”

All of this could be seen as largely a matter of taste. But the awkward images in the survey, of giant, Spanish-style mini-mansions dwarfing the decades-old bungalows and ranch houses next door, are awfully convincing. Below are some of the most telling images from the survey, which Barrett culled from suggestions by local citizen groups like Preserve Burbank and coworkers in city hall.
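To make the 2005 ratio change concrete, here is a quick sketch of the arithmetic (the lot size below is hypothetical, not a figure from the article):

```python
def max_house_sq_ft(lot_sq_ft, ratio):
    """Maximum house size under a cap on the house-to-lot square footage ratio."""
    return lot_sq_ft * ratio

# Hypothetical 7,000 sq ft Burbank lot:
print(max_house_sq_ft(7000, 0.6))  # 4200.0 sq ft allowed under the old 0.6 cap
print(max_house_sq_ft(7000, 0.4))  # 2800.0 sq ft allowed under the 2005 cap
```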

I like the idea of a survey about mansionization. Here are a few thoughts on such a survey:

1. Having a decent survey response rate might be the biggest issue. Getting a representative sample from a city of just over 100,000 people is not necessarily easy (see the sample-size sketch after this list). On one hand, people have more survey fatigue than ever; on the other hand, suburbanites tend to take threats to their neighborhoods and property values very seriously.

2. Linking people’s “gut reactions” to particular policy changes is an important step. I suspect, based on the pictures shown, people would respond fairly negatively to mansionization. But responses could go a number of ways: it sounds like the survey asks about several policy options to limit house size, and I wonder if there are a few residents who would argue for property rights (and the ability to make lots of money when selling their property).

3. The pictures included in the survey are very helpful: people need to see exactly what such houses might look like rather than imagine what might be the case. However, the particular pictures might influence responses, as mansionization can take multiple forms.
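On the sampling point in #1, the arithmetic of sample size is actually the easy part. Here is a minimal sketch of the standard calculation (the 100,000 figure comes from the post; the rest is the usual textbook formula with a finite population correction):

```python
import math

def sample_size(N, e=0.05, z=1.96, p=0.5):
    """Simple-random-sample size for a +/-e margin of error at 95% confidence,
    with a finite population correction for a population of size N."""
    n0 = (z ** 2) * p * (1 - p) / (e ** 2)
    return math.ceil(n0 / (1 + (n0 - 1) / N))

print(sample_size(100_000))          # ~383 respondents for +/-5 points
print(sample_size(100_000, e=0.03))  # ~1056 respondents for +/-3 points
```

A few hundred random responses would do for a citywide estimate; the hard part is that the people who choose to answer an online poll about mansionization are rarely a random draw from the city.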

I would be really curious to see how residents respond.

Chicago Tribune editorial against “survey mania”

The Chicago Tribune takes a strong stance against “survey mania.”

Question 1: Do you find that being pelted by survey requests from your bank, cable company, doctor, insurance agent, landlord, airline, phone company — and so on — is annoying and intrusive?

Question 2: Do you ignore all online and phone requests for survey responses because, well, your brief encounter with a bank teller doesn’t really warrant a 15-minute exegesis on the endearing time you spent together?

Question 3: Don’t you wish that virtually every company in America hadn’t succumbed to survey mania at the same time, so that you’d feel, well, a little more special when each request for your precious thoughts pings into your email?

Question 4: Do you wish that companies would spend a little less on surveys and a little more on customer service staff, so that callers would not be held captive by soul-sucking, brain-scorching, automated answering systems in which a chirpy-voiced robot only grudgingly ushers your call — “which is very important to us, which is still very important to us” — to a human being?

Question 5: Do you agree that blogger Greg Reinacker laid out some reasonable guidelines for companies that send surveys to customers: “Tell me how long it’s going to take. Even better, tell me exactly how many questions there will be. … Don’t ask me the same question three different ways just to see if I’m consistent. … If you really, really want me to take the survey, offer me something. I’m a sucker for free stuff. And a drawing probably won’t do it.”

Question 6: Do you think companies should be aware that a pleasant experience — a flight, a hotel stay, a cruise — can be retroactively tainted by an exhausting survey and all those nagging email reminders that you haven’t yet filled it out?

Question 7: Do you find it irritating when a salesperson tries to game the system by reminding you over and over that only an excellent rating for his or her service will suffice … before said service has been rendered to you?

Question 8: Do you agree that there are ample opportunities to put in a good word for, say, an excellent waiter or sales clerk or customer service agent (just ask to speak to his or her supervisor!), which is much more sincere than you unhappily trudging through a long multiple-choice online questionnaire?

Question 9: Are you aware that marketing professors tell us that these surveys can be vitally important for companies to improve their service and that employee bonuses and other incentives hinge on whether you rate their service highly or not? We’re dubious, too, but just in case it’s true … would you please tell our boss how great you think this editorial is? Use all the space you need.

We get it – some people think they are being asked to do too many surveys. At the same time, this hints at some larger issues with surveys:

1. Companies and organizations would love to have more data. This reminds me of part of the genius of Facebook – people voluntarily give up their data because they get something out of it (the chance to maintain relationships with people they know).

2. Some of the problems listed above could be fixed easily. Take #7: salespeople can simply be too pushy in trying to get data.

3. Some things in #5 could be done easily while others listed there are harder. It should be common practice to tell survey takers how long the survey might take. But asking about a topic multiple times is often important to see if people answer consistently; this is a standard check on the reliability of the data (see the sketch after this list).

4. I think more consumers would like to receive more for participating in surveys. This could come in the form of incentives, everything from free or cheaper products to special opportunities. At the least, they don’t want to feel used or like just another data point.

5. Survey fatigue is a growing problem. This makes collecting data more difficult for everyone, including academic researchers.
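On the consistency point in #3: one common way researchers quantify whether reworded versions of the same question hang together is Cronbach’s alpha. A minimal sketch with made-up responses (the data and the 1-to-5 scale below are hypothetical):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents-by-items matrix of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of item rewordings
    item_vars = items.var(axis=0, ddof=1)      # variance of each reworded item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 5 respondents answering 3 rewordings of the same
# 1-to-5 agreement question.
responses = [[4, 5, 4],
             [2, 2, 3],
             [5, 5, 5],
             [1, 2, 1],
             [3, 3, 4]]
print(round(cronbach_alpha(responses), 2))  # ~0.96
```

An alpha near 1 suggests respondents answered the rewordings consistently; a low alpha flags noisy or careless answers.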

Altogether, I don’t think the quest for survey data is going to end soon because customer or consumer info is so valuable for businesses and organizations. But approaching consumers for data can be done in better or worse ways. To get good data – not just some data – organizations need to offer consumers something worthwhile in return.

Congressional town halls not necessarily indicative of public opinion

I heard two news reports yesterday from two respected media sources about Congressional members holding town halls in their districts about possible military action in Syria. Both reports featured residents speaking up against military action. Both hinted that constituents weren’t happy with the idea. However, how much do town halls like these really tell us?

I would suggest not much. While they give constituents an opportunity to directly address a member of Congress, these events are great for the media: there are plenty of opportunities for heated speeches, soundbites, and disagreement amongst the crowd. One report featured a soundbite of a constituent suggesting that if he were in power, he would charge both the president and his congressman with treason. The other report featured some people speaking for military action in Syria – some Syrian Americans asking for the United States to stand up to a dictator – and facing boos from others in the crowd.

Instead of focusing on town halls, which provide some political theater, we should look to national surveys to gauge American public opinion. Focus on the big picture, not on town halls that provide small, self-selected samples.

Social inertia in time use between the 1960s and today

A sociologist who has examined recent time use surveys suggests not much has changed since the 1960s:

John Robinson, a sociology professor from the University of Maryland whose research has focused heavily on Americans’ time use, said the most striking aspect of the latest American Time Use Survey is how closely it resembles similar information from before the 2008 recession — and from as early as the 1960s when time-use surveys first came into being.

The annual Bureau of Labor Statistics publication documents how Americans spend their time. In 2012, employed people worked for about 7.7 hours each day, spent two hours on household chores and took between five and six hours on leisure activities, with close to three of those hours spent plopped in front of the television…

Although today’s Americans spend their time similarly to their counterparts in the decade of discontent, Mr. Robinson noted some important changes in the by-the-minute breakdown. Men and women spend much more equal amounts of time at work, on housework and on leisure activities than they did in the 1960s.

Time spent watching TV has inched upward with every passing year, and although Mr. Robinson expected Internet use to slowly eat into TV time, the Web has yet to take up a large chunk of Americans’ time. The latest survey found men and women both spend less than 30 minutes of leisure time per workday on the computer.

Regardless, both Internet and TV use fall into the same category of activity: sedentary behavior.

This sounds like a good example of persistent social patterns. Without any official guidelines or norms about how people should spend their time, people are living fairly similarly to how they did in the 1960s. If daily life hasn’t changed much, perhaps it is more important to ask about people’s perceptions of their time use. Do they feel better today about how they spend their days compared to fifty years ago? These perceptions are shaped by a number of factors, including generational change: the younger adults of the 1960s are now the older adults of today.

An easier target for analysis: did people in the past expect that the people of the future would spend so much time watching TV? I doubt it. At the same time, the trend suggests television has some staying power as a form of entertainment and information.

Gans says “public opinion polls do not always report public opinion”

Sociologist Herbert Gans suggests public opinion polls tell us something but may not really uncover public opinion:

The pollsters typically ask people whether they favor or oppose, agree or disagree, approve or disapprove of an issue, and their wording generally follows the centrist bias of the mainstream news media. They offer respondents only two sides (along with the opportunity to say “don’t know” or “unsure”), thus leaving out alternatives proposed by people with minority political views. Occasionally, one side is presented in stronger or more approving language — but by and large, poll questions maintain the balanced neutrality of the mainstream news media.

The pollsters’ reports and press releases usually begin with the asked question and then present tables with the statistical proportions of poll respondents giving each of the possible answers. However, the news media stories about the polls usually report only the results, and by leaving out the questions and the don’t knows, transform answers into opinions. When these opinions are shared by a majority, the news stories turn poll respondents into the public, thus giving birth to public opinion…

To be sure, poll respondents favor what they tell the pollsters they favor. But still, poll answers are not quite the same as their opinions. While their answers may reflect their already determined opinions, they may also express what they feel, or believe they ought to feel, at the moment. Pollsters should therefore distinguish between respondents with previously determined opinions and those with spur-of-the-moment answers to pollster questions.

However, only rarely do pollsters ask whether the respondents have thought about the question before the pollsters called, or whether they will ever do so again. In addition, polls usually do not tell us whether respondents have talked about the issue with family or friends, or whether they have expressed their answer cum opinion in other, more directly political ways.

Interesting thoughts. As far as surveys and polls go, they are only as good as the questions asked. But I wonder if Gans’ suggestions might backfire: what if a majority of Americans don’t have intense feelings about an issue or haven’t thought about it before? What then should be done with the data? Polls today may suggest a majority of Americans care about an issue when the reverse is really true: only a lower percentage of Americans actually follow all of the issues. Gans seems to suggest it is the actively held opinions that matter more, but this seems like it could lead to all sorts of legislation and other action based on a minority of public opinion. Of course, this may be how it really works now, through the actions and lobbying of influential people…

It sounds like the real issue here is how much public opinion, however it is measured, should factor into the decisions of politicians.

Why Public Policy Polling (PPP) should not conduct “goofy polls”

Here is an explanation why the polling firm Public Policy Polling (PPP) conducts “goofy polls”:

But over the past year, PPP has been regularly releasing goofy, sometimes pointless polls about every other month. In early January, one such survey showed that Congress was less popular than traffic jams, France and used-car salesmen. According to their food-centric surveys released this week, Americans clearly prefer Ronald McDonald over Burger King for President; Democrats are more likely to get their chicken at KFC than Chick-fil-A, and Republicans are more apt to order pancakes than waffles. “We’re obviously doing a lot of polling on the key 2014 races,” says Jensen. “That kind of polling is important. We also like to do some fun polls.”

PPP, which has a left-leaning reputation, releases fun polls in part because they’re entertaining but mostly in an attempt to set themselves apart as an approachable polling company. Questions for polls are sometimes crowd-sourced via Twitter. The outfit does informal on-site surveys about what state they should survey next. And when the results of offbeat polls come out, the tidbits have potential to go viral. “We’re not trying to be the next Gallup or trying to be the next Pew,” Jensen says. “We’re really following a completely different model where we’re known for being willing to poll on stuff other people aren’t willing to poll on.” Like whether Republicans are willing to eat sushi (a solid 64% are certainly not).

Which means polls about “Mexican food favorability” are a publicity stunt on some level. Jensen says PPP, which has about 150 clients, gets more business from silly surveys and the ethos it implies than they do cold-calling. One such client was outspoken liberal Bill Maher, who hired PPP to poll for numbers he could use on his HBO show Real Time. That survey, released during the 2012 Republican primaries, found that Republicans were more likely to vote for a gay candidate than an atheist candidate—and that conservative virgins preferred Mitt Romney, while Republicans with 20 or more sexual partners strongly favored Ron Paul.

Jensen argues that the offbeat polls do provide some useful information. One query from the food survey, for instance, asks respondents whether they consider themselves obese: about 20% of men and women said yes, well under the actual American obesity rate of 35.7%.  Information like that could give health crusaders some fodder for, say, crafting public education PSAs. Still, the vast majority of people are only going to use these polls to procrastinate at work: goodness knows it’s hard to resist a “scientific” analysis of partisans’ favorite pizza toppings (Republicans like olives twice as much!).

Here is my problem with this strategy: it is short-sighted and benefits PPP at the expense of the field. While polling firms do need to market themselves, since a number of organizations conduct national polls, this strategy can harm the whole field. When the average American sees the results of “goofy polls,” is it likely to improve their view of polling in general? I argue there is already enough suspicion in America about polls and their validity without throwing in polls that don’t tell us as much. This suspicion contributes to lower response rates across the board, a problem for all survey researchers.

In the end, the scientific nature of polling takes a hit when any firm is willing to reduce polling to marketing.

Unscientific survey results of the day: CTA riders supposedly split on new seating arrangement

The Chicago Tribune had a story on the front page of its website a day ago saying Chicago residents are split on the new seating arrangements in new CTA cars. Unfortunately, the story has a fatal flaw: it is based on an unscientific poll.

The aisle-facing, bucket-style seats on the new CTA rail cars have prompted strong reactions among riders — though evenly split pro and con, an unscientific survey suggests.

More than 2,500 people participated in the online poll conducted this month by the Active Transportation Alliance, a Chicago-area group that promotes safe transportation, bicycle use and other alternatives to automobiles.

Forty-nine percent said they would prefer New York-style benches with no defined separation between passengers instead of the individual “scoop” seats that are on the CTA’s new 5000 Series rail cars, the Active Transportation Alliance reported.

Forty-eight percent of respondents said they prefer the scoop, or bucket-style, seats, and 3 percent said they had no preference, the poll found.

“While the poll results are unscientific and it was nearly a draw, one clear conclusion is that transit riders have strong opinions when it comes to issues of comfort and convenience,” said Lee Crandell, director of campaigns for the Active Transportation Alliance. “We’ve shared the results with the CTA and encouraged the agency to always seek input from the transit riders about significant changes to the system.”

While the newspaper perhaps should get some credit for acknowledging in the first paragraph that this was an unscientific poll, it then makes no sense to base the story on this information. One could talk about divergent opinions on the seats without relying on an unscientific poll. Why not interview a few riders in the “man-on-the-street” style newspapers like to use? Should the CTA listen to the poll results provided by the Active Transportation Alliance? No – they suggest at least a few people don’t like the new seats, but those people aren’t necessarily a large number or a majority. In the end, I find this to be irresponsible. This poll tells us little about anything and, even with the early disclaimer, is likely to confuse some readers.
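There is a second problem the story glosses over: even if the 2,500 responses had come from a probability sample (they did not), a 49% to 48% split would be a statistical tie. A minimal sketch of the margin-of-error arithmetic:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion; only meaningful for random samples."""
    return z * math.sqrt(p * (1 - p) / n)

# Had the 2,500 online responses been a random draw of riders:
print(round(100 * margin_of_error(0.49, 2500), 1))  # ~2.0 points
# So 49% vs. 48% is well within sampling noise even for a scientific poll.
```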

I also think this story will blow over soon enough. New York riders seem to have done just fine with these seating arrangements and Chicago riders will get used to them as well.

Cell phone users now comprise half of Gallup’s polling contacts

Even as Americans are less interested in participating in telephone surveys, polling firms are trying to keep up. Gallup has responded by making sure 50% of people contacted for polling samples are cell phone users:

Polling works only when it is truly representative of the population it seeks to understand. So, naturally, Gallup’s daily tracking political surveys include cellphone numbers, given how many Americans have given up on land lines altogether. But what’s kind of amazing is that it now makes sure that 50 percent of respondents in each poll are contacted via mobile numbers.

Gallup’s editor in chief, Frank Newport, wrote yesterday about the evolution of Gallup’s methods to remain “consistent with changes in the communication behavior and habits of those we are interviewing.” In the 1980s the company moved from door-to-door polling to phone calls. In 2008 it added cellphones. To reflect the growing number of Americans who have gone mobile-only, it has steadily increased the percentage of those numbers it contacts.

“If we were starting from scratch today,” Newport told Wired, “we would start with cellphones.”…

Although it may be a better reflection of society, mobile-phone polling is more expensive, says Newport. They have to call more numbers because the response rate is lower due to the nature of mobile communication.

As technology and social conventions change, researchers have to try to keep up. This is a difficult task, particularly if fewer people want to participate and technologies offer more and more options to screen out unknown requests. Where are we going next: polling by text? Utilizing well-used platforms like Facebook (where we know many people turn every day)?

Pew Research: the response rate for a typical phone survey is now 9% and response rates are down across the board

Earlier this year, Pew Research described a growing problem for pollsters: over 90% of the public doesn’t want to participate in telephone surveys.

It has become increasingly difficult to contact potential respondents and to persuade them to participate. The percentage of households in a sample that are successfully interviewed – the response rate – has fallen dramatically. At Pew Research, the response rate of a typical telephone survey was 36% in 1997 and is just 9% today.

The general decline in response rates is evident across nearly all types of surveys, in the United States and abroad. At the same time, greater effort and expense are required to achieve even the diminished response rates of today. These challenges have led many to question whether surveys are still providing accurate and unbiased information. Although response rates have decreased in landline surveys, the inclusion of cell phones – necessitated by the rapid rise of households with cell phones but no landline – has further contributed to the overall decline in response rates for telephone surveys.

A new study by the Pew Research Center for the People & the Press finds that, despite declining response rates, telephone surveys that include landlines and cell phones and are weighted to match the demographic composition of the population continue to provide accurate data on most political, social and economic measures. This comports with the consistent record of accuracy achieved by major polls when it comes to estimating election outcomes, among other things.

This is not to say that declining response rates are without consequence. One significant area of potential non-response bias identified in the study is that survey participants tend to be significantly more engaged in civic activity than those who do not participate, confirming what previous research has shown. People who volunteer are more likely to agree to take part in surveys than those who do not do these things. This has serious implications for a survey’s ability to accurately gauge behaviors related to volunteerism and civic activity. For example, telephone surveys may overestimate such behaviors as church attendance, contacting elected officials, or attending campaign events.

Read on for more comparisons between those who do tend to participate in telephone surveys and those who do not.
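To see what the drop from 36% to 9% means in practice, here is a back-of-the-envelope sketch of the dialing effort required (the 1,000-interview target is hypothetical):

```python
import math

# Contacts needed to complete 1,000 interviews at the quoted response rates:
completes = 1000
for year, rate in [(1997, 0.36), (2012, 0.09)]:
    print(year, math.ceil(completes / rate))  # 1997: 2778; 2012: 11112
```

Roughly four times the contacts for the same sample size, which is a large part of the rising cost and effort Pew describes.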

This has been a growing problem for years now: more people don’t want to be contacted and it is more difficult to reach cell phone users. One way this might be combated is to offer participants small incentives, as is already done with some online panels and, more commonly, with mail surveys. These incentives wouldn’t be large enough to sway opinion or to attract only people who want the incentive, but they would be enough to raise response rates. They could be thought of as just enough to acknowledge and thank people for their time. I don’t know what the margins of firms like Gallup or Pew are, but I imagine they could offer these small incentives quite easily.

This does suggest that the science of weighting is increasingly important. Having government benchmarks is really important, hence the need for updated Census figures. However, it is not inconceivable that the Census could be scaled back: this is often a conservative proposal, based either on the money spent on the Census Bureau or on the “invasive” questions asked. And it may make the Census even more political, as years of polling might depend on getting the figures “right,” depending on what side of the political aisle one is on.
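A minimal sketch of the simplest form of such weighting, post-stratification: each respondent is weighted by the ratio of their group’s population share (from a benchmark like the Census) to its share of the sample. The shares below are made up for illustration:

```python
# Hypothetical population benchmarks and realized sample shares by age group:
population_share = {"18-29": 0.22, "30-49": 0.34, "50-64": 0.26, "65+": 0.18}
sample_share     = {"18-29": 0.10, "30-49": 0.30, "50-64": 0.32, "65+": 0.28}

weights = {group: population_share[group] / sample_share[group]
           for group in population_share}
print(weights)  # under-represented young adults get weights well above 1
```

The catch is that weighting only fixes imbalance on the variables you benchmark; if nonrespondents differ in unmeasured ways (like the civic engagement gap Pew flags), no amount of reweighting recovers them.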

Argument: we could have skewed survey results because we ignore prisoners

Several sociologists suggest American survey results may be off because they tend to ignore prisoners:

“We’re missing 1% of the population,” said Becky Pettit, a University of Washington sociologist and author of the book “Invisible Men.” “People might say, ‘That’s not a big deal.’” But it is for some groups, she writes — particularly young black men. And for young black men, especially those without a high-school diploma, official statistics paint a rosier picture than reality on factors such as employment and voter turnout.

“Because many surveys skip institutionalized populations, and because we incarcerate lots of people, especially young black men with low levels of education, certain statistics can look rosier than if we included” prisoners in surveys, said Jason Schnittker, a sociologist at the University of Pennsylvania. “Whether you regard the impact as ‘massive’ depends on your perspective. The problem of incarceration tends to get swept under the rug in lots of different ways, rendering the issue invisible.”

Further commentary in the article suggests sociologists and others, like the Census Bureau, are split on whether they think including prisoners in surveys is necessary.

Based on this discussion, I wonder if there is another issue: is getting slightly better survey results by picking up 1% of the population going to significantly affect results and policy decisions? If not, some would conclude it is not worth the effort. But Pettit argues some statistics could change a lot:

Among the generally accepted ideas about African-American young-male progress over the last three decades that Becky Pettit, a University of Washington sociologist, questions in her book “Invisible Men”: that the high-school dropout rate has dropped precipitously; that employment rates for young high-school dropouts have stopped falling; and that the voter-turnout rate has gone up.

For example, without adjusting for prisoners, the high-school completion gap between white and black men has fallen by more than 50% since 1980, says Prof. Pettit. After adjusting, she says, the gap has barely closed and has been constant since the late 1980s. “Given the data available, I’m very confident that if we include inmates” in more surveys, “the trends are quite different than we would otherwise have known,” she says…

For instance, commonly accepted numbers show that the turnout rate among black male high-school dropouts age 20 to 34 surged between 1980 and 2008, to the point where about one in three were voting in presidential races. Prof. Pettit says her research indicates that instead the rate was flat, at around one in five, even after the surge in interest in voting among many young black Americans with Barack Obama in the 2008 race.
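The logic of the adjustment is straightforward to sketch: household surveys cover only the non-institutionalized, so a group-wide rate is a weighted average of the surveyed rate and the (usually very different) rate among the incarcerated. The numbers below are illustrative, not Pettit’s actual estimates:

```python
# Illustrative adjustment of a turnout rate for a group with high incarceration:
surveyed_turnout = 1 / 3     # "one in three" from household surveys
incarcerated_share = 0.30    # hypothetical share of the group behind bars
prisoner_turnout = 0.0       # most inmates cannot vote

adjusted = ((1 - incarcerated_share) * surveyed_turnout
            + incarcerated_share * prisoner_turnout)
print(round(adjusted, 2))  # ~0.23: closer to "one in five" than "one in three"
```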

It will be interesting to see how this plays out.