Does urbanization in America explain the declining deaths by lightning strike?

Here is an interesting research question: is urbanization responsible for the sharp decline in the number of Americans who die by lightning strike each year?

In the lightning-death literature, one explanation has gained prominence: urbanization. Lightning death rates have declined in step with the rural population, and rural lightning deaths now make up a far smaller percentage of all lightning deaths. Urban areas afford more protection from lightning. Ergo, urbanization has helped make people safer from lightning. Here’s a graph showing this, neat and clean:

And a competing perspective:

I spoke with Ronald Holle, a meteorologist who studies lightning deaths, and he agreed that modernization played a significant role. “Absolutely,” he said. Better infrastructure in rural areas—not just improvements to homes and buildings, but improvements to farming equipment too—has made rural regions safer today than they were in the past. Urbanization seems to explain some of the decline, but not all of it.

“Rural activities back then were primarily agriculture, and what we call labor-intensive manual agriculture. Back then, my family—my grandfather and his father before that in Indiana—had a team of horses, and it took them all day to do a 20-acre field.” Today, a similar farmer would be inside a fully-enclosed metal-topped vehicle, which offers excellent lightning protection. Agriculture has declined as a share of the activities associated with lightning deaths, as the graph below shows, but unfortunately the graph does not show the per capita lightning-death rate among people engaged in agriculture.

Sounds like more data is needed! I wonder how long it would take to collect the relevant information versus the payoff of the findings…

More broadly, this hints at how human interactions with nature have changed, even in relatively recent times: we are more insulated from the effects of weather and nature. During the recent cold snap in the area, I was reminded of an idea I had a few years ago to explain why so many adults seem to talk about the weather. Could it be related to the fact that the weather is perhaps the most notable thing on a daily basis that is outside of our control? As 21st century humans, we control a lot that is in front of us (or at least we think we do) but can do little about what the conditions will be like outside. We have more choices than ever about how to respond, but the weather prompts responses from everyone, from the poor to the wealthy, the aged to the young.

Sociology departments “holding steady” across American colleges

Inside Higher Ed summarizes a new report from the American Sociological Association on the state of sociology departments across the country. A few highlights:

“We’re doing relatively well,” said Roberta Spalter-Roth, director of research and development for the ASA. “We aren’t doing as well as we would like to be, but we’re doing relatively well compared to other disciplines,” such as physics and foreign languages, which have seen widespread closures in recent years…

One noticeable finding is that bigger sociology departments actually have decreased their employment of adjunct faculty, bucking a long-term, national trend toward hiring more adjuncts across disciplines. That probably accounts for the fact that tenure-line faculty workloads at those kinds of institutions have gone up, Spalter-Roth said. She called the latter trend “problematic.”…

There also was a slight “graying” of the faculty, the survey notes, with the most growth in the associate professor ranks. In 2001-2, departments had, on average: three full professors, two associate professors, and two assistant professors. In 2011-12, they had: 3.7 full professors, three associate professors, and 2.6 assistant professors. The study calls the distribution pattern an “inverted triangle,” with more full professors than assistant professors…

Spalter-Roth said the data was mostly for internal use to report on the data-driven profession, but would also be available to individual departments to report back to their institutions. The association usually surveys departments on different matters every five years, she said.

See the full report here.

It is too bad there aren’t similar figures from other disciplines to compare to. Without good comparisons, the ASA can only compare to ten years ago and not assess the relative movements among disciplines. Isn’t that probably what sociologists really want to know?

It is a little amusing that the ASA collects such data and produces a number of reports on things like mismatches between graduate students’ subject-area interests and jobs and the state of jobs in the discipline. Should we expect much different from a data-driven discipline? At the same time, shouldn’t other disciplines collect similar data to better serve their members? I don’t know what kind of personnel or offices are required to pull off such research but I assume there is some added value to collecting it and distributing the results.

Studying religiosity by text messages and three-minute surveys

A new study of religiosity utilizes text messages and short surveys:

After signing up on soulpulse.org, users receive text messages twice a day for 14 days that direct them to a 15 to 20-question survey. These questions gather data on daily spiritual attitudes and physical influences at points during the day, such as quality of sleep, amount of exercise and alcohol consumption. The average length of time required to complete the survey is around three minutes, and the survey is designed with simplicity and ease of use in mind.

At the end of the two-week testing period, the reward for participants is a comprehensive review of their data that allows them to see and learn more about their spiritual mindsets. In return, the research team is given the opportunity to analyze the information that they have collected. Wright said they have already found that people report the greatest feelings of spirituality on Sundays and the least amount on Wednesdays.

A collection of three-minute surveys, however, took months of collaboration across the country to complete. 18 months of planning and 10 trips to Silicon Valley were necessary, as well as a team of people who each contributed a unique skillset to the group. The Soulpulse team consists of four computer programmers, three public engagers and six academic advisors – including UConn professors Crystal Park and Jeremy Pais.

Measuring religiosity is well established in sociology, but it often relies on people reporting on their past behavior. For example, some sociologists suggest church attendance figures are regularly inflated. Using text messages allows for more up-to-date data: the goal is to quickly interrupt people’s activity and get a more accurate account of their religious behavior in the moment.

Generally, I would guess sociology and other social science fields are headed in this direction for data collection: less formal and more minute-to-minute. In the past, some of this was done with time diaries or logs. But even these posed problems, as at the end of the day a person might misremember or reinterpret their earlier actions. Utilizing text messages, pop-up Internet surveys, or other means could yield better data, utilize newer technologies respondents regularly engage with, and perhaps even take less time in the long run.
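
For the technically curious, here is a minimal sketch in Python of the scheduling half of this kind of experience sampling. The twice-a-day, 14-day cadence comes from the SoulPulse description above; everything else (the waking-hours window, the function name, printing rather than actually sending texts) is my own assumption, not SoulPulse’s system:

```python
# A toy experience-sampling scheduler: pick random prompt times within
# each day's waking window, one prompt per equal slot so the two daily
# prompts cannot land back to back.
import random
from datetime import date, datetime, time, timedelta

def prompt_schedule(start: date, days: int = 14, prompts_per_day: int = 2,
                    waking_start: int = 8, waking_end: int = 22) -> list[datetime]:
    """Return randomized prompt times for each day of the study."""
    schedule = []
    window_minutes = (waking_end - waking_start) * 60
    slot = window_minutes // prompts_per_day
    for d in range(days):
        day = start + timedelta(days=d)
        for i in range(prompts_per_day):
            offset = i * slot + random.randrange(slot)
            schedule.append(
                datetime.combine(day, time(hour=waking_start)) + timedelta(minutes=offset)
            )
    return schedule

for t in prompt_schedule(date(2014, 1, 6))[:4]:
    print(t.strftime("%a %Y-%m-%d %H:%M"))  # when to text the survey link
```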

Reminder to journalists: a blog post garnering 110 comments doesn’t say much about social trends

In reading a book this weekend (a review to come later this week), I ran across a common tactic used by journalists: looking at the popularity of websites as evidence of a growing social trend. This particular book quoted a blog post and then said “The post got 110 comments.”

The problem is that this figure doesn’t really tell us much about anything.

1. These days, 110 comments on an Internet story is nothing. Controversial articles on major news websites regularly garner hundreds, if not thousands, of comments.

2. We don’t know who exactly was commenting on the story. Were these people who already agreed with what the author was writing? Was it friends and family?

In the end, citing these comments runs into the same problems that face poorly done web surveys: we don’t know whether the commenters are representative of Americans as a whole. That doesn’t mean blogs and comments can’t be cited at all, but we need to be very careful about what these sites tell us, what we can know from the comments, and who exactly they represent. A random sample of blog posts might help, as would a more long-term study of responses to news articles and blog posts. But simply saying that something is an important issue because a bunch of people were moved enough to comment online may not mean much of anything.
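
To see why, consider a toy simulation (my own illustration with made-up rates, not data from the book): if readers with strong feelings comment at much higher rates than everyone else, a thread’s comment count mostly measures a vocal minority:

```python
# Self-selection in comment sections: a small, motivated minority can
# generate most of the comments, so the count says little about readers.
import random

random.seed(42)
POPULATION = 100_000
P_STRONG = 0.10            # 10% of readers feel strongly about the issue
P_COMMENT_STRONG = 0.02    # strong feelers comment 2% of the time
P_COMMENT_OTHER = 0.0005   # everyone else comments 0.05% of the time

comments_strong = sum(
    random.random() < P_COMMENT_STRONG for _ in range(int(POPULATION * P_STRONG))
)
comments_other = sum(
    random.random() < P_COMMENT_OTHER for _ in range(int(POPULATION * (1 - P_STRONG)))
)
total = comments_strong + comments_other
print(f"{total} comments, {comments_strong / total:.0%} from the 10% minority")
# A thread with a couple hundred comments, most from a small minority,
# tells us almost nothing about the other ~99.8% of readers.
```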

Thankfully, the author didn’t use this number of blog comments as their only source of evidence; it was part of a larger story with more conclusive data. However, it might simply be better to quote a blog post like this as an example of what is out on the Internet rather than try to follow it with some “hard” numbers.

Both eHarmony.com and Match.com claim to be #1 sites for marriages. Who is right?

After recently seeing ads from both eHarmony.com and Match.com claiming they are #1 in marriages, I decided to look into their claims. First, from Match.com:

Research Study Overview & Objectives
In 2009 and 2010, Match.com engaged research firm Chadwick Martin Bailey to conduct three studies to provide insights into America’s dating behavior: a survey of recently married people (“Marriage Survey”), a survey of people who have used online dating (“Online Dating Survey”), and a survey of single people and people in new committed relationships (“General Survey”).
Key Findings: Marriage Survey
• 17% of couples married in the last 3 years, or 1 in 6, met each other on an online dating site. (Table 1)
• In the last year, more than twice as many marriages occurred between people who met on an online dating site than met in bars, at clubs and other social events combined. (Table 1)
• Approximately twice as many recently married couples met on Match.com than the site that ranked second. (Table 2)

The data is from 2009-2010. And from eHarmony.com:

SANTA MONICA, Calif. – June 3, 2013 – New research data released today, “Marital Satisfaction and Breakups Differ Across Online and Offline Meeting Venues” published in Proceedings of the National Academy of Sciences (PNAS) shows eHarmony ranks first in creating more online marriages than any other online site.* The study also ranks eHarmony first in its measures of marital satisfaction.* Data also shows eHarmony has the lowest rates of divorce and separation than couples who met through all other online and offline meeting places.

eHarmony Ranked #1 for Number of Marriages Created by an Online Dating Site

The largest number of marriages surveyed who met via online dating met on eHarmony (25.04%)

eHarmony Ranked #1 for Marital Satisfaction by an Online Dating Site

The happiest couples meeting through any means met on eHarmony (mean = 5.86)…

*John T. Cacioppo, Stephanie Cacioppo, Gian C. Gonzaga, Elizabeth L. Ogburn, and Tyler J. VanderWeele (2013) Marital satisfaction and break-ups differ across on-line and off-line meeting venues. Proceedings of the National Academy of Sciences (www.pnas.org/lookup/suppl/doi:10.1073/pnas.1222447110/-/DCSupplemental)

Just based on these brief descriptions from their own websites, here is which number I would trust more: eHarmony.com. Why?

1. More recent data. Data that is a few years old is eons old in Internet time. People on dating sites today likely want to know the marriage rates today.

2. A more reliable publication venue as well as a more scientific method. It looks like Match.com hired a firm to do a study for them, while the eHarmony.com data comes from a respectable academic journal.

When two companies both claim to be number one, it is not necessarily the case that one is lying or that one has to be wrong. However, it does help to compare their data sources, see what their claims are based on, and then make a decision as to which number you are more likely to believe.

Journalists: stop saying scientists “proved” something in studies

One comment after a story about a new study on innovation in American films over time reminds journalists that scientists do not “prove” things in studies.

The front page title is “Scientist Proves…”

I’m willing to bet the scientist said no such thing. Rather it was probably more along the lines of “the data gives an indication that…”

Terms in science have pretty specific meanings that differ from our day-to-day usage. “Prove” and “theory,” among others, are such terms. Indeed, science tends to avoid “prove” or “proof.” To quote another article, “Proof, then, is solely the realm of logic and mathematics (and whiskey).”

[end pedantry]

To go further, using the language of proof/prove tends to convey a particular meaning to the public: the scientist has shown, without a doubt and in 100% of cases, that a causal relationship exists. This is not how science, natural or social, works. We tend to say outcomes are more or less likely. There can also be relationships that are not causal – correlation without causation is a common example. Similarly, a relationship can still be true even if it doesn’t apply to all or even most cases. When teaching statistics and research methods, I try to remind my students of this. Early on, I suggest we are not in the business of “proving” things but rather looking for relationships between things using methods, quantitative or qualitative, that still have some measure of error built in. If we can’t have 100% proof, that doesn’t mean science is dead – it just means that done correctly, we can be more confident about our observations.
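
A quick simulation can make the correlation-without-causation point concrete. In this sketch (my own toy example, not from the story above), a lurking variable z drives both x and y, so the two end up strongly correlated even though neither causes the other:

```python
# Correlation without causation: x and y share a hidden common cause z,
# so they correlate strongly despite having no causal link to each other.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
z = rng.normal(size=n)            # hidden common cause
x = 2 * z + rng.normal(size=n)    # x depends on z, not on y
y = -3 * z + rng.normal(size=n)   # y depends on z, not on x

r = np.corrcoef(x, y)[0, 1]
print(f"corr(x, y) = {r:.2f}")    # about -0.85, yet neither causes the other
```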

See an earlier post regarding how Internet commenters often fall into similar traps when responding to scientific studies.


“The Best Map Ever Made of America’s Racial Segregation”

This is a lofty claim, but the maps clearly show racially divided neighborhoods in American cities. What makes these maps so good?

1. Data and mapping software that allow for mapping at smaller levels. Instead of focusing on municipal boundaries, counties, or census tracts, we can now get at smaller units of analysis (see the sketch after this list).

2. The colors on these maps are visually interesting. I don’t know how much they play around with that but having an eye-popping map doesn’t hurt.

3. Perhaps most important: there are clear patterns to map here. As documented clearly in American Apartheid twenty years ago, American communities are split on racial and ethnic lines.
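
For those wondering how such maps get made, here is a rough dot-density sketch in Python. It is only an illustration under assumptions: the shapefile name and population columns are hypothetical, and the actual map in question works at the even finer census-block level with one dot per person:

```python
# A minimal dot-density map: one dot per N people, placed randomly
# within each census tract and colored by the group it represents.
import random

import geopandas as gpd
import matplotlib.pyplot as plt
from shapely.geometry import Point

def random_points_in(polygon, n):
    """Rejection-sample n random points inside a polygon."""
    minx, miny, maxx, maxy = polygon.bounds
    points = []
    while len(points) < n:
        p = Point(random.uniform(minx, maxx), random.uniform(miny, maxy))
        if polygon.contains(p):
            points.append(p)
    return points

tracts = gpd.read_file("tracts.shp")  # hypothetical tract shapefile
PEOPLE_PER_DOT = 25                   # coarser than the real map's 1:1
colors = {"white_pop": "blue", "black_pop": "green", "hispanic_pop": "orange"}

fig, ax = plt.subplots(figsize=(10, 10))
for _, tract in tracts.iterrows():
    for column, color in colors.items():  # hypothetical column names
        n_dots = int(tract[column] / PEOPLE_PER_DOT)
        pts = random_points_in(tract.geometry, n_dots)
        ax.scatter([p.x for p in pts], [p.y for p in pts], s=0.5, c=color)
ax.set_axis_off()
plt.show()
```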

Is Chicago’s flag “a much bigger deal than” the flags of other big cities?

Here is an argument for “why Chicago’s flag is a much bigger deal than any other city’s flag”:

As reporter Elliott Ramos suggested in a 2011 post for WBEZ, Chicago’s love affair with its flag seems to have taken off in the 1990s, with an influx of young adults into the city. Michael, a kickball player featured on the Chicago Flag Tattoos website, explains why he felt compelled to have the flag permanently emblazoned on his arm: “After moving to Chicago and living here for a few years, Chicago really kind of took a place in my heart, so I thought it’d be a good thing to do.”…

Symbolism aside, the flag’s simple, bold design is the reason it caught on. On his Urbanophile blog, Aaron M. Renn wrote: “In the United States, I’d have to rate Chicago far and away #1 in the use of official civic symbols (maybe the best in the world for all I know), and also note the overall high level of design quality of these objects … If you come to Chicago, you’ll notice that the city flag is ubiquitous.”

It’s enough to make you wonder: Is this a unique local thing? How do other cities’ flags stack up against Chicago’s?

Turns out, many are bland, and a few are downright appalling. Even the good flags aren’t necessarily well-known by the people of their cities.

When the North American Vexillological Association (vexillology is the study of flags) conducted a survey in 2004 ranking the nation’s best city flags, Chicago’s flag received a stellar 9.03 out of 10 possible points. But that was only good enough to land Chicago in the No. 2 spot. No. 2? Who could possibly beat us?

There is some limited evidence here: anecdotal tales that Chicagoans seem to display the flag often, and a high rating from a flag group. But there are several issues at work. First, Chicago’s flag might be “better” than other flags. This is more of an aesthetic or design consideration, where you want to appeal to outside, impartial groups like the North American Vexillological Association. Second, Chicagoans might like their flag or identify with it more than residents of other cities do. Perhaps it indicates that Chicagoans have some decent levels of civic pride. This could be addressed by survey research. Third, Chicagoans might display the flag more often. This is probably the easiest to quantify, and observational data could provide better evidence (perhaps easier to collect these days with Google Street View).
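
To give a flavor of that third option: an observational audit reduces to estimating a proportion. Here is a minimal sketch with entirely made-up counts:

```python
# Hypothetical Street View audit: of 400 sampled Chicago blocks, suppose
# 57 display the city flag. A 95% confidence interval for the true
# proportion, using the normal approximation.
import math

n, flags = 400, 57          # made-up counts, for illustration only
p = flags / n
se = math.sqrt(p * (1 - p) / n)
lo, hi = p - 1.96 * se, p + 1.96 * se
print(f"{p:.1%} of blocks fly the flag (95% CI {lo:.1%} to {hi:.1%})")
# Repeating the same audit in other cities would put the "Chicagoans
# display their flag more" claim on a comparable footing.
```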

Given the evidence presented in this piece, I’m not convinced any of these three options are true…

Comparing ethnography and journalism, stories vs. data

A letter writer to the New York Times unfavorably compares ethnography and social scientific methods to journalism:

David Brooks’s review of George Packer’s book “The Unwinding: An Inner History of the New America” (June 9) is befuddling. First, Brooks praises Packer’s “gripping narrative survey” of recession-era life, comparing it to earlier efforts like that of John Dos Passos. Then, bizarrely, he faults Packer for not providing a “theoretical framework and worldview” that would include “sociology, economics or political analysis.” Narrative description and evocation has for centuries been among the most powerful forms of argument — so powerful, in fact, that the social psychologists Brooks admires appropriated the styles and cloaked them in the pseudoscientific garb of “ethnography” (which we used to call “journalism”).

Have we reached a point where devotion to instrumental reason is so maniacal we can’t handle mere stories anymore? Or perhaps we accept stories only when they’re accompanied by the tenuous methodology of social “scientists.” I would bet that a single profile by Packer, one of America’s best journalists, provides a better snapshot of real life than the legions of sociology and economics articles published since the crash.

Here is someone suspicious of social science. This is not an uncommon position. There is no doubt that stories and narratives are powerful and have a longer history than the social sciences, which developed in and after the Enlightenment. Yet we also live in a world where science and data have become powerful arguments.

Intriguingly, ethnography is a social scientific method that might help bridge this gap between narrative and data. This method differs from journalism in some important ways but also shares some similarities. The ethnographer doesn’t just work with statistics and data from a distance or through a few interviews. Through an extended engagement with the research subject, even living with the subjects for months or years, the researcher gets an insider perspective while also trying to maintain objectivity. The participant observer is engaged with larger social science theories and ideas, trying to understand how more specific experiences and groups line up with larger theories and models. The research case is of interest but the connection to the bigger picture is very important in the end. At the same time, ethnographies are often written in a more narrative style than social science journal articles (unless we are talking about journal articles utilizing ethnography).

Stories and data can both be illuminating. I know which side I tend to favor, hence I’m a sociologist, but I also enjoy narratives and “mere stories.”

Google says their creative interview questions didn’t predict good workers…so why ask them?

Google announced yesterday that their creative and odd interview questions didn’t help them understand who was going to be a good worker. So, why did they ask them?

“We found that brainteasers are a complete waste of time,” Laszlo Bock, senior vice president of people operations at Google, told the New York Times. “They don’t predict anything. They serve primarily to make the interviewer feel smart.”

A list of Google questions compiled by Seattle job coach Lewis Lin, and then read by approximately everyone on the entire Internet in one form or another, included these humdingers:

  • How much should you charge to wash all the windows in Seattle?
  • Design an evacuation plan for San Francisco
  • How many times a day does a clock’s hands overlap?
  • A man pushed his car to a hotel and lost his fortune. What happened?
  • You are shrunk to the height of a nickel and your mass is proportionally reduced so as to maintain your original density. You are then thrown into an empty glass blender. The blades will start moving in 60 seconds. What do you do?

Bock says Google now relies on more quotidian means of interviewing prospective employees, such as standardizing interviews so that candidates can be assessed consistently, and “behavioral interviewing,” such as asking people to describe a time they solved a difficult problem. It’s also giving much less weight to college grade point averages and SAT scores.

The suggestion here is that these were more about the interviewer than the interviewee. Interesting. This is just speculation but here are other potential reasons for asking such questions.

1. They really thought these questions would be a good filter – but they learned better later. Was this initial idea based on research? Experience? Anecdotes? Or did this just sort of happen one time and it seemed to work so it continued? For a company that is all about data and algorithms, it would be interesting to know whether this interviewing practice was based on data.

2. Perhaps Google is trying to project a certain image to potential employees: “We are a place that values this kind of thinking.” The interview at Google isn’t just a typical interview; it is an experience.

3. They wanted to be seen by the wider public as a place that asked these kinds of intimidating/interesting (depending on your point of view) questions. And this image is tied to social status: “Google does something in their interviews that others don’t! They must know something.” Were these questions all part of a larger branding strategy? It would be interesting to know how long they have thought the questions didn’t predict good workers. What does it say about the company now if they are moving on to other methods and more “quotidian”/pedestrian/boring interviewing approaches?