Methodological issues with the “average” American wedding costing $27,000

Recent news reports suggest the average American wedding costs $27,000. But there may be two important methodological issues with this figure: selection bias and the use of an average rather than a median.

The first problem with the figure is what statisticians call selection bias. One of the most extensive surveys, and perhaps the most widely cited, is the “Real Weddings Study” conducted each year by TheKnot.com and WeddingChannel.com. (It’s the sole source for the Reuters and CNN Money stories, among others.) They survey some 20,000 brides per annum, an impressive figure. But all of them are drawn from the sites’ own online membership, surely a more gung-ho group than the brides who don’t sign up for wedding websites, let alone those who lack regular Internet access. Similarly, Brides magazine’s “American Wedding Study” draws solely from that glossy Condé Nast publication’s subscribers and website visitors. So before they do a single calculation, the big wedding studies have excluded the poorest and the most low-key couples from their samples. This isn’t intentional, but it skews the results nonetheless.
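The selection-bias effect is easy to demonstrate with a toy simulation. All of the spending figures and group sizes below are invented purely for illustration, not taken from any survey: the point is only that if the couples who join wedding websites spend more than those who don’t, a survey drawn exclusively from members will overstate what couples in general spend.

```python
import random
import statistics

random.seed(42)

# Hypothetical population (numbers invented for illustration):
# "enthusiast" couples -- the kind who join wedding websites --
# spend more on average than low-key couples who never appear
# in such surveys.
enthusiasts = [random.gauss(28_000, 8_000) for _ in range(6_000)]
low_key = [random.gauss(9_000, 4_000) for _ in range(4_000)]
population = enthusiasts + low_key

# The survey only ever sees the website members.
survey_sample = enthusiasts

print(f"True population mean: ${statistics.mean(population):,.0f}")
print(f"Survey (biased) mean: ${statistics.mean(survey_sample):,.0f}")
```

Even before any averaging tricks, the survey’s figure lands thousands of dollars above the true population figure, simply because of who got asked.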

But an even bigger problem with the average wedding cost is right there in the phrase itself: the word “average.” You calculate an average, also known as a mean, by adding up all the figures in your sample and dividing by the number of respondents. So if you have 99 couples who spend $10,000 apiece, and just one ultra-wealthy couple splashes $1 million on a lavish Big Sur affair, your average wedding cost is almost $20,000—even though virtually everyone spent far less than that. What you want, if you’re trying to get an idea of what the typical couple spends, is not the average but the median. That’s the amount spent by the couple that’s right smack in the middle of all couples in terms of its spending. In the example above, the median is $10,000—a much better yardstick for any normal couple trying to figure out what they might need to spend.
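The toy example above can be checked directly with Python’s standard `statistics` module:

```python
import statistics

# The article's example: 99 couples spend $10,000 each and one
# ultra-wealthy couple spends $1,000,000.
costs = [10_000] * 99 + [1_000_000]

mean = statistics.mean(costs)      # pulled up by the single outlier
median = statistics.median(costs)  # what the typical couple spends

print(f"mean:   ${mean:,.0f}")    # $19,900 -- "almost $20,000"
print(f"median: ${median:,.0f}")  # $10,000
```

One outlier shifts the mean by nearly $10,000 while leaving the median untouched, which is exactly why the median is the better yardstick for a typical couple.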

Apologies to those for whom this is basic knowledge, but the distinction apparently eludes not only the media but some of the people responsible for the surveys. I asked Rebecca Dolgin, editor in chief of TheKnot.com, via email why the Real Weddings Study publishes the average cost but never the median. She began by making a valid point, which is that the study is not intended to give couples a barometer for how much they should spend but rather to give the industry a sense of how much couples are spending. More on that in a moment. But then she added, “If the average cost in a given area is, let’s say, $35,000, that’s just it—an average. Half of couples spend less than the average and half spend more.” No, no, no. Half of couples spend less than the median and half spend more.

When I pressed on why they don’t just publish both figures, they told me they didn’t want to confuse people. To their credit, they did disclose the figure to me when I asked, but this number gets very little attention. Are you ready? In 2012, when the average wedding cost was $27,427, the median was $18,086. In 2011, when the average was $27,021, the median was $16,886. In Manhattan, where the widely reported average is $76,687, the median is $55,104. And in Alaska, where the average is $15,504, the median is a mere $8,440. In all cases, the proportion of couples who spent the “average” or more was actually a minority. And remember, we’re still talking only about the subset of couples who sign up for wedding websites and respond to their online surveys. The actual median is probably even lower.
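One way to see how lopsided those reported figures are: if we assume wedding spending follows a roughly lognormal distribution (an assumption on my part — the surveys don’t publish the full distribution), then the 2012 mean and median alone pin down the share of couples who spent the “average” or more:

```python
from math import log, sqrt, erf

def share_above_mean(mean, median):
    """Share of a lognormal population falling above its own mean.

    For a lognormal distribution, median = exp(mu) and
    mean = exp(mu + sigma**2 / 2), so sigma can be recovered from
    the mean/median ratio alone. The share above the mean is then
    1 - Phi(sigma / 2), where Phi is the standard normal CDF.
    """
    sigma = sqrt(2 * log(mean / median))
    phi = 0.5 * (1 + erf((sigma / 2) / sqrt(2)))
    return 1 - phi

# 2012 Real Weddings figures quoted above
share = share_above_mean(27_427, 18_086)
print(f"{share:.0%} of couples spent more than the 'average'")
# roughly a third -- a clear minority, consistent with the point above
```

Under this (assumed) distribution, only about a third of couples spent the widely reported “average” or more.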

These are common issues with figures reported in the media. Indeed, these are two questions the average reader should ask when seeing a statistic like the average cost of a wedding:

1. How was the data collected? If this journalist is correct about these wedding cost studies, then this data is likely very skewed. What we would want to see is a more representative sample of weddings rather than having subscribers or readers volunteer how much their wedding cost.

2. What statistic is reported? Confusing the mean and median is a big problem and pops up with issues as varied as the average vs. median college debt, the average vs. median credit card debt, and the average vs. median square footage of new homes. This journalist is correct to point out that the media should know better and shouldn’t get the two confused. However, reporting a higher average with skewed data tends to make the number more sensationalistic. It also wouldn’t hurt to have more media consumers know the difference and adjust accordingly.

It sounds like the median wedding cost would likely be significantly lower than the $27,000 bandied about in the media if some basic methodological questions were asked.

Supersized McMansions, supersized roses for Valentine’s Day

I’ve seen McMansions compared to a number of other large consumer items, but until today I had not seen a comparison to flowers:

Leave it to America, land of the Big Gulp, Monster Burger and McMansions, to supersize yet one more thing: the rose.

Make that a six-foot rose, just in time for Valentine’s Day.

This flower-on-steroids — it actually gets this big from special breeding and soils — comes courtesy of several companies, including FTD and The Ultimate Rose. Sales are taking off as florists promote the gargantuan blooms, which also come in three-, four- and five-foot varieties. The companies won’t release exact numbers, but FTD says sales have increased 50% year over year since it started selling the roses four years ago…

Skaff says FTD has already sold out of the five-foot variety and had to order more to meet demand ahead of Valentine’s Day. The Ultimate Rose, which supplies the giant roses to FTD and also sells them on its own site, says sales jump this time of year.

The suggestion here is that the presence of McMansions is related to the presence of six-foot tall roses through the desires of Americans for both because they are large. This seems like a bit of a stretch to me; are the same people buying McMansions and large roses? Are both solely about standing out from the crowd? Overall, this seems like a journalistic shortcut of recent years: when an item becomes larger, compare it to McMansions (and perhaps SUVs and Big Gulps might be other apt comparisons). What if an item becomes smaller – is there a similar go-to comparison?

Reading between the lines of an ABC News story on the bad odds of winning the $500 million Powerball lottery

Check out this ABC News video about the odds of winning the $500 million Powerball lottery.

Several things are striking about the content of the video beyond the bad odds of winning (a 1 in 175 million chance):

1. A journalist admits he doesn’t know much about math or statistics. It is not uncommon for reporters to go to experts like statisticians in times like these (appealing to the expert boosts the credentials of the story) but it is more unusual for journalists to admit they are doing so because they don’t know the information. I’ve argued before we need more journalists who understand statistics and science.

2. The reporter mentions some interesting odds that are more favorable than winning the Powerball. One of these is the idea that you are more likely to be possessed by the devil today than win the lottery. Who exactly keeps track of these figures and how accurate are they?

3. The story includes some talk about being more likely to win in particular states than others. Really? This sounds more like statistical noise or something related to the population of the states with multiple Powerball winners (like Illinois and New Jersey).

4. An interesting closing: the math expert himself has never bought a lottery ticket. So is the moral of the story that people shouldn’t buy any tickets?
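The 1-in-175-million figure itself follows directly from the Powerball format of the time: match 5 white balls drawn from a pool of 59, plus 1 red Powerball drawn from a pool of 35.

```python
from math import comb

# Powerball circa 2012: choose 5 white balls from 59,
# plus 1 red "Powerball" from 35.
white_combos = comb(59, 5)              # 5,006,386 combinations
jackpot_odds = white_combos * 35        # 175,223,510

print(f"Jackpot odds: 1 in {jackpot_odds:,}")  # 1 in 175,223,510
```

That is where the reporter’s “1 in 175 million” comes from — no statistician required, just a combinatorics formula.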

Sociologist defends statistical predictions for elections and other important information

Political polling has come under a lot of recent fire but a sociologist defends these predictions and reminds us that we rely on many such predictions:

We rely on statistical models for many decisions every single day, including, crucially: weather, medicine, and pretty much any complex system in which there’s an element of uncertainty to the outcome. In fact, these are the same methods by which scientists could tell Hurricane Sandy was about to hit the United States many days in advance…

This isn’t wizardry, this is the sound science of complex systems. Uncertainty is an integral part of it. But that uncertainty shouldn’t suggest that we don’t know anything, that we’re completely in the dark, that everything’s a toss-up.

Polls tell you the likely outcome with some uncertainty and some sources of (both known and unknown) error. Statistical models take a bunch of factors and run lots of simulations of elections by varying those outcomes according to what we know (such as other polls, structural factors like the economy, what we know about turnout, demographics, etc.) and what we can reasonably infer about the range of uncertainty (given historical precedents and our logical models). These models then produce probability distributions…

Refusing to run statistical models simply because they produce probability distributions rather than absolute certainty is irresponsible. For many important issues (climate change!), statistical models are all we have and all we can have. We still need to take them seriously and act on them (well, if you care about life on Earth as we know it, blah, blah, blah).
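The kind of simulation the quote describes can be sketched in a few lines. The polling numbers here are made up for illustration: treat the true vote margin as the polled margin plus a random error term, simulate many elections, and report how often the leader actually wins.

```python
import random

random.seed(0)

def simulate_race(poll_margin, poll_error, n_sims=100_000):
    """Toy poll-based forecast (illustrative numbers only).

    Treats the true margin as the polled margin plus normally
    distributed error and counts how often the leading candidate
    wins across many simulated elections. A real model pools many
    polls and structural factors, as the quote describes.
    """
    wins = sum(random.gauss(poll_margin, poll_error) > 0
               for _ in range(n_sims))
    return wins / n_sims

# A 2-point polling lead with a 3-point standard error is far from
# a guaranteed win -- but it is also not a "toss-up".
print(f"P(leader wins): {simulate_race(2.0, 3.0):.2f}")
```

The output is a probability, not a guarantee, which is precisely the point: uncertainty is quantified rather than ignored.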

A key point here: statistical models have uncertainty (we are making inferences about larger populations or systems from samples that we can collect) but that doesn’t necessarily mean they are flawed.

A second key point: because of what I stated above, we should expect that some statistical predictions will be wrong. But this is how science works: you tweak models, take in more information, perhaps change your data collection, perhaps use different methods of analysis, and hope to get better. While it may not be exciting, confirming what we don’t know does help us get to an outcome.

I’ve become more convinced in recent years that one of the reasons polls are not used effectively in reporting is that many in the media don’t know exactly how they work. Journalists need to be trained in how to read, interpret, and report on data. This could also be a time issue; how much time do those in the media have to pore over the details of research findings, or do they simply have to scan for new findings? Scientists can pump out study after study but part of the dissemination of this information to the public requires a media who understands how scientific research and the scientific process work. This includes understanding how models are consistently refined, collecting the right data to answer the questions we want to answer, and looking at the accumulated scientific research rather than just grabbing the latest attention-getting finding.

An alternative to this idea about media statistical illiteracy is presented in the article: perhaps the media knows how polls work but likes a political horse race. This may also be true, but there is a lot of reporting on statistics and data outside of political elections that also needs work.

David Brooks, “Boo-boos in Paradise,” and American public intellectuals

I like David Brooks’ pop sociology analysis of the suburbs in Bobos in Paradise but a piece in Philadelphia magazine suggests Brooks got some of his facts wrong:

Brooks, an agile and engaging writer, was doing what he does best, bringing sweeping social movements to life by zeroing in on what Tom Wolfe called “status detail,” those telling symbols — the Weber Grill, the open-toed sandals with advanced polymer soles — that immediately fix a person in place, time and class. Through his articles, a best-selling book, and now a twice-a-week column in what is arguably journalism’s most prized locale, the New York Times op-ed page, Brooks has become a must-read, charming us into seeing events in the news through his worldview.

There’s just one problem: Many of his generalizations are false. According to sales data, one of Goodwin’s strongest markets has been deep-Red McAllen, Texas. That’s probably not, however, QVC country. “I would guess our audience would skew toward Blue areas of the country,” says Doug Rose, the network’s vice president of merchandising and brand development. “Generally our audience is female suburban baby boomers, and our business skews towards affluent areas.” Rose’s standard PowerPoint presentation of the QVC brand includes a map of one zip code — Beverly Hills, 90210 — covered in little red dots that each represent one QVC customer address, to debunk “the myth that they’re all little old ladies in trailer parks eating bonbons all day.”

But this isn’t the main complaint of this argument: rather, the main problem is that Brooks is considered a public intellectual and his words have a lot of weight:

On the publication of Bobos, New York Times critic Walter Goodman lumped Brooks with William H. Whyte Jr., author of The Organization Man, and David Riesman, who wrote The Lonely Crowd, as a practitioner of “sociological journalism.” (In the introduction to Bobos, Brooks invoked Whyte — plus Jane Jacobs and John Kenneth Galbraith — as predecessors.) In 2001, the New School for Social Research, in Manhattan, held a panel discussion in which real-life scholars pondered the bobo. When, in 2001, Richard Posner ranked the 100 highest-profile public intellectuals, Brooks came in 85th, just behind Marshall McLuhan at 82nd, and ahead of Garry Wills, Isaiah Berlin and Margaret Mead.

Ironically, Richard Florida is granted the final academic say regarding needing more serious public intellectuals:

Richard Florida, a Carnegie Mellon demographer whose 2002 book The Rise of the Creative Class earned Bobos-like mainstream cachet, nostalgizes an era when readers looked to academia for such insights:

“You had Holly Whyte, who got Jane Jacobs started, Daniel Bell, David Riesman, Galbraith. This is what we’re missing; this is a gap,” Florida says. “Now you have David Brooks as your sociologist, and Al Franken and Michael Moore as your political scientists. Where is the serious public intellectualism of a previous era? It’s the failure of social science to be relevant enough to do it.”

Here is what I take away from this: this writer is worried that Brooks (and other New Journalists) are influencing public opinion and possibly public policy more through impressionistic writing than facts and correctly interpreting data.

This could make for an interesting discussion involving things like the role of columnists and opinion-makers (facts or zeitgeists?), why social scientists and sociologists aren’t seen as public intellectuals, and who should guide public policy anyway. It is interesting to note that the American Sociological Association (ASA) gave David Brooks the Excellence in Reporting of Social Issues Award in 2011. I assume the ASA didn’t just give the award because Brooks discusses sociological research or is of the same political/social persuasion as sociologists.

By the way, having read a lot of David Brooks and Tom Wolfe, I wonder how many commentators would suggest these two are engaging in similar techniques.

Sociologist to journalists: “Racism: Not Isolated Incidents but Systemic”

After several recent incidents in East Haven, Connecticut, a sociologist explains why racism is a systemic issue, not a matter of a few racist individuals:

As a sociology professor whose specialties include the study of racism, I am sometimes asked to explain what is happening following such a flurry of racist incidents. That question is based on the faulty assumptions that what is happening now is something new and that what occurred is no more than a disturbing accumulation of isolated incidents of racial bigotry committed by a few Neanderthals who didn’t get the memo that in today’s colorblind America we have moved past all that.

Social structures, racist, or otherwise, don’t just disappear or grow old and die. Consequently, when I get that “what is happening now?” query from the press, I feel like yawning as I mutter, “There you go again.” Lately I have advised reporters to connect the dots. I challenge them to, for once, abandon racism-evasive language such as “race” or “the race issue” and to call the thing what it is, racism, which is by its nature always systemic.

So far, to my knowledge, no reporter has taken my advice. Instead they tend to write stories that, if they even acknowledge a pattern of racist incidents, seem to attribute it to the bad economy, the coming of a full moon or perhaps some foul-smelling concoction that was secretly slipped into our drinking water. Then they go away for another few months; and when still more overtly racist stuff happens, they email again to ask me to explain, once more, what is happening, now.

Unfortunately that type of news reporting supports the dominant response to racism by European Americans — the militant denial of its existence or significance. A very successful racism denial tactic is to conveniently confuse the racial, bigoted attitudes and behaviors of some person of color with systemic racism as a way of suggesting that white racism is no more of a problem than is so-called black racism. On other occasions a person of color may be accused of being a racist for simply bringing up the issue of racism.

This is a message needed for more than just journalists.

I wonder if journalists are any better on this issue than average Americans. On the whole, Americans often privilege individualistic explanations over structural ones for social problems, racial or otherwise. White Americans, in particular, would prefer to act like race doesn’t matter and claim that we should move on. I’ve noted before that the reverse should be true: Americans should have to show that race isn’t involved in social situations instead of suggesting it doesn’t matter until there is incontrovertible proof otherwise.

Don’t dismiss social science research just because of one fraudulent scientist

Andrew Ferguson argued in early December that journalists fall too easily for bad academic research. However, he seems to base much of his argument on the actions of one fraudulent scientist:

Lots of cultural writing these days, in books and magazines and newspapers, relies on the so-called Chump Effect. The Effect is defined by its discoverer, me, as the eagerness of laymen and journalists to swallow whole the claims made by social scientists. Entire journalistic enterprises, whole books from cover to cover, would simply collapse into dust if even a smidgen of skepticism were summoned whenever we read that “scientists say” or “a new study finds” or “research shows” or “data suggest.” Most such claims of social science, we would soon find, fall into one of three categories: the trivial, the dubious, or the flatly untrue.

A rather extreme example of this third option emerged last month when an internationally renowned social psychologist, Diederik Stapel of Tilburg University in the Netherlands, was proved to be a fraud. No jokes, please: This social psychologist is a fraud in the literal, perhaps criminal, and not merely figurative, sense. An investigative committee concluded that Stapel had falsified data in at least “several dozen” of the nearly 150 papers he had published in his extremely prolific career…

But it hardly seems to matter, does it? The silliness of social psychology doesn’t lie in its questionable research practices but in the research practices that no one thinks to question. The most common working premise of social-psychology research is far-fetched all by itself: The behavior of a statistically insignificant, self-selected number of college students or high schoolers filling out questionnaires and role-playing in a psych lab can reveal scientifically valid truths about human behavior…

Who cares? The experiments are preposterous. You’d have to be a highly trained social psychologist, or a journalist, to think otherwise. Just for starters, the experiments can never be repeated or their results tested under controlled conditions. The influence of a hundred different variables is impossible to record. The first group of passengers may have little in common with the second group. The groups were too small to yield statistically significant results. The questionnaire is hopelessly imprecise, and so are the measures of racism and homophobia. The notions of “disorder” and “stereotype” are arbitrary—and so on and so on.

Yet the allure of “science” is too strong for our journalists to resist: all those numbers, those equations, those fancy names (say it twice: the Self-Activation Effect), all those experts with Ph.D.’s!

I was afraid that the actions of one scientist might taint the work of many others.

But there are a couple of issues here and several are worth pursuing:

1. The fact that Stapel committed fraud doesn’t mean that all scientists do bad work. Ferguson seems to want to blame other scientists for not knowing Stapel was committing fraud – how exactly would they have known?

2. Ferguson doesn’t seem to like social psychology. He does point to some valid methodological concerns: many studies involve small groups of undergraduates. Drawing large conclusions from these studies is difficult and indeed, perhaps dangerous. But this isn’t all social psychology is about.

2a. More generally, Ferguson could be writing about a lot of disciplines. Medical research also tends to start with small groups before decisions are made. Lots of research, particularly in the social sciences, could be invalidated if Ferguson were completely right. Would Ferguson really suggest that “Most such claims of social science…fall into one of three categories: the trivial, the dubious, or the flatly untrue”?

3. I’ve said it before and I’ll say it again: journalists need more training in order to understand what scientific studies mean. Science doesn’t work the way journalists suggest, with a steady stream of big findings. Rather, scientists find something and then others try to replicate the findings in different settings with different populations. Science is more like an accumulation of evidence than a lot of sudden lightning strikes of new facts. One small study of undergraduates may not tell us much but dozens of such experiments among different groups might.

4. I can’t help but wonder if there is a political slant to this: what if scientists were reporting positive things about conservative viewpoints? Ferguson complains that measuring things like racism and homophobia is difficult, but this is the nature of studying humans and society. Ferguson just wants to say that it is all “arbitrary” – this is simply throwing up our hands and saying the world is too difficult to comprehend so we might as well quit. If there isn’t a political edge here, perhaps Ferguson is simply anti-science? What science does Ferguson suggest is credible and valid?

In the end, you can’t dismiss all of social psychology because of the actions of one scientist or because journalists are ill-prepared to report on scientific findings.

h/t Instapundit

Naomi Klein may often be considered “radical” but she is not a “sociologist”

Naomi Klein is a popular journalist (most popular book: No Logo) but is she really a “radical sociologist”?

WHEN RADICAL sociologist Naomi Klein addressed the Occupy Wall Street camp in Zuccotti Park in Lower Manhattan last week, she echoed in a rhetorical question what many have asked of Ireland’s passivity in the face of the recent economic crisis. The baffled TV pundits ask why they are protesting, she said. “Meanwhile, the rest of the world asks: ‘What took you so long?’”

Klein may use some sociological ideas and be liked by many sociologists, but I can’t find any evidence she has much of a background in sociology itself. Here is what the biography on her website says about her background:

Naomi Klein is a contributing editor for Harper’s and reporter for Rolling Stone, and writes a regular column for The Nation and The Guardian that is syndicated internationally by The New York Times Syndicate. In 2004, her reporting from Iraq for Harper’s won the James Aronson Award for Social Justice Journalism. Additionally, her writing has appeared in The New York Times, The Washington Post, Newsweek, The Los Angeles Times, The Globe and Mail, El Pais, L’Espresso and The New Statesman, among many other publications.

She is a former Miliband Fellow at the London School of Economics and holds an honorary Doctor of Civil Laws from the University of King’s College, Nova Scotia.

In a 2009 interview, Klein says that she did not finish her undergraduate studies in philosophy and literature at the University of Toronto before beginning her journalism career:

LAMB: Did you get a degree from…

KLEIN: Then I went to the University of Toronto.

LAMB: And your degree is in what?

KLEIN: I studied philosophy and literature, but I actually left when I got offered this job at the Globe and Mail. It was an election – I went as a summer intern, and I had a couple of credits left. And then there was an election campaign, pretty sort of hot election campaign, and they asked me to stay on. And I never actually made it back to school. So yes.

This reminds me of a plenary session I attended at the 2007 American Sociological Association meetings in New York City that featured Klein. The session on globalization featured Klein and well-known economist Jeffrey Sachs (along with two others). See video of this ASA session here (Klein starts speaking at about 46:52). Klein was, to put it mildly, well-received by the crowd of sociologists (applause from 1:20:42 to 1:21:12). On the other hand, Sachs sent in a video, which was probably a smart move on his part as he probably would have not been so warmly received. Here is an example of how the story was spun by those more favorable to Klein’s point of view:

One of the most highly anticipated sessions was to feature Jeffrey Sachs, an internationally known economist and a former special advisor to UN Secretary General Kofi Annan, versus Naomi Klein, the Canadian journalist and author. But shortly before the ASA conference opened, Sachs pulled out. Unclear if it was related to the fact that Naomi Klein takes him on in her forthcoming book, “The Shock Doctrine: The Rise of Disaster Capitalism.”

How long until Klein wins the ASA’s “Excellence in the Reporting of Social Issues Award“?

But, just to repeat, Klein is not a sociologist herself.

What journalists should know about religion

In the last week, several journalists have addressed the issue of how journalists should talk with politicians about religion. Ross Douthat followed up on his August 29th column with a blog post providing examples of what he is trying to address. And last Friday, Amy Sullivan provided a number of steps journalists could take in order to write intelligently about the religious beliefs of politicians.

This brings several thoughts to mind:
1. What happened to religion writers among major newspapers or magazines? I think most of them have disappeared, even respected ones like Cathleen Falsani, who used to write for the Chicago Sun-Times. At a time when religion is alive and influential around the world, media sources don’t have dedicated people who can comment on these particular issues. Asking political writers to write about topics they don’t regularly cover seems like a problem. I know media outlets have had to make major cutbacks in certain areas but there are repercussions for this.
2. The burden seems to be on politicians who have “non-mainstream” religious beliefs to explain how they are not dangers to society. Perhaps this is due to the fact that Americans have more unfavorable feelings toward minority religions like Mormons, Muslims, Buddhists, and atheists/non-religious (not quite a minority “religion”). Of course, much of this debate could really be about whether evangelicals are mainstream or not. Their size would suggest they are mainstream as would their political influence since the late 1970s.

British sociologist wins case against reviewer

Reviews are a key part of the academic world as researchers, journalists, and others assess and judge the work of others. Within this world, a British court recently sided with a sociologist who had sued a reviewer:

A High Court judge ruled that Lynn Barber’s 2008 review of Seven Days in the Art World by Dr Sarah Thornton, a noted sociologist, was “spiteful” and contained serious factual errors. The Telegraph Group, owner of The Daily Telegraph, which published the article, has been ordered to pay Dr Thornton £65,000 in damages.

While the country’s critics regard such factual errors as justifiably punishable, the case still raises questions for scribes who have grown accustomed to saying what they like about whomever they please…

There is a long history of critical clashes. The most high profile are necessarily those that end up in court. In 1998 the journalist and TV presenter Matthew Wright “reviewed” the play The Dead Monkey starring David Soul, calling it “without doubt the worst West End show”. The chink in his armour was that he’d never actually seen it, and Soul won £30,000 in a libel case.

Sometimes, the clashes are less clear cut. One anonymous arts critic told The Independent about three legal threats that had recently landed across his desk, none of which ended up in court, incidents he described as “shots across the bows”. To avoid such clashes, critics may find it necessary to limit how often they tackle certain subjects. “My view is that a critic has to be honest and say what he or she likes,” said Brian Sewell, art critic at the London Evening Standard.

The story suggests there were two components to the lawsuit: “spiteful” comments and factual inaccuracies. I imagine the case was decided in the sociologist’s favor mainly due to the factual errors in the review (which didn’t have to do with the book but with what the reviewer said about an interview with the author) rather than the critical comments, which are common in reviews.

The case reminds me of how I heard one academic describe reviews: they are opportunities to knock down other researchers, and if you are gracious or perhaps even neutral in a review, it can be interpreted as a sign of weakness. Reading some reviewers (academic or journalistic), it is sometimes hard to imagine they would be happy with anything.