The Chicago Tribune ran a story on the front page of its website yesterday reporting that Chicago residents are split on the new seating arrangements in new CTA cars. Unfortunately, the story has a fatal flaw: it is based on an unscientific poll.
The aisle-facing, bucket-style seats on the new CTA rail cars have prompted strong reactions among riders — though evenly split pro and con, an unscientific survey suggests.
More than 2,500 people participated in the online poll conducted this month by the Active Transportation Alliance, a Chicago-area group that promotes safe transportation, bicycle use and other alternatives to automobiles.
Forty-nine percent said they would prefer New York-style benches with no defined separation between passengers instead of the individual “scoop” seats that are on the CTA’s new 5000 Series rail cars, the Active Transportation Alliance reported.
Forty-eight percent of respondents said they prefer the scoop, or bucket-style, seats, and 3 percent said they had no preference, the poll found.
“While the poll results are unscientific and it was nearly a draw, one clear conclusion is that transit riders have strong opinions when it comes to issues of comfort and convenience,” said Lee Crandell, director of campaigns for the Active Transportation Alliance. “We’ve shared the results with the CTA and encouraged the agency to always seek input from the transit riders about significant changes to the system.”
While the newspaper perhaps deserves some credit for acknowledging in the first paragraph that the poll was unscientific, it then makes little sense to build the story around it anyway. One could discuss divergent opinions on the seats without relying on an unscientific poll; why not interview a few riders in the “man-on-the-street” style newspapers like? Should the CTA listen to the poll results provided by the Active Transportation Alliance? No: the results suggest that at least some people dislike the new seats, but those people are not necessarily a large number or a majority. In the end, I find this irresponsible. The poll tells us little, and even with the early disclaimer, it is likely to confuse some readers.
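A side note on the numbers themselves: even if this had been a properly randomized poll (which it was not), a 49-to-48 split among roughly 2,500 respondents would fall within the usual margin of error, so the result would be a statistical tie either way. A rough sketch, using the sample size and share reported in the story (the formula assumes simple random sampling, which a self-selected online poll does not satisfy):

```python
import math

n = 2500   # reported number of poll respondents
p = 0.49   # share preferring bench-style seating

# 95% margin of error for a simple random sample of size n.
# This is a best-case figure; it does not apply to an opt-in poll.
moe = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"margin of error: ±{moe * 100:.1f} percentage points")  # ±2.0
```

In other words, the one-point gap between the two options is smaller than the uncertainty even under the most charitable assumptions.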
I also think this story will blow over soon enough. New York riders seem to have done just fine with these seating arrangements and Chicago riders will get used to them as well.
The latest issue of The Atlantic has an interesting article discussing why a number of US military officers are leaving the military. The argument: the military is too bureaucratic and does not practice meritocracy, so the brightest and most entrepreneurial officers leave for other fields.
All of this is interesting but I was struck by the data used for the article. Here is how the author describes the surveys he conducted and draws conclusions from:
In a recent survey I conducted of 250 West Point graduates (sent to the classes of 1989, 1991, 1995, 2000, 2001, and 2004), an astonishing 93 percent believed that half or more of “the best officers leave the military early rather than serving a full career.” By design, I left the definitions of best and early up to the respondents. I conducted the survey from late August to mid-September, reaching graduates through their class scribes (who manage e-mail lists for periodic newsletters). This ensured that the sample included veterans as well as active-duty officers. Among active-duty respondents, 82 percent believed that half or more of the best are leaving. Only 30 percent of the full panel agreed that the military personnel system “does a good job promoting the right officers to General,” and a mere 7 percent agreed that it “does a good job retaining the best leaders.”
This sort of paragraph is very helpful and is toward the front of the story. And the numbers look overwhelming, particularly the first cited figure about 93% believing the best officers leave early.
But there is an issue here: the generalizability of these data. The article says surveys were sent to 250 officers spread across six graduating classes (presumably to help control for time effects). But does this represent West Point graduates as a whole? Does it even represent each graduating class? According to the class page for the graduating class of 2004, there were almost 1,200 entering students. Even if a fair number leave before graduating, that is far more than the 40 or so who would have been surveyed under equal representation across the six classes (250 total surveys divided by six graduating classes).
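To make the arithmetic concrete, here is the back-of-the-envelope calculation. The 1,200 figure is the approximate entering class size cited above; equal representation across the six classes is my own assumption, since the article does not break the 250 down by class:

```python
surveys_total = 250
classes = 6
per_class = surveys_total / classes  # ~42 respondents per class if evenly split

entering_2004 = 1200  # approximate entering class size for 2004
fraction = per_class / entering_2004

print(f"~{per_class:.0f} respondents per class, "
      f"about {fraction:.1%} of one entering class")
```

Roughly 3.5 percent of a single entering class is a thin slice on which to rest claims about West Point graduates in general.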
This does not necessarily mean that the survey results and their interpretation are wrong. But it should cast doubt: does this survey really speak for all West Point graduates, or, more broadly, for military officers as a whole? While conducting some sort of survey is better than simply working with anecdotes one hears from officers and veterans, this survey could still be improved so that the results could be generalized to all officers. We need a larger N of officers in order to have results we could really trust.
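For a sense of what a larger N would mean in practice, the standard sample-size formula gives the N needed for a given margin of error under simple random sampling. This is only illustrative: it assumes respondents are drawn at random, whereas reaching graduates through class-scribe e-mail lists introduces self-selection that a bigger N alone cannot fix:

```python
import math

z = 1.96    # z-score for 95% confidence
moe = 0.05  # desired margin of error: plus or minus 5 percentage points
p = 0.5     # worst-case (most conservative) proportion

# Standard sample-size formula for estimating a proportion
n_required = math.ceil(z**2 * p * (1 - p) / moe**2)
print(n_required)  # prints 385
```

So even a modest 5-point margin of error would call for roughly 385 randomly selected respondents, and tighter margins or per-class estimates would require considerably more.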