The finding that Chicago suburbs pay more to the state than they get back feeds which narrative?

A new study looks at how much Illinois counties contribute to the state versus how much they receive. The results are lopsided:

For every dollar DuPage County taxpayers send to Springfield, the state returns 31 cents…

Cook County receives 80 cents for every dollar contributed, Lake County gets back 39 cents, Kane County sees 76 cents come back for every dollar, McHenry County sees 42 cents returned and Will County receives 68 cents for every dollar sent to Springfield.

“We have in this state a long-standing legend that downstate is supporting Cook County and Chicago. The farther south you drive, the more virulent that narrative becomes,” said John Jackson, one of the report’s two authors. “The biggest theme of this whole paper is that we make the case that facts are better than fiction in terms of public discourse on this topic.”…

“It’s just because geographic politics are powerful, so it’s in the interest of people running for office downstate to say we’re exporting money to fat cats in Chicago and the suburbs,” Martire said.

Finding evidence that counters one common narrative can be powerful. Narratives develop over time and take on a life of their own. The downstate versus Chicago narrative – probably more accurately, given the realities of metropolitan economies, downstate versus the large Chicago region – has existed for a long time. Arguably, this goes back to the opening decades of the state, when much of the population and power sat in the southern and central regions before the northern part of Illinois opened to settlement in the 1830s and 1840s.

At the same time, this data could be used to promote a different narrative: the Chicago-area counties are unfairly treated by the state. These counties generate a lot of wealth and are penalized for it. Why are the good, hard-working taxpayers of these counties penalized for their success? Why can’t the state keep the money generated there to help address the numerous issues present in the Chicago region? Just based on the data, the situation looks pretty unequal. In the long run, would this narrative (with evidence), with the sides switched, be better for Illinois?

More broadly, these kinds of analyses of geographic disparities in funding present some really thorny issues for larger governmental bodies such as states and the United States as a whole. Balancing urban versus rural interests also goes back to the founding of our country, resulting in key compromises like the Senate being the more powerful chamber with two votes per state regardless of population, and the Electoral College as opposed to a popular vote.

Does it matter if Roseanne is set in a real place?

After thinking about whether Roseanne is set in Elgin, Illinois and the inconsistencies of the show’s location, I arrived at a broader question: does a fictional television show really need a location? And a second question follows: does it serve the writers or the viewers better to have a clear location?

To the first question, I think the answer is no. As noted in the earlier posts, much of the action in television dramas and sitcoms takes place among a limited number of characters in a limited number of locations. In some shows, the characters hardly ever leave their residence or workplace. In other shows, characters are out and about more, but they are often in generic locations that may signal something about a particular city – skyscrapers! lots of traffic! – but do not necessarily depend on a particular location. Think Friends: they are clearly in New York City, yet the unique daily life of the city rarely is part of the plot (perhaps outside of the ongoing question of how people with those kinds of jobs can afford apartments like that). Could the show easily be set in Seattle or London or Houston without substantially altering the key relationships between characters and the narrative arcs? Many shows just need enough information to slot into a typical narrative that fits a location: the big city story, the suburban life, small town doings, etc.

To the second question, I think both the writers and viewers could be served well with some idea of where the show is taking place, even as this geographic identity may mean little for the show. Our everyday lives are highly impacted by the spaces in which we operate, even if critics would argue suburbanization has rendered all the American suburbs the same or globalization has homogenized experiences within and across cultures. It might be hard to truly invest in a story or narrative arc if it literally could take place anywhere. Having a recognizable place or name at least gives people something to work with in their imaginations, even if the shows do not fully explore their geographic context. The small nods to geography can also serve to help differentiate shows from each other: the New York version is slightly different compared to the Los Angeles or the Chicago version. (Again, we usually do not get a broad palette of American locations but rather easily identifiable locations.) If anything, the restricted number of possible locations helps studios, which can make backlots look like many places. (And you can see this on studio tours: we took a tour of Warner Bros. a few years ago where the sets for Gilmore Girls (small-town Connecticut), Desperate Housewives (suburban anywhere), and the generic big city were all a short distance from each other. And once you have viewed these sets up close, you see them all over in commercials, shows, and films.)

Coming back to Roseanne: I do not think it really matters that it is modeled on Elgin, Illinois or uses an exterior shot of a home from Evansville, Indiana. It could easily be set outside of Milwaukee, Cleveland, Buffalo, and dozens of other locations where working-class Americans live. Having a rough approximation of a location outside of Chicago may have helped writers and viewers place the show but it is not terribly consequential for the themes of the show or the characters.

Branding battle: “Chiraq” vs. “Chicago Epic”

Spike Lee and the city of Chicago have opposing views of how the city should be viewed. First, from Lee:

No sooner did the Wrap report that notable director Spike Lee has been tapped by Amazon Studios to make a movie titled Chiraq did the controversy and backlash begin to grow online because of the movie’s title. It wasn’t Lee who coined “Chiraq,” however. Chicago residents who have experienced the deadly shootings in “The Chi” gave it the moniker of Chiraq. The term combines Chicago with Iraq to compare the violence of the two places, as witnessed in the below documentary video previously released about “Chiraq,” but unrelated to Spike’s forthcoming movie…

Alderman Anthony Beale says Chiraq should have a new title, reports CBS Local. Beale adds that he doesn’t care what other name Lee uses for his new movie, but that it shouldn’t be Chiraq due to the violent images it brings forth. The alderman from the 9th ward didn’t make mention of the nickname coming from other sources than Lee.

Another politician was more forgiving of the Chiraq title. Senator Dick Durbin said he’d first like to give Spike a chance to explain what the Chiraq movie is all about before passing judgment. Although he says the Chiraq title is worrisome, he admitted he doesn’t know much more about the movie than the title.

Other politicians weighed in on Spike’s Chiraq, reported the Chicago Sun-Times. Although Alderman Beale continued to point out criticisms, claiming Lee was stigmatizing Chicago with the Chiraq nickname, Mayor Rahm Emanuel refused to go that far. The mayor would only say that he’s focusing on the safety of the city.

Second, this news comes as the city is launching a national ad campaign to bring more tourists to Chicago:

The campaign, dubbed “Chicago Epic,” features a visually diverse 30-second TV commercial and far-flung ambitions. Target markets include San Francisco and Denver, but viewers throughout the country will likely see the spot over the next six weeks. Whether it changes minds about Chicago, or travel plans, remains to be seen…

Choose Chicago is funding the summer campaign with $2.2 million, up slightly from last year. About half of that budget will go to TV and online video. The rest will go to digital advertising, social media and paid search, hoping to sway online travel bookers as they plan their getaways…

Created by ad agency FCB Chicago, an 80-second long-form video was whittled down to a 30-second spot for the TV campaign. The spot features a distinctively Chicago voice urging visitors to be “part of something epic,” incorporating scenes of Divvy bikes, Lollapalooza, North Avenue Beach, Wicker Park and Alinea, recently named the best restaurant in the world by Elite Traveler. The forearms of renowned mixologist Charles Joly, which feature a tattoo of the Chicago flag, also have a starring role. Michael Jordan, the Chicago Theatre marquee and even the Chicago skyline ended up on the cutting-room floor for the edited TV spot…

“I think we’ll make ‘Chicago Epic’ as famous as ‘I Love New York,'” Fassnacht said. “That’s one of our goals — we have to make this iconic.”

There are several ways to view these competing narratives that could go a long way to influence the branding of the city:

1. Both contain elements of truth; neither tells the full story. Chicago has experienced a lot of violence, even with murder rates that are significantly lower than in the past. Chicago has numerous interesting sites, even if many of its neighborhoods don’t match the glittering tourist locations.

2. The city of Chicago has said it wants to boost tourism. This would help bring in more money and boost the city’s profile. Tourism is the sort of industry that can take advantage of existing locations and infrastructure (like the world’s busiest passenger airport) without requiring many big changes.

3. Chicago is clearly a global city, and yet there is ongoing anxiety about whether Chicago can hold on to its spot or whether it can truly compete with the cities at the top of the list.

4. It is unclear which narrative will win out.

The historical (in)accuracy of Assassin’s Creed Unity

Video games can help shape our understandings of historical events. Thus, a debate over the portrayal of the French Revolution in the new Assassin’s Creed:

The former leftist French presidential candidate, Jean-Luc Mélenchon, called it “propaganda against the people, the people who are [portrayed as] barbarians, bloodthirsty savages,” while the “cretin” that is Marie-Antoinette and the “treacherous” Louis XVI are portrayed as noble victims. “The denigration of the great Revolution is a dirty job to instill more self-loathing and déclinisme in the French,” he told Le Figaro. The secretary general of the Left Front, Alexis Corbière, said on his blog:

To all those who will buy Assassin’s Creed: Unity, I wish them a good time, but I also tell them that the pleasure of playing does not stop you from thinking. Play, yes, but do not let yourself be manipulated by those who make propaganda.

Ubisoft, the maker of the Assassin’s Creed series of video games, which has been going since 2007 and has sold more than 70 million copies, is in fact French. One of the makers of the game replied that Assassin’s Creed: Unity is a “consumer video game, not a history lesson” but did say that his team hired a historian and specialists on the Terror and other aspects of the Revolution. Le Monde lays out seven errors in the game here (in French).

In fact, the debate over who are the heroes and villains of the Revolution goes back to the 1790s. British counter-revolutionary thought often focused on the suffering of the monarchy in their stories, such as the King’s tearful goodbye to his family before his execution on Jan. 21st, 1793 or Marie-Antoinette’s perhaps apocryphal last words to her executioner after stepping on his foot just before her head was cut off: “Pardon me sir. I did not mean to do it.”

So perhaps the game simply reflects the ongoing debates over which actors in the French Revolution should be cast as heroes or villains? This all intrigued me because one of my classes recently considered how historical narratives are constructed and then played several historical video games to see how each portrays history. Some games clearly try to impart more historical accuracy – and these seem to be the ones more intent on educational purposes – while others suffer from the gamification of history. This can lead to two things:

1. The games differ in their levels of ambiguity; after all, there has to be a winner. But, even as this debate illustrates, it is not always easy to depict who benefited or should have benefited from particular events. On one hand, it is easy to fight Nazis – they are a video game go-to for a clear enemy – but other events or periods are much murkier. One solution is to simply drop in an outside story – as the Assassin’s Creed line does – and make it up from there.

2. This often means there is the potential to change history. This may just be a modern fad – This American Life recently asked some Americans about time travel and there was a subset of people who wanted to change big events:

Jonathan Goldstein

And even though they’ve been mulling this over for so long, many still reach for the most well-trodden sci-fi comic book staple.

Man 4

My first impulse about time travel is the same one that I would guess that everybody has. You know, thinking that I’m going to go back and I’m going to kill Hitler.

Sean Cole

What’s funny is that they know it’s kind of lame. You can hear it in their voices.

Man 4

Or kill Hitler when he’s a baby, or kill his mother or something.

Jonathan Goldstein

They preface it with phrases like–

Woman 1

It’s the thing everyone always says is–

Sean Cole

And then they say it anyway.

Woman 1

If there hadn’t been a Hitler–

Man 5

Put a bullet in Adolf Hitler’s head when he was still a student, I guess…

And of course, no one imagines that they’ll end up with an iron collar around their neck, working in a quarry. Instead, they have a starring role in the historical docu-drama. Like this guy, who’d set the controls for the Revolutionary War.

Man On Street 2

I don’t think I’d be like, a general in the field or anything like that. But I’d probably be more of like an adviser to Washington. Like Alexander Hamilton was, right? And a few other folks. So yeah.

Jonathan Goldstein

I love how you’re already an officer in this.

Man On Street 2

Exactly. Yeah.

Historical games can pose an interesting “what if?” yet also lead to improbable events or outcomes.

I would guess most of these action-oriented games are not concerned much about historical accuracy outside of how it can enhance the backdrop or the gameplay. Yet, given the sales of these games, the amount of time spent playing them, and who purchases them (often younger people), such games could go a long way toward influencing perceptions of the past.

When anti-government forces can control the public narrative about drone strikes in Yemen

While social media was praised in helping the Arab Spring movement, the new availability of Twitter in Yemen has changed who gets to control the public narrative about violence:

The result: AQAP and the Yemeni public have left the government far behind in an information war made possible by the spread of the Internet in the Arab world’s poorest nation. Authorities can no longer shape the narrative of counterinsurgency, particularly when it comes to controversial drone strikes…But the number of Internet users in the country increased nearly tenfold between 2010 and 2012, according to government figures, although even with that rapid expansion, less than a quarter of Yemenis have regular internet access.

Most drone strikes, which are believed to be US operations, target the most impoverished and isolated parts of Yemen where AQAP operates. The region’s remoteness plays into the group’s hands; it also makes it easy for the government to suppress any negative information, including civilian casualties from drone strikes and other aerial attacks.

But now Yemenis can easily, quickly share on-the-ground information. Last December, an airstrike targeted a wedding convoy, killing roughly a dozen civilians. The government initially identified the casualties as militants, but locals soon began posting photos of the dead on Facebook and tweeting the names of victims, directly challenging the government’s obfuscation.

Sounds like quite a change in a short amount of time. The availability of the Internet and social media threatens all sorts of traditional institutions that have relied on controlling information. All of a sudden, alternative viewpoints are available and regular citizens can pick and choose which to follow, believe, and propagate.

What does this do for American foreign policy? We generally disapprove of regimes that crack down on Internet availability (think China) but this is usually because we want to get our messages through. What happens when the same technologies are used to counter American narratives?

The evolving definition and usage of “selfie”

The word “selfie” was the Oxford Dictionary’s word of the year in 2013 but its usage and meaning continues to evolve:

A selfie isn’t just “a photograph that one has taken of oneself,” but also tends to be “taken with a smartphone or webcam and uploaded to a social media website,” as the editors at Oxford Dictionaries put it. That part is key because it reinforces the reason why we needed to come up with a new name for this kind of self-portraiture in the first place.

Think of it this way: A selfie isn’t fundamentally about the photographer’s relationship with the camera, it’s about the photographer’s relationship with an audience. In other words, selfies are more parts communication than self-admiration (though there’s a healthy dose of that, too).

The vantage point isn’t new; the form of publishing is.

This explains why we call the photo from the Oscars “Ellen’s selfie” — because she was the one who published it. Selfies tether the photographer to the subject of the photo and to its distribution. What better way to visually represent the larger shift from observation to interaction in publishing power?

Ultimately, selfies are a way of communicating narrative autonomy. They demonstrate the agency of the person behind the lens, by simultaneously putting that person in front of it.

The key to the selfie is not that people are taking photos of themselves for the first time in history; rather, they are doing it with new purposes, to tell their own stories to their online public. This is what social media and Web 2.0 are all about: putting the power into the hands of users to create their own narratives. The user now gets to decide what they want to broadcast to others. One scholar described it as giving average people the ability to be a celebrity within their online social sphere. The selfie is also part of a shift toward telling these narratives through images rather than words – think about the relative shift from updating Facebook statuses years ago to now posting interesting pictures on Instagram.

Is the media narrative that bullying directly leads to suicide a social construction?

A member of the Poynter Institute argues the media narrative that bullying leads to suicide is too simple:

The common narrative goes like this: Mean kids, usually the most popular and powerful, single out and relentlessly bully a socially weaker classmate in a systemic and calculated way, which then drives the victim into a darkness where he or she sees no alternative other than committing suicide.

And yet experts – those who study suicide, teen behavior and the dynamics of cyber interactions of teens – all say that the facts are rarely that simple. And by repeating this inaccurate story over and over, journalists are harming the public’s ability to understand the dynamics of both bullying and suicide…

Yet when journalists (and law enforcement, talking heads and politicians) imply that teenage suicides are directly caused by bullying, we reinforce a false narrative that has no scientific support. In doing so, we miss opportunities to educate the public about the things we could be doing to reduce both bullying and suicide…

It is journalistically irresponsible to claim that bullying leads to suicide. Even in specific cases where a teenager or child was bullied and subsequently commits suicide, it’s not accurate to imply the bullying was the direct and sole cause behind the suicide.

I don’t know this literature too well outside of reading some work by Michael Kimmel on gender and bullying and Katherine Newman et al. regarding school shootings. Some thoughts:

1. Bullying is not a good thing, even if it doesn’t lead to tragic outcomes.

2. Even if a majority of kids who are bullied don’t commit suicide, that doesn’t mean there isn’t a relationship. It might be that under certain conditions – perhaps social and environmental conditions, or perhaps more individual physiological traits – this relationship is more likely.

3. It seems that the media generally does not do very well at conveying complex stories. Perhaps it is because they don’t lend themselves to soundbites and headlines. Perhaps it is the need to find the winners, just like on ESPN. Perhaps the audience doesn’t want a complex story. But look at any of the major events of recent years that have drawn a lot of media attention – from invading Iraq to Hurricane Katrina to the Trayvon Martin case – and you see relatively simple narratives for incredibly complex situations. Context matters.

As researchers look more at this issue, this is a reminder that the public perceptions of tragic events matter.

h/t Instapundit