Following (or not) the latest fashionable way to revive urban spaces

Blair Kamin dismisses a proposal to create a High Line-like park along LaSalle Street in the Loop in part by appealing to history:

In 1979, as America’s downtowns struggled to meet the challenge of suburban shopping malls, the flavor of the month was the transit mall. Make cities more like suburbs, the thinking went, and they’ll be able to compete. So Chicago cut the number of traffic lanes on State Street from six to two— for buses only — and outfitted the ultrawide sidewalks with trees, flowers and bubble-topped bus shelters…

A recently issued study of the central Loop by commercial real estate brokers Cushman & Wakefield floats the idea of inserting a High Line-inspired elevated walkway through the heart of LaSalle Street. But unlike the High Line or Chicago’s 606 trail, which exude authenticity because they’re built on age-old elevated rail lines, the LaSalle Street walkway would be entirely new — more wanna-be cool than the real thing…

The pathway would combat the perception that LaSalle is a stuffy, “old school” street lined by intimidating temples of finance, the study claims. “With thoughtful modification,” it goes on, “LaSalle Street can become the live-work-play nucleus of the Central Loop.”

Kamin summarizes his proposed strategy:

In short, the way to confront the central Loop’s looming vacancies is to build carefully on existing strengths, rather than reach desperately for a hideous quick fix that would destroy one of the city’s great urban spaces.

A few thoughts in response:

1. Kamin cites two previous fashions – transit malls, linear parks – and cautions against following them. But certainly there are other fashions from the post-World War II urban era that could be mentioned, including large urban renewal projects (often clearing what were said to be “blighted” or slum areas), removing above-ground urban highways (see the Big Dig, San Francisco), mixed-income developments (such as on the site of the former Cabrini-Green high-rises), transit-oriented development, waterfront parks, and more. Are all of these just fashions? How would one know? Certainly, it would be difficult for every major city to simply copy a successful change from another city and expect it to work the same way in a new context. But when is following the urban fashion advisable?

2. How often does urban development occur gradually and in familiar ways versus through more immediate changes or disruptions? My sense is that most cities and neighborhoods experience much more of the first, where change slowly accumulates over years and even decades. The buildings along LaSalle Street have changed, as has the streetscape. But the second might be easy to spot if a big change occurs or something happens that causes residents and leaders to notice how much might change. Gentrification could be a good example: communities and neighborhoods experience change over time, but one of the concerns about gentrification is the speed at which new kinds of change are occurring and what this means for long-time residents.

3. As places change, it could be interesting to examine how much places at the edge of change benefit from being the first or in the beginning wave. Take the High Line: a unique project that has brought much attention to New York City and the specific neighborhoods the park runs through. As cities look to copy the idea, does each replication lose some value? Or is there a tipping point where too many similar parks saturate the market (and perhaps this would influence tourists differently than residents)? I could also see how other cities might benefit from letting earlier adopters try things out and then correcting the issues themselves. If the High Line leads to more upscale development and inequality, later cities pursuing similar projects can address these issues early on.

Sociology = studying facts and interpretations of those facts

David Brooks hits on a lesson I teach in my Social Research class: studying sociology involves both looking for empirical patterns (facts) and examining the interpretations of those patterns, real or not (meanings). Here is how Brooks puts it:

An event is really two things. It’s the event itself and then it’s the process by which we make meaning of the event. As Aldous Huxley put it, “Experience is not what happens to you, it’s what you do with what happens to you.”

In my class, this discussion comes about through reading the 2002 piece by Roth and Mehta titled “The Rashomon Effect: Combining Positivist and Interpretivist Approaches in the Analysis of Contested Events.” The authors argue that research needs to look at what actually happened (the school shootings under study here) as well as how people in the community understood what happened (which may or may not have aligned with what actually happened but had important consequences for local social life). Both aspects might be interesting to study on their own – here is a phenomenon, or here is what people make of it – but together they give researchers a fuller picture of human experience in which facts and meanings interact.

Brooks writes this in the context of the media. A good example of how this applies is journalists looking to spot trends. There are new empirical patterns to spot and point out; new social phenomena develop often (and figuring out where they come from can be a different and complex matter). At the same time, we want to know what these trends mean. If psychologist Jean Twenge says there are troubling patterns resulting from smartphone use among teenagers and young adults, we can examine the empirical data – is smartphone use connected to other outcomes? – and what we think about all of this – is it good that this might be connected to increased loneliness?

More broadly, Brooks is hinting at the realm of the sociology of culture, where culture can be defined as patterns of meaning-making. The ways in which societies, groups, and individuals make meaning of their own actions and the social world around them are very important to study.

Win the suburbs, win 2020; patterns in news stories that make this argument

More than a year away from the 2020 presidential election, one narrative is firmly established: the path to victory runs through suburban voters. One such story:

Westerville is perhaps best known locally as the place the former Ohio state governor and Republican presidential candidate John Kasich calls home. But it – and suburbs like it – is also, Democrats say, “ground zero” in the battle for the White House in 2020…

In 2018, Democrats won the House majority in a “suburban revolt” led by women and powered by a disgust of Donald Trump’s race-based attacks, hardline policy agenda and chaotic leadership style. From the heartland of Ronald Reagan conservatism in Orange county, California, to a coastal South Carolina district that had not elected a Democrat to the seat in 40 years, Democrats swept once reliably Republican suburban strongholds…

“There is no way Democrats win without doing really well in suburbs,” said Lanae Erickson, a senior vice-president at Third Way, a centrist Democratic thinktank…

“There are short-term political gains for Democrats in winning over suburban voters but that doesn’t necessarily lead to progressive policies,” she said. In her research, Geismer found that many suburban Democrats supported a national liberal agenda while opposing measures that challenged economic inequality in their own neighborhoods.

Four quick thoughts on such news reports:

1. They often emphasize the changing nature of suburbs. This is true: the suburbs are becoming more racially, ethnically, and economically diverse. At the same time, this does not mean the change is happening evenly across suburbs.

2. They often use a representative suburb as a case study to try to illustrate broader trends in the suburbs. Here, it is Westerville, Ohio, home to the Tuesday night Democratic debate. Can one suburb illustrate the broader trends in all suburbs? Maybe.

3. They stress that the swing voters are in the suburbs, since city residents are more likely to vote for Democrats while rural residents are more likely to vote for Republicans. It will be interesting to see how Democratic candidates continue to tour through urban areas: will they spend more time in denser urban areas or branch out to middle suburbs that straddle the line between solid Republican bases further from the city and solid Democratic bases closer to the city?

4. Even with the claim that the suburbs are key to the next election, such coverage often sheds little light on long-term trends. As an exception, the last paragraph in the quotation above stands out: suburban voters may turn one way nationally, but this does not necessarily translate into more local political action or preferences.

Slight drop in millennial population in American big cities

The population of big cities may depend on millennials: will they flock to urban locations or leave for the suburbs? New data suggests slightly more of them are headed out of cities:

Cities with more than a half million people collectively lost almost 27,000 residents age 25 to 39 in 2018, according to a Wall Street Journal analysis of the figures. It was the fourth consecutive year that big cities saw this population of young adults shrink. New York, Chicago, Houston, San Francisco, Las Vegas, Washington and Portland, Ore., were among those that lost large numbers of residents in this age group…

The 2018 drop was driven by a fall in the number of urban residents between 35 and 39 years old. While the number of adults younger than that rose in big cities, those gains have tapered off in recent years.

Separate Census figures show the majority of people in these age groups who leave cities move to nearby suburbs or the suburbs of other metro areas.

City officials say that high housing costs and poor schools are main reasons that people are leaving. Although millennials—the cohort born between 1981 and 1996—are marrying and having children at lower rates than previous generations, those who do are following in their footsteps and often settling down in suburbs.

[Chart: millennial population in big cities, 2019 data]

An interesting update: millennials as a whole are leaving cities, but younger millennials are still going to cities while the oldest ones are leaving. Does this confirm the argument that young urbanites will still leave for the suburbs when they form families and have kids?

Maybe, maybe not. It would be helpful to know more:

1. How does the movement of older millennials out of cities compare to previous generations? Are they leaving cities at similar rates or not?

2. Is there significant variation (a) within cities over 500,000 people and (b) within smaller big cities (of which there are many)? The first point could get at some patterns related to housing prices. The second could get at a broader picture of urban patterns by not focusing just on the largest cities.

3. The true numbers to know (which are unknowable right now): what will the numbers be in the future? The chart above suggests some shifts even in the last decade. Which pattern will win out over time (or will the numbers be relatively flat, which they are for a number of the years discussed above)?

Reminder: “Twitter Is Not America”

A summary of recent data from Pew provides the reminder that Twitter hardly represents the United States as a whole:

In the United States, Twitter users are statistically younger, wealthier, and more politically liberal than the general population. They are also substantially better educated, according to Pew: 42 percent of sampled users had a college degree, versus 31 percent for U.S. adults broadly. Forty-one percent reported an income of more than $75,000, too, another large difference from the country as a whole. They were far more likely (60 percent) to be Democrats or lean Democratic than to be Republicans or lean Republican (35 percent)…

First, Pew split up the Twitter users it surveyed into two groups: the top 10 percent most active users and the bottom 90 percent. Among that less-active group, the median user had tweeted twice total and had 19 followers. Most had never tweeted about politics, not even about Twitter CEO Jack Dorsey’s meeting with Donald Trump.

Then there were the top 10 percent most active users. This group was remarkably different; its members tweeted a median of 138 times a month, and 81 percent used Twitter more than once a day. These Twitter power users were much more likely to be women: 65 percent versus 48 percent for the less-active group. They were also more likely to tweet about politics, though there were not huge attitudinal differences between heavy and light users.

In fancier social science terms, this suggests that what happens on Twitter is not generalizable to the rest of Americans. It may not reflect what people are actually talking about or debating. It may not reflect the full spectrum of possible opinions or represent those opinions in the proportions in which they are held throughout the entire country. This does not mean that there is no value in examining what happens on Twitter, but the findings apply mainly to the population that uses the platform.
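To make the generalizability point concrete, here is a minimal sketch with made-up numbers. Only the 138 tweets-per-month median for power users comes from the Pew summary quoted above; the group sizes and opinion shares are illustrative assumptions, not real estimates.

```python
# A minimal sketch, with made-up numbers, of why what we "see" on a platform is not
# what the country thinks: the feed is weighted by tweet volume (dominated by power
# users), while national opinion is weighted by people. Only the 138 tweets/month
# median comes from the Pew summary quoted above; every other number is assumed.

groups = [
    # (label, assumed share of U.S. adults, tweets per month, assumed share leaning Democratic)
    ("non-users",   0.78,   0.0, 0.48),
    ("light users", 0.20,   0.5, 0.58),
    ("power users", 0.02, 138.0, 0.65),
]

# National opinion: weight each group by its share of adults.
population_view = sum(pop * dem for _, pop, _, dem in groups)

# Opinion as it appears in the feed: weight each group by the tweet volume it produces.
tweet_volume = sum(pop * tweets for _, pop, tweets, _ in groups)
feed_view = sum(pop * tweets * dem for _, pop, tweets, dem in groups) / tweet_volume

print(f"Democratic lean, weighted by people: {population_view:.1%}")  # ~50%
print(f"Democratic lean, weighted by tweets: {feed_view:.1%}")        # ~65%
```

Even with a nearly even split in this hypothetical population, what shows up in the feed skews toward the opinions of whoever tweets the most.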

In contrast, the larger proportion of Americans who are on Facebook might suggest that Facebook is more representative of the American population. But another issue might arise, one that could dog social media platforms for years to come: how much content and interaction is driven by power users versus the large share of users who have relatively dormant accounts. I assume leaders of platforms would prefer more users become power users, but this may not happen. What happens to any social media platform that has a strong bifurcation between power users and less active users? Is this sustainable? Facebook has a goal of connecting more people, but this is unlikely to happen with such disparities in use.

This is why trends seen on social media platforms might require corroborating evidence from other sources, or longer periods of time, before they can be confirmed. Even what appear to be widespread trends on social media could be limited to certain portions of the population. We may know more about smaller patterns in society that were once harder to see, but putting together the big picture may be trickier.


The United States in its second prolonged period of immigration?

Many know that the decades at the end of the nineteenth century and the early twentieth century were a period of significant immigration to the United States. This is regularly taught in history classes and often celebrated. While it can be difficult to recognize larger patterns as they are happening, a recent Pew report provides evidence that a second long period of immigration is happening now in the United States:

Nearly 14% of the U.S. population was born in another country, numbering more than 44 million people in 2017, according to a Pew Research Center analysis of the U.S. Census Bureau’s American Community Survey.

[Chart: immigrant share of the U.S. population over time, Pew Research Center]

This was the highest share of foreign-born people in the United States since 1910, when immigrants accounted for 14.7% of the American population. The record share was 14.8% in 1890, when 9.2 million immigrants lived in the United States.

Whether the trend line goes up, down, or plateaus remains to be seen (and immigration is a controversial topic at the moment). Still, even if the share dropped in the coming years, the present would still be part of a longer trend that people and scholars will look back on.
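As a side note, a quick back-of-the-envelope calculation shows how counts and shares relate across these periods; the rounded counts and shares below come from the Pew figures quoted above, and the total populations are simply implied by the division.

```python
# Back-of-the-envelope: share = foreign_born / total_population, so a similar share
# implies very different immigrant counts as the total population grows. Counts and
# shares are rounded from the Pew excerpt above; the totals are implied, not quoted.

figures = {
    # year: (foreign-born population in millions, foreign-born share)
    1890: (9.2, 0.148),
    2017: (44.0, 0.14),   # rounded from "more than 44 million" and "nearly 14%"
}

for year, (foreign_born, share) in figures.items():
    implied_total = foreign_born / share
    print(f"{year}: {foreign_born:.1f}M foreign-born at a {share:.1%} share "
          f"implies roughly {implied_total:.0f}M total residents")
```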

Putting the figures in international context might prove helpful as well:

Even though the U.S. has more immigrants than any other country, the foreign-born share of its population is far from the highest in the world. In 2017, 25 countries and territories had higher shares of foreign-born people than the U.S., according to United Nations data

Worldwide, most people do not move across international borders. In all, only 3.4% of the world’s population lives in a country they were not born in, according to data from the UN. This share has ticked up over time, but marginally so: In 1990, 2.9% of the world’s population did not live in their country of birth.

A number of countries could claim to be a “nation of immigrants” – a common refrain in the United States – though how that came to be certainly differs from country to country, as does how the immigrants were and are understood.

The changing concept of TV ratings

A recent report from Netflix about the number of viewers for certain movies and TV shows raises questions about what ratings actually are in today’s world:

These numbers were presumably the flashiest numbers that Netflix had to offer, but, hot damn, they are flashy—even if they should be treated with much skepticism. For one thing, of Netflix’s 139 million global subscribers, only about 59 million are American, something to bear in mind when comparing Netflix’s figures with the strictly domestic ratings of most linear channels. Another sticking point: What constitutes “watching”? According to Netflix, the numbers reflect households where someone watched at least 70 percent of one episode—given the Netflix model, it seems likely that most people started with Episode 1—but this doesn’t tell us how many people stuck with it, or what the average rating for the season was, which is, again, an important metric for linear channels…

Ratings are not just a reflection of how many people are watching a TV show. They are not just a piece of data about something that has already happened. They are also a piece of information that changes what happens, by defining whether we think of something as a hit, which has a knock-on effect on how much attention gets paid to that show, not just by other prospective viewers, but by the media. (Think how much more has been written on You now that we know 40 million people may have watched it.)

Consider, for example, how something like last year’s reboot of Roseanne might have played out if it had been a Netflix series. It would have been covered like crazy before its premiere and then, in the absence of any information about its ratings at all, would have become, like, what? The Ranch? So much of the early frenzy surrounding Roseanne had to do with its enormous-for-our-era ratings, and what those ratings meant. By the same token, years ago I heard—and this is pure rumor and scuttlebutt I am sharing because it’s a fun thought exercise—that at that time Narcos was Netflix’s most popular series. Where is Narcos in the cultural conversation? How would that position have changed if it was widely known that, say, 15 million people watch its every season?

Multiple factors are at play here, including the decline of network television, the rise of cable television and streaming services, the general secrecy Netflix maintains about its ratings, and how we define cultural hits today. The last one seems the most interesting to me as a cultural sociologist: in a fragmented media world, how do we know what is a genuine cultural moment or touchstone compared to a small fad or a trend isolated to a small group? Ratings were once a way to do this, as we could assume big numbers meant it mattered to a lot of people.

Additionally, we want quicker news today about new trends and patterns. A rating can only tell us so much. It depends on how it was measured. How does the rating compare to other ratings? Perhaps most importantly, the rating cannot tell us much about the lasting cultural contributions of the show or movie. Some products with big ratings will not stand the test of time while others will. Do we think people will be discussing You and talking about its impact on society in 30 years? We need time to discuss, analyze, and process what each cultural product is about. Cultural narratives involving cultural products need time to develop.
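The measurement point can be made concrete with a small sketch. The 70%-of-one-episode rule comes from the excerpt above; the viewing records and the per-episode average are hypothetical stand-ins for how a linear channel might count its audience.

```python
# A small, hypothetical example of how two definitions of "watching" diverge.
# Each entry is the fraction of each episode of a 10-episode season a household watched.
households = [
    [1.0] * 10,                   # finished the season
    [0.9, 0.8, 0.3] + [0.0] * 7,  # dropped off after a couple of episodes
    [0.75] + [0.0] * 9,           # sampled one episode
    [0.2] + [0.0] * 9,            # barely started; not a viewer by either count
]

def completion_viewers(records, threshold=0.7):
    """Households where someone watched at least `threshold` of any one episode
    (the Netflix-style count described in the excerpt above)."""
    return sum(any(ep >= threshold for ep in record) for record in records)

def average_episode_audience(records, threshold=0.7):
    """Average number of households watching each episode, closer to a linear-TV rating."""
    n_episodes = len(records[0])
    per_episode = [sum(record[i] >= threshold for record in records) for i in range(n_episodes)]
    return sum(per_episode) / n_episodes

print("Completion-based viewer count:", completion_viewers(households))        # 3
print("Average per-episode audience: ", average_episode_audience(households))  # 1.3
```

By the first definition this hypothetical show has three viewing households; by the second, its average audience is barely more than one, which is part of why the headline numbers deserve skepticism.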