According to a few sources (here and here), Pittsburgh might claim the most bridges in the United States. Other people disagree (such as here).
All communities have features that they regard as unique or noteworthy. Bridges are an interesting choice: beyond being numerous, they are easy to see and are necessary for transportation.
As in many matters, the claim to this civic title may depend on measurement: what counts as a bridge?
As more actors express concerns about how social media use affects the mental health of “children and teens,” this article suggests it can be hard to directly measure this link:
It doesn’t help that mental health is influenced by many factors, and no single treatment works for every person. “It’s not as straightforward as: What is the right antibiotic for that ear infection?” said Megan Moreno, a scientist and pediatrician at the University of Wisconsin-Madison, and co-director of the Center of Excellence on Social Media and Youth Mental Health at the American Academy of Pediatrics…
Among the reasons that make it difficult to isolate the role of social media in kids’ mental health is that the relationship between mental health and tech use is a two-way street, the panel from the National Academies of Sciences, Engineering, and Medicine said. A person’s mental state might influence how he or she uses the platform, which in turn affects his or her state of mind.
Randomized, controlled studies on whether social media caused the mental-health crisis are impractical because exposure to social media is now everywhere, researchers say. In addition, platforms are constantly changing their features, hobbling efforts to run long-term studies, they say.
A decade ago, Munmun De Choudhury, a computer scientist at Georgia Tech, was part of a team that showed that groups promoting disordered eating were skirting Instagram’s moderation efforts. De Choudhury says that such studies probably would be impossible today because social-media companies no longer allow access to public data, or charge hefty fees for it…
Research into the roots of distress in young people has found that other factors—bullying, or lack of family support—have stronger associations with mental-health outcomes, compared with social-media use.
These are distinct issues: having access to data from platforms, including data over time; the work required to separate out different influences on mental health; the difficulty of assembling randomized controlled trials under current conditions; and evidence that other factors influence mental health more strongly.
Some think there is enough data to make the argument about social media use influencing mental health. For example, social psychologist Jonathan Haidt puts together evidence in his latest book The Anxious Generation. His approach is one that social scientists can take: there seem to be consistent patterns over time and other factors do not seem to account as well for the outcomes observed. And if there is a growing consensus across studies and scholars, this is another way for scientific findings to advance.
This is an ongoing situation as policy efforts and research efforts follow sometimes intertwining paths. If a state restricts social media use for teenagers and then mental health issues drop, would this count as evidence for social media causing mental health distress?
-Which conversations to count? Is this primarily about formal debate in the legislature or public hearings about the possibilities? Do media reports (whether TV, print, radio, or others) count toward the total time? Do digital conversations (texting, emails, messages in particular apps) count?
-How to count less formal conversations? If conversations take place behind the scenes as opposed to in public settings, can they be found? What kind of work is needed to track these down?
-Are people willing to talk about their talking? Some might be more willing, some less so. Or perhaps people would be more willing to talk after some major decision is made.
-Do we have some ballpark numbers of how many hours go into major decisions in governments or organizations? What is a “typical” range?
Given the scope of possible changes and the implications of whether or not change occurs, the process and the time devoted to it could be worthy of study.
Goldman Sachs Group Inc. and Wells Fargo & Co. economists expect the government’s preliminary benchmark revisions on Wednesday to show payrolls growth in the year through March was at least 600,000 weaker than currently estimated — about 50,000 a month.
While JPMorgan Chase & Co. forecasters see a decline of about 360,000, Goldman Sachs indicates it could be as large as a million.
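The arithmetic behind the “about 50,000 a month” figure is simple to sketch. A minimal illustration using the totals quoted above, spreading each 12-month benchmark revision evenly across the year:

```python
# Sketch of the benchmark-revision arithmetic in the excerpt above.
# The totals are the estimates quoted; the per-month figure simply
# spreads a 12-month revision evenly.

def monthly_equivalent(total_revision: float, months: int = 12) -> float:
    """Average monthly payroll revision implied by a benchmark change."""
    return total_revision / months

estimates = {
    "Goldman Sachs / Wells Fargo (at least)": -600_000,
    "JPMorgan Chase": -360_000,
    "Goldman Sachs (upper bound)": -1_000_000,
}

for source, total in estimates.items():
    print(f"{source}: {monthly_equivalent(total):,.0f} jobs per month")
```

The -600,000 case works out to -50,000 jobs per month, matching the figure in the excerpt.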
These are not just numbers; this data has implications for policies and economic conditions. Why are they being revised?
Once a year, the BLS benchmarks the March payrolls level to a more accurate but less timely data source called the Quarterly Census of Employment and Wages, which is based on state unemployment insurance tax records and covers nearly all US jobs. The release of the latest QCEW report in June already hinted at weaker payroll gains last year…
For most of the recent years, monthly payroll data have been stronger than the QCEW figures. Some economists attribute that in part to the so-called birth-death model — an adjustment the BLS makes to the data to account for the net number of businesses opening and closing, but that might be off in the post-pandemic world…
Ronnie Walker at Goldman Sachs says the QCEW figures are likely to overstate the moderation in employment growth because they will strip out up to half a million unauthorized immigrants that were included in the initial estimates.
In other words, this is a measurement issue. The first measure comes from a particular set of data and the revision utilizes a different set of data that takes more time to put together. There might also be discrepancies on what is included in each set of data, not just differences in the sources of data. This mismatch leads to later revisions.
Given our data and analysis abilities today, isn’t there some way to improve the system? Could we get (a) more complete data more quickly, (b) better initial estimates, or (c) new data sources that could provide better information? To have an initial set of figures that people respond to and then a later set of figures that people respond to seems counterproductive given the stakes involved.
Erin is one of millions, from teachers to therapists to managers to hairdressers, whose work relies on relationship. By some accounts, the U.S. is moving from a “thinking economy” to a “feeling economy,” as many deploy their emotional antennae to bear witness and reflect back what they understand so that clients, patients, and students feel seen. I’ve come to call this work “connective labor,” and the connections it forges matter. It can be profoundly meaningful for the people involved, and it has demonstrable effects: We know that doctor–patient relationships, for instance, are more effective than a daily aspirin to ward off heart attacks.
But this work is increasingly being subjected to new systems that try to render it more efficient, measurable, and reproducible. At best, firms implement these systems assuming that such interventions will not get in the way of workers and clients connecting. At worst, they ignore or dismiss those connections altogether. Even these complex interpersonal jobs are facing efforts to gather information and assessment data and to introduce technology. Moneyball has come for connective labor…
Connective labor is increasingly being subjected to new systems that try to make it more predictable, measurable, efficient—and reproducible. If we continue to prioritize efficiency over relationship, we degrade jobs that have the potential to forge profound meaning between people and, along the way, make them more susceptible to automation and A.I., creating a new kind of haves and have-nots: those divided by access to other people’s attention.
To quantify relationships could be difficult in itself. It requires attaching measurements to human connections. Some of these features are easier to capture than others. In today’s world, if a conversation or interaction or relationship happens without “proof,” is it real? This proof could come in many forms. A social media post. A digital picture taken. Activity recorded by a smart watch. An activity log written by hand or captured by a computer.
Then scaling relationships is another matter. A one-to-one connection multiplied dozens of times throughout a day, or hundreds or thousands of times across a longer span, presents other difficulties. How many relationships can one person have? How much time should each interaction take? Are there regular metrics to meet? What if the relationship or interaction goes in a less predictable direction, particularly when it might require more time and care?
Given what we can measure and track now and the scale of society today, the urge to measure relationships will likely continue. Whether people and employees push back more strongly against the quest to quantify and be efficient remains to be seen.
The concept of reducing these shades of pain to a single number dates back to the 1970s. But the zero-to-10 scale is ubiquitous today because of what was called a “pain revolution” in the ’90s, when intense new attention to addressing pain—primarily with opioids—was framed as progress. Doctors today have a fuller understanding that they can (and should) think about treating pain, as well as the terrible consequences of prescribing opioids so readily. What they are learning only now is how to better measure pain and treat its many forms.
About 30 years ago, physicians who championed the use of opioids gave robust new life to what had been a niche speciality: pain management. They started pushing the idea that pain should be measured at every appointment as a “fifth vital sign.” The American Pain Society went as far as copyrighting the phrase. But unlike the other vital signs—blood pressure, temperature, heart rate, and breathing rate—pain had no objective scale. How to measure the unmeasurable? The society encouraged doctors and nurses to use the zero-to-10 rating system. Around that time, the FDA approved OxyContin, a slow-release opioid painkiller made by Purdue Pharma. The drugmaker itself encouraged doctors to routinely record and treat pain, and aggressively marketed opioids as an obvious solution…
But this approach to pain management had clear drawbacks. Studies accumulated showing that measuring patients’ pain didn’t result in better pain control. Doctors showed little interest in or didn’t know how to respond to the recorded answer. And patients’ satisfaction with their doctor’s discussion of pain didn’t necessarily mean they got adequate treatment. At the same time, the drugs were fueling the growing opioid epidemic. Research showed that an estimated 3 to 19 percent of people who get a prescription for pain medication from a doctor developed an addiction…
A zero-to-10 scale may make sense in certain situations, such as when a nurse uses it to adjust a medication dose for a patient hospitalized after surgery or an accident. And researchers and pain specialists have tried to create better rating tools—dozens, in fact, none of which was adequate to capture pain’s complexity, a European panel of experts concluded. The Veterans Health Administration, for instance, created one that had supplemental questions and visual prompts: A rating of 5 correlated with a frown and a pain level that “interrupts some activities.” The survey took much longer to administer and produced results that were no better than the zero-to-10 system. By the 2010s, many medical organizations, including the American Medical Association and the American Academy of Family Physicians, were rejecting not just the zero-to-10 scale but the entire notion that pain could be meaningfully self-reported numerically by a patient.
Measurement in many areas is not an easy process. There appear to be multiple complicating factors in this situation: pain perception can differ across patients; people are self-reporting pain; reports of pain are tied to particular medical options; doctors, nurses, and others are interpreting reports of pain; and there are numerous ways this could be measured.
If measurement is so difficult, what else could be done? I would guess people will continue to look for accurate measurement tools. Having such tools could prove very beneficial (and perhaps profitable?). The difficulty could also hint at the need for relational, holistic care, where a point-in-time report of pain is understood within a longer-term understanding between patient and provider. And greater scientific understanding of pain – and how to manage it – could help.
In the meantime, imprecise measurement of pain will continue. Should this affect how we answer the 0-10 question when asked?
Vincent is among a growing group of middle-class Americans — most recently defined in 2022 by the Pew Research Center as households earning between $48,500 and $145,500 — who don’t feel they can afford to live a traditional middle-class life, replete with a home and a comfortable retirement…
Collins suspects that most middle-class Americans feel anxious about their financial situation due to financial shock fatigue — the exhaustion of navigating one big economic shock after another — as well as a lack of financial planning…
Financial anxiety has hit an all-time high, according to a survey from Northwestern Mutual, and a survey from Primerica found that half of middle-class households say their financial situation is “not so good” or outright “poor.”…
Buying a home may be the greatest example of a tenet of middle-class life feeling out of reach for many, and that struggle is very real rather than merely a matter of perception.
The suggestion is that people feel less certain of their social class status because of financial uncertainty at the moment and in recent years. They may have resources, particularly a certain income level, but they do not feel secure.
What might this mean for defining the middle class? Perhaps it should lead to changing the definition: if people do not feel that certain markers confer middle-class status, then change the markers. These variables might need to change as economic conditions change.
It would also be interesting to see what social class those feeling financial anxiety say they are in. Traditionally, being in the middle class was a sign of making it and being successful. Would someone who might be classified as middle class by income and other markers say they are working class? Is there a big shift away from identifying as middle class?
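The income-based operationalization quoted above is easy to state precisely, which highlights how blunt a marker it is compared to how people feel. A minimal sketch using the 2022 Pew band from the excerpt (the example incomes are illustrative):

```python
# Minimal sketch of the income-based definition quoted earlier: the 2022
# Pew Research Center band of $48,500 to $145,500 in household income.
# The example incomes below are illustrative.

PEW_MIDDLE_CLASS_BAND = (48_500, 145_500)

def is_middle_class(household_income: float,
                    band: tuple[int, int] = PEW_MIDDLE_CLASS_BAND) -> bool:
    """True if household income falls within the Pew middle-class band."""
    low, high = band
    return low <= household_income <= high

print(is_middle_class(60_000))   # True
print(is_middle_class(150_000))  # False
```

A function like this returns a clean yes or no; the articles above suggest the lived answer is anything but.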
The biggest point of contention between the two camps revolves around “unreported income,” more commonly known as tax evasion. Tax returns are the best data source available for studying income distributions, but they’re incomplete—most obviously because people don’t report all of the income that they’re supposed to. This information gap requires inequality researchers to make some educated guesses about how unreported income is distributed, which is to say, about who is evading the most taxes. Piketty, Saez, and Zucman assume that it’s the people who already report a lot of income: Think of the well-paid corporate executive who also stashes millions of dollars in an offshore account. Auten and Splinter, by contrast, assume that those who evade the most taxes are people who report little or no income: Think plumbers or housekeepers who get paid in cash. They believe, in other words, that members of the 99 percent are a lot richer than they look…
To take the true measure of inequality, economists need a way to account for all the income and expenses that don’t show up on people’s tax returns. The method that Piketty, Saez, and Zucman pioneered, and that Auten and Splinter follow, was to take the gross domestic product—a measure of all of the spending in the national economy every year—and figure out who exactly is receiving how much of it. (Technically, they use something called gross national income, which is a close cousin of GDP.) The benefit of this approach is that nothing gets left out. The drawback is that, well, nothing gets left out. GDP measures the total production of an entire economy, so it includes all sorts of expenditures that don’t seem like income at all.
Much of the difference between the authors’ estimates of inequality hinges on how they treat government spending on things that benefit the public at large, such as education, infrastructure, and national defense. Because this spending is part of gross national income, it must be allocated to someone in order for the math to work out. Piketty, Saez, and Zucman take the view that this stuff really shouldn’t be considered income, so they allocate it in a way that doesn’t change the overall distribution. Auten and Splinter, however, argue that at least some of this money should count as income. Citing research indicating that education spending tends to disproportionately benefit lower- and middle-income kids, they decide to allocate the money in a way that increases the bottom 99 percent’s share of income—by a lot. Austin Clemens, a senior fellow at the Washington Center for Equitable Growth, calculates that in Auten and Splinter’s data set, a full 20 percent of income for those in the bottom half of the distribution “comes in the form of tanks, roads, and chalkboards.”…
The deeper you get into how GDP is actually calculated and allocated, the more you feel as though you’ve fallen through a wormhole into an alternate dimension. Let’s say you own a house. Government statisticians imagine that you are renting out that house to yourself, calculate how much money you would reasonably be charging, and then count that as a form of income that you are, in essence, paying yourself. This “imputed rent” accounts for about 9 percent of all GDP, or more than $2 trillion. Or suppose you have a checking account at a major bank. Statisticians will calculate the difference between what the bank pays you in interest on that account (usually close to nothing) and what you could have earned by investing that same money in safe government bonds. That difference is then considered the “full value” of the benefits you are receiving from the bank—above and beyond what it actually charges you for its services—and is therefore considered additional income for you, the depositor. All of these choices have some theoretical justification, but they have very little to do with how normal people think about their financial situation.
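The imputed-rent figure in the excerpt can be sanity-checked with back-of-the-envelope arithmetic. A sketch, assuming a rough $25 trillion US GDP (my assumption for illustration; the excerpt gives only the 9 percent share and the “more than $2 trillion” total):

```python
# Back-of-the-envelope check of the imputed-rent figure in the excerpt,
# which says imputed rent is about 9% of GDP, or more than $2 trillion.
# The ~$25 trillion GDP total is an assumption for illustration.

gdp_usd = 25e12            # assumed total US GDP (illustrative)
imputed_rent_share = 0.09  # share quoted in the excerpt

imputed_rent = gdp_usd * imputed_rent_share
print(f"Implied imputed rent: ${imputed_rent / 1e12:.2f} trillion")  # $2.25 trillion
```

On that assumed GDP, the 9 percent share indeed lands just above $2 trillion, consistent with the excerpt.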
These are common issues when working with all sorts of variables that matter in life: trying to collect good data, operationalization, missing data, judgment calls, and then difficulty in interpreting the results. In this case, they affect public perceptions of income inequality and big questions about the state of society.
Is this just an arcane academic debate? Since academics tend to want their work to matter for society and policy, this particular discussion matters a lot. Every day, economic news is reported. People have their own experiences. Humans like to compare their own experiences to those of others now and in the past. People search for certainty and patterns. The question of inequality is a recurrent one for numerous reasons and having good data and interpretations of that data matters for perceptions and actions.
The way that academics tend to deal with this is to continue to measure and interpret. Others will see this debate and find new ways to conceptualize the variable and collect data. New studies will come out. Scholars of this area will read, discuss, and write about this issue. There will be disagreement. Conditions in the world will change. And hopefully academics will get better at measuring and interpreting income inequality.
Lear had already established himself as a top comedy writer and captured a 1968 Oscar nomination for his screenplay for “Divorce American Style” when he concocted the idea for a new sitcom, based on a popular British show, about a conservative, outspokenly bigoted working-class man and his fractious Queens family. “All in the Family” became an immediate hit, seemingly with viewers of all political persuasions.
Lear’s shows were the first to address the serious political, cultural and social flashpoints of the day – racism, abortion, homosexuality, the Vietnam war — by working pointed new wrinkles into the standard domestic comedy formula. No subject was taboo: Two 1977 episodes of “All in the Family” revolved around the attempted rape of lead character Archie Bunker’s wife Edith.
Their fresh outrageousness turned them into huge ratings successes: For a time, “Family” and “Sanford,” based around a Los Angeles Black family, ranked No. 1 and No. 2 in the country. “All in the Family” itself accounted for no less than six spin-offs. “Family” was also honored with four Emmys in 1971-73 and a 1977 Peabody Award for Lear, “for giving us comedy with a social conscience.” (He received a second Peabody in 2016 for his career achievements.)
Some of Lear’s other creations played with TV conventions. “One Day at a Time” (1975-84) featured a single mother of two young girls as its protagonist, a new concept for a sitcom. Similarly, “Diff’rent Strokes” (1978-86) followed the growing pains of two Black kids adopted by a wealthy white businessman.
Other series developed by Lear were meta before the term ever existed. “Mary Hartman, Mary Hartman” (1976-77) spoofed the contorted drama of daytime soaps; while the show couldn’t land a network slot, it became a beloved off-the-wall entry in syndication. “Hartman” had its own oddball spinoff, “Fernwood 2 Night,” a parody talk show set in a small Ohio town; the show was later retooled as “America 2-Night,” with its setting relocated to Los Angeles…
One of Hollywood’s most outspoken liberals and progressive philanthropists, Lear founded the advocacy group People for the American Way in 1981 to counteract the activities of the conservative Moral Majority.
The emphasis here is on both television and politics. Lear created different kinds of shows that proved popular as they promoted particular ideas. He also was politically active for progressive causes.
How might we know that these TV shows created cultural change? Just a few ways this could be established:
-How influential were these shows to later shows and cultural products? How did television shows look before and after Lear’s work?
-Ratings: how many people watched?
-Critical acclaim: what did critics think? What did his peers within the industry think? How do these shows stand up over time?
But, the question I might want to ask is whether we know how the people who watched these shows – millions of Americans – were or were not changed by these minutes and hours spent in front of the television. Americans take in a lot of television and media over their lifetime. This certainly has an influence in the aggregate. Do we have data and/or evidence that can link these shows to changed attitudes and actions? My sense is that it is easier to see broad changes over time but harder to show more directly that specific media products led to particular outcomes at the individual (and sometimes also at the social) level.
These are research methodology questions that could involve lots of cultural products. The headline above might be supportable but it could require putting together multiple pieces of evidence and not having all the data we could have.
By the Nielsen company’s count, 7.8 million people watched Amazon Prime’s coverage of last Thursday’s NFL game between New Orleans and Arizona. But Amazon says no, there were actually 8.9 million people watching…
Neither company is saying the other is wrong, but neither is backing down, either. The result is confusion, most notably for advertisers.
Nielsen, as it has for years, follows the viewing habits in a panel of homes across the country and, from that limited sample, derives an estimate of how many people watch a particular program. That number is currency in the media industry, meaning it is used to determine advertising rates.
Amazon, in the first year of an 11-year contract to stream Thursday night games, says it has an actual count of every one of its subscribers who streams it — not an estimate. The games are also televised in the local markets of the participating teams, about 9% of its total viewership each week, and Amazon uses Nielsen’s estimate for that portion of the total…
But with Netflix about to introduce advertising, that can all change very rapidly. And if other companies develop technology that can measure viewing more precisely, the precedent has now been set for publicly disputing Nielsen’s numbers.
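The blended count the excerpt describes can be sketched as a simple sum: a direct count of streaming viewers plus a panel-based estimate for the portion watching on local broadcast TV. The numbers below are illustrative, not Amazon’s actual data, chosen to be consistent with the roughly 9 percent broadcast share quoted above:

```python
# Sketch of the blended count described in the excerpt: a direct count
# of streaming viewers plus a panel-based estimate for the broadcast
# portion. The figures are illustrative, not Amazon's actual data.

def blended_viewership(direct_stream_count: float,
                       broadcast_estimate: float) -> float:
    """Total audience: directly counted streamers + estimated TV viewers."""
    return direct_stream_count + broadcast_estimate

total = blended_viewership(direct_stream_count=8.1e6,  # counted directly
                           broadcast_estimate=0.8e6)   # panel-based estimate
print(f"{total / 1e6:.1f} million viewers")  # 8.9 million viewers
```

Note that even Amazon’s “actual count” still relies on Nielsen’s estimate for one slice of the audience, so the two totals are not fully independent measurements.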
There could be multiple methodological issues at play here. One involves who has a more accurate count. If Amazon can directly count all viewers, that could be the more accurate number. However, not all television providers have that ability. A second concern is how different providers might count viewership. Does Amazon reveal everything about its methods? Nielsen is an independent organization that theoretically has less self-interest in its work.
All of this has implications for advertisers, as noted above, but it also gets at understandings of how many people today view or consume particular cultural products. Much has been said about the fragmentation of culture industries with people having the ability to find all sorts of works. Accurate numbers help us make sense of the media landscape and uncover patterns. Would competing numbers or methods lead to very different narratives about our collective consumption and experiences?