Efficiency the reason for the telephone button layout we have today

Bell Labs made a number of important discoveries and decisions decades ago, including choosing how telephone buttons should be laid out:

This layout is so standardized that we barely think about it. But it was, in the 1950s, the result of a good deal of strategizing and testing on the part of people at Bell Labs. Numberphile has dug up an amazing paper — published in the July 1960 issue of “The Bell System Technical Journal” — that details the various alternative designs the Bell engineers considered. Among them: “the staircase” (II-B in the image above), “the ten-pin” (III-B, reminiscent of bowling-pin configurations), “the rainbow” (II-C), and various other versions that mimicked the circular logic of the existing dialing technology: the rotary.

Everything was on the table for the layout of the ten buttons; the researchers’ only objective was to find the configuration that would be as user-friendly, and efficient, as possible. So they ran tests. They experimented. They sought input. They briefly considered a layout that mimicked a cross.

And in the end, though, Numberphile’s Sarah Wiseman notes, it became a run-off between the traditional calculator layout and the telephone layout we know today. And the victory was a matter of efficiency. “They did compare the telephone layout and the calculator layout,” she says, “and they found the calculator layout was slower.”

It is interesting that they searched for what was most efficient. This is not surprising; telephones are pieces of technology, and the user likely wants to dial the number as quickly as possible so they can get on with the phone call. But, efficiency isn’t necessarily everything. Imagine that Steve Jobs and Apple, an organization known for its designs, had made this initial choice: would they have chosen something more elegant or would they have selected efficiency as well? It is a small thing yet it hints at George Ritzer’s McDonaldization thesis, in which efficient and rational approaches tend to win out in our world.

A side note: Bell Labs should be better known in the United States for its role in developing new technologies.

Transforming Rosemont from a small suburb to an entertainment and commercial center

The suburb of Rosemont, Illinois has changed quite a bit in recent decades with a strong push from local leaders:

“Now Rosemont pretty much has everything people need,” Stephens said. “There is no need to go to downtown Chicago.”

That’s essentially been the philosophy of Rosemont since its incorporation in 1956. The village covers only 2.5 square miles. But it’s blessed with being at the center of a transportation hub. It’s in the shadows of O’Hare International Airport. It stands at the convergence of I-90 and I-294. And it has a stop on the CTA’s Blue Line el.

Donald Stephens’ ambition was to convince travelers to O’Hare that they didn’t need to go to Chicago. So he built hotels and restaurants, the Donald E. Stephens Convention Center, Rosemont Horizon (now Allstate Arena), Rosemont Theatre, Rosemont Stadium for softball, Muvico 18, a movie multiplex, and MB Financial Park, a de-facto town square filled with restaurants and entertainment venues, including a bowling alley and ice skating rink…

Beyond a great location, Rosemont made the decision early on that it wanted to attract commercial development, said Steve Hovany, president of Strategy Planning Association, a Schaumburg-based real estate consulting firm.

This sounds like a classic case of the political economy model of urban growth. One key family, which has now supplied two separate mayors, made decisions alongside business and local leaders to pursue economic growth. They made use of an existing advantage in the community, its location near transportation options, and attracted new opportunities. The only piece missing from the article is some explanation from the leaders themselves of why they did all of this. Just to put Rosemont on the map? Or to make money for the leaders as well as for the community, which then benefits quite a bit from property and sales taxes (revenues many suburbs wish they had)?

Using algorithms to analyze the literary canon

A new book describes efforts to use algorithms to discover what is in and out of the literary canon:

There’s no single term that captures the range of new, large-scale work currently underway in the literary academy, and that’s probably as it should be. More than a decade ago, the Stanford scholar of world literature Franco Moretti dubbed his quantitative approach to capturing the features and trends of global literary production “distant reading,” a practice that paid particular attention to counting books themselves and owed much to bibliographic and book historical methods. In earlier decades, so-called “humanities computing” joined practitioners of stylometry and authorship attribution, who attempted to quantify the low-level differences between individual texts and writers. More recently, the catchall term “digital humanities” has been used to describe everything from online publishing and new media theory to statistical genre discrimination. In each of these cases, however, the shared recognition — like the impulse behind the earlier turn to cultural theory, albeit with a distinctly quantitative emphasis — has been that there are big gains to be had from looking at literature first as an interlinked, expressive system rather than as something that individual books do well, badly, or typically. At the same time, the gains themselves have as yet been thin on the ground, as much suggestions of future progress as transformative results in their own right. Skeptics could be forgiven for wondering how long the data-driven revolution can remain just around the corner.

Into this uncertain scene comes an important new volume by Matthew Jockers, offering yet another headword (“macroanalysis,” by analogy to macroeconomics) and a range of quantitative studies of 19th-century fiction. Jockers is one of the senior figures in the field, a scholar who has been developing novel ways of digesting large bodies of text for nearly two decades. Despite Jockers’s stature, Macroanalysis is his first book, one that aims to summarize and unify much of his previous research. As such, it covers a lot of ground with varying degrees of technical sophistication. There are chapters devoted to methods as simple as counting the annual number of books published by Irish-American authors and as complex as computational network analysis of literary influence. Aware of this range, Jockers is at pains to draw his material together under the dual headings of literary history and critical method, which is to say that the book aims both to advance a specific argument about the contours of 19th-century literature and to provide a brief in favor of the computational methods that it uses to support such an argument. For some readers, the second half of that pairing — a detailed look into what can be done today with new techniques — will be enough. For others, the book’s success will likely depend on how far they’re persuaded that the literary argument is an important one that can’t be had in the absence of computation…

More practically interesting and ambitious are Jockers’s studies of themes and influence in a larger set of novels from the same period (3,346 of them, to be exact, or about five to 10 percent of those published during the 19th century). These are the only chapters of the book that focus on what we usually understand by the intellectual content of the texts in question, seeking to identify and trace the literary use of meaningful clusters of subject-oriented terms across the corpus. The computational method involved is one known as topic modeling, a statistical approach to identifying such clusters (the topics) in the absence of outside input or training data. What’s exciting about topic modeling is that it can be run quickly over huge swaths of text about which we initially know very little. So instead of developing a hunch about the thematic importance of urban poverty or domestic space or Native Americans in 19th-century fiction and then looking for words that might be associated with those themes — that is, instead of searching Google Books more or less at random on the basis of limited and biased close reading — topic models tell us what groups of words tend to co-occur in statistically improbable ways. These computationally derived word lists are for the most part surprisingly coherent and highly interpretable. Specifically in Jockers’s case, they’re both predictable enough to inspire confidence in the method (there are topics “about” poverty, domesticity, Native Americans, Ireland, sea faring, servants, farming, etc.) and unexpected enough to be worth examining in detail…
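To make the method concrete: topic modeling of this kind is most commonly implemented with latent Dirichlet allocation (LDA). Here is a minimal sketch using scikit-learn on a tiny placeholder corpus; the excerpt doesn’t specify Jockers’s actual tooling, so treat this as an illustration of the general technique rather than his pipeline.

```python
# Minimal topic-modeling sketch with LDA via scikit-learn.
# `novels` is a tiny placeholder corpus standing in for thousands of texts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

novels = [
    "placeholder text of one novel about farming villages and the sea",
    "placeholder text of another novel about poverty in the city",
]

# Bag-of-words counts with English stop words removed, so topics are
# built from content-bearing terms
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(novels)

# Fit LDA; the number of topics is a modeling choice (the review mentions
# more than 450 topics for Jockers's corpus)
lda = LatentDirichletAllocation(n_components=5, random_state=0)
doc_topics = lda.fit_transform(counts)  # rows: documents, columns: topic shares

# Show the top words in each inferred topic
words = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [words[j] for j in weights.argsort()[::-1][:5]]
    print(f"Topic {i}: {', '.join(top)}")
```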

The notoriously difficult problem of literary influence finally unites many of the methods in Macroanalysis. The book’s last substantive chapter presents an approach to finding the most central texts among the 3,346 included in the study. To assess the relative influence of any book, Jockers first combines the frequency measures of the roughly 100 most common words used previously for stylistic analysis with the more than 450 topic frequencies used to assess thematic interest. This process generates a broad measure of each book’s position in a very high-dimensional space, allowing him to calculate the “distance” between every pair of books in the corpus. Pairs that are separated by smaller distances are more similar to each other, assuming we’re okay with a definition of similarity that says two books are alike when they use high-frequency words at the same rates and when they consist of equivalent proportions of topic-modeled terms. The most influential books are then the ones — roughly speaking and skipping some mathematical details — that show the shortest average distance to the other texts in the collection. It’s a nifty approach that produces a fascinatingly opaque result: Tristram Shandy, Laurence Sterne’s famously odd 18th-century bildungsroman, is judged to be the most influential member of the collection, followed by George Gissing’s unremarkable The Whirlpool (1897) and Benjamin Disraeli’s decidedly minor romance Venetia (1837). If you can make sense of this result, you’re ahead of Jockers himself, who more or less throws up his hands and ends both the chapter and the analytical portion of the book a paragraph later. It might help if we knew what else of Gissing’s or Disraeli’s was included in the corpus, but that information is provided in neither Macroanalysis nor its online addenda.
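Roughly speaking, and skipping the same mathematical details the reviewer skips, the centrality measure can be sketched as follows. The feature matrix here is random placeholder data standing in for the ~100 word frequencies plus ~450 topic proportions per book, and the Euclidean metric is an assumption, since the review doesn’t name one.

```python
# Sketch of the "most central book" idea: stack each book's stylistic and
# thematic features into one vector, compute pairwise distances, and rank
# books by average distance to all others (lower average = more central).
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)

# Placeholder matrix: 3,346 books x ~550 features (~100 common-word
# frequencies + ~450 topic proportions); random numbers stand in for real data
features = rng.random((3346, 550))
titles = [f"Book {i}" for i in range(features.shape[0])]  # hypothetical titles

# Pairwise distances in the ~550-dimensional feature space
distances = squareform(pdist(features, metric="euclidean"))

# A book's score: mean distance to every other book in the corpus
avg_distance = distances.sum(axis=1) / (len(titles) - 1)

# The most "influential" books under this measure
for idx in np.argsort(avg_distance)[:3]:
    print(titles[idx], round(float(avg_distance[idx]), 3))
```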

Sounds interesting. I wonder if there isn’t a great spot here for mixed-method analysis: Jockers’s analysis provides the big picture but you also need more intimate and deep knowledge of smaller groups of texts or individual texts to interpret what the results mean. So, if the data suggests three books are the most influential, you would have to know these books and their context to make sense of what the data says. Additionally, you still want to utilize theories and hypotheses to guide the analysis rather than simply looking for patterns.

This reminds me of the work sociologist Wendy Griswold has done in analyzing whether American novels shared common traits (she argues copyright law was quite influential) or how a reading culture might emerge in a developing nation. Her approach is somewhere between the interpretation of texts and the algorithms described above, relying on more traditional methods in sociology like analyzing samples and conducting interviews.

Author argues the singular American suburban dream is splintering into multiple dreams

In the new book The End of the Suburbs, Leigh Gallagher argues the suburban dream is changing:

That gets to what you say at the very end: the American dream won’t be singular anymore. There will be different dreams.

And they will be dreams. They won’t be houses. They won’t be buildings. Somewhere along the way the American Dream morphed from being a dream, an opportunity, to being a house. That’s no longer the case for a lot of people…

The future you outline are these “urban burbs”-style developments where people don’t have to drive more than a mile or two and they can reach other urban burbs by transit. How close are we to that on a broad scale?

We’re far away from being these network of nodes where everybody is hooked up to everyone else by public transit and we all read three hours more a day. We’re far from that. But the important thing is, people are recognizing that we can’t just keep doing what we’ve been doing. It’s not satisfying people. And it’s no longer meeting the market demand. Home-builders only react when they think the market wants something. And they’re starting to react.

One could argue that even at the peak of mass suburbanization, sometime between the late 1940s and mid-1960s, there were always some different visions of suburbia. The common image is similar to what happened in the Levittowns: mostly white city dwellers fleeing the city and seeking out more private spaces in the suburbs. But, even then there were pockets of different kinds of suburbs, whether more industrial suburbs, suburbs with mostly African-American residents (see Places of Their Own by Andrew Wiese), or working-class suburbs (see My Blue Heaven by Becky Nicolaides).

Thus, this may be an issue of the dominant trends in building and development (more urban suburban places), but it is also about the dominant image or narrative of the suburbs, particularly the one held by critics, falling apart. If suburbs become more dense on the whole, does that make them more palatable to everyone? How dense do they need to be before they are viewed as something very different?

Krugman: prediction problems in economics due to the “sociology of economics”

Looking at the predictive abilities of macroeconomics, Paul Krugman suggests there is an issue with the “sociology of economics”:

So, let’s grant that economics as practiced doesn’t look like a science. But that’s not because the subject is inherently unsuited to the scientific method. Sure, it’s highly imperfect — it’s a complex area, and our understanding is in its early stages. And sure, the economy itself changes over time, so that what was true 75 years ago may not be true today — although what really impresses you if you study macro, in particular, is the continuity, so that Bagehot and Wicksell and Irving Fisher and, of course, Keynes remain quite relevant today.

No, the problem lies not in the inherent unsuitability of economics for scientific thinking as in the sociology of the economics profession — a profession that somehow, at least in macro, has ceased rewarding research that produces successful predictions and rewards research that fits preconceptions and uses hard math instead.

Why has the sociology of economics gone so wrong? I’m not completely sure — and I’ll reserve my random thoughts for another occasion.

This is an occasional discussion in social sciences like economics and sociology: how much are they really sciences in the sense of making testable predictions (not about the natural world but about social behavior), and how much are they more interpretive? I’m not surprised Krugman takes this stance but it is interesting that he says the issue lies within the discipline itself for rewarding the wrong things. If this is the case, what could be done to reward successful predictions? At this point, Krugman is suggesting a problem without offering much of a solution. As a number of people, like Nassim Taleb and Nate Silver, have noted in recent years, making predictions is quite difficult and requires both a more humble approach and particular methodological and statistical tools.

How related are home sales and car sales?

Americans like big houses as well as cars. But, are sales of homes related to sales of cars?

Driving to work the other day I heard a radio analyst assert that the recent increase in home sales is responsible for the increase in automobile sales (McMansions come with at least two-car garages, you know!). The short piece didn’t offer much in terms of quantitative information, and this made me wonder what data was used to support such a claim. The analyst could have looked at SEC (Securities and Exchange Commission) filings, the equities and derivatives markets, or perhaps research from industry associations such as the National Association of Realtors; the latter would prompt me to consider confirmation bias.

If only considering home sales, Federal Reserve Board economist Andrew Paciorek recently published an engaging paper describing the effects of household formation on housing demand. Paciorek asserts that over the past 30 years the aging population has moved into smaller homes, which makes intuitive sense given the practicality smaller homes offer seniors. Paciorek also postulates that the poor labor market has depressed the headship rate, defined as the percentage of people who are heads of household, based on U.S. Census population projections.
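As a rough sketch of the definition (the Census Bureau’s exact adjustments aren’t spelled out in the piece), the headship rate is simply the share of adults who head their own household:

```latex
\text{headship rate} = \frac{\text{number of household heads}}{\text{adult population}}
```

So when young adults double up with roommates or move back in with their parents, fewer households form per adult, and housing demand falls with the headship rate even if the population keeps growing.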

According to the S&P/Case-Shiller Home Price Index report, the average U.S. home is now worth approximately 10 percent more than it was a year ago, marking the largest annual improvement since the market turned south in 2006. What of the automobile market, though? American popular culture paints home and car ownership as inseparable parts of the “American Dream.” The most recent J.D. Power report projects August sales to increase 12 percent compared to last year, the highest monthly sales volume since 2006.

It would be easy to paint a picture of recovery for these industries based on sales revenues, although there is no indication of a causal relationship between the two. These reports are meant for the average consumer only in the sense that they stir up positive sentiment, which in turn spurs more discretionary spending. It is more plausible that these reports are meant for the real stakeholders: shareholders and potential investors. We can surmise that in a world of algorithmic high-frequency trading and complex derivatives based on yet other derivatives, the common equities market does not always correlate with real-world P&L performance. I recall a former boss’s retort to traditional value investing: “The market can stay irrational longer than you can stay solvent.”

The conclusion here is that this is a “common sense explanation” without much grounding in data. And, I wonder if this is a classic case of the casual observer making a spurious association: both car sales and home sales rise in a better economy.
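To make the spuriousness point concrete, here is a toy simulation (all numbers invented for illustration) in which home sales and car sales are each driven by an underlying economic factor but have no direct effect on one another. They still correlate strongly until the common factor is controlled for.

```python
# Toy simulation of a spurious correlation: home sales and car sales each
# track an underlying "economy" index plus independent noise; there is no
# direct causal link between them. All numbers are invented.
import numpy as np

rng = np.random.default_rng(42)
n_months = 120

economy = np.cumsum(rng.normal(0, 1, n_months))  # slow-moving economic conditions
home_sales = 100 + 5 * economy + rng.normal(0, 3, n_months)
car_sales = 800 + 20 * economy + rng.normal(0, 10, n_months)

r = np.corrcoef(home_sales, car_sales)[0, 1]
print(f"Raw correlation between home and car sales: {r:.2f}")

# Regress the economy out of both series; the residual correlation
# collapses toward zero, exposing the association as spurious
home_resid = home_sales - np.poly1d(np.polyfit(economy, home_sales, 1))(economy)
car_resid = car_sales - np.poly1d(np.polyfit(economy, car_sales, 1))(economy)
r_partial = np.corrcoef(home_resid, car_resid)[0, 1]
print(f"Correlation after controlling for the economy: {r_partial:.2f}")
```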

This is also interesting because of the number of times in the last decade or so when journalists and commentators have linked the building of McMansions to the consumption of other large objects, particularly SUVs. The idea behind these comparisons is that Americans in general have learned to consume more and bigger items. However, I’ve never seen any data showing that the people who purchase McMansions are necessarily the same people purchasing SUVs, super-sized fast food, bulk items at big box stores, and other large items that fit into a category of excessive consumption.

“The U.S. is now a country where many people live alone in a land of 3-bedroom houses”

Putting together recent data on household type and housing supply in the United States, Emily Badger comes to this conclusion:

As we’ve written before, American households have been getting smaller as our houses, conversely, have actually been getting bigger. But the disconnect between those two trends may be felt the most strongly by people who live alone, whether they’re 22-year-old women who aren’t yet married, or 70-year-old retired widows. As more Americans are opting to live alone than ever before, that now seems like an entirely unremarkable choice. But for years we’ve been building houses for that big nuclear family that’s now less common. And housing data released earlier this summer by the Census Bureau, illustrated at right, suggests that the U.S. is now a country where many people live alone in a land of 3-bedroom houses.

An interesting claim, but without knowing whether the single-person households are actually living in the three-bedroom houses, it is difficult to support.

A thought: I wonder if household types/family life can change much more quickly than the housing stock. That housing supply data includes a lot of homes built in past decades, both in eras when homes were smaller with larger families (pre-1960s) and when homes have been larger (the last few decades). It will take a long time for the housing market to fully adjust to more people living alone. Micro-apartments may be catching on in a few big cities but smaller housing for solo households is still limited.

But, it would also be interesting to ask single-person households how many bedrooms they would prefer to have if they could choose. Three bedrooms allows for space for guests as well as other kinds of rooms (used as storage/closets, hobby rooms, etc.). Two bedrooms does the same thing but with less space, and four bedrooms probably provides too much space.

“The Best Map Ever Made of America’s Racial Segregation”

This is a lofty claim, but the maps clearly show racially divided neighborhoods in American cities. What makes these maps so good?

1. Data and mapping software that allow for mapping at smaller levels. Instead of focusing on municipal boundaries, counties, or census tracts, we can now get at smaller units of analysis (see the sketch after this list).

2. The colors on these maps are visually interesting. I don’t know how much they play around with that but having an eye-popping map doesn’t hurt.

3. Perhaps most important: there are clear patterns to map here. As documented clearly in American Apartheid twenty years ago, American communities are split on racial and ethnic lines.
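For the curious, here is a minimal sketch of the dot-density idea behind maps like these: one dot per N residents, colored by group and jittered around each small area’s centroid. The data below are synthetic stand-ins; a real map would use census-block counts and a proper map projection.

```python
# Toy dot-density map: one dot per N residents, colored by group, jittered
# around each small area's centroid. The data are synthetic; a real map
# would use census-block counts.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
PEOPLE_PER_DOT = 25
JITTER = 0.002  # degrees of jitter around each centroid

# Synthetic "blocks": centroids plus per-group population counts
n_blocks = 400
lon = rng.uniform(-87.9, -87.6, n_blocks)
lat = rng.uniform(41.7, 42.0, n_blocks)
groups = {
    "white": rng.integers(0, 400, n_blocks),
    "black": rng.integers(0, 400, n_blocks),
    "asian": rng.integers(0, 200, n_blocks),
    "hispanic": rng.integers(0, 300, n_blocks),
}
colors = {"white": "blue", "black": "green", "asian": "red", "hispanic": "orange"}

fig, ax = plt.subplots(figsize=(8, 8))
for group, counts in groups.items():
    n_dots = counts // PEOPLE_PER_DOT  # one dot per 25 residents
    xs = np.repeat(lon, n_dots) + rng.normal(0, JITTER, n_dots.sum())
    ys = np.repeat(lat, n_dots) + rng.normal(0, JITTER, n_dots.sum())
    ax.scatter(xs, ys, s=1, c=colors[group], alpha=0.5, label=group, linewidths=0)

ax.set_aspect("equal")
ax.legend(markerscale=10)
ax.set_title("Toy dot-density map (synthetic data)")
plt.show()
```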

Aircraft carriers sailing off the shores of Chicago in World War II

Chicago may be inland but during World War II, aircraft carriers sailed in its waters. See lots of pictures here.

Chicago has a history of military production and training, though it is hard to tell this now.

Born into digital lives: average newborn online within an hour of birth

The newborns of today arrive online very quickly:

The poll found that parents were the most likely to upload pictures of the newborns (62 per cent), followed by other family members (22 per cent) and friends (16 per cent).

The most popular platform for displaying these first baby images was Facebook, followed by Instagram and Flickr…

Marc Phelps of baby photo agency http://www.posterista.co.uk, which commissioned the survey, said: “The fact that a picture of the average newborn is now online within an hour just goes to highlight the enormous impact social media has had on our lives in the past five years, and how prevalent these pages are in helping to keep loved ones informed on the special occasions in our lives, such as the birth of a new child.

Some more on the survey:

The poll by print site http://www.posterista.co.uk, which surveyed 2,367 parents of children aged five and under, aimed to discover the impact social media have had on the way new parents share information and images of their offspring…

The top five reasons cited for sharing these images online included keeping distant family and friends updated (56%), expressing love for their children (49%), describing it as an ideal location to store memories (34%), saying it is a great way to record children’s early years (28%), and to brag to and “better” other parents’ photos (22%).

It sounds like complete digital immersion. The most common reason given for this practice mirrors the main reason users give for participating in SNS like Facebook: to remain connected with others. But the next four reasons differ. The second and fifth reasons suggest posting photos of newborns is about social interaction, first with the new baby (positive, though the baby doesn’t know it; plus, this could be part of a public performance of how love is shown in the 2010s) and then in competition with other parents (negative). The third and fourth reasons are more about new digital tools: instead of developing film or printing pictures, SNS can serve as online repositories of life (offloading our memories online).

Thinking more broadly, what are the ethics of posting pictures of people online who haven’t given their permission or don’t know they are online? This could apply to children but this could also apply to friends or even strangers who end up in your photos. Some have suggested companies like Facebook have information on people who don’t have profiles through the information provided by others. Plus, if you don’t go online, others might think you are suspicious. So, perhaps the best way to protect your content online is not to withdraw and try to hide but rather to rigorously monitor all possible options…