Tiny houses may be missing TVs, other modern technologies

Tiny houses differ from McMansions in their size but perhaps also in another feature: a lack of TVs and other modern media technologies.

As I browsed the pages of both companies’ full-color, Robb Report-quality catalogs, one thing really stood out: In no picture of a fully furnished room did I see a single television. That can’t be a coincidence.

These are not the “Jewel Box” new homes filled with automation and electronics Gordon Gekko and his minions are supposedly building as all Baby Boomers are forced to downsize. Jewel Boxes? More like thumb drives if we are making an accurate size comparison.

There are clearly challenges to designing relevant A/V, home theater, whole house entertainment/convenience and security for a tiny home. Multi-purpose structures and thoughtful use of hydraulics just begin to scratch the surface. An exhibit at the Museum of the City of New York has a full-size working model of a mini apartment that shows all sorts of folding and sliding stuff, including a television. It almost looks like two different apartments, literally day and night.

This could suggest tiny houses are not just about having smaller houses: they are part of a larger lifestyle package, a move away from consumerism that includes restricting television consumption. However, these two things don’t necessarily have to go together: tiny house or micro-apartment dwellers may have strong interests in different media, including streaming TV and video games. I would suspect many tiny house owners have a laptop, tablet, and/or smartphone, but I would also guess they don’t want their small homes to be dominated by things like large TVs that are often the focal points of social spaces in McMansions.

A call to “begin creating synthetic sociology”

Two academics call for “synthetic sociology”:

Well, it’s time we begin creating synthetic sociology. Along with Nicholas Christakis, I recently laid out the potential for this new field:

We wanted to see if this could be done in humans. Like crabs, humans have specific kinds of behavior that can be predicted, in groups. To harness this, we created a survey on Amazon’s Mechanical Turk, surveying lots of people at once.

We asked a couple hundred people to complete a string of 1’s and 0’s, and asked them to make it “as random as possible.” As it happens, people are fairly bad at generating random numbers—there is a broad human tendency to suppose that strings must alternate more than they do. And what we found in our Mechanical Turk survey was exactly this: Predictably, people would generate a nonrandom number. For example, faced with 0, 0, there was about a 70 percent chance the next number would be 1.

From this single behavioral quirk, it is theoretically possible to construct a way in which a group of humans can act as what is known as a logic gate in computer science. By running such a question through a survey of enough people, and feeding those results to other people, you can turn them into what computer scientists call a “NOR” gate—a tool to take two pieces of binary input and yield consistent answers. And with just a handful of NOR gates, you can make a binary adder, a very simple computing device that can add two numbers together.

What this means is that, given sufficient numbers of people, and their willingness to answer questions about random bits, we can re-deploy humans for a purpose they were not intended, namely to act as a kind of computer—doing anything from adding two bits to running Microsoft Word (albeit really, really slowly).
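The gate construction described in the excerpt can be sketched in a few lines of code. To be clear, this is a toy simulation, not the authors’ actual Mechanical Turk pipeline: the 70/30 bias and the majority vote over simulated respondents are my own assumptions, based only on the figures quoted above.

```python
import random

def simulated_respondent(a, b):
    """Toy model of the reported bias: shown the bits (0, 0), people tend to
    'complete the string' with a 1; otherwise they lean toward 0.
    The 70/30 split is an assumption based on the excerpt above."""
    p_one = 0.7 if (a, b) == (0, 0) else 0.3
    return 1 if random.random() < p_one else 0

def crowd_nor(a, b, n=101):
    """A majority vote over n simulated respondents acts as a (noisy) NOR gate."""
    ones = sum(simulated_respondent(a, b) for _ in range(n))
    return 1 if ones > n / 2 else 0

def half_adder(a, b):
    """Wire crowd NOR gates into XOR and AND: sum = a XOR b, carry = a AND b."""
    n1 = crowd_nor(a, b)
    n2 = crowd_nor(a, n1)
    n3 = crowd_nor(n1, b)
    xnor = crowd_nor(n2, n3)
    bit_sum = crowd_nor(xnor, xnor)                      # NOT(XNOR) = XOR
    carry = crowd_nor(crowd_nor(a, a), crowd_nor(b, b))  # AND built from NORs
    return bit_sum, carry

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum {s}, carry {c}")
```

With a hundred simulated respondents per gate, the majority vote almost always matches the textbook NOR output, which is the sense in which “enough people” makes the human computer reliable. It also shows how many survey answers it takes just to add two single bits, which is why running Microsoft Word this way would be really, really slow.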

On one hand, it sounds like we are far from using these methods to have humans complete complicated tasks; on the other hand, this continues to build on research about social networks and how information and other traits can be passed along and built upon within a group of people. As these academics suggest, we have come some distance in recent decades in understanding and modeling human behavior, and further advances are likely in the near future.

This also isn’t the first time that I have heard of social scientists using Amazon’s Mechanical Turk for studies. For a relatively small amount of money, researchers can find a willing group of participants for experiments or other tasks.

The “functional religion” of Steve Jobs, Apple

After seeing the response to Steve Jobs’ death, a commentator at the Washington Post looks at some sociological research on Apple and concludes that Jobs was the leader of a religion-like movement:

In a secular age, Apple has become a religion, and Steve Jobs was its high priest.

Apple introduced the iPod in 2001, and that same year, an Eastern Washington University sociologist, Pui-Yan Lam, published a paper titled “May the Force of the Operating System Be With You: Macintosh Devotion as Implicit Religion.” Lam’s research struck close to home, quite literally — her husband has a mini-museum of Apple products in the basement…

And what it stands for, apparently, is more than just gleaming products and easy-to-use operating systems. Lam interviewed Mac fans, studied letters they wrote to trade magazines and scrutinized Mac-related Web sites. She concluded that Mac enthusiasts “adopted from both Eastern and Western religions a social form that emphasized personal spirituality as well as communal experience. The faith of Mac devotees is reflected and strengthened by their efforts in promoting their computer of choice.”…

If that sounds like academic gobbledygook, consider how Apple devotees see the world. Back when Lam’s paper was published, there was a palpable sense of a battle between good and evil. Apple: good. Bill Gates: evil. Apple followers, Lam wrote, pined for a world where “people are judged purely on the basis of their intelligence and their contribution to humanity.” They saw Gates representing a more “profane” world where financial gain was priorities one, two and three.

This is an argument based on the work of Emile Durkheim. It can be applied to many things that take on the functions of religion, such as providing meaning (Apple vs. other corporations, beauty vs. functionality), participating in common rituals (buying new products), and uniting people around common symbols (talking with other Mac users). For example, some have suggested that the Super Bowl is also a “functional religion”: Americans come together to watch football, united in their patriotic and competitive beliefs while holding parties to watch the game and the commercials. Or baseball can be viewed as a “primitive religious ritual.”

While the comments beneath this story suggest some people think otherwise, this is not necessarily a slam against Apple or Steve Jobs. Durkheim argued that individuals need communal ties, and we can find these in a number of places: in the relationships formed in religious congregations, in team-building activities at the office, and at the bars and coffee shops where we try to connect with others during our daily routines. This does not mean Apple was necessarily a “false religion”: of course, we could debate whether people could or should find ultimate meaning in a brand or its products, but we could also acknowledge that the social aspects of Apple made it more than just a set of technological products.

Conclusions about PC vs. Mac users based on an unscientific web survey

Based on the headline, this looks like an interesting story: “Mac vs. PC: The stereotypes may be true.” But there is a problem:

An unscientific survey by Hunch, a site that makes recommendations based on detailed user preferences, found that Mac users tend to be younger, more liberal, more fashion-conscious and more likely to live in cities than people who prefer PCs.

While the first part of this paragraph is treated as a clause that barely affects the rest of the text, it really is the key to the story. Hunch’s survey respondents identify as 52% PC users and 25% Mac users, with 23% identifying with neither (and what do we do with this category?). This compares to a worldwide PC vs. Mac market share of 89% to 11%. This is evidence that the online sample doesn’t quite match up with what computer users are actually buying. Voluntary web surveys are difficult to work with for this reason: even if there are a lot of respondents, we don’t know whether these respondents are representative of larger populations.
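A quick back-of-the-envelope comparison, using only the figures cited above, shows how far the sample drifts from the market (the renormalization step is my own way of making the two sets of numbers comparable):

```python
# Figures cited above: Hunch respondents vs. worldwide market share.
survey = {"PC": 0.52, "Mac": 0.25, "Neither": 0.23}
market = {"PC": 0.89, "Mac": 0.11}

# Set aside the "Neither" group and renormalize so the comparison is apples to apples.
decided = survey["PC"] + survey["Mac"]
mac_in_sample = survey["Mac"] / decided
print(f"Mac share among decided respondents: {mac_in_sample:.0%}")   # about 32%
print(f"Mac share of the worldwide market:   {market['Mac']:.0%}")   # 11%
```

Mac users show up roughly three times as often in the sample as in the market, which is exactly the kind of self-selection that makes it hard to generalize from a voluntary web survey.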

Perhaps CNN does cover itself. The headline does suggest that these stereotypes “may” be correct and the second paragraph suggests the stereotypes may contain “some truth.” But a more cynical take regarding both CNN and Hunch is that they simply want more web visits from devoted PC or Mac defenders. Perhaps the fact that all of this is based on an unscientific survey is less important than driving visitors to one’s site and asking people to comment at the bottom of both stories.

The prospect of the automated grading of essays

As the American public debates the exploits of Watson (and one commentator suggests it should, among other things, sort out Charlie Sheen’s problem), how about turning over the grading of essays to computers? There are programs in the works to make this happen:

At George Mason University Saturday, at the Fourth International Conference on Writing Research, the Educational Testing Service presented evidence that a pilot test of automated grading of freshman writing placement tests at the New Jersey Institute of Technology showed that computer programs can be trusted with the job. The NJIT results represent the first “validity testing” — in which a series of tests are conducted to make sure that the scoring was accurate — that ETS has conducted of automated grading of college students’ essays. Based on the positive results, ETS plans to sign up more colleges to grade placement tests in this way — and is already doing so.

But a writing scholar at the Massachusetts Institute of Technology presented research questioning the ETS findings, and arguing that the testing service’s formula for automated essay grading favors verbosity over originality. Further, the critique suggested that ETS was able to get good results only because it tested short answer essays with limited time for students — and an ETS official admitted that the testing service has not conducted any validity studies on longer form, and longer timed, writing.

Such programs are only as good as the algorithm and method behind them. And it sounds like this program from ETS still has some issues. The process of grading is a skill that teachers develop. Much of this can be quantified and placed into rubrics. But I would also guess that many teachers develop an intuition that helps them quickly apply these important factors to the work they read and grade.
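To see why critics might worry about a formula rewarding verbosity over originality, consider a deliberately naive scorer. This is not ETS’s system or any real grading engine, just a hypothetical sketch of the surface features an automated grader could lean on:

```python
import re

def naive_essay_score(text):
    """A deliberately crude, hypothetical scorer built only on surface features.
    Anything weighted toward these proxies will reward longer essays with
    bigger words, regardless of whether they say anything original."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    if not words:
        return 0.0
    length = len(words)                            # rewards sheer verbosity
    avg_word_len = sum(map(len, words)) / length   # rewards longer words
    vocab_ratio = len(set(words)) / length         # rewards varied vocabulary
    return 0.5 * length + 5 * avg_word_len + 20 * vocab_ratio

concise = "Watson wins because it never second-guesses itself."
padded = ("In the modern contemporary world of today, it is certainly "
          "and undoubtedly the case that ") * 3 + concise
print(naive_essay_score(concise))
print(naive_essay_score(padded))   # the padding alone raises the score
```

That is the shape of the concern raised at the conference: proxies like these can be gamed, and a grader built on them cannot tell padding from substance.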

But on a broader scale, what would happen if the right programs could be developed? Could we soon reach a point where professors and teachers would agree that a program could effectively grade writing?

The Turing Test and what makes us human

Each year, the Loebner Prize competition asks judges to interact through computer terminals with both humans and computer programs. The judges then vote on whether they were talking with a human or a computer in this version of the Turing Test.

Two things struck me in this article:

1. The writer frames this issue of “mind vs. machine” as a more fundamental question about what separates humans from other animals:

Philosophers, psychologists, and scientists have been puzzling over the essential definition of human uniqueness since the beginning of recorded history. The Harvard psychologist Daniel Gilbert says that every psychologist must, at some point in his or her career, write a version of what he calls “The Sentence.” Specifically, The Sentence reads like this:

The human being is the only animal that ______.

The story of humans’ sense of self is, you might say, the story of failed, debunked versions of The Sentence. Except now it’s not just the animals that we’re worried about.

We once thought humans were unique for using language, but this seems less certain each year; we once thought humans were unique for using tools, but this claim also erodes with ongoing animal-behavior research; we once thought humans were unique for being able to do mathematics, and now we can barely imagine being able to do what our calculators can.

If this is the case, then the Turing Test, or Ken Jennings’ battle with a computer on Jeopardy!, becomes the final battleground as humans lose what we thought made us distinct: thinking or reasoning. But this quick definition of human selfhood seems to leave out a very important component: the human search for meaning. Sociologists like Peter Berger have talked about this for decades in order to help explain the prevalence of religion. Berger argued that religions arise because people have enduring questions about why the world is the way it is and what their place is in it. Do other animals pose such existential questions? Is spirituality or religiosity what might really separate humans from other animals?

2. As the writer explains his strategy for showing the judges that he is human, he recounts past conversations computer programs have had with humans. Several of these were quite interesting. One program from the 1960s, Eliza, took a more therapeutic approach to dealing with humans:

User: Men are all alike.

Eliza: In what way?

User: They’re always bugging us about something or other.

Eliza: Can you think of a specific example?

User: Well, my boyfriend made me come here.

Eliza: Your boyfriend made you come here.

User: He says I’m depressed much of the time.

Eliza: I am sorry to hear you are depressed.

User: It’s true. I am unhappy.

Eliza: Do you think coming here will help you not to be unhappy?

User: I need some help, that much seems certain.

Eliza: What would it mean to you if you got some help?

User: Perhaps I could learn to get along with my mother.

Eliza: Tell me more about your family.

This program was apparently quite effective in fooling people, and some even went on to have therapeutic conversations with it. Another program acted belligerent at random points and threw some people off.
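The exchange above takes surprisingly little machinery to approximate. The sketch below is not the original 1960s program, just a minimal illustration of the reflect-and-prompt technique Eliza is known for; the keyword rules and canned prompts are my own inventions:

```python
import re

# Swap pronouns so a statement can be reflected back at the user.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are",
               "i'm": "you're", "you": "I", "your": "my"}

# Keyword pattern -> therapeutic prompt, in the spirit of the transcript above.
RULES = [
    (r"\ball alike\b", "In what way?"),
    (r"\balways\b", "Can you think of a specific example?"),
    (r"\bmy (mother|father|family)\b", "Tell me more about your family."),
    (r"\bi'm (.*)", "I am sorry to hear you are {0}."),
    (r"\bi need (.*)", "What would it mean to you if you got {0}?"),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def eliza_reply(statement):
    text = statement.lower().rstrip(".!?")
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    # No keyword matched: echo the statement back with pronouns swapped.
    return reflect(text).capitalize() + "."

print(eliza_reply("Well, my boyfriend made me come here."))
# Well, your boyfriend made you come here.
print(eliza_reply("Perhaps I could learn to get along with my mother."))
# Tell me more about your family.
```

That a handful of rules like these, written in the 1960s, were enough to draw some people into therapeutic-feeling conversations says as much about us as about the program.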

It sounds like these computer programs will continue to get more sophisticated.

A reminder that information overload is not just limited to our particular era in history

There is an incredible amount of data one can access today through a computer and a high-speed Internet connection: websites, texts, statistics, videos, music, and more. While it all may seem overwhelming, a Harvard history professor reminds us that a glut of information is not a problem faced only by people in the Internet age:

information overload was experienced long before the appearance of today’s digital gadgets. Complaints about “too many books” echo across the centuries, from when books were papyrus rolls, parchment manuscripts, or hand printed. The complaint is also common in other cultural traditions, like the Chinese, built on textual accumulation around a canon of classics…

It’s important to remember that information overload is not unique to our time, lest we fall into doomsaying. At the same time, we need to proceed carefully in the transition to electronic media, lest we lose crucial methods of working that rely on and foster thoughtful decision making. Like generations before us, we need all the tools for gathering and assessing information that we can muster—some inherited from the past, others new to the present. Many of our technologies will no doubt rapidly seem obsolete, but, we can hope, not human attention and judgment, which should continue to be the central components of thoughtful information management.

As technology changes, people and cultures have to adapt. We need citizens who are able to sift through all the available information and make wise decisions. This should be a vital part of the educational system – it is no longer enough to know how to access information; rather, we need to be able to make choices about which information is worthwhile, how to interpret it, and how to put it to use.

Take, for example, the latest Wikileaks dump. The average Internet user no longer has to rely on news organizations to tell him or her how to interpret the information (though they would still like to fill that role). But simply having access to a bunch of secret material doesn’t necessarily lead to anything worthwhile.

From awe to impatience with machines

Christine Rosen at InCharacter.org writes about our relationship with machines. Her argument: people in the 1800s and early 1900s were awed by machines while today, “the more personalized and individualized our machines have become, the less humility we feel in using them.” Rosen suggests how this came about:

The awe experienced by earlier generations was part of a different worldview, one that demonstrated greater humility about many things, not least of which concerned their own human limits and frailties. Today we believe our machines allow us to know a lot more, and in many ways they do. What we don’t want to admit – but should – is that they also ensure that we directly experience less.

A thought-provoking essay. Machines are now so common and cheap that I think we often hardly recognize how they have changed our lives. In fact, new machines need to be almost life-altering (or have some new image attached to them) to gain our attention. Many of our common machines, like the automobile or many kitchen appliances, haven’t changed all that much over time as they still perform the same basic functions.

Having a sense of awe about a machine might also help us recognize some of the downsides of using new machines. If we are used to computers, we don’t think much anymore about the implications of joining a site like Facebook. Or we may not consider how having a search engine like Google affects how we think or gather and process information. We tend to accept new machines today as inevitable signs of progress (and we are progressing, right?) rather than stepping back and assessing what they mean.