Reflections on reasons some people hold out against smartphones

I found this overview of reasons why some people haven’t yet adopted smartphones to be quite interesting, having been one of “those people” up until a few months ago. Here are the five reasons given for why some people haven’t made the switch:

  • Fear of addiction. “I don’t want to end up falling victim to the smartphone, where I dive in and get lost for hours at a time,” 24-year-old dumbphone owner Jim Harig told The Times‘ Teddy Wayne.
  • The benefits of disconnectivity. “I also fear my own susceptibility to an e-mail-checking addiction,” writes Wayne. “The pressure to always be in communication with people is overwhelming,” Erica Koltenuk tells the Journal‘s Sue Shellenbarger.
  • Cost. “These die-hards say they are reducing waste and like sidestepping costly service contracts,” writes Shellenbarger.
  • Durability. “I want a phone that you could drop-kick into a lake and go get it and still be able to make a call,” says Patrick Crowley, who bought a new phone 5 years ago.
  • Anti-consumerism. “[David] Blumenthal sees no need to ‘keep running out and buying new things if you can patch them and they hold together,'” explains the Journal.

Until this past December, I would have argued for the first three reasons. Here are my experiences with these three reasons over the four months I have had a smartphone:

1. Fear of addiction. I didn’t want to be a person who pulls out their phone at every dull moment. I don’t think I do this today, but the phone is undeniably handy in several situations. First, since I love learning and information, it is invaluable to be able to look things up. Second, in moments where I would have been waiting anyway, say at the barber shop or in line, I can quickly look things up and use my time well (what a rationalization…). Third, a smartphone is indispensable while traveling, whether one needs a map, restaurant reviews, or airline information. Still, I would say that addiction is hard to combat.

2. Disconnectivity. I like the occasional experience of being disconnected. In fact, I think it is necessary to disconnect occasionally from all electronic/digital media. Here is my personal measure of addiction: if I can still enjoy a longer period of time (a few hours to a few days) without feeling a consistent need to check my phone, I’m in good shape. The smartphone should be a tool, not my life. The phone can enhance my interaction with others but it can also be a hindrance and I want to be mindful of this. Additionally, I have refused to connect my phone to my work email and I don’t want any apps that would allow me to do work through my phone.

3. Cost. I’m still irritated about this issue but there are cheaper options than the contract carriers. My wife and I got phones from Virgin Mobile and while it is not perfect, it is cheaper than any of the contract options. Perhaps this is simply the price of living in the modern world, and, considering that these phones are like little computers, it is a worthwhile investment.

All in all, the smartphone world is a nice one even if I have lost the “pride” mentioned in this article of being someone who can still hold out against the powerful forces of technology and consumerism. But I can still be part of the camp that relishes not having an iPhone.

Modern skeuomorphs are touches of the past in a digital age

Clive Thompson discusses skeuomorphs, “a derivative object that retains ornamental design cues to a structure that was necessary in the original” (Wikipedia definition), in a digital world:

Now ask yourself: Why does Google Calendar—and nearly every other digital calendar—work that way? It’s a strange waste of space, forcing you to look at three weeks of the past. Those weeks are mostly irrelevant now. A digital calendar could be much more clever: It could reformat on the fly, putting the current week at the top of the screen, so you always see the next three weeks at a glance…

Because they’re governed by skeuomorphs—bits of design that are based on old-fashioned, physical objects. As Google Calendar shows, skeuomorphs are hobbling innovation by lashing designers to metaphors of the past. Unless we start weaning ourselves off them, we’ll fail to produce digital tools that harness what computers do best.

Now, skeuomorphs aren’t always bad. They exist partly to orient us to new technologies. (As literary critic N. Katherine Hayles nicely puts it, they’re “threshold devices, smoothing the transition between one conceptual constellation and another.”) The Kindle is easy to use precisely because it behaves so much like a traditional print book.

But just as often, skeuomorphs kick around long past the point of reason. Early automobiles often included a buggy-whip holder on the dashboard—a useless fillip that designers couldn’t bear to part with.

I’ve noticed the same thing on my Microsoft Outlook calendar: the default is to show the full month of February, even late in the month when I don’t really care to look back at earlier weeks and would much rather see what is coming up in March. I can alter this somewhat in the options by displaying two months at a time, but it still shows the entire earlier part of February.

What would be interesting to hear Thompson discuss is the half-life of skeuomorphs. If they are indeed useful for helping users make a transition from an old technology to a new one, how long should the old feature stick around? Is this made more complicated when the product has a broader audience? For example, iPhone users could be anyone from a 14-year-old to an 80-year-old. Presumably, the 14-year-old might want the changes to come more quickly and tends to acquire newer devices earlier, but the device still has to work for the 80-year-old who is just getting their first smartphone, partly because smartphones only recently became so cheap. How do companies make this decision? Could a critical mass of users “force”/prompt a change?

This is also a good reminder that new technologies sometimes get penalized for being too futuristic or too different. If skeuomorphs are used, users will make the necessary steps over time toward new behaviors and ways of seeing the world. Perhaps Facebook falls into this category. Having all “friends” in one category is often clunky, but if users had to simply open their information to anyone, who would want to participate? However, by gradually changing the structure (remember, we once had networks, which were a comforting feature because you could easily place/ground people within an existing community), Facebook users can be moved toward a more open environment.

In general, social change takes time, even if the schedule in recent decades has become more compressed.

College student survives 90 day “Amish Project” without technology

This is a news story that could only be written in our times: a University of Wisconsin-Madison student voluntarily unplugged from all media for 90 days and lived to tell about it. Here is a quick description of his “Amish Project”:

From October to December, he unplugged from social media, email, texts, and cell phones because he felt that we spend more quality time with gadgets and keyboards than we do with the people we really care about.

During his social experiment, he found that some people he counted among his close friends really weren’t that close after all. He also discovered that taking a break from his relationship with social media and really paying attention to the people around him can revive real-life romance.

And a few short thoughts from the student about his experiences:

[on getting started] I mean, I struggle with that because everyone wants to know about it, and wants to know how different it is. It’s hard, because I was just going to turn off my phone at first. That was the thing that bothered me most, but I realized that if I turned off the phone, people were just going to email me all the time or send me a million Facebook messages. It’s kind of a hard thing, because we’re getting to the point where if you’re not responding to people’s text messages within an hour of when they send them, or within a day for emails, it’s just socially unacceptable. It’s been hard for me since I’ve been back. I’ve been bad with my phone and people are, like, “What the hell? I text messaged you…” So I haven’t been up to social standards in terms of responding and people don’t really understand that, I guess…

[on finishing the project and returning to technology] It’s definitely different, but I catch myself doing exactly what I hated. Someone is talking to me and I’m half-listening and reading a text under the table. For me, it’s trying to be more aware of it. It kind of evolved from being about technology to more of just living in the moment. I think that’s what my biggest thing is: There’s not so much chasing for me now. I’m here now, and let’s just enjoy this. You can be comfortable with yourself and not have to go to the crutch of your phone. For me, that’s more what I will take away from this.

A few thoughts:

1. The author concludes that this means “texts and Facebook wall posts can serve as an attractive veneer making relationships seem more genuine than they really are.” I wonder how many people feel this way, and if many do, do they simply keep going along out of habit or because of social pressure?

2. It seems like a lot of things that were possible for this student without technology might be much more difficult for the average adult. At college, it is much easier to find people, run into others, and pass notes, even on a big campus like UW-Madison. Could the average adult who lives alone and commutes to work make this work? Perhaps the key here is living close to the people one cares about.

3. What if it becomes “cool” to unplug from technology or turns into a status symbol rather than a reasoned choice about paying more attention to the people that matter?

4. I find the set-up to stories like these to be humorous: how in the world could people have survived without the technology we have today?!? Somehow they managed. The comparison here to the Amish is funny as well – there is a whole lifestyle associated with the Amish that this college student isn’t truly considering.

5. This story presents a contrast between “authentic/real” relationships versus “superficial” relationships. Is it really that easy to categorize relationships? Research suggests most people use technology like Facebook to maintain connections with people they already know – is that necessarily so bad? Perhaps it does detract from the present but it also makes us more aware of our broader social networks.

Considering Steve Jobs and the role of cultural context in innovation

David Brooks explores the “innovation stagnation thesis” and one of the ideas of this argument is that cultural context matters for innovation:

Third, there is no essential culture clash. Look at the Steve Jobs obituaries. Over the course of his life, he combined three asynchronous idea spaces — the counterculture of the 1960s, the culture of early computer geeks and the culture of corporate America. There was LSD, “The Whole Earth Catalogue” and spiritual exploration in India. There were also nerdy hours devoted to trying to build a box to make free phone calls.

The merger of these three idea networks set off a cascade of innovations, producing not only new products and management styles but also a new ideal personality — the corporate honcho in jeans and the long-sleeve black T-shirt. Formerly marginal people came together, competed fiercely and tried to resolve their own uncomfortable relationships with society.

The roots of great innovation are never just in the technology itself. They are always in the wider historical context. They require new ways of seeing. As Einstein put it, “The significant problems we face cannot be solved at the same level of thinking we were at when we created them.”

If you want to be the next Steve Jobs and end the innovation stagnation, maybe you should start in hip-hop.

So what exactly is Brooks saying? People who want to be innovators need to immerse themselves in diverse cultural systems so that they can then synthesize different ideas in new ways? Or is it that innovators like Jobs are only possible in certain cultural contexts and our current cultural context simply doesn’t push people into these different ideas or doesn’t promote this?

Sociologists of culture would have something to say about this. While Jobs clearly had unique individual skills, the production approach would emphasize how his combination of cultural contexts was made possible. He came of age in an era when individuals were encouraged to seek out new ideas and learn how to express themselves. He started a computer company in a field that didn’t have many dominant players, where two guys working in a garage could create one of the world’s most enduring brands. He was alive in an era when information technology was a hot area and perhaps ranked higher in people’s interests than things like space exploration and medical cures. (One way to think about this is to wonder whether Jobs could have been successful in other fields. Were his skills and context translatable into other fields? Could Jobs have helped find a cure for cancer rather than create personal computing devices? Should he have tackled those other fields – what is the opportunity cost to the world of his choice?) He had the education and training (though no college degree) that helped him to be successful.

In the end, we could ask how as a culture or a society we could encourage more people to become innovators. Is studying hip-hop really the answer? What kind of innovation do we want most in our society – scientific progress or self-expression or dealing with social problems or something else? When we talk about pushing math and science in schools, what innovations do we want our students to produce?

McMansions are too costly in terms of money and relationships

In another article about McMansions in Australia (and I have been seeing more and more of these – perhaps due to the recent news that the country has the largest new homes on average), one writer suggests McMansions cost too much and have a negative impact on relationships:

Australians live in the world’s biggest homes but new research shows our trend to upsize our living space is reversing. The average size of new houses being built in this country is getting smaller as people start to realise that living in a McMansion does not make sense. While the financial implications of owning a large home have surely been considered, there are other costs that are not as obvious…

The reasons are obvious- it costs too much. Far from being energy efficient, the financial burden that comes with a bigger pad can weigh too heavily on a household already struggling to keep up with the rising cost of living. There are bigger gas and power bills and mortgage repayments not to mention the hassle of having to spend time and money maintaining and keeping the whole thing clean … no wonder we are thinking again.

Another problem of the larger, have-it-all home is that we have less need to leave it to meet our daily needs. Social interaction is being replaced by home-based activity for our convenience. It is easier to get on the treadmill, ‘chat’ to someone on Facebook, play tennis on the wii and shop online instead of getting out into our communities.

There is no substitute for real communication and the lack of it can affect our sense of well-being. Mental health issues such as depression and the feeling of isolation that many people experience is the reason some programs are being developed, specifically aiming to get people out of the house, talking to others and active in their communities. ‘The Shed’ for men and ‘R U OK’ Day are a couple of examples.

The financial costs of McMansions are clear, particularly if you include costs beyond the price of the home and consider the impact on other areas like cars, roads, infrastructure, and filling/furnishing a larger home.

The relational impact of McMansions has also been covered by others, particularly since they seem to encourage more private lives. But my mind jumped to the next step in the argument illustrated in this article: how small would houses need to be in order to encourage interaction even among family members? If a McMansion is roughly 3,000-6,000 square feet, it seems like it would be fairly easy for family members to avoid each other. But if a home is 2,000 square feet, would families necessarily interact more? Perhaps if we went back to the era of Levittown-sized homes, around 900 square feet, this could induce some interaction.

But even in smaller homes, there are other factors at work. At the end of the article, the writer suggests that perhaps the real problem isn’t the size of the home:

I am conscious of creating an environment where communication is encouraged and valued so we know what’s going on in each other’s lives. There are no computers, TVs or other electronic entertainment in the bedrooms. Our living space is used for meals, games, entertainment, homework and handstands. It’s a bit cluttered but it’s homely and there’s always someone to talk to.

Technology could play a role as could cultural ideas about the need for “time alone.”

In the end, a smaller home probably increases the number of times people have to run into each other but it doesn’t necessarily mean that they will have deeper, more meaningful relationships. There are larger issues at work here beyond the number of square feet a home has or whether the home has a porch close to the street.

College students don’t know how to use Google

I recently heard about this study at a faculty development day: college students have difficulty understanding and using search results.

Researchers with the Ethnographic Research in Illinois Academic Libraries project watched 30 students at Illinois Wesleyan University try to search for different topics online and found that only seven of them were able to conduct “what a librarian might consider a reasonably well-executed search.”

The students “appeared to lack even some of the most basic information literacy skills that we assumed they would have mastered in high school,” Lynda Duke and Andrew Asher write in a book on the project coming out this fall.

At all five Illinois universities, students reported feeling “anxious” and confused when trying to research. Many felt overwhelmed by the volume of results their searches would turn up, not realizing that there are ways to narrow those searches and get more tailored results. Others would abandon their research topics when they couldn’t find enough sources, unaware that they were using the wrong search terms or database for their topics.

The researchers found that students did not know “how to build a search to narrow or expand results, how to use subject headings, and how various search engines (including Google) organize and display results.” That means that some students didn’t understand how to search only for news articles, or only for scholarly articles. Most only know how to punch in keywords and hope for the best.

Such trust in technology. Wonder where this came from?

I like how anthropologists were involved in this study. Including an observation component could make this data quite unique. I don’t think many people would think that ethnographic methods could be used to examine such up-to-date technology.

Several other thoughts:

1. How many adults could explain how Google displays pages?

1a. If people knew how Google organized things, would they go elsewhere for information?

2. Finding and sorting through information is a key problem of our age. The problem is not a lack of information or possible sources; rather, there is too much.

3. Who exactly in schools should be responsible for teaching this? Librarians, perhaps, but students have limited contact. Preferably, all teachers/professors should know something about this and talk about it. Parents could also impart this information at home.

4. I’m now tempted to ask students to include all of their search terms in final projects so that I can check and see whether they actually sorted through articles or they simply picked the top few results.

Telling graphs about American infrastructure spending

A number of commentators in recent years have pointed out the relatively small amount of spending on infrastructure by the American government. Here is another take on this, complete with some handy graphs. Additionally, here is some interpretation about government spending on education and technology:

Productivity-enhancing spending, according to Meeker, comes from three main sources: infrastructure, education and research and development investment. We’ve seen infrastructure spending collapse as a share of the budget since the 1960s. What about education and R&D?

In 1970, the U.S. (at the federal, state and local level) spent twice as much on education as health care. Twenty years later, health care closed the gap, and today, total government spending on health care is about 33 percent higher than education spending, which is more or less even with its 1970s levels.

Second, look at technology. R&D spending exploded in the late 1950s and 1960s on the back of government investments in aeronautics and science. Fifty years later, federal R&D has fallen below 1950s levels as a share of GDP, while the private sector has picked up the slack.

So after looking at figures like this, I want to ask what kinds of strategies could be used to tackle the issue of infrastructure spending, particularly with budget issues looming all over the country.

San Fran “coffeehouse and tech incubator” inspired by idea of “third places”

Starbucks CEO Howard Schultz has said in recent years that the company seeks to become a “third place,” a space between work and home. This term was popularized by sociologist Ray Oldenburg in The Great Good Place. But exactly how a coffee shop should operate in order to be a third place is up for debate. A new San Francisco venture, the Summit Cafe, combines a coffee shop with a center for technological incubation:

With its copious power outlets, Gouda-wrapped meatballs, and a curated magazine rack featuring vintage Steve Jobs covers, the Summit café sits at the intersection of San Francisco’s three most conspicuous tribes: techies, foodies, and yuppies. Yet what separates the Summit from being just another Wi-Fi boîte is the dual-purpose nature of the 5,000-square-foot space. One floor above the Laptop Mafia, the café features a cluster of offices where groups of programmers and developers toil away in an effort to launch the next Twitter—or at least the next OkCupid. Created by i/o Ventures, a Bay Area startup accelerator comprising former executives from MySpace (NWS), Yahoo! (YHOO), and file-sharing site BitTorrent, the Summit is equal parts Bell Labs and Central Perk—and probably the country’s first official coffeehouse tech incubator. Every four months, i/o selects and funds a handful of small tech ventures to the tune of $25,000 each in return for 8 percent of common stock. In addition to the cash, each team gets four months of office space at the Summit, mentoring from Web gurus like Russel Simmons of Yelp, and discounts on all the Pickle & Cheese Plates or White Snow Peony Tea they could possibly need. Since the café opened on Valencia Street last fall, two companies have already been sold, including damntheradio, a Facebook fan management tool. To hedge against any potential risk, i/o also rents half of the Summit’s other desk space to independent contractors and fledgling Web entrepreneurs. It’s even experimenting with an arrangement in which customers can pay $500 for a dedicated desk—on top of a $250 membership fee.

Is this sort of thing only possible in San Francisco (high-tech culture) or perhaps just in major cities?

But this space does seem more like a work space than a true third place. Are there people who come here just to hang out? Do fledgling companies that come here mix with other fledgling companies to form new ideas and firms?

How the John Edwards affair became news

How exactly certain scandals come to light when they do is often an interesting tale. The former editor of the National Enquirer explains how his investigative team put together the story of John Edwards’ affair. The tale involves the use of technology and a profiler who provided insights into how to trap Edwards in his lies:

I knew there was no viable scenario for Edwards to confess to the Enquirer. I faced the bitter realization that another news organization would reap the benefits of our team’s hard work and get the confession, but I also knew that ultimately that confession would validate the Enquirer‘s earlier story as well as the new one.

Behind the scenes we exerted pressure on Edwards, sending word though mutual contacts that we had photographed him throughout the night. We provided a few details about his movements to prove this was no bluff.

For 18 days we played this game, and as the standoff continued the Enquirer published a photograph of Edwards with the baby inside a room at the Beverly Hilton hotel.

Journalists asked if we had a hidden camera in the room. We never said yes or no. (We still haven’t). We sent word to Edwards privately that there were more photos.

He cracked. Not knowing what else the Enquirer possessed and faced with his world crumbling, Edwards, as the profiler predicted, came forward to partially confess. He knew no one could prove paternity so he admitted the affair but denied being the father of Hunter’s baby, once again taking control of the situation.

Perhaps this story isn’t anything unusual – technology makes information gathering a lot easier. Yet it is somewhat shocking to me that plenty of powerful people, like John Edwards or Tiger Woods, think that they can get away with things in the long run. Sure, the National Enquirer had to spend months tracking down this story but in the end, it was doable and effectively changed the public perception of John Edwards forever. Is there something that happens when people are put in powerful positions that changes their perceptions of what they can and can’t get away with?

Is it even possible for the powerful to get away with things like this anymore? How many “scandals” are lurking out there somewhere? It is certainly a far cry from the days of the 1950s and before, when sportswriters routinely shied away from reporting on what athletes did away from home and political reporters didn’t talk about everything.

A reminder that information overload is not just limited to our particular era in history

There is an incredible amount of data one can access today through a computer and high-speed Internet connection: websites, texts, statistics, videos, music, and more. While it all may seem overwhelming, a Harvard history professor reminds us that facing a glut of information is not a problem that has been faced only by people in the Internet age:

information overload was experienced long before the appearance of today’s digital gadgets. Complaints about “too many books” echo across the centuries, from when books were papyrus rolls, parchment manuscripts, or hand printed. The complaint is also common in other cultural traditions, like the Chinese, built on textual accumulation around a canon of classics…

It’s important to remember that information overload is not unique to our time, lest we fall into doomsaying. At the same time, we need to proceed carefully in the transition to electronic media, lest we lose crucial methods of working that rely on and foster thoughtful decision making. Like generations before us, we need all the tools for gathering and assessing information that we can muster—some inherited from the past, others new to the present. Many of our technologies will no doubt rapidly seem obsolete, but, we can hope, not human attention and judgment, which should continue to be the central components of thoughtful information management.

As technology changes, people and cultures have to adapt. We need citizens who are able to sift through all the available information and make wise decisions. This should be a vital part of the educational system – it is no longer enough to know how to access information; rather, we need to be able to make choices about which information is worthwhile, how to interpret it, and how to put it to use.

Take, for example, the latest Wikileaks dump. The average Internet user no longer has to rely on news organizations to tell him or her how to interpret the information (though they would still like to fill that role). But simply having access to a bunch of secret material doesn’t necessarily lead to anything worthwhile.