CBS infringes…itself

From the left-hand-doesn’t-know-what-the-right-hand-is-doing department, CBS appears to have infringed its own copyrighted works:

A CBS reporter embedded a video of one of the network’s own pieces of content on a CBS-owned web property, only to have it soon yanked down by lawyers (or lawyer-bots, a.k.a. the auto-DMCA patrol).

Click on over to the original piece on The Future Buzz to see the screenshot, which is pretty hysterical.

A $4000 mistake

Talk about turning lemons into lemonade. A Canadian copywriting firm is attempting to parlay a very expensive mistake into favorable publicity:

“Like many other creative types in the web industry, our copywriters were not clear on image copyright laws, and we were taught an expensive lesson,” said Rick Sloboda, Senior Web Copywriter at Webcopyplus, which provides optimized web content to designers and businesses. “We’re sharing our story, so others can learn from our experience and avoid the same mistake.”

In May 2010, with the assumption that Web images without copyright notices were “public domain” and free to use, a Webcopyplus copywriter used Google Images to find an unmarked 400 x 300 pixel scenic photo to complement an article for a tourism client’s blog.

Webcopyplus has posted additional details on their blog, as well as some resources for obtaining stock photography in a way that won’t get one sued (including Creative Commons photos available via Flickr).

San Fran “coffeehouse and tech incubator” inspired by idea of “third places”

Starbucks CEO Howard Schultz has said in recent years that the company seeks to become a “third place,” a space between work and home. This term was popularized by sociologist Ray Oldenburg in The Great Good Place. But exactly how a coffee shop should operate in order to be a third place is up for debate. A new San Francisco venture, The Summit Cafe, envisions a coffee shop that is also a center for technological incubation:

With its copious power outlets, Gouda-wrapped meatballs, and a curated magazine rack featuring vintage Steve Jobs covers, the Summit café sits at the intersection of San Francisco’s three most conspicuous tribes: techies, foodies, and yuppies. Yet what separates the Summit from being just another Wi-Fi boîte is the dual-purpose nature of the 5,000-square-foot space. One floor above the Laptop Mafia, the café features a cluster of offices where groups of programmers and developers toil away in an effort to launch the next Twitter—or at least the next OkCupid. Created by i/o Ventures, a Bay Area startup accelerator comprising former executives from MySpace (NWS), Yahoo! (YHOO), and file-sharing site BitTorrent, the Summit is equal parts Bell Labs and Central Perk—and probably the country’s first official coffeehouse tech incubator. Every four months, i/o selects and funds a handful of small tech ventures to the tune of $25,000 each in return for 8 percent of common stock. In addition to the cash, each team gets four months of office space at the Summit, mentoring from Web gurus like Russel Simmons of Yelp, and discounts on all the Pickle & Cheese Plates or White Snow Peony Tea they could possibly need. Since the café opened on Valencia Street last fall, two companies have already been sold, including damntheradio, a Facebook fan management tool. To hedge against any potential risk, i/o also rents half of the Summit’s other desk space to independent contractors and fledgling Web entrepreneurs. It’s even experimenting with an arrangement in which customers can pay $500 for a dedicated desk—on top of a $250 membership fee.

Is this sort of thing only possible in San Francisco (high-tech culture) or perhaps just in major cities?

But this space does seem more like a work space than a true third place. Are there people who come here just to hang out? Do fledgling companies that come here mix with other fledgling companies to form new ideas and firms?

Is search engine optimization key to Huffington Post’s success?

This article suggests the Huffington Post’s value (exhibited in its recent sale to AOL) is based more on search engine optimization than on news or citizen journalism:

In addition to writing articles based on trending Google searches, The Huffington Post writes headlines like a popular one this week, “Watch: Christina Aguilera Totally Messes Up National Anthem.” It amasses often-searched phrases at the top of articles, like the 18 at the top of the one about Ms. Aguilera, including “Christina Aguilera National Anthem” and “Christina Aguilera Super Bowl.”

As a result of techniques like these, 35 percent of The Huffington Post’s visits in January came from search engines, compared to 20 percent for CNN.com, according to Hitwise, a Web analysis firm.

Mario Ruiz, a spokesman for The Huffington Post, said search engine optimization played a role on the site but declined to discuss how it was used.

Though traditional print journalists might roll their eyes at picking topics based on Google searches, the articles can actually be useful for readers. The problem, analysts say, is when Web sites publish articles just to get clicks, without offering any real payoff for readers.

This is an ongoing issue with online news providers: simply producing good journalistic content doesn’t get the same number of clicks as celebrity and gossip-laden stories. And as the article suggests, some search engines, such as Google, may fight back by reducing the rank or placement of pages or sites that rely heavily on popular keywords.

But aren’t these sorts of practices inevitable when making money on the Internet is based on page views and clicks on advertisements? The goal then has to be simply getting the most viewers rather than providing the best, most complete, or most useful content.

Just how much did Facebook and Twitter contribute to changes in Egypt?

With the resignation of Hosni Mubarak, there is more talk about how the Internet, specifically social media sites like Facebook and Twitter, helped bring down a dictator in Egypt:

Dictators are toppled by people, not by media platforms. But Egyptian activists, especially the young, clearly harnessed the power and potential of social media, leading to the mass mobilizations in Tahrir Square and throughout Egypt. The Mubarak regime recognized early on that social media could loosen its grip on power. The government began disrupting Facebook and Twitter as protesters hit the streets on Jan. 25 before shutting down the Internet two days later.

In addition to organizing, Egyptian activists used Facebook, YouTube, and Twitter to share information and videos. Many of these digital offerings made the rounds online but were later amplified by Al Jazeera and news outlets around the world. “This revolution started online,” Ghonim told Blitzer. “This revolution started on Facebook.”

Egypt’s uprising followed on the heels of Tunisia’s. In each case, protestors employed social media to help oust an authoritarian government–a role some Western commentators expected Twitter to play in Iran during the election protests of 2009.

This article, and others, seem to want it both ways. On one hand, it seems like social media played a role. But when considering whether they were the main factor, the articles back away. Here is how this same article concludes:

It’s true that tweeting alone–especially from safe environs in the West–will not cause a revolution in the Middle East. But as Egypt and Tunisia have proven, social media tools can play a significant role as activists battle authoritarian regimes, particularly given the tight control dictators typically wield over the official media. Tomorrow’s revolution, as Ghonim would likely attest, may be taking shape on Facebook today.

Or it may not. Ultimately, we need more data. For example, we could match Facebook or Twitter activity regarding Egypt with the level of protests on specific days – did more online traffic or activity lead to bigger protests? This would at least establish a correlation. Why can’t we match GPS information from people using Facebook or Twitter while they were protesting on the streets? This would require more private data, primarily from cell phone companies, but it would be fascinating to look for patterns in this data. And how exactly do these cases from Egypt and Tunisia help us understand what didn’t happen in Iran?
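A first pass at that correlation question could be sketched in a few lines of Python. Note that all of the daily figures below are invented placeholders, not real measurements; the point is only to show the shape of the analysis:

```python
# Hypothetical sketch: correlate daily social media volume with crowd
# estimates. The numbers below are made-up placeholders, not real data.

tweet_volume   = [1200, 3400, 9800, 15000, 22000, 18000, 30000]      # tweets/day
crowd_estimate = [2000, 5000, 20000, 50000, 100000, 80000, 250000]   # people/day

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(tweet_volume, crowd_estimate)
print(f"Pearson r between tweet volume and crowd size: {r:.2f}")
```

Even a strongly positive r here would establish only correlation, not that online activity caused larger protests; that distinction is exactly why the additional data sources mentioned above would matter.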

These questions about the role of social media need some answers and perhaps some innovative insights into data collection. And a thought from another commentator is helpful to keep in mind:

Evgeny Morozov writes in his new book, “The Net Delusion: The Dark Side of Internet Freedom,” that only a small minority of Iranians were actually Twitter users. Presumably, many tweeting about revolution were doing so far from the streets of Tehran.

“Iran’s Twitter Revolution revealed the intense Western longing for a world where information technology is the liberator rather than the oppressor,” Morozov wrote, according to a recent Slate review. In his book, Morozov describes how authoritarian regimes can use the Internet and social media to oppress people, rather than such platforms working only in the other direction.

Perhaps we only want it to be true that social media use can lead to revolution. If there are enough articles written suggesting that social media helped in Egypt and Tunisia, does it make it likely that in the future social media will play a pivotal and even decisive role in social movements? Morozov seems to suggest this is a Western idea, probably rooted in Enlightenment ideals where information can (and should?) disrupt tradition and authoritarianism.

The smell of a bad net neutrality argument

NOTE:  There are follow-up posts available here and here.

Forget the obvious jokes about broadcast content being an open sewer:  Alan J. Roth over at the Congress Blog actually, literally thinks that Washington, D.C. sewage treatment has a lot to teach Netflix:

I have two jobs. One of them – the full-time job that pays the bills – involves directing government affairs for a trade association of internet service providers (ISPs) and telecom companies. The other – a volunteer position – is my service on the Board of Directors of the District of Columbia Water and Sewer Authority (WASA).

And what is this connection that Mr. Roth has seen betwixt his two modes of employ?

I read Netflix CEO Reed Hastings’ January 26th letter to his shareholders [link here] offering his views on who should bear the costs of transporting and delivering his company’s high-volume, bandwidth-hogging Internet video service to its customers. A light bulb went on in my head: There’s a lesson that Hastings and his customers could take from how the Washington area pays for sewage disposal.

At this point, I’m dubious but curious.  Roth goes on to explain that the District owns a major sewage treatment plant in Blue Plains that serves suburbs beyond D.C.:

The suburbs’ sewage gets to Blue Plains via the same kind of “regional front doors” that Hastings described in his shareholder letter. A series of interconnection points link the suburbs’ sewer lines with the “last mile” that WASA operates through DC on the way to final treatment.

So there’s the analogy:  Roth thinks that the sewage system’s “last mile” can be compared with broadband’s last mile.  What’s his point?

Unlike Netflix’s self-serving suggestion that it should pay only to transport its bits to a regional gateway, after which the costs of delivery to the end point would fall on others, [the regional sewer services in D.C.] approached the costs of last-mile delivery differently. Each wholesale customer – that is, each suburban authority sending sewage to DC – pays a pro-rata share of the capital costs for Blue Plains and related transmission facilities, based on an agreed-upon allocation of the plant’s capacity. Operating and maintenance costs are shared based on each suburban customer’s actual flow of sewage to the plant.

By contrast, the “Netflix model” proposes to spread the costs created by Netflix customers to other consumers who derive no benefit from Netflix’s video bits. If WASA operated this way, suburban retail ratepayers would be billed by their own wastewater authorities for the relatively smaller costs of transporting their sewage to the interconnection points at the DC line. After that, DC retail ratepayers would have to pay all the costs of not only transporting suburban sewage to its ultimate destination at Blue Plains, but also for all the costs of processing and treating the suburbs’ waste there.
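The two allocation schemes Roth contrasts can be made concrete with some toy numbers. Everything below (dollar amounts, customer names, shares) is invented for illustration; only the structure of the two models comes from his piece:

```python
# Toy illustration of the two cost-sharing models Roth contrasts.
# All dollar amounts and shares below are invented.

CAPITAL_COST = 1_000_000    # annual capital cost of the treatment plant
OPERATING_COST = 500_000    # annual operating & maintenance cost

# Each wholesale customer has an agreed capacity share (used for capital
# costs) and an actual flow share (used for O&M). Each set sums to 1.0.
customers = {
    "Suburb A": {"capacity_share": 0.30, "flow_share": 0.25},
    "Suburb B": {"capacity_share": 0.20, "flow_share": 0.30},
    "DC":       {"capacity_share": 0.50, "flow_share": 0.45},
}

def wasa_bill(c):
    """WASA model: pro-rata capital by agreed capacity, O&M by actual flow."""
    return CAPITAL_COST * c["capacity_share"] + OPERATING_COST * c["flow_share"]

def netflix_style_bill(name):
    """Roth's reading of the 'Netflix model': senders pay nothing toward the
    plant; the terminal operator (DC) absorbs the entire cost. (Local
    transport costs are omitted for simplicity.)"""
    return CAPITAL_COST + OPERATING_COST if name == "DC" else 0.0

for name, c in customers.items():
    print(f"{name}: WASA ${wasa_bill(c):,.0f} vs "
          f"'Netflix model' ${netflix_style_bill(name):,.0f}")
```

Under the WASA model every customer’s bill tracks its agreed capacity and actual usage; under Roth’s reading of the Netflix model, the full plant cost lands on whoever operates the terminal facility.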

If I understand Roth’s analogy correctly, he has completely misapplied it.  Consider:

1.  Netflix is the analogue to the Blue Plains treatment plant.  Netflix provides the value (clean water/streamed video) that the consumer ultimately wants.  Local ISPs are, in contrast, merely the D.C. suburbs with in-home connections but without adequate sewage treatment facilities.

2.  Netflix has built (or rented) its own sewer/data lines right to the point where the suburb/ISP takes over.

3.  Why shouldn’t the ISP only be paid for “the relatively smaller costs of transporting their sewage to the interconnection points”?

Am I missing something here?  Or is this argument really as self-defeating as it seems?

U.S. intellectual property enforcement actions: the report

CNET News alerted me to yesterday’s release of the 2010 U.S. Intellectual Property Enforcement Coordinator Annual Report on Intellectual Property Enforcement (92 page PDF):

The 92-page report…reads a lot like a report that could have been prepared by lobbyists for the recording or movie industry: it boasts the combined number of FBI and Homeland Security infringement investigations jumped by a remarkable 40 percent from 2009 to 2010.

Nowhere does the right to make fair use of copyrighted material appear to be mentioned, although in an aside on one page Espinel mentions that the administration wants to protect “legitimate uses of the Internet and… principles of free speech and fair process.”

This is the first annual report released by the Office of the United States Intellectual Property Enforcement Coordinator (official website) since its creation in late 2008 and the Senate confirmation of the first Intellectual Property Enforcement Coordinator (“copyright czar”) in late 2009.  Although it covers a wide range of intellectual property issues, I will mostly limit this post to copyright-related items.

Here are some “highlights” from the report:

1.  Policy statement regarding Internet enforcement actions (pp. 5-6):

The debate over the proper role of government in the online environment extends to the issue of intellectual property enforcement: that is, reducing the distribution of pirated or counterfeit goods online or via the Internet, including digital products distributed directly over the Internet or physical products advertised or ordered via the Internet. The choices made in the area of intellectual property enforcement can have spillover effects for government action, regulation or intervention in other areas. Therefore, this office has given considerable thought to the best approach towards enforcement in the online environment. As outlined below, we believe the right approach is one that combines forceful criminal law enforcement with voluntary and cooperative action by the private sector consistent with principles of transparency and fair process. [emphasis added]

Almost as an after-thought, the report later notes (p. 7) that,

without mandating business models, we believe it is important to encourage the development of alternatives for consumers that meet their legitimate needs and preferences. We note some activity in the marketplace to develop new and more flexible methods of distribution and will look for opportunities to support those efforts.

2.  Summary of the current state of the proposed Anti-Counterfeiting Trade Agreement (ACTA) (Wikipedia backgrounder) (pp. 22-23):

ACTA requires, among other things, that signatories establish effective intellectual property enforcement legal frameworks, including obligations to:

  • establish criminal procedures and penalties for willful trademark counterfeiting or copyright piracy, or importation or use, on a commercial scale, and aiding and abetting criminal conduct, and authorizes criminalizing camcording;
  • establish laws that impose imprisonment and destruction as penalties for criminal violations of enforcement laws;
  • establish civil enforcement laws that enhance the tools available to rightholders to crack down on counterfeiting and piracy, including by providing for meaningful damages for rightholders, the destruction of counterfeit goods and also including appropriate safeguards against abuse and to protect privacy as appropriate;
  • ensure that civil and criminal enforcement laws are equally applicable to copyright infringement occurring online; and
  • establish anti-circumvention laws to protect the use of technological protection measures (digital locks).

3.  Summary of successful efforts to recruit private-sector actors into IP enforcement (pp. 27-28)

We believe that most companies share the view that providing services to infringing sites is inconsistent with good corporate business practice and we are beginning to see several companies take the lead in pursuing voluntary cooperative action.

For example, earlier this year, MasterCard withdrew services from Limewire, a well-known file-sharing site. In addition, MasterCard has done an internal assessment of its processes to address infringing sites and has begun a number of cooperative discussions with rightholders….On December 2, 2010, Google announced a number of steps it will take to make its response time to complaints more rapid, to limit the ability of websites used to sell infringing goods to obtain ad revenue and to increase access to legitimate sites….We need to eliminate financial gain derived from infringement. While some products are sold directly, other sites obtain revenue from advertising. The IPEC is in the process of gathering information about the online advertising business to see if there are means to limit illegal sites from using ad revenue as a business model.

4.  Statistical summary of (generally) increased investigations/enforcement/arrests/convictions/seizures (pp. 31-32):

  • In FY 2010, ICE HSI intellectual property investigations increased by more than 41% and ICE HSI arrests increased by more than 37% from FY 2009.
  • In FY 2010, FBI intellectual property investigations increased by more than 44% from FY 2009….
  • In FY 2010, courts sentenced 207 intellectual property defendants. More than half—121—received no prison term, 38 received sentences of 1-12 months in prison, 27 received sentences of 13-24 months in prison, 10 received sentences of 25-36 months in prison, 7 received sentences of 37-60 months in prison and 4 received sentences of more than 60 months in prison….
  • CBP and ICE HSI had 19,959 intellectual property seizures in FY 2010. The domestic value of the seized goods—i.e., the value of the infringing goods, not the manufacturer’s suggested retail price (MSRP) for legitimate product—was $188.1 million. The estimated MSRP of the seized goods—i.e., the value the infringing goods would have had if they had been genuine—was $1.4 billion.

***

A final note:  the report trumpets success–a lot.  Examples abound, but perhaps the most amusing is a case involving counterfeit Cisco equipment sold to the Marines for use in battlefield-critical networks in Iraq.  I’m certainly glad that the government caught this, but do they really have to mention it three separate times (on pages 5, 41, and 50) in the report?

Roundup of additional commentary:

Potential expansion for domain suffixes

Even amidst discussions that the Internet has run out of addresses, there is talk about expanding the list of available domain suffixes beyond the current 21 options. It sounds like these proposals would allow for all sorts of suffixes and this, inevitably, leads to questions about who would get to control certain domains:

This massive expansion to the Internet’s domain name system will either make the Web more intuitive or create more cluttered, maddening experiences. No one knows yet. But with an infinite number of naming possibilities, an industry of Web wildcatters is racing to grab these potentially lucrative territories with addresses that are bound to provoke.

Who gets to run .abortion Web sites – people who support abortion rights or those who don’t? Which individual or mosque can run the .islam or .muhammad sites? Can the Ku Klux Klan own .nazi on free speech grounds, or will a Jewish organization run the domain and permit only educational Web sites – say, remember.nazi or antidefamation.nazi? And who’s going to get .amazon – the Internet retailer or Brazil?

The decisions will come down to a little-known nonprofit based in Marina del Rey, Calif., whose international board of directors approved the expansion in 2008 but has been stuck debating how best to run the program before launching it. Now, the Internet Corporation for Assigned Names and Numbers, or ICANN, is on the cusp of completing those talks in March or April and will soon solicit applications from companies and governments that want to propose and operate the new addresses.

Sounds like we could have some battles on our hands for particular suffixes. Perhaps the companies or organizations with the most money will win.

But many of the options in this article are set up as “good” options versus “bad” options. If given a choice, how many people would want the .nazi domain to be controlled by the Ku Klux Klan? Other cases presented in the story are less clear-cut, such as the dispute over .music, where musicians and agents want open access to the addresses while the music industry wants to control the domain for its own larger purposes. ICANN, the organization that controls the domains, says it has considered this: “For people who might propose controversial domains – such as .nazi, which ICANN officials have worried about – approval will be based on the applicant’s identity and intentions, and on the grounds of ‘morality and public order.’” How in the world will they be able to do this in a way that satisfies multiple parties? Is there a way to decide this before the domains are sold, or are we simply in for long rounds of litigation?

Trying to count the people on the streets in Cairo

This is a problem that occasionally pops up in American marches or rallies: how exactly should one estimate the number of people in a crowd? This has actually been quite controversial at points, as certain organizers of rallies have produced larger figures than official government or media estimates. And with the ongoing protests taking place in Cairo, the same question has arisen: just how many Egyptians have taken to the streets in Cairo? There is a more scientific process for this than a journalist simply making a guess:

To fact-check varying claims of Cairo crowd sizes, Clark McPhail, a sociologist at the University of Illinois and a veteran crowd counter, started by figuring out the area of Tahrir Square. McPhail used Google Earth’s satellite imagery, taken before the protest, and came up with a maximum area of 380,000 square feet that could hold protesters. He used a technique of area and density pioneered in the 1960s by Herbert A. Jacobs, a former newspaper reporter who later in his career lectured at the University of California, Berkeley, as chronicled in a Time Magazine article noting that “If the crowd is largely coeducational, he adds, it is conceivable that people might press closer together just for the fun of it.”

Such calculations of capacity say more about the size of potential gathering places than they do about the intensity of the political movements giving rise to the rallies. A government that wants to limit reported crowd sizes could cut off access to its cities’ biggest open areas.

From what I have read in the past on this topic, this is the common approach: calculate how much space is available to protesters or marchers, calculate how much space an individual needs, and then look at photos to see how much of that total space is used. The estimates can then vary quite a bit depending on how much space each person is assumed to want or need. These days, the quest to count is aided by better photographs and satellite images:
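The area-and-density approach reduces to simple arithmetic. Here is a sketch using McPhail’s 380,000-square-foot figure for Tahrir Square from the quoted article; the per-person density figures are commonly cited rules of thumb in the Jacobs tradition, not measurements, so treat them as assumptions:

```python
# Jacobs-style crowd estimate: usable area divided by space per person,
# scaled by how much of the area photos show to be occupied.
# Density values are rule-of-thumb assumptions, not measurements.

TAHRIR_AREA_SQFT = 380_000  # McPhail's estimate of usable protest space

DENSITIES = {            # square feet occupied per person
    "loose":      10.0,  # people at roughly arm's length
    "dense":       4.5,  # shoulder to shoulder
    "very dense":  2.5,  # tightly packed
}

def estimate_crowd(area_sqft, sqft_per_person, fraction_filled=1.0):
    """Estimate headcount from usable area, assumed space per person,
    and the fraction of the area actually occupied (judged from photos)."""
    return int(area_sqft * fraction_filled / sqft_per_person)

for label, sqft in DENSITIES.items():
    print(f"{label:>10}: ~{estimate_crowd(TAHRIR_AREA_SQFT, sqft):,} people")
```

The spread alone (roughly 38,000 people at a loose density versus about 152,000 tightly packed, for the same square) shows why published estimates for the same event can differ so wildly.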

That is because to ensure an accurate count, some computerized systems require multiple cameras, to get high-resolution images of many parts of the crowd, in case density varies. “I don’t know of real technological solutions for this problem,” said Nuno Vasconcelos, associate professor of electrical and computer engineering at the University of California, San Diego. “You will have to go with the ‘photograph and ruler’ gurus right now. Interestingly, this stuff seems to be mostly of interest to journalists. The funding agencies for example, don’t seem to think that this problem is very important. For example, our project is more or less on stand-by right now, for lack of funding.”

Without any such camera setup, many have turned to some of the companies that collect terrestrial images using satellites, but these companies have collected images mostly before and after the peak of protests this week. “GeoEye and its regional affiliate e-GEOS tasked its GeoEye-1 satellite on Jan. 29, 2011 to collect half-meter resolution imagery showing central Cairo, Egypt,” GeoEye’s senior vice president of marketing, Tony Frazier, said in a written statement. “We provided the imagery to several customers, including Google Earth. GeoEye normally relies on our partners to provide their expert analysis of our imagery, such as counting the number of people in these protests.” This image was taken before the big midweek protests. DigitalGlobe, another satellite-imagery company, also didn’t capture images of the protests, according to a spokeswoman, but did take images later in the week.

Because these images are difficult to come by in Egypt, it is then difficult to make an estimate. As the article notes, this is why you will get vague estimates for crowd sizes in news stories like “thousands” or “tens of thousands.”

Since this is a problem that does come up now and then, can’t someone put together a better method for making crowd estimates? If certain kinds of images could be obtained, it seems like an algorithm could be developed that would scan the image and somehow differentiate between people.
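The simplest version of such an algorithm is standard image processing: threshold the image so people show up as distinct bright blobs, then count connected components with a flood fill. The sketch below runs on a tiny synthetic “aerial photo,” since real crowd imagery raises much harder problems (overlapping bodies, shadows, varying density) that this naive approach cannot handle:

```python
# Sketch: count "people" in an overhead image by flood-filling connected
# blobs of bright pixels. Toy synthetic image only; real crowd photos
# would need far more sophisticated segmentation.
from collections import deque

# Synthetic 20x20 "aerial photo": 0 = ground, 1 = person pixels.
img = [[0] * 20 for _ in range(20)]
for r, c in [(2, 3), (5, 10), (9, 4), (14, 14), (17, 7)]:
    for dr in (0, 1):
        for dc in (0, 1):
            img[r + dr][c + dc] = 1   # each "person" is a 2x2 blob

def count_blobs(grid):
    """Count 4-connected components of 1-pixels via BFS flood fill."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and not seen[r][c]:
                blobs += 1                     # found a new blob
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:                   # fill the whole blob
                    y, x = queue.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] == 1 and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return blobs

print(f"Detected {count_blobs(img)} distinct figures")
```

In practice, this only works when figures do not touch; in a dense crowd the blobs merge, which is presumably why researchers like Vasconcelos treat crowd counting as an open problem rather than a solved one.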

Untimely end for source of Internet meme about Internet

The NBC employee who released footage of Bryant Gumbel, Katie Couric, and Elizabeth Vargas struggling to define the Internet back in 1994 has been fired. So much for (corporate) information wanting to be free.

But I also don’t quite understand what all the fuss has been about. Sure, their conversation sounds silly to us today. But this was only 16 years ago. If anything, this clip and its popularity demonstrate how quickly the Internet has become a part of everyday life. Back in 1994, the Internet was not used by the common American. My family got AOL in the next year or two, and I remember a friend’s family having Prodigy around this time, but most people had no access and, realistically, no need for access. Couric, Gumbel, and Vargas were like many Americans: just trying to figure out what this new technology was and how it was used.

More broadly, this released video fits a pattern of modern people laughing at or commenting on how much better life is now compared to the past. From the vantage point of 2011, we can see the benefits of the Internet, and we are bombarded with messages from companies suggesting we need even more of it (in our phones, in our treadmills, etc.). But anytime a new technology is introduced, it takes time for the mass public to figure out whether it is a good change or not.