
Pixel Nation

80 Weeks of World Wide Wade

by

Wade Roush

Xconomy.com
Pixel Nation: 80 Weeks of World Wide Wade by Wade Roush

Copyright © 2008-2010 Xconomy Inc. All Rights Reserved

Published by Xconomy (www.xconomy.com).

No part of this book may be used or reproduced in any manner whatsoever without
written permission except in the case of brief quotations embodied in critical articles or
reviews.

To inquire about reproduction or to report errors, please e-mail: editors@xconomy.com.

All product names and registered trademarks used in this book are the property of their
respective owners.

Cover photo by Wade Roush.


Contents
Introduction
1: Reinventing Our Visual World, Pixel By Pixel
2: The Coolest Tools for Trawling & Tracking the Web
3: Google Earth Grows a New Crop of 3-D Buildings, and Other Web Morsels to Savor
4: Turn Your HDTV into a Digital Art Canvas
5: Unbuilt Boston: The Ghost Cloverleaf of Canton
6: An Elegy for the Multimedia CD-ROM Stars
7: The Future's So Bright, I Gotta Wear Screens
8: Science Below the Surface
9: Gazing Through Microsoft's WorldWide Telescope
10: Megapixels, Shmegapixels: How to Make Great Gigapixel Images With Your Humble Digital Camera
11: You Say Staccato, I Say Sfumato: A Reply to Nicholas Carr
12: Space Needle Envy: A Bostonian's Ode to Seattle
13: You're Listening to Radio Lab—Or You Should Be
14: Can Evernote Make You into a Digital Leonardo?
15: Are You Ready to Give Up Cable TV for Internet Video?
16: Turn your iPhone or iPod into a Portable University
17: In Defense of the Endangered Tree Octopus, and Other Web Myths
18: Pogue on the iPhone 3G: A Product Manual You Won't Be Able to Put Down
19: Photographing Spaces, Not Scenes, with Microsoft's Photosynth
20: What Web Journalists Can Learn from Comics
21: ZvBox's Unhappy Marriage of PC and HDTV
22: GPS Treasure Hunting with Your iPhone 3G
23: Boston Unblurred: Debunking the Google Maps Censorship Myth
24: Four Ways Amazon Could Make Kindle 2.0 a Best Seller
25: Playful vs. Preachy: Sizing Up TV's New Science Dramas
26: Is Brown the New Green? Why Boston's Ugly, Expensive Macallen Condos Shouldn't Be a Model For Green Buildings
27: The Encyclopedia of Life: Can You Build A Wikipedia for Biology Without the Weirdos, Windbags, and Whoppers?
28: In Google Book Search Settlement, Readers Lose
29: In the World of Total Information Awareness, "The Last Enemy" Is Us
30: Attention, Startups: Move to New England. Your Gay Employees Will Thank You.
31: Springpad Wants to Be Your Online Home for the Holidays, And After
32: Speak & Spell: New Apps Turn Phones into Multimedia Search Appliances
33: Former "Daily Show" Producer Karlin is Humorist Behind WonderGlen Comedy Site
34: The 3-D Graphics Revolution of 1859—and How to See in Stereo on Your iPhone
35: Ditch That USB Cable: The Coolest Apps for Sending Your Photos Around Wirelessly
36: Have Xtra Fun Making Movies with Xtranormal
37: E-Book Readers on the iPhone? They're Not Quite Kindle Slayers Yet
38: WonderGlen Comedy Portal Designed to Plumb Internet‘s Unreality, Says Karlin
39: How I Declared E-Mail Bankruptcy, and Discovered the Bliss of an Empty Inbox
40: Public Radio for People Without Radios
41: Plinky: The Cure for Blank Slate Syndrome
42: Massachusetts Technology Industry Needs a New Deal, Not a New Brand
43: Three New Reasons To Put Off Buying a Kindle
44: Top 9 Tech Updates: Photosynth, Geocaching, Google Earth, and More
45: Google Voice: It's the End of the Phone As We Know It
46: Tweets from the Edge: The Ins and Outs (and Ups and Downs) of Twitter
47: Will Hunch Help You Make Decisions? Signs Point to Yes
48: Boston Can Survive, Even Thrive, Without Today's Globe
49: RunKeeper's Mad Dash to the Marathon Finish
50: Cutting the Cable: It‘s Easier Than You Think
51: Why Kindle 2 Is the Goldilocks of E-Book Readers
52: People Doing Strange Things With Soldering Irons: A Visit to Hackerspace
53: Will Quick Hit Score Big? Behind the Scenes with Foxborough's Newest Team
54: Are You a Victim of On Demand Disorder?
55: German Web 2.0 Clothing Retailer Spreadshirt Finds Boston Fits It to a T
56: Boston's Digital Entertainment Economy Begins to Sense Its Own Strength
57: The Eight (Seven…Six?) Information Devices I Can't Live Without
58: Personal Podcasting with AudioBoo, UK's "Twitter for Voice"
59: Art Isn't Free: The Tragedy of the Wikimedia Commons
60: Project Tuva or Bust: How Microsoft's Spin on Feynman Could Change the Way We Learn
61: Shareaholic Becomes the Link-Sharing Tool of Choice
62: Startups Give E-mail a Big Boost on the iPhone with ReMail and GPush
63: Why It's Crazy for Authors to Keep Their Books Off the Kindle
64: A Manifesto for Speed
65: Seven Projects to Stretch Your Digital Wings: Part One
66: Seven Projects to Stretch Your Digital Wings: Part Two
67: Seven Projects to Stretch Your Digital Wings: Part Three
68: Ansel Adams Meets Apple: The Camera Phone Craze in Photography
69: How to Launch a Professional-Looking Blog on a Shoestring
70: Facing Up to Facebook
71: The Kauffman Foundation: Bringing Entrepreneurship Up to Date in Kansas City
72: Sony, Google Point the Way Toward a More Open Future for E-Books
73: Is it Real or Is It High Dynamic Range?
74: Using Google's Building Maker to Change the Face of Boston
75: Digital Magazines Emerge—But Glossy Paper Publishers Haven't Turned the Page on the Past
76: Tablet Fever: How Apple Could Go Where No Computer Maker Has Gone Before
77: Entrepreneurship May Work Like A Clock, But It Still Needs Winding
78: The Apple Paradox: How a Company That's So Closed Can Foster So Much Open Innovation
79: What's So Magical About an Oversized iPhone? Plenty—And There's More to Come
80: Kindle Conniptions: How I Published My First E-Book
Introduction
We live inside two parallel realities. One is the old physical world of forests and
farms and families, shopping malls and subways, commerce and war. The other is the
reflected world of words, sounds, and images reproduced mechanically or electronically.
This second reality is no less important for being less solid, for it's the one that carries
many of our hopes, dreams, and fears. It's where we record our collective wisdom and
our collective neuroses, our achievements and our catastrophes. It's the way we make sense
of many of the things that are not within immediate view.
And right now, that reflected reality is being remade from scratch, for perhaps the
fifth or sixth time since the emergence of mass media in the 19th century. Such
transitions are always uncomfortable, since they dislocate many creators and consumers
of mass culture, even as they empower others. But when the dust settles, we usually find
that our imaginations have been enriched. This was true of the birth of newspapers and
dime novels, radio and movies, television and the early Internet, and it will be true of the
current revolution in social, mobile, multimedia communication, which is giving rise to
what I call the Pixel Nation.
The 80 essays in this book represent one slice of this revolution, captured in real
time at the end of the 21st century's first decade and the beginning of its second. As chief
correspondent since 2007 for Xconomy, a network of news websites covering high-tech
entrepreneurship in several of America's most important innovation hubs, I've had two
wonderful opportunities: to write every day about some of the smartest people in the
technology world, and to do so for an online publication that is itself part of the digital
transformation. I started out as a professional technology writer around the same time that
the Web was emerging, in 1995, so the various publications that employed me always
had websites, at least as supplements to their print products. But Xconomy is my first
adventure in fully digital journalism. And now that I've tasted Web-only writing—its
immediacy and reach, its relentless but exciting pace, its intrinsic hypertextual
connectedness—I don't think I could ever go back. (Not that there are many print-centric
publications still hiring writers!)
But while Xconomy is a digital publication accessible exclusively through such
newfangled technologies as Web browsers, e-mail readers, RSS aggregators, and mobile
devices, our business is founded on the same philosophy that guided the great city
newspapers of the twentieth century. It's the simple idea, often dressed up these days with
the buzzword "hyperlocal," that readers are most interested in what's going on in their
neighborhoods. That doesn't mean we ignore larger trends. Just the opposite: as
Xconomy's founder, CEO, and editor-in-chief Bob Buderi puts it, you can learn a lot
about global issues by examining them through a local lens. But it does mean we should
look to the inventors, entrepreneurs, investors, and companies in our local communities—
Boston, San Diego, and Seattle so far, with more to come—for the best stories about how
technological innovation takes root, spreads, and ultimately transforms the economy.
One exception to Xconomy's local focus is my weekly column, World Wide Wade,
which appears most Fridays on Xconomy's national page and on all of our city sites. The
column was born out of my confession in early 2008 to Bob and to Rebecca Zacks,
Xconomy's co-founder, executive editor, and chief operating officer, that I missed writing
about digital media topics like the Apple iPhone (which hit stores just two days after
Xconomy's own launch in 2007); the problem was that since these big subjects didn't
always have a Boston-area hook, they couldn't easily be shoehorned into our hyperlocal
model. "Okay," they said. "Go ahead and write about that stuff—just do it in a column."
We published my first "WWW" essay on April 4, 2008, and I've written one every
Friday since, except for holidays, vacations, and a few weeks when the column got
crowded out by local news. On February 5, 2010, we published the 80th iteration. And
that nice round number—reached just as the new decade is opening, and just as a new
phase in the history of computing seems to be opening with the introduction of the Apple
iPad—seems as good an occasion as any to collect the columns into the book you are
reading now.
For readers who haven't been regular followers of World Wide Wade, I want to
offer a brief preview of what you'll find in these pages. While I have ranged across the
country and the world for subject matter, my columns are still tied to specific times—
namely, the dates they were first published (given at the top of each chapter). That means
a few of the technologies and companies mentioned in the early columns, such as
GalleryPlayer, are already obsolete or defunct. To bring those stories up to the present—
meaning, up to February 2010—I've added updates where appropriate.
To facilitate exploration, I have also left most of the original hyperlinks from the
Web versions of these columns intact. If you're perusing this book on an Amazon Kindle
or a similar reading device, clicking on the links should take you to those Web pages in
the device's browser. If you're reading a PDF version of this book on a computer, the
links will open the respective Web pages in a new browser window.
One goal for the column has been to guide readers toward the digital experiences
that best illustrate how interactive, user-driven digital media are enriching our lives. So
reviews of mobile and Web-based software applications, from Evernote and Google
Earth to the Public Radio Player and Microsoft's Project Tuva, make up the single biggest
theme in the collected columns. Quite often, I've also used the columns to profile the
organizations building such applications, such as Plinky, Shareaholic, Spreadshirt,
Springpad, and Quick Hit.
It will become obvious that I'm also fascinated by the latest hardware gadgets, such
as the iPhone, MyVu's wearable LCD displays, ZeeVee's ZvBox, and, of course, the iPad.
The explosion of creative tools for consumer digital photography is another major
strand—as is the arrival of the e-book era in publishing with devices such as the Kindle.
There's also a fair amount of miscellany here, including reviews of TV shows,
history lessons on 19th-century stereo photography and 1990s-era CD-ROM publishing,
political commentary on the state of economic development in Massachusetts,
ruminations on the future of journalism, two pieces about comedy producer Ben Karlin,
and even a tribute to the mythical Pacific Northwest tree octopus. And every so often, I
philosophize in defense of the digital revolution, as in columns that take issue with
Nicholas Carr's contention in an Atlantic cover article that Google is making us stupid
and with John Freeman's argument in the Wall Street Journal that we'd be better off
without e-mail.
If there's any consistent theme in the columns, it's that we should welcome the
digital media revolution for the new powers of personal expression, exploration, and
creativity it's bringing—even if these come at the cost of some passing chaos and
confusion.
Considering the hectic pace of change in digital media, I suspect that I won't have
the opportunity to write another 80 editions of World Wide Wade before the column, and
perhaps Xconomy itself, morphs into something quite different. All the more reason to
bundle up these pieces now, to provide a convenient window on some of the events and
ideas that fueled Xconomy's coverage during its first three years of existence.
Before closing I want to thank the colleagues at Xconomy who have made that time
so enjoyable—and who have all, at one time or another, edited my columns. I'm
especially grateful to Bob Buderi and Rebecca Zacks for hiring me in the first place and
for encouraging me to start writing the column. Working alongside such consummate and
creative journalists as Bruce Bigelow, Greg Huang, Ryan McBride, and Luke
Timmerman has been a privilege, and I've also enjoyed interacting with Xconomy's
Boston-based business and sales staff, including Richard Freierman, Greg Calkins, and
Steve Woit. Of course, none of our work would be possible without the support of
Xconomy's investors and underwriters, who've backed up their belief in our mission with
cash. Finally, I'm grateful to everyone who has supported and counseled me since I
returned to Boston from the West Coast in 2007—especially my parents Paul and Patricia
Roush, my brother Jamie, and my indispensable friend Graham Gordon Ramsay.
Cambridge, Massachusetts
February 2010
1: Reinventing Our Visual World, Pixel By Pixel
April 4, 2008
Every week I come across news items, tech trends, and useful gadgets and services
that I know Xconomy's readers would find interesting, but that don't fit with our usual
lineup of hyperlocal news stories about Boston's innovation scene. To create an outlet for
such random finds—and, frankly, to get me off Bob and Rebecca's backs about all the
cool stories we're missing—we've decided to carve out a bit of space for articles that
don't necessarily relate to New England. And here it is: my new weekly column, World
Wide Wade. (Please pardon the goofy title, but it fits with my intentions, which are that
the column take a very wide, occasionally offbeat view of the technology world.)
On my first couple of outings I'm going to try to tie together a few projects and
products relating to the reinvention of the visual Internet. I think we're in the early stages
of a radical shift in the types of imagery and image-related tasks that are supported by the
Web and software connected to the Web. Anyone who uses a digital camera or even a
camera phone ought to be excited about this shift, which is going to make it possible to
share, explore, and possibly even inhabit the digital images that we're all capturing in
increasing numbers and at increasing resolution. In today's column, I'm going to talk
mainly about tools for organizing and viewing still 2-D photographs. In a future column,
I'll look at 3-D—and later on, perhaps, at video and animation, which are obviously
undergoing their own revolutions.
Finding fun, convenient ways to organize and share our digital photos is a challenge
that's been around since the advent of consumer-level digital photography a decade ago.
This technology took a big step forward around 2004 with the emergence of Picasa, a
snazzy and flexible photo album organizer, and Flickr, a photo sharing service that
introduced great social features like photo annotation and tagging. (Picasa eventually
became a Google product and Flickr was snatched up by Yahoo.) After that, the new
ideas seemed to peter out for a few years. But finally we‘re starting to see some
innovation again.
In fact, I saw some Wednesday night at the Web Innovators Group meeting in
Cambridge, where one of the presentations was from Brookline, MA-based Raizlabs,
maker of a handsome—indeed, almost too pretty for Windows—PC photo organizing
application called PicMe. The freely downloadable software has too many features to list
here, so I'll only describe two. First is its beautifully intuitive method of organizing your
photos into 3-D stacks, with each stack representing a folder on your hard drive. You can
flip through the photos in a given stack using forward and backward arrows, or inspect
rows of stacks by scrolling past them in 3-D, as if you were flying over skyscrapers in
Manhattan. It‘s a very nice way to browse through a big photo collection, and is a bit
reminiscent of other recent interface innovations such as the Cover Flow feature on iPods
and iPhones.
The other nice thing about PicMe is its drag-and-drop method for sharing photos: if
you want to e-mail a photo to your mom, just drag it off a stack and drop it on her entry
in the contact list on the PicMe screen's left side. If you want to upload a photo (or a
whole stack) to your Flickr, Facebook, or MySpace account, just drop it on that entry.
What PicMe demonstrates is that the interface makes all the difference. If you
usually upload photos straight from your digital camera into folders on Windows,
chances are slim that you‘re going to go back and lovingly review them later using the
clunky Windows Explorer thumbnail display (or even the Finder application on Mac OS
X, which is almost as bad). But if your computer can turn your pictures into objects you
can almost hold and slide around as if they were 5×7 prints on a big table—or send off to
others as easily as if you were slipping them into a mailbox—you‘re much more likely to
enjoy the task. You‘re probably going to rediscover old photos you‘d forgotten about, or
decide to share pictures on a whim with others who might enjoy them.
I'm just as excited about two other image-exploration applications called Seadragon
and Photosynth, though neither is yet available to use with your personal images, as
PicMe is. Both are projects at Microsoft Live Labs, a division of the software giant
devoted to building innovative Internet-based software. Seadragon is an interface for
zooming in and out on visual information of all sorts, from photographs to entire Web
pages, across a huge range of scales and resolutions. Imagine, for example, being able to
see an entire edition of a newspaper in a single glance, then have the option of zooming
in on just one page, or just one article, or a few lines of text in an advertisement—or even
an entire microfilm-like product catalog embedded in that ad.
Or imagine seeing all of your photos in one big spread, and being able to zoom in
instantly on any individual photo—or theoretically even further, exploring the world
reflected in every raindrop. When you're not dealing with images on physical paper, after
all, there's no reason for a limit on how much you can zoom in or out. And that's the
whole premise of Seadragon: that our computer interfaces should flexibly scale visual
material so that we can browse it at any desired level of detail.
Photosynth, meanwhile, is an interface that's built on Seadragon but is designed
specifically for correlating large groups of photographs into simulated 3-D environments,
which then become the interfaces for exploring the individual photos. It's so
mind-blowingly cool that I'm going to have to put off a real description until a future column,
where I'll have more space to talk about the blurring boundary between still 2-D
photography and virtual 3-D environments. In the meantime, if you're interested you
should really just go and watch this demo from the May 2007 TED Conference in
Monterey, CA. You can also download a preview version of Photosynth (Windows only).
I can‘t close without mentioning one more imaging project that pushes the
boundaries of scale, resolution, and zooming. It‘s the Gigapan project, which allows
people with consumer-grade digital cameras to contribute to a growing collection of
ultra-high-resolution, 360-degree panoramic images taken at locations around the planet. The
trick to making one of these images is to attach a camera to a robotic, tripod-mounted
pan-and-tilt drive that precisely aligns it for a series of dozens or hundreds of
photographs, then use special software to stitch the images together into giant "gigapixel"
panoramas. You can browse some of these images at www.gigapan.org, and for an even
more immersive experience, you can see the photographic panoramas overlaid upon their
real locations inside Google Earth (a free, downloadable 3-D map browser for Mac and
Windows).
It's hard to convey in words the personal excitement I feel when I'm able to use
tools like PicMe, Photosynth, or the Gigapan browser to explore photo collections. Such
tools liberate our digital photos from the computer folders where they've been
languishing and make them into objects we can scrutinize, play with, and redeploy in
hundreds of different ways. But then again, I‘m someone who tends to feel about my
photographs the way Amerigo Vespucci probably did about his maps, or Giambattista
Bodoni about his typefaces. Welcome to my world.
Author's Update, February 2010: Raizlabs discontinued PicMe in August 2009.
"It's very difficult to be in the consumer photo space" when Google, Flickr, and
Facebook "essentially set the bar and give away their services for free or close to free,"
Greg Raiz told me. I've written more about the Gigapan project; see Chapter 10.
Meanwhile, Microsoft has introduced a public version of Photosynth; that's the subject of
Chapter 19.
2: The Coolest Tools for Trawling & Tracking the Web
April 11, 2008
Ahh, Boston in springtime. Duck boat armadas on the Charles. The vinegary smell
of the wood-chip mulch landscapers spread everywhere. Tow trucks hauling away cars so
that street sweepers can get at the dead leaves accumulating since October. A guy on a
recumbent bike pulling a train of three skateboarders along the Esplanade.
I could have spent the rest of this 70-degree afternoon watching the city come back
to life from my park bench. But out of commitment to you, dear reader, I've wandered
back inside to write my first annual spring roundup of new and/or improved tools for
finding and tracking information on the Web. If you'd rather be outside yourself, just
bookmark this article using Diigo (of which more below) and come back to it when the
temps dip back below 40—which will probably be this weekend, knowing New England.
The advent of the RSS syndication format has made it so easy to grab and repurpose
chunks of information from around the Web that there's a sudden surfeit of websites that
aggregate content from other sites. But not all of these aggregators are equal, and I
thought I would share a few of my favorites.
For convenience, versatility, and beauty, it's hard to beat Netvibes, which somehow
manages to array 100 or more RSS headlines across a single page without looking
cluttered. I've been using it for a couple of years now, and have always been impressed
by how easily I can add feeds to my Netvibes page and organize them across multiple
tabs. And with the launch of Netvibes' Ginger release about a month ago, the site is even
more powerful than before, acting as a platform for sharing and republishing your
favorite finds. While it's still best at aggregating RSS feeds and podcasts, Netvibes can
also be used as a gateway to your e-mail, social-networking, and photo-sharing accounts;
communications tools such as Twitter, Skype, and instant-messaging programs; and
hundreds of widgets that bring you everything from weather reports to TV schedules.
If Netvibes is all about customization, Alltop is all about simplicity. Launched last
month by prominent Silicon Valley entrepreneur Guy Kawasaki, Alltop is a clean and
spartan collection of the five most recent headlines from 50 to 100 leading blogs in each
of 55 categories, from books to extreme sports to Linux. If you mouse over one of the
headlines, the first few sentences of the story pop up in a bubble (Netvibes does this too,
but Alltop's snippets are longer, which I appreciate). As far as I can tell, Guy himself
determines which blogs are worthy of inclusion in Alltop, but I'm impressed by his
editorial taste so far. (Full disclosure: Xconomy is one of the top blogs listed in Alltop's
Venture Capital category.)
Many software geeks protested after Alltop's launch that there was little new
technology under the hood, beyond the site's sleek transparent banner. I think that's
exactly the point. It's so simple that you can learn how to use it in seconds, and you never
have to fool with adding an RSS feed manually (a concept still entirely foreign to most
Internet users).
I think there's always room for Web developers to try out new variations of
established products such as RSS aggregators. That's exactly what the brand-new site
Naubo is doing in the area of news spidering, a genre pioneered years ago by Google
News. Naubo is, in fact, a virtual copy of Google News, right down to the way its
columns are laid out and color-coded—except that it's all about technology. I'm really
enjoying the way Naubo surfaces the latest key stories in the world of software,
hardware, and the Internet from both the blogosphere and traditional news sources like
Reuters and Computerworld. In principle you could personalize Google News to
emphasize certain subjects, but it has only one category for Sci-Tech, whereas Naubo has
more than a half-dozen, including sections devoted specifically to Apple, Microsoft, and
Linux.
Sometimes, the aggregators lead you to articles or sites that you want to save and
remember. And for that, I have another favorite tool: Diigo. While it would be easy to
describe Diigo as a social bookmarking service, that would make it sound too much like
Del.icio.us or Furl or Reddit (all of which I've tried and tired of at various times). It's
really more of a research tool with social, collaborative features.
Most importantly, Diigo (which is operated through a toolbar that works in the
Firefox, Internet Explorer and Flock browsers) allows you to bookmark pages on a list
that's saved forever online and accessible from anywhere. No more messing around with
your Web browser's built-in bookmarks, which won't be available to you if you happen
to log into the web from a different computer. Just as fun, Diigo makes it easy to
highlight passages within a Web page—so you can return later and see what it was that
caught your attention—and even attach floating "sticky notes."
You can also attach tags to your bookmarks to make them easier to find later on,
and you can click on individual tags to see what other Diigo users are bookmarking
publicly under those tags. (As a journalist, I'm secretive enough about what I'm
researching online that I tend to keep my Diigo bookmarks private.) In late March, Diigo
rolled out Version 3 of its system, which includes enhanced "social browsing" features
such as the ability to see how other people have annotated a given Web page, follow what
your friends are bookmarking, or subscribe to other users' bookmarks based on tags.
These are nice additions, but I've always appreciated Diigo mainly for its simple, reliable
bookmarking and highlighting tools; I've got close to 700 bookmarks on the site going
back to early 2006.
Recently I've been experimenting with another new collaborative Web
information-sharing tool called Twine, which is still in beta testing and is built around small
communities of people following specific subjects or "twines" such as virtual worlds and
semantic Web apps. I haven't used it enough yet to be able to tell whether it's a sound
concept, but I'll let you know in a future column. Meanwhile, if you like trying out new
Web 2.0 applications as much as I do, CNET's Webware blog, edited by Rafe
Needleman, highlights several new ones every day. But before you go there—go outside!
Unless, of course, spring has gone back into hiding by the time you read this.
Author's Update, February 2010: Alas, one of the difficulties for any developer of
Web 2.0 services is that there's always new competition coming along, and unless you
can figure out a way to keep the switching costs very high, you're likely to lose users to
newer, shinier tools. I use Google Reader now to track RSS feeds, mainly because its
interface is simpler. For bookmarking of Web materials, I've switched from Diigo over to
Evernote (the subject of Chapter 14).
3: Google Earth Grows a New Crop of 3-D Buildings, and Other Web Morsels to Savor
April 18, 2008

One of my goals with this column—which is now in its third week—is to tell you
about new stuff on the Web that's so delicious you just have to taste it. Here are three
morsels to tide you over until next time.
The first is a quick appetizer: Very Short List, an e-mail newsletter funded by
IAC/Interactive Corp. VSL has been around since mid-2006, but I just discovered it a
couple of weeks ago. If you sign up, every day they'll send you one—exactly one—
nugget of entertainment or media content that, in the site's words, hasn't already been
hyped to within an inch of its life. So far, every item I've received has been intriguing at
least (an amazing TV ad for a soccer video game), and often utterly engrossing (an online
museum of online museums).
For the main course: I suggest Google Earth 4.3. This week Google rolled out the
latest version of its free geographic browser for Windows and Mac, which lets you tour a
3-D simulation of the entire planet built on the company‘s database of real satellite and
aerial photographs.
Like its competitors, Microsoft Virtual Earth and NASA's Worldwind, Google
Earth started out as a digital atlas, showing huge amounts of classical map and
photographic data that was itself 2-D but happened to be draped over a spherical globe,
which mainly made it easier to shift between top-down views of different locations. As
the product has evolved, however, the sphere forming the scaffolding for the map data
has gained realistic 3-D topography, followed by other real-world touches such as 3-D
buildings and even clouds based on real-time reports from the National Weather Service.
In other words, it's gradually becoming what Yale computer scientist David Gelernter
first termed a "mirror world"—a software model that tries to recreate the human
environment as accurately as possible.
The latest version provides improvements in both content and navigation that nudge
it even farther in this direction—which is a blessing for people like me who are intrigued
by virtual worlds and all the possibilities they offer for new kinds of learning and
interaction (though it should be noted that some traditional map mavens like Stefan
Geens, the author of the Ogle Earth blog, feel that the profusion of cosmetic
improvements in Google Earth is diminishing its information value as an atlas).
The most visible addition to Google Earth 4.3 is an expanded crop of 3-D buildings
for dozens of cities around the world, along with extremely realistic textures or "skins"
for those buildings. In past versions of Google Earth, most 3-D buildings were
represented by gray boxes of the appropriate shape and height. In 4.3, most of the 3-D
models, including hundreds of Boston buildings, are now clothed with photographs of the
actual structures. (Don‘t ask me how Google pulled this off: The process of creating
photorealistic 3-D models of buildings was, until recently, a tedious one tackled mainly
by enthusiastic amateurs, who used Google‘s SketchUp 3-D modeling program and
uploaded their finished models to Google‘s online 3-D Warehouse. Clearly Google
has found a way to automate the whole process.)
The program‘s 3-D buildings are now so detailed that it‘s possible to ―fly‖ to a
given location in the Google Earth landscape and get a view that‘s astonishingly close to
actually being there. To see what I mean, compare the two images here: one is a
photograph I took yesterday from the roof of the building in Cambridge, MA, where
Xconomy is headquartered. The other is a screenshot from Google Earth with the
imaginary ―camera‖ positioned in roughly the same spot.
When comparing these two images, keep in mind what makes the Google Earth
version so remarkable: It‘s entirely synthetic. No one from Google went out and took a
picture from that perspective (although Google‘s vast collection of Street View
photographs is now integrated into Google Earth—but that‘s a different story). Rather,
it‘s a reconstructed view based entirely on 3-D modeling, pasted-on photographic skins,
Google‘s map data, and some very sophisticated computer graphics algorithms.
Google Earth 4.3 contains a ton of other great improvements, but I‘ll just mention
two more. One is the sun. Now you can turn on a feature that puts a simulated sun into
the proper spot in the simulated sky and lets you adjust the time of day with a slider,
generating realistic shadows on buildings and landforms. Finally, the Google Earth team
has completely revamped the program‘s navigation controls to make panning, zooming,
tilting, and otherwise moving around inside the 3-D environment much more intuitive—
which is to say, much more like a videogame or a Second Life-style virtual world. If
you‘re a longtime user of Google Earth, the new controls might take some getting used
to, but ultimately you‘ll appreciate the added flexibility. Meanwhile, if you‘ve never
downloaded Google Earth before, there‘s never been a better time to start exploring.
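Google doesn‘t publish how its renderer places that simulated sun, but the underlying positional astronomy is standard textbook material. Here‘s a rough sketch, using common approximations rather than Google‘s actual code, of where the sun sits in the sky for a given latitude, day of the year, and local solar hour:

```python
import math

def sun_elevation(lat_deg, day_of_year, solar_hour):
    """Approximate solar elevation angle in degrees for a given latitude,
    day of year, and local solar hour (12.0 = solar noon).
    Standard textbook approximation, not Google's rendering code."""
    # Solar declination: the sun's "latitude," swinging +/-23.44 deg over the year
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    # Hour angle: Earth rotates 15 degrees per hour away from solar noon
    hour_angle = 15.0 * (solar_hour - 12.0)
    lat, dec, ha = map(math.radians, (lat_deg, decl, hour_angle))
    sin_el = (math.sin(lat) * math.sin(dec)
              + math.cos(lat) * math.cos(dec) * math.cos(ha))
    return math.degrees(math.asin(sin_el))

# Boston (about 42.4 degrees N) at solar noon on the spring equinox (~day 80):
# the sun should stand close to 90 minus the latitude, or roughly 48 degrees up.
print(round(sun_elevation(42.4, 80, 12), 1))
```

Feed the elevation (and the matching azimuth) into a shadow-casting renderer and you get exactly the effect the slider produces: long shadows at dawn and dusk, short ones at midday.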
And now for dessert: Go check out MyLOC, the newest online resource from the
Library of Congress. Launched April 12, the site is a history buff‘s dream, containing a
digital collection of historic books, maps, and other resources from the Library‘s vast
archives. The site—the online counterpart of an exhibit at the Library‘s Jefferson
Building in Washington, D.C.—provides some clever Flash and Microsoft Silverlight
multimedia tools for browsing individual books, including a Gutenberg Bible and several
volumes from Thomas Jefferson‘s personal library. Bon appétit.
Author's Update, February 2010: Google Earth is now on version 5.1, and just
keeps getting cooler. To help speed up the process of adorning its virtual globe with 3-D
buildings, Google has introduced a tool called Building Maker that allows average Web
users to create their own 3-D structures from aerial photos and submit them for inclusion
in Google Earth; that's the subject of Chapter 74.
4: Turn Your HDTV into a Digital Art
Canvas
April 25, 2008
You no longer need to be a multi-billionaire to have large-scale digital art in your
home.
When Bill Gates built his 40,000-square-foot mansion on Lake Washington in the
early 1990s, one of the most talked-about features was a 22-foot-wide, rear-projection
video wall in the reception hall, showing digitized versions of fine art, historic
photographs, and the like. Gates founded Corbis, now one of the world‘s largest digital
stock photo agencies, on the theory that many other people would also enjoy watching a
rotating selection of paintings and photographs on large-screen displays in their homes.
At the time, that wasn‘t exactly affordable for the hoi polloi. But thanks to good old
Moore‘s Law—which applies to the transistors in LCD and plasma screens as much as it
does to those inside CPUs—the hardware required to turn your own home into a digital
art museum is finally within reach. All you need is a high-definition flat-screen TV
(incredibly, 42-inch versions with full 1,080-pixel vertical resolution are now available
for under $1,000); a Windows or Macintosh computer; and a cable to hook the
computer‘s external monitor port to your TV‘s video input jacks. (I recently got a VGA-
to-DVI cable for $22 at CablesToGo.com.)
Once you‘ve connected your PC to your TV—which may take some fiddling, as
you‘ll need to go into your computer‘s control panel and pick the proper external-monitor
display settings—there are two pathways to watching great high-definition images. If you
don‘t mind shelling out a few extra bucks for some fantastic professionally produced
imagery, I highly recommend a visit to GalleryPlayer. This small Seattle company was
founded in 2003 and originally provided digital images from Corbis for large displays in
commercial spaces such as hotels and offices; to use it, you needed to buy a $3,000 image
server and pay $195 per month for a rotating selection of images. But in a measure of
how quickly the digital-imaging market has changed, GalleryPlayer‘s software is now
free (Windows only, sadly) and images, which can be purchased and downloaded over
the Internet, cost about $1 apiece—less if you buy them in packs.
GalleryPlayer has a huge library of images to choose from, ranging from National
Geographic nature photography to fine art from some of the world‘s best museums,
including Boston‘s own Museum of Fine Arts. Each image is accompanied by a museum-
style caption that appears on screen briefly, telling you about the image‘s provenance. If
you do try GalleryPlayer, I recommend splurging early—there‘s a 50 percent discount on
your first purchase.
If you‘re a digital photographer, there are two perfectly good alternatives to
GalleryPlayer that will cost you absolutely nothing: Slickr (for Windows) and FlickrFan
(for the Mac). Technically, these free programs are screen savers—but if you hook your
computer up to your HDTV and set your computer‘s power options so that the screen
never shuts down, you‘ve got an instant digital art exhibit. What‘s neat about these
programs is that they‘ll display either photos stored in specific folders on your computer
or pictures you‘ve uploaded to your Flickr photo-sharing account. Both programs also
animate your photos in ―Ken Burns‖ style, meaning they slide gracefully across the
screen—a nice break from GalleryPlayer‘s static images. If you‘ve got a lot of old photos
that you never bother to look at on your PC, Slickr and FlickrFan offer a great way to
resurrect them.
For Boston residents, or anyone else who gets their cable TV service from Comcast,
there‘s an extra piece of good news. If you already have an HDTV and a Comcast high-
definition set-top box, you can watch high-definition digital slide shows from
GalleryPlayer without the need for a PC or special gallery software.
GalleryPlayer shows are a free part of the On Demand service from Comcast. But
they‘re buried several levels deep in the On Demand menu, so many customers don‘t
even know about them. To find them, click the On Demand button on your Comcast
remote, then choose ―HD On Demand,‖ then ―TV Entertainment,‖ then ―GalleryPlayer.‖
You‘ll see a selection of about ten half-hour shows, each comprising about 30 stunning,
high-resolution photos or paintings set to pleasant jazz, classical, and New Age
background tunes. The images change every month, and cover themes such as African
wildlife, underwater photography, space imagery, Van Gogh paintings, and autumn in
New England.
Boot up GalleryPlayer, Slickr, or FlickrFan at your next cocktail party and your
guests will think you‘re the Bill Gates of your block.
Author's Update, February 2010: GalleryPlayer went out of business in August
2008, and Comcast no longer offers the GalleryPlayer service. Slickr and FlickrFan are
still available. Flickr itself has a nice slide-show feature that looks great on a high-
definition television in full-screen mode. And if you have a wireless Roku Player (see
Chapter 50) you can now add a Flickr channel that makes browsing your Flickr photos
on your TV a snap.
5: Unbuilt Boston: The Ghost Cloverleaf
of Canton
May 2, 2008
Last Halloween, a Boston startup called Untravel Media published a multimedia
walking tour called ―Boston‘s Little Lanes and Alleyways‖ that guides listeners through
some of the city‘s oddest secret passageways and back streets. I took the tour myself, and
found that its dramatic combination of photography, music, and narration taught me
about a side of the city that even most Bostonians know nothing about. (Untravel also has
a number of other interesting tours, including a brand new one about the historical
neighborhoods around Harvard Square.)
In the spirit of ―Little Lanes,‖ I thought I‘d tell you about another strange and
mostly-forgotten piece of Boston‘s past: the half-abandoned cloverleaf where I-95 meets
I-93, on the western edge of the Blue Hills reservation between Canton, MA, and Milton,
MA. I stumbled across this forlorn, fascinating place last fall at the end of a hike around
the reservation. And while highway construction may sound like an odd subject for a
column that‘s supposed to be about technology and the Web, I see the Canton cloverleaf
as an important technological artifact in its own right. It‘s a telling symbol of our own
occasional indecision about what we value more: technological conveniences
(automobiles, in this case) or coherent, livable communities.
The ghost cloverleaf, which connects to the reservation‘s trail system, is an odd
sight indeed: a network of curving ramps and a disused six-lane expressway that
suddenly dead-ends in a dense, marshy forest. It‘s fully outfitted with curbs, drains, and
lane markings, but is used today mainly as a refuse dump and long-term parking lot for
construction equipment owned by the Massachusetts Highway Department. As you walk
along the empty pavement, the main sounds are the chirping of crickets and the distant
roar of cars on I-93.
I‘ve always had a morbid curiosity about great half-built or never-built projects, so I
immediately wanted to know what happened here. As I learned from an assortment of
websites, the cloverleaf was constructed between 1962 and 1968, and is the northern half
of what was originally intended to be a fully working interchange between I-95, aka the
Southwest Expressway, and I-93, aka Route 128, aka the Yankee Division Highway.
From here, the state‘s highway blueprints called for the Southwest Expressway to
continue about 10 miles north into Boston. It would have barreled through farmland and
residential neighborhoods in Milton and joined up with the American Legion Highway,
which would have been converted into an expressway running along the eastern edge of
Franklin Park. From there, the expressway would have turned Blue Hill Avenue into a
six-lane gash through Roxbury and Dorchester, eventually connecting with I-695 near the
present-day intersection of Massachusetts Avenue and Southampton Street (which
happens to be about four blocks from where I live in the South End).
Never heard of I-695? That‘s because it was never built, either. Also called the
Inner Belt, it was part of a scheme laid out in 1948 to help interstate drivers and truckers
avoid the congestion in downtown Boston by circling through outer Boston, Brookline,
Cambridge, and Somerville. Perhaps it was a good idea at one time. But had this 7-mile
loop been constructed, the Boston cityscape would be immeasurably different today.
Think of the city you know, and then think of a 300-foot-wide right-of-way thrust
along the following route: from I-93 near Massachusetts Avenue and Southampton Street
in Roxbury west to the Ruggles T station; then across Huntington Avenue, flattening the
Museum of Fine Arts and slicing through the Emerald Necklace where the Fenway joins
the Riverway; then turning north to cross Beacon Street and Commonwealth Avenue,
intersecting with the Mass Pike in a huge tangle of ramps, and soaring across the Charles
River on a tall, ugly, concrete bridge overshadowing the BU Bridge, then descending
back into Cambridgeport, paralleling Pearl Street and Prospect Street and blighting
Central Square and eastern Somerville; and finally curving eastward along Washington
Street and rejoining I-93.
Demolition to make way for the Inner Belt began on the Roxbury end of the route
in 1962. But in one of the first examples of a major community uprising against
a federally funded highway project, people in the affected neighborhoods organized a
massive campaign to get the project cancelled. Fred Salvucci, an MIT civil engineering
graduate and transportation advisor to Boston Mayor Kevin White—and later
transportation secretary under Massachusetts Governor Michael Dukakis—became one of
the project‘s loudest opponents. His influence led Representative Thomas P. ―Tip‖
O‘Neill to famously tell Federal Highway Administration chairman Lowell Bridwell that
the Inner Belt and the Southwest Expressway ―would create a China Wall dislocating
7,000 people just to save someone in New Hampshire 20 minutes on his way to the South
Shore.‖
As sentiment against highway overbuilding gathered across the nation, the Inner
Belt and Southwest Expressway projects gradually fizzled, and in 1974 the state traded in
the promised federal highway dollars in exchange for mass-transit funding. But even
though the expressways went unbuilt, they left artifacts that are still visible around
Boston. There‘s the Canton cloverleaf; there‘s Roxbury‘s Melnea Cass Boulevard, whose
surprising width is the legacy of the demolition that extended all the way to Tremont
Street; and there‘s even a ―ramp to nowhere,‖ a spur jutting off the elevated section of I-
93 in Somerville where the Inner Belt was supposed to have connected to the interstate.
As we zoom along today‘s urban and interstate freeways, we don‘t think much
about the cityscape that came before, or of the historical communities and the ribbons
of natural landscape that had to be erased to make way for our internal-combustion-
driven lifestyles. But in that forest in Canton, there‘s a permanent reminder of a road that
never was—and of the living neighborhoods that community action and a reexamination
of our priorities kept intact.
6: An Elegy for the Multimedia CD-ROM
Stars
May 9, 2008
On balance, I‘m a fan of all things Web. But every successful new medium disrupts
or transforms the media that came before—just as the movies killed vaudeville, TV killed
episodic radio, MP3s are upending the music industry, and Netflix is killing the
neighborhood video store—and it‘s important to recognize the value that can be lost in
this process. Today I‘d like to deliver a short elegy for the educational multimedia CD-
ROM, which has been replaced, but not surpassed, by the Internet.
For a brief time in the late 1990s—roughly between the release of Microsoft
Windows 95 in 1995 and the widespread availability of DSL-speed Internet access
starting around 2000—the typical home computer had a powerful graphical interface
capable of displaying at least 256 colors, but was effectively a digital island. At 28 or 56
kbps modem speeds, access to what little photographic, audio, or video content there was
on the Web was painfully slow. The only practical vehicle for getting multimedia content
onto PC screens and making it interactive in real time was the optical CD-ROM drive, a
standard feature of most home computers by 1996 or so.
With nowhere else to turn, artists, writers, and producers excited about the
possibilities of interactivity churned out a huge volume of CD-ROM-based games,
educational software, reference materials, and ―edutainment‖ titles. It‘s this last category
that particularly fascinates me. Using, for the most part, a single authoring and playback
platform called Macromedia Director (now Adobe Director), publishers created learning-
oriented CD-ROMs on everything from volcanoes to Impressionism to World War II
history. The theory behind most of these titles was that adding sounds, visuals, animation,
and narration to the dry facts of history, art, science, or engineering—and giving users
tools for navigating their own way through this material—would lend such subjects an
immediacy and vibrancy that older media, such as textbooks, encyclopedias, and TV
documentaries, simply couldn‘t match.
And for the most part, the theory was correct. I‘ve got a large collection of old CD-
ROM titles that I still pop into my Windows laptop from time to time—the way an
audiophile who won‘t part with his vinyl albums might break out the old LP player. As I
re-watch these titles, it‘s hard to avoid the conclusion that multimedia authoring, as an art
form, reached a kind of pinnacle around 1996-97. That was the era, for example, of
Dorling Kindersley‘s live-action, interactive version of David Macaulay‘s classic The
Way Things Work, and of James Cameron‘s Titanic Explorer from Fox Interactive, a
stunning 3-disc collection of blueprints, news footage, and first-person accounts of the
sinking of the Titanic, structured around sequences from Cameron‘s blockbuster movie.
But the absolute masters of the CD-ROM genre were a team of producers brought
together by Corbis, the digital image archive founded by Bill Gates in 1989. As I
explained in a previous column, the original idea behind Corbis was to license the digital
versions of the world‘s best art and photography for display in consumers‘ homes. That
part of the vision didn‘t come to fruition until recently; in the 1990s, meanwhile, the
company went through a number of incarnations as it searched for a realistic business
model, eventually emerging as an online stock-photo archive focused purely on image
licensing (there‘s a pretty good history here).
From 1994 to 1996, one of Corbis‘s strategies was to publish cutting-edge
multimedia titles that showcased its archive‘s rich content. And the series of six CD-
ROMs it created—especially A Passion for Art (1995), an interactive tour of the Barnes
Foundation museum outside Philadelphia, and Leonardo da Vinci (1996), which was
built around a digitized version of Leonardo‘s Codex Leicester, purchased by Bill Gates
in 1994 for $30.8 million—astonished most critics, including yours truly in a review
published in Technology Review 10 years ago this spring. The da Vinci disc contains the
most accessible and bewitching introduction to Leonardo‘s thinking and methods I‘ve
ever seen. And the Barnes CD-ROM is such an uncannily faithful recreation of the actual
museum—with its unparalleled collection of Impressionist masterpieces by Cezanne,
Renoir, Matisse, Van Gogh, and others—that when I had the opportunity to visit the
foundation several years ago, I felt as if I already knew my way around the entire
building, and was able to walk straight to the galleries that held my favorite paintings.
Unfortunately, the Corbis titles never sold well enough (or, if my guess is correct,
were never marketed aggressively enough) to cover the vast expense of producing
them—for A Passion for Art alone, Corbis had to process and piece together thousands of
photographs and hire art critics to write essays about 327 separate paintings. But it was a
grand experiment, for these productions, which are long since out of print, reached a level
of elegance, sophistication, and intelligence that has never been topped.
(The curious can still find used versions of some of the titles on eBay. And in a
blessing for art lovers, the Barnes Foundation commissioned a remastered version of A
Passion for Art—complete with stereo sound, millions of colors, and 1024×768-pixel
image resolution—in 2003. It‘s available for $35 from the foundation‘s online gift store.)
But slack demand and poor marketing are only part of the explanation for the
brevity of the CD-ROM‘s Golden Age. With the spread of broadband Internet
connectivity around the turn of the millennium, Web surfers gained access to growing
amounts of multimedia content online—I hardly have to describe the deluge of digital
videos, movies, TV shows, podcasts, and other material now available from the likes of
iTunes and YouTube. And as soon as the old input/output limitations on home computers
began to lift, the CD-ROM—which can hold only about 650 megabytes of information—
began its slide into antiquity.
There‘s a funny thing about resource limitations: they tend to inspire artists to
amazing heights of creativity. Now that bandwidth is, for all practical purposes, free and
unlimited, it seems that no one in the community of Web designers and developers
bothers to create tightly scripted interactive multimedia productions of the caliber that
Corbis and other houses achieved in the 1990s using technology much less advanced than
what‘s available today.
Many of the conventions of the old CD-ROM format could be profitably reinvented
and adapted for the new era of broadband, wirelessness, 3-D graphics, and high-
definition displays. But aside from a few projects using platforms like Adobe‘s Flash and
Microsoft‘s Silverlight, I haven‘t seen anything that approaches the standards set by
Corbis more than a decade ago. Schumpeter watched capitalistic societies engage in a
continuous, technology-driven overhaul and called it ―creative destruction.‖ But
sometimes it‘s just plain destruction.
7: The Future’s So Bright, I Gotta Wear
Screens
May 16, 2008
Liquid-crystal displays are getting bigger by the minute. These days, you can buy a
huge 58-inch wide-screen LCD HDTV for under $3,000. Heck, at that price, you could
buy 64 of them and hire Los Gatos, CA-based 9X Media to assemble them into a video
wall large enough to hold its own in Times Square.
But at the same time, interestingly enough, LCDs are also getting smaller. In
Westborough, MA, there‘s a company called Kopin making LCD screens that are much
smaller than the proverbial postage stamp. Kopin‘s VGA CyberDisplay, which has a
resolution of 640×480 pixels, measures only 0.44 inches diagonally—about the size of a
fingernail.
It turns out that‘s small enough to mount a pair of the displays inside the temples of
an eyeglass frame. And that‘s exactly what Westwood, MA-based MyVu has done with
the MyVu Crystal, a wearable display that goes on sale next Tuesday. Kopin‘s VGA-
resolution screens give the Crystal, which is designed to be plugged into video players
such as Apple‘s iPod and iPod nano and Microsoft‘s Zune, four times the resolution of
MyVu‘s previous products. And they give MyVu a gadget that competes directly with the
other VGA wearable display on the market, the Vuzix iWear AV920.
But the MyVu Crystal has a big advantage over devices from Vuzix and other video
eyewear makers. These aren‘t the kind of wrap-around goggles that immerse you in your
own personal home theater, cutting you off from the world. Instead, an ingenious system
of mirrors and lenses puts the video image in the center of your field of view, while
leaving windows open on either side—meaning you can still see what‘s around you while
you‘re watching that I Dream of Jeannie rerun you downloaded from iTunes.
I‘ve been testing the MyVu Crystal this week, and although I wouldn‘t advise you
to walk down a busy street while wearing the device (for both safety and fashion
reasons), you can easily see enough through the amber windows to realize that somebody
is standing in front of you. (They‘re probably waiting for an answer to the question you
didn‘t hear thanks to the Crystal‘s noise-blocking earbuds.)
Now, why would you need video eyewear in the first place—especially when the
Crystal, at $299, will cost you more than even a top-of-the-line iPod or Zune? You
definitely aren‘t going to buy a MyVu unit as a style accessory. While the company‘s
designers have done as much as they can to gussy up the device in shiny black, chrome,
and amber plastic, it‘s still as geeky-looking as Geordi La Forge‘s visor from Star Trek:
The Next Generation. And you‘ll get better picture quality, and far less eye strain, from
watching your TV, computer, or portable DVD player.
But I can imagine at least one scenario where video eyeglasses would be useful:
when you want to watch something without disturbing the people around you—and/or
without letting them see what you‘re watching, such as when you‘re on a plane or at a
boring conference. And if you‘re in that situation, the MyVu Crystal has a lot to
recommend it. The effective viewing area of the Crystal‘s screen is surprisingly large.
The experience is about the same as looking at a 22-inch computer monitor from about
four feet away, or looking at the 3.5-inch screen of an Apple iPhone from about a foot
away. Because the VGA displays inside the Crystal have twice as many pixels as the
screen on the iPhone, however, TV shows and movies actually look much sharper on the
Crystal than they do on the screen of an iPhone—or a video iPod or a Zune, for that
matter.
The Crystal‘s displays also have excellent color saturation. I tested the device by
watching the pilot episode of the Showtime series Dexter—you know, the one where
Michael C. Hall plays a Miami PD forensics expert who also happens to be a serial
killer—and I can testify that the abundant blood in the show was, in fact, very red.
On the downside, the Crystal is fairly heavy for a device that‘s supposed to be worn
like a pair of glasses. I think I still have an impression on my nose from the bridge. And
if you wear glasses, you‘ll need to order custom prescription lenses for the device
(although I have glasses for mild nearsightedness and I didn‘t have trouble focusing on
the screen). Also, the Crystal‘s displays aren‘t perfect: the colors seem to ―seethe‖ a bit
compared to the rock-solid hues of my desktop monitor and my home TV. I don‘t know
the technical term for this phenomenon, but it may be inevitable with such tiny
displays—after all, the individual pixels in the Kopin display are about 1/1000th the size
of those in a conventional LCD TV.
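That size claim checks out on the back of an envelope. Here‘s a quick sketch, assuming a 42-inch 1080p living-room set and the original iPhone‘s 320×480 screen (the Kopin figures are the ones given above):

```python
import math

def pixel_pitch_inches(diag_inches, width_px, height_px):
    """Pixel pitch: the screen diagonal divided by the pixel count along it."""
    diag_px = math.hypot(width_px, height_px)
    return diag_inches / diag_px

kopin = pixel_pitch_inches(0.44, 640, 480)   # Kopin VGA CyberDisplay
tv = pixel_pitch_inches(42.0, 1920, 1080)    # a 42-inch 1080p LCD TV

# VGA really does have twice the pixels of the original iPhone's screen...
print(640 * 480 / (320 * 480))               # 2.0
# ...and the Kopin's pixels are roughly 1/1000th the area of the TV's
print(round((tv / kopin) ** 2))
```

The area ratio comes out around 1,200 to 1, squarely in "about 1/1000th the size" territory.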
My verdict on MyVu? I think the Crystal will appeal to gadget lovers, seeing as it‘s
one of the first wearable displays with decent resolution, and the see-through windows
mean you aren‘t rendered inoperable while wearing it. But that‘s a limited market.
Personally, I‘m going to hold out for a Matrix-style neural interface that jacks directly
into my optical cortex. That way, if I want to wear shades, I can choose a cool pair like
Neo‘s.
8: Science Below the Surface
May 23, 2008
When I took my dog out for a walk yesterday morning, the sidewalk was strewn
with old EKG readouts, as if we had just missed a macabre ticker-tape parade. I picked
up one of the sheets—probably flotsam from the hospital across the street—and gazed at
the thin blue trace, tremulously crossing a field of pink squares.
The stiff, glossy paper was imprinted ―Marquette Pressure-Scribe Recording 1976.‖
It was obviously old. In fact, a doctor‘s scrawl indicated that the patient—a woman
whose name I won‘t print here, since I probably committed a huge HIPAA violation just
by picking up the readout—had come in for the test in March 1987. I‘m no doctor, but I
could see from the trace‘s violent, roller-coaster swings that she had not been well.
Finding this medical artifact made me think of how the line of an EKG, with its
check-mark rising and falling, has become a kind of cultural icon for life itself. When the
line pulses regularly, the patient is okay. When it goes wild—and especially when it goes
flat—we all know what it means.
Or we think we do. But behind the blue thread on the old readout, there‘s a complex
skein of scientific causes and effects: the way rippling photons carried the colors of the
trace from the paper to my retinas; the way the trace itself was etched by the
seismograph-like arm of the old EKG machine; and the way the EKG arm was guided by
amplified electrical signals, signals that ultimately originated in the convulsing muscle
cells of one woman‘s heart on a spring day 21 years ago.
Science explains our visual world, and visual representations help to explain
science. That‘s the central theme of On the Surface of Things: Images of the
Extraordinary in Science, a wonderful book that I discovered this week and that is the
real subject of today‘s column. The authors, science photographer Felice Frankel and
Harvard chemist George Whitesides, have filled the book mainly with close-up images of
the surfaces of inorganic materials such as oil drops and silicon transistors, rather than
biological cells or tissues. Yet I feel certain they‘d look at the discarded EKG as its own
kind of surface, one telling a vivid story about our physical world and the beings who
move through it.
On the Surface of Things first appeared in 1997, and Harvard University Press
issued a revised, 10th-anniversary paperback edition last month. I picked it up at Barnes
& Noble Wednesday night, immediately after attending a talk by Frankel and Whitesides
at the new Apple Store in Boston (which is, by the way, a true marvel of glass, brushed
steel, and architecture-as-advertising). I‘ve only begun to examine the 58 detailed images
Frankel created for the book, each of which is paired with an elegant caption by
Whitesides. But I‘m already under the spell of the ravishingly detailed imagery, and I
intend to clear a permanent space for the volume on my coffee table.
Frankel, who is a senior research fellow at Harvard‘s Initiative in Innovative
Computing and a former research scientist at MIT, said Wednesday night that she doesn‘t
use any special tricks for her photography—just a Nikon F3 with a 55mm or 105mm
macro lens, shooting on Velvia and Ektachrome film that she later scans and cleans up
digitally (using her Mac—hence the appearance at Apple). The genius of Frankel‘s
images is really in the way she conceives and constructs her subjects. And this is, of
course, the essence of science photography, a field just as demanding and content-driven
as science writing.
A Swedish foundation recently recognized Frankel for her leadership in this field
with the 2007 Lennart Nilsson Award for Medical, Technical, and Scientific
Photography—basically, the Pulitzer of explanatory photography. As she put it
Wednesday, ―I‘m not just making pretty pictures. I understand the science.‖ And she
arranges the pictures to illuminate that science.
Her photo of ferrofluid, chosen as the cover image for the paperback edition, is a
perfect example. Frankel arranged seven small magnets beneath a glass plate, then placed
a drop of ferrofluid—powdered magnetite suspended in oil—atop the plate. The fluid
took on a disturbingly beautiful formation that calls to mind tiny silica diatoms, some
strange manifestation of the Mandelbrot set, or perhaps a grain of pollen, but is in fact
just the liquid‘s attempt to trace out the lines of the magnetic field.
It‘s a shame, in a way, that visuals this marvelous are confined to the printed page.
Frankel‘s images would look stunning on the Web—in fact, this nice Wired Science
video interview contains several more of the images from the book. They‘d also be
perfect as slide shows for high-definition TV screens.
As if the images weren‘t enough stimulation, Whitesides‘ captions are the work of a
scientist-poet on a par with Lewis Thomas—full of just-right metaphors of the kind we
science and technology writers spend all day searching for. His text for the ferrofluid
image begins: ―Pity the gryphon, the mermaid, the silkie, the chimera: creatures
assembled of incompatible parts, with uncertain allegiances and troubled identities. When
nature calls, which nature is it?‖ It may sound like the introduction to a lost tome of
mythozoography, but Whitesides quickly explains: ―A ferrofluid is a gryphon in the
world of materials: part liquid, part magnet…when placed in a magnetic field, [the
particles] develop a useful schizophrenia…[shaped by] the conflicting attractions of
gravity, magnetism, and surface tension.‖ All I can say to that is—Wow.
I introduced myself to Frankel after the Apple lecture, and told her I shared her
interest in the way art overlaps with science. I was surprised when she immediately
corrected me. ―I‘m not an artist,‖ she said. ―I‘m a scientist who does photography.‖ Fair
enough—I can see how such a calling card might give Frankel and her camera easier
entree into the world‘s leading laboratories. But Frankel is also the author of Envisioning
Science: The Design and Craft of the Science Image, and leads a Harvard workshop
series called ―Image and Meaning,‖ which she herself states is ―about the visual
expression of ideas.‖ Frankel and Whitesides, who are collaborating on a new book about
nanotechnology to be titled No Small Matter, are both polymaths of the same species and
genus as Leonardo—striving to understand the world by endlessly re-visualizing and re-
expressing it. I don‘t believe you can take the artist out of the scientist, or the scientist out
of the artist. How else can you get from the surface to what‘s below it?
Author's Update, February 2010: No Small Matter: Science on the Nanoscale is
now available in hardcover from Harvard University Press.
9: Gazing Through Microsoft’s
WorldWide Telescope
May 30, 2008
I was 13 years old when Carl Sagan‘s 13-part PBS miniseries Cosmos appeared in
1980. I was so hypnotized by the combination of Sagan‘s storytelling, Jon Lomberg‘s
space art, and Vangelis‘s music that I decided to become an astronomer. I saved up to
buy a telescope—an Edmund Scientific Astroscan, a fantastic beginner‘s instrument that
is still available today, and for only $10 more than I paid all those years ago—and spent
many happy nights stargazing from my family‘s backyard in rural Michigan.
I had to abandon my astronomical ambitions about seven years later when, as a
sophomore at Harvard, I discovered that I wasn‘t nearly as good at math and physics as I
had thought. But it was okay. By that time I had realized that it wasn‘t Sagan‘s profession
that fascinated me so much as his way of describing it. The science was obviously key to
Cosmos, but I think the show‘s success—it‘s the most-watched series in the history of
public broadcasting—was rooted mainly in its stylish, sophisticated, friendly
presentation, which I still try to emulate as a writer. Sagan, who was also one of Johnny
Carson‘s most frequent guests on The Tonight Show, never talked down to viewers; he
took them along on a journey through his own unique picture of the evolution of the
universe and the history of science. (Indeed, the subtitle for Cosmos was ―A Personal
Voyage.‖)
Sagan died of complications from myelodysplasia in 1996. But if he were alive
today, I bet he‘d be filming a testimonial for a remarkable new piece of software from
Microsoft called WorldWide Telescope. If Google Earth is a ―virtual globe,‖ allowing
you to spin the planet to any location and zoom in on detailed topographical data and
satellite images, WorldWide Telescope is a ―virtual planetarium,‖ allowing you to pan
across the celestial sphere and zoom in on thousands of stars, galaxies, nebulae, and other
objects as recorded by the world‘s most powerful ground- and space-based telescopes.
I‘ve been exploring the free Windows program since its beta launch on May 13, and I
like it in part because it has so many of the features that made Cosmos a hit, including
amazing graphics, entertaining narratives, and an expansive view of, well, the entire
cosmos. Yet despite the enormous depth of the data behind it, the program has a simple,
enticing message: ―Explore with me.‖
The program is the creation of a small team at Microsoft Research, including Curtis
Wong, manager of the lab‘s Next Media group, and developer Jonathan Fay. Wong‘s
work has turned up in this column before: he was also the producer of A Passion for Art,
a 1995 interactive CD-ROM about the Barnes Foundation museum in Philadelphia.
Wong might be called the Carl Sagan of multimedia: he‘s an amateur astronomer himself,
and his understanding of interactive platforms, along with his zeal for educating and
informing people through technology, is all over this program. The name itself,
WorldWide Telescope, comes from a 2002 paper by Jim Gray, another Microsoft
researcher who was the main force behind TerraServer and SkyServer, two massive
online imagery databases that were the current program‘s direct ancestors. Tragically, Gray went missing
last year while on a solo sailing trip to the Farallon Islands outside San Francisco Bay.
I won‘t try to describe WorldWide Telescope in detail; Ogle Earth and Ars
Technica have thorough reviews. I‘ll just mention the two aspects of the program that
wowed me the most. One is the seamlessness of the interface. You can pull back far
enough to see a 60-degree-wide swath of the sky—about as much as you could take in
with the naked eye, if you were standing outside at night—then, using your mouse‘s
scroll wheel or the arrow keys, you can zoom in until the screen covers 1 second of arc or
1/3600th of a degree, about the size of the smallest features visible in pictures from the
Hubble Space Telescope.
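That zoom range is easy to quantify; here is the back-of-the-envelope arithmetic in a few lines of Python (my own figuring, not anything from Microsoft's documentation):

```python
# Zoom range in WorldWide Telescope: from a 60-degree-wide view of the sky
# down to a field just 1 arcsecond (1/3600 of a degree) across.
widest_view_deg = 60.0
narrowest_view_deg = 1.0 / 3600.0  # one second of arc

zoom_ratio = widest_view_deg / narrowest_view_deg
print(f"Total zoom range: {zoom_ratio:,.0f}x")  # 216,000x
```

In other words, the program spans a zoom factor of more than two hundred thousand, all with a scroll wheel.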
In between those two extremes, there are thousands of interesting objects to look at,
from the Milky Way to the Pleiades star cluster to the aptly named Sombrero Galaxy to
the amber plumes of the Eagle Nebula, made famous by pictures from the Hubble. (Click
on the screen shots on this page to see larger versions.) Moving between these objects is
as easy as zooming out, panning, and zooming back in. In fact, the program‘s simplicity
reminds me why I loved my Astroscan—a Newtonian reflector that is amateur
astronomy‘s equivalent of a point-and-shoot Polaroid camera. The homely little telescope
is built in the shape of a sphere and rests inside a cup-like holder, so that you can rotate it
to view any object in the sky without having to fiddle with a complicated tripod mount.
WorldWide Telescope is like that—just point and go.
But the fact that the program aggregates data from so many different sources, such
as the Hubble, the Chandra X-Ray Observatory, and the Sloan Digital Sky Survey, means
that the ―view‖ you‘ll get using WorldWide Telescope is far better than what‘s possible
with the Astroscan—or any other physical telescope, for that matter. It‘s so much better,
in fact, that I worry a little about whether people who have the software will ever bother
to go outside to look at the actual sky. But ultimately, I believe that‘s a small concern. By
making astronomical imagery so accessible, WorldWide Telescope is likely to generate a
new crop of young amateur astronomers who want to witness the sky‘s wonders for
themselves—even if the Orion Nebula that they see through their two-inch refracting
telescope is a fuzzy white blob compared to the lacy, multicolored Hubble images the
program provides.
Ideally, kids who try out WorldWide Telescope will have a teacher or parent to help
them explore (just as I had a mentor, my high-school physics teacher, when I was
learning my own way around the sky—thanks Mr. Cartwright!). But to enhance its value
as a learning tool, with or without a teacher present, Wong and Fay have provided a
guided-tour function—which is the second thing that really impressed me about the
program. These tours can be as simple as a series of stops in the sky—the seven most
interesting spiral galaxies, for example—or as elaborate as PowerPoint slide shows,
complete with music, narration, titles, and superimposed images. About 40 different tours
are available from WorldWide Telescope‘s top-level menu, and more are being created
and uploaded every day, thanks to the program‘s built-in authoring tools.
It seems churlish to criticize something as marvelous as WorldWide Telescope. But
as long as Microsoft Research is giving stuff away for free, I might as well register a few
requests. First, the Next Media group really ought to reach out to the creative community
by building a Mac version of the program. This may be out of the question, considering
that the nuts and bolts of the interface are based on a proprietary Microsoft platform
called the Visual Experience Engine. But the more guided tours users create, the more
people will want to access them, and it will be sad if Mac users are cut off from this
bounty.
There‘s also a need for stronger online communities around WorldWide
Telescope—places where users can easily upload and share the tours they‘ve created, the
same way Google Earth users can share KML layers containing their own tours and
images or 3-D modelers can contribute models they‘ve built using SketchUp to Google‘s
3-D Warehouse. The program‘s top-level menu includes a ―Community‖ tab, but there
aren‘t many actual communities yet, and those that do exist, such as Astronomy
magazine‘s community, don‘t seem to include any user-generated tours. But I‘m sure it‘s
just a matter of time before WorldWide Telescope users form their own online
clearinghouses.
By the way, don‘t be put off by the system requirements for WorldWide Telescope.
Microsoft says that for the best experience, you should have a Vista machine with a 2-
gigahertz Intel Core 2 Duo processor, 2 gigabytes of RAM, and a 3D graphics card with
256 megabytes of RAM. I‘m sure the program would run like a flash on a system like
that. But I‘ve had little trouble running it on my 4-year-old Dell Windows XP laptop,
which has a 1.5-gigahertz Pentium M processor, a measly 256 megabytes of RAM, and
an Nvidia GeForce video card with only 64 megabytes of RAM.
To me, the most astonishing thing about WorldWide Telescope and its spiritual
cousin, Google Earth, is that they let average Internet users play with databases far larger
and more detailed than anything professional astronomers, earth scientists, or spymasters
could access as recently as a decade ago. Not only that, but the data is all free, and the
programs‘ builders encourage users to make new things with it and add their own layers
of information. It‘s only a matter of time until some 13-year-old uses WorldWide
Telescope to produce her own version of Cosmos. I‘ll be waiting.
10: Megapixels, Shmegapixels: How to
Make Great Gigapixel Images With Your
Humble Digital Camera
June 6, 2008
Size matters, at least when it comes to the resolution of digital photos. As much as I
love my iPhone, its built-in 1200×1600-pixel camera just doesn‘t work for serious
photography. Problem is, it‘s been my only camera for almost a year, ever since my
previous digicam croaked during a cross-country road trip. But a couple of weeks ago I
found a very reasonable price ($314) on a Canon Powershot S5, which has maximum
resolution of 3264×2448 pixels, or 8 megapixels. That‘s enough to make nice 16-by-20-
inch prints, and should be plenty, practically speaking, for anyone but a professional
photographer.
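A quick sanity check on that claim, sketched in Python; the 16-by-20 print size is from above, and the rest is just arithmetic:

```python
# Print resolution (pixels per inch) of an 8-megapixel image
# enlarged to a 16-by-20-inch print in landscape orientation.
width_px, height_px = 3264, 2448
print_w_in, print_h_in = 20, 16

# The dimension that gets stretched the most sets the effective resolution.
ppi = min(width_px / print_w_in, height_px / print_h_in)
print(f"{ppi:.0f} pixels per inch")  # prints "153 pixels per inch"
```

Around 150 ppi is coarser than the 300 ppi often quoted for gallery prints, but at the viewing distances of a framed wall print it holds up fine.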
Or so I thought. But I‘ve recently become intrigued with a form of digital
photography that takes image size to a new extreme: super-high-resolution or ―gigapixel‖
imaging. Gigapixel photography isn‘t about making bigger prints—it‘s about gawking at
the images themselves on the screen, sort of the way you‘d watch hummingbirds fly in
super-slow-motion in a Discovery Channel show on your HDTV. Such images contain
such an overwhelming amount of detail that you‘re not really supposed to download or
view them all at once—rather, you use a combination of Web streaming technology and
scrolling and zooming tools to dive deeper and deeper, in much the same way that
software like Google Earth allows you to zoom from a view of the entire planet to a view
of your backyard.
Of course, there‘s no commercially available camera that can actually take pictures
with billions of pixels. Hasselblad‘s H2D-39, a $31,000 device based on Kodak‘s 39-
megapixel CCD, represents the rough upper limit today. To make images bigger than
that, you have to stitch multiple photos together using graphics software. And lately, two
things have happened to make gigapixel photography a practical pastime for amateur
photographers: professional-grade stitching software has become more affordable, and
there‘s at least one online community, GigaPan.org, where you can upload and share your
gigapixel images.
I mentioned the GigaPan site briefly in my first World Wide Wade column back in
April. It‘s not a Flickr-style commercial photo-sharing site, but rather an outgrowth of a
collaborative research project between Google, Carnegie Mellon University, the
Intelligent Robotics Group at NASA Ames Research Center, and an Austin, TX,
hardware maker called Charmed Labs. This fall Charmed is bringing out the Gigapan
Robotic Imager, a motorized platform for consumer digital cameras. The device precisely
controls a camera‘s pan and tilt, guiding it through dozens or hundreds of overlapping
snapshots that can be stitched together later to create huge panoramas or mosaics. (NASA
probes such as the Mars Phoenix lander generate panoramas of the Martian landscape in
much the same way.)
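For the curious, here is a rough sketch in Python of how many frames such a rig has to capture; the field-of-view and overlap numbers are my own illustrative guesses, not Charmed Labs specifications:

```python
import math

# Estimate the number of telephoto frames needed to cover a panorama,
# given each frame's field of view and the overlap between neighbors.
# All of the figures below are illustrative assumptions.
pano_width_deg = 360.0   # a full wrap-around panorama
pano_height_deg = 40.0   # vertical extent of the scene
frame_fov_h_deg = 4.0    # horizontal field of view at full telephoto
frame_fov_v_deg = 3.0    # vertical field of view
overlap = 0.3            # 30 percent overlap for the stitcher to work with

step_h = frame_fov_h_deg * (1 - overlap)  # fresh coverage per column
step_v = frame_fov_v_deg * (1 - overlap)  # fresh coverage per row

cols = math.ceil(pano_width_deg / step_h)
rows = math.ceil(pano_height_deg / step_v)
print(f"{cols} columns x {rows} rows = {cols * rows} frames")
```

Thousands of frames for a full wrap-around at telephoto, which is exactly the sort of tedium a motorized mount exists to absorb.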
But you don‘t need a fancy device like the Gigapan robot to make great panoramic
photos. And while GigaPan.org was originally developed as a showcase for images
created using the Charmed Labs imager, anyone is free to upload their panoramas to the
site. There‘s just one catch: unlike other photo-sharing sites such as Flickr, which has an
upper limit of 20 megabytes on the size of the photos you can upload, GigaPan has a
lower limit: images must be at least 50 megabytes in size!
As a weekend project, I decided to see what kinds of panoramas and mosaics I
could make for the GigaPan site using my new Canon. I grabbed my tripod and trooped
over to Boston‘s Copley Square, home to architectural gems such as Trinity Church, the
John Hancock Tower, the Boston Public Library, and Old South Church.
Like most Powershot models, the S5 has a built-in ―stitch assist‖ mode that helps
you take a series of images from left to right. In other words, it shows the rightmost edge
of your previous photograph on the screen to help you line up the next shot, theoretically
leaving enough overlap so that photo-stitching software can later create a seamless
panorama. Notice I said ―theoretically.‖ I used stitch assist mode to take 13 successive
images of Copley Square, but when I got home and used the Canon PhotoStitch software
that came with my camera to assemble the images, the resulting image was less than
ideal. Not only did the software have a tough time lining up the images correctly, but the
final panorama suffered from optical distortions that made most of the buildings in the
photo lean precariously, as if gravity had gone askew.
It wasn‘t really the software‘s fault—it just wasn‘t made to deal with a particular
challenge in photography called the ―keystone effect.‖ If you‘re standing on a sidewalk
and looking up at a building, perspective will naturally cause the higher floors of the
building to look narrower. Our brains seem to correct for this effect most of the time, so
we don‘t really notice it. But photographs have a way of calling attention to it, and one
result is that a tall building (or any vertical edge) that isn‘t near the center of an image
will appear to lean perilously toward the center line. Since a panorama consists of a series
of individual photographs—each one keystoned around its own center line—the
collection as a whole can be vertigo-inducing, with buildings tilting every which way. To
fix the problem, you need better software.
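To make the geometry concrete, here is a toy pinhole-camera calculation in Python; the building's position, its height, and the camera tilt are all invented for illustration:

```python
import math

def project(x, y, z, tilt_deg, focal=1.0):
    """Project a world point (meters) onto the image plane of a pinhole
    camera at the origin, tilted upward by tilt_deg about the x-axis."""
    t = math.radians(tilt_deg)
    # Tilting the camera up is the same as rotating the scene down.
    y_cam = y * math.cos(t) - z * math.sin(t)
    z_cam = y * math.sin(t) + z * math.cos(t)
    return (focal * x / z_cam, focal * y_cam / z_cam)

# A vertical building edge 5 meters to the right of the camera and
# 20 meters away, running from street level up to a 30-meter roofline.
bottom = project(5, 0, 20, tilt_deg=20)
top = project(5, 30, 20, tilt_deg=20)
print(f"edge position at street level: x = {bottom[0]:.3f}")
print(f"edge position at the roofline: x = {top[0]:.3f}")
# The roofline lands closer to x = 0: the edge leans toward the center.
```

The same vertical edge ends up at two different horizontal positions on the image plane, which is precisely the lean that good stitching software has to undo.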
So I downloaded the trial version of a program called PTGui (for Panorama Tools
Graphical User Interface), one of the third-party products recommended by GigaPan.org.
(The Gigapan device will come with its own stitcher software.) Made by a small software
company in the Netherlands called New House Internet Services, PTGui includes a mess
of clever features that correct problems like keystoning. I don‘t really know how it works,
but I‘m guessing that the software has edge-detection algorithms that help it select the
portions of each image that are least affected by keystoning; it may also rotate individual
photos as necessary to make vertical lines look vertical. To see how well it works, just
take a gander at the alternate panorama I made with PTGui, using the same source images
as before. In this image, the buildings are as straight as pine trees.
I also wanted to try making a large mosaic. So I positioned myself on a Copley
Square sidewalk kitty-corner from Old South Church and used a telephoto setting to take
42 extreme close-ups of the church, a lovely nineteenth-century Northern Italian Gothic
structure built from local Roxbury puddingstone. To put PTGui‘s image-recognition
capabilities to the test, I dumped all 42 pictures into the program at once and let the
software sort out how they fit together. It did so without difficulty, with the exception of
a couple of tiles that showed nothing but sky and clouds, which I had to reposition
manually.
The resulting mosaic, of course, suffered from the same keystoning as any single
image would have—but one of PTGui‘s marvelous features is that you can drag the entire
mosaic around vertical or horizontal control lines, which has the effect of warping the
appropriate areas of the mosaic until all of the vertical or horizontal edges are parallel.
The result is a mathematical fiction, but somehow looks much more pleasing to the eye.
To see what I mean, look at these views of my mosaic before and after I applied the
vertical-line correction:
I haven‘t yet put my full Copley Square panoramas and Old South Church mosaic
up on the Web, because the trial version of PTGui doesn‘t allow you to save images. (I
captured the images you see here directly from my computer screen using the Mac‘s Grab
utility.) I had almost made up my mind to plunk down the 79 euros ($126) that New
House Internet Services charges for the full version of the program when the PTGui
website went offline.
I‘m sure I‘ll buy the software, or some similar program, pretty soon, as I can feel an
itch developing to be part of the community of hundreds of gigapixel photographers who
are sharing, exploring, rating, and annotating one another‘s images at GigaPan. Travel-
and nature-related photos seem to be the most popular: indeed, one 240-megapixel image
of the Grand Canyon demonstrates just how much detail can be packed into a mosaic
(you can see ripples on the Colorado River from miles away). One of the coolest features
of the GigaPan site is the ability to isolate ―snapshots‖ in other people‘s mosaics and post
them alongside the main image, almost as if they were comments on a blog post; the
feature frequently leads to a sort of game in which site users race each other to point out
funny, obscure, or surprising details that aren‘t visible until you zoom way in, the same
way Google Earth users collect instances of nude sunbathers on rooftops.
Given Moore‘s Law, it wouldn‘t be a shock to see gigapixel image sensors turning
up eventually in consumer-level cameras. But until that day, super-high-resolution
photographers will have to stick to the software equivalent of the quilting bee, stitching
individual panels together into massive panoramas or mosaics that only appear as if they
were unified images. The reward can be a picture so finely featured that you feel like you
can keep zooming into it forever, as if you had Superman‘s telescopic vision. Now that‘s
high definition.
11: You Say Staccato, I Say Sfumato: A
Reply to Nicholas Carr
June 13, 2008
One of my prized possessions is an enormous book called Leonardo da Vinci: The
Complete Drawings and Paintings. You know how a lot of old art or history books have
a few glossy color plates bound into the center? Well, this book has 695 of them, each
one measuring about 12×18 inches, and together they weigh an incredible 19 pounds,
enough to put a permanent sag in my coffee table.
When the publishers put ―complete‖ in the title, they weren‘t kidding. If you want
to see how Leonardo grew as a draftsman and painter between the time of his earliest
known works, around 1472, and his death in 1519, there is no better source. Of course,
with so much to offer, the book encourages grazing. You can‘t help turning the pages to
see how the chaotic theatricality of The Last Supper gave way to the sfumato serenity of
the Mona Lisa, executed only a few years later. (Sfumato, from the Italian for ―smoky,‖
is the term for gradations in shade, like those in the Mona Lisa‘s cheekbones, that are so
smooth as to be imperceptible.) With a book this large—many of the reproductions are
larger-than-life—you don‘t have to see the actual paintings in Milan or Paris to be
overpowered by Leonardo‘s genius.
Yet I have a feeling that this would trouble Nicholas Carr, whose provocative
article for the July/August 2008 issue of The Atlantic, ―Is Google Making Us Stupid?‖
was published online Wednesday. The article, which is only obliquely about Google,
argues that the hyperlinked structure of the Web encourages staccato reading and staccato
thinking. On the Web, Carr asserts, it‘s so easy and so tempting to flit from page to page
that people who use the Internet extensively lose the ability to hold a thought, to analyze
an issue with any depth, and ultimately to construct a personal interpretation of the world.
When we consume information online, Carr writes, ―Our ability to interpret text, to make
the rich mental connections that form when we read deeply and without distraction,
remains largely disengaged.‖
What objection could Carr have to my Leonardo book? To the extent that it packs
the man‘s entire visual oeuvre into one volume, it‘s just like the Web, which brings all
the world‘s information to one place (your browser window). But each Leonardo canvas
is worth deep, protracted study—up close and in person, if you can arrange it. Indeed, if
we were all credentialed art historians or wealthy world travelers, this would be the surest
path to true art appreciation. The book lets me short-circuit that laborious process and flip
from painting to painting. From one perspective, then, you could say that photography,
color printing, binding, and all the other technologies that brought the Leonardo book
into my living room are intellectually impoverishing; the encounters they foster are far
more casual than a visit to the refectory at Santa Maria delle Grazie (where The Last
Supper continues to deteriorate) or the Louvre.
But that perspective is a narrow and crabbed one, in my opinion. I can accept Carr‘s
premise that the Internet discourages deep reading. After all, it‘s a strain to read long
documents on most types of screens, and time spent on the Net is undeniably time taken
away from other pursuits (though I suspect that TV viewing has suffered more in this
respect than book reading). But the idea that deep reading is the only way people form
rich mental connections is much harder to swallow, and suggests to me that Carr may be
too caught up in the romantic image of the poet or professor lost in his book. I think he
misses the many other ways in which these connections arise—some of which, believe it
or not, are happening right on the Internet.
Take, for example, some of the online learning and reference tools I‘ve written
about in this column, such as Microsoft‘s WorldWide Telescope. The depth and detail of
the image databases brought together in this virtual planetarium are nothing short of
astonishing, and as a platform for both solo exploration and prerecorded lessons, the
program promises to completely change the way students and other enthusiasts learn
about astronomy. Indeed, it‘s so easy to create guided tours inside WorldWide Telescope
that students can use it to teach each other about the climate of Mars, the meanings of the
constellations, and the life cycles of stars. It could not have been built without Web-based
technologies.
Or take a slightly more prosaic Web tool—Netvibes, my favorite RSS aggregator. If
anyone has managed to approximate the ―Daily Me,‖ the personalized electronic
newspaper envisioned by MIT Media Lab founder Nicholas Negroponte in the early
1990s, it is Tariq Krim, Netvibes‘ founder and CEO. By cramming about 70 headlines
from my favorite blogs onto a single Web page (and hundreds more if I scroll down),
Netvibes gives me a quick sense of what other Web writers are obsessing about on any
given day, and helps me decide what I should obsess about for the next few hours.
It‘s an excellent way to stay connected—and to see the connections among the
themes oozing across the blogosphere‘s inner membranes. But for Carr, it‘s all a big,
boiling mess that prevents him from fastening onto any one piece of information long
enough to absorb it. ―Once I was a scuba diver in the sea of words,‖ he writes. ―Now I zip
along the surface like a guy on a Jet Ski.‖
None of my criticisms here are meant to impugn Carr himself, whom I found to be a
fascinating interviewee back in January, when his book The Big Switch: Rewiring the
World, from Edison to Google had just come out, and who—despite his protests that the
Web is making his brain spongy—writes with admirable clarity, firmness, style, and
insight. But when I look at the technologies that Carr believes are sapping our
concentration—hyperlinks, search engines, blogs, blinking banner ads, and the like—I
see the apparatus of an unprecedented global conversation, with more people sharing
their insights, their fears, their experiences, their creations, and yes, their products and
services, than anyone could have imagined would be possible just a decade or two ago.
I‘m all for independent thinking and carving out one‘s own intellectual space. But
it‘s almost frightening to think about what a comparative information vacuum we all
lived in circa, say, 1988, when I was in college, and how easy it was as a consequence to
stumble around in ignorance of what other people were thinking and writing. Back then,
the shape my term papers took depended pretty much on what books weren‘t already
checked out at the library, what references I stumbled across in the Reader‘s Guide to
Periodical Literature, and how much change I had for the photocopiers. It all seems
positively medieval next to the glories of the Web, which, while certainly full of
misinformation, at least helps a young scholar today get a reasonable sense of the range
of current ideas about a given topic, the better to stake out an original position.
Now, Carr is absolutely right that for writers, especially journalists, the Web is a
double-edged sword. If you don‘t have your thoughts in order, there is no better way to
put off writing than to spend all day surfing the Web, telling yourself that you‘re just
gathering more background information, or that the last tidbit of evidence you need to
drive home your argument is just around the next link. At some point, you do have to step
away from the browser. I do some of my best writing in my head, when I‘m in the shower
or walking to Starbucks. And I‘ve come to love Thursdays—the day I write this
column—because that‘s the day when I have an excuse to stop blogging (a curious
operation that sometimes amounts to ingesting information and regurgitating it at the
same time) and actually think for a while.
But somehow I find that when I do make that mental shift, my deep-thinking
neurons are still there, waiting for a workout. (As Carr‘s must be, if he‘s writing 4,000-
word articles for The Atlantic). I don‘t recognize myself in Carr‘s world, where Google is
supposedly reducing Web users to Skinnerian lab rats, clicking on contextual ads all day
long in return for shiny pellets of information. ―The faster we surf across the Web—the
more links we click and pages we view—the more opportunities Google and other
companies gain to collect information about us and to feed us advertisements,‖ Carr
writes. ―The last thing these companies want is to encourage leisurely reading or slow,
concentrated thought. It‘s in their economic interest to drive us to distraction.‖
But frankly, the argument that computers are reprogramming our brains is getting a
little shopworn. Arlington, MA-based essayist and scholar Sven Birkerts started sounding
this refrain as early as 1994, when he wrote The Gutenberg Elegies: The Fate of Reading
in an Electronic Age; the book is an eloquent, if premature, warning that language itself
would erode as the printed word gave way to pixels on a screen (parts of Birkerts‘ book
are online here). And if you really want to count up the many warnings that free thought
and the intellectual lifestyle are about to be smothered by modern media culture, you can
go all the way back to 1950, when Lionel Trilling published The Liberal Imagination:
Essays on Literature and Society.
Please don‘t think that I‘m insensate to the crass, uninformed, anti-democratic, anti-
intellectual nature of much of what‘s on the Web, or to the countervailing pleasures of
curling up with a good book. I relish the quickening of the soul that Carr is gesturing at
when he praises old-fashioned book reading. ―The kind of deep reading that a sequence
of printed pages promotes is valuable not just for the knowledge we acquire from the
author‘s words but for the intellectual vibrations those words set off within our own
minds,‖ Carr writes. ―In the quiet spaces opened up by the sustained, undistracted reading
of a book, or by any other act of contemplation, for that matter, we make our own
associations, draw our own inferences and analogies, foster our own ideas. Deep
reading…is indistinguishable from deep thinking.‖
All of which is probably true. But deep reading is hardly the only route to deep, richly
connected thinking. On the Internet—which is, for better or worse, my workplace and
often my playground—I find the inspiration for dozens of small acts of contemplation
every day. And I think Leonardo himself—the original poster boy for attention deficit
disorder, who dawdled for years over paintings, left many commissions unfinished, and
filled hundreds of notebooks with his musings on everything from the way water eddies
to the construction of siege-proof castle walls—would have loved the Web‘s endless
variety. He would have seen its potential as the nursery for a new generation of thinkers,
people who know instinctively where to turn to supplement their own resources with
those left by others, people sensitive to the infinite varieties of human experience and
creativity and eager to remix them—in short, a generation capable of a kind of sfumato of
the mind. And I think he would have felt right at home.
Author's Update, February 2010: Nicholas Carr is developing the ideas in his
Atlantic article into a book entitled The Shallows: What the Internet Is Doing To Our
Brains. It's due out in June, 2010.
12: Space Needle Envy: A Bostonian’s
Ode to Seattle
June 20, 2008
―Only one is a wanderer. Two, together, are always going somewhere.‖ It‘s one of
my favorite lines from my favorite movie—Hitchcock‘s Vertigo—and now it‘s true of
Xconomy. By introducing our Seattle site this week, we‘ve become a real network.
Adding Seattle to our original Boston presence gives us the chance to show that the
original idea behind the company—to create a series of hyperlocal technology news sites,
each committed to covering the innovation scene in its community, but together building
a broader understanding of the way technology is shaping the modern economy—has the
legs to go somewhere.
We could not be happier about our starting position in Seattle. To handle our
coverage there, we were fortunate enough to be able to hire top-notch journalists Greg
Huang, an old friend and Technology Review colleague with his very own PhD in
electrical engineering and computer science, and Luke Timmerman, a veteran life
sciences reporter who‘s even more famous among the West Coast biotech set than we
thought. But as we worked to help Greg and Luke get set up in their Pill Hill office and
launch the Seattle pages, I couldn‘t help feeling a tinge of envy over the adventure
they‘re beginning.
Seattle is not only a culturally sophisticated and visually stunning city, but a
fantastic place to write about the business of science and technology. (Which is why we
picked it, of course.) The city seethes with innovation; it seems powered equally by
caffeine and new ideas. Speaking personally, I see it as one of about three places in North
America with enough big technology companies, cool startups, great research hospitals,
great academic institutions, and tech-focused venture investors to keep me happily busy
as a technology journalist (the other two being Boston and the San Francisco Bay area).
The fact that the Seattle software economy has not one but at least three major
“anchor” companies—Microsoft, Amazon, and RealNetworks—makes a huge difference.
A tech writer could easily spend a year simply chronicling the array of Pacific Northwest
startups led by executives who cut their teeth (or made their first fortunes) at one of those
three outfits. Recent examples include Pelago, maker of a combination friend-finder and
mobile travel application called Whrrl; RSS software company Attensa; health-oriented
social networking site Trusera (which Greg has already profiled); mobile search and
advertising company Medio Systems; mobile applications developer Webaroo; online
real-estate value estimator Zillow; real-estate search site Redfin; Imperium Renewables, a
biofuels developer; HDTV DVR maker Digeo; and video search provider Delve
Networks (known until last week as Pluggd), to name just a few. In the gone-but-not-
forgotten category, there‘s the once-popular personalized newspaper service Findory and
voice-over-Wi-Fi company TeleSym. There are also plenty of older, more established,
but still interesting companies that basically orbit one of the three anchor companies,
such as Bellevue, WA-based embedded Windows software maker BSQUARE.
There‘s another related category of companies and organizations around Seattle—
you might call it the Microsoft Aura. These aren‘t Microsoft spinoffs exactly, but they are
definitely part of the software giant‘s legacy, and are an indispensable part of the Seattle
area‘s economy and culture. There‘s the radical Bellevue-based software engineering
venture Intentional Software, headed by Charles Simonyi, the former head of Microsoft‘s
applications software group; the “invention factory” Intellectual Ventures, also in
Bellevue, and founded by former Microsoft CTO (and current Xconomist) Nathan
Myhrvold; the Seattle-based image licensing company Corbis, owned by Bill Gates
himself, which controls the digital rights to a good fraction of the world‘s historical photo
and film archives; the Bill & Melinda Gates Foundation, which, aside from its massive
philanthropic activities in areas like global health, is a huge supporter of housing and
community grants in the Northwest; and, of course, Vulcan, the organization that
manages both the charitable endeavors and the multi-tentacled businesses of Microsoft
co-founder Paul Allen. (It‘s truly impossible to imagine today‘s Northwest without Paul
Allen operations like the Experience Music Project, the Science Fiction Museum, the
Flying Heritage Collection, the Allen Institute for Brain Science, the Portland Trail
Blazers, and the Seattle Seahawks.)
And I haven‘t even mentioned Boeing and its penumbra of aerospace and defense
companies. One could certainly make similar lists of great companies, influential
institutions, and visionary technologists in the New England area—in fact, that‘s what we
do on a day-to-day basis here at Xconomy Boston. But I still envy Greg and Luke,
because I know they‘re going to meet some amazing people and come across tons of
great story material as they dig into Seattle‘s high-tech economy, with its unique mix of
expertise in consumer-facing software, multimedia communications, e-commerce,
computer networking, telecommunications, and transportation.
Before I wrap up, I thought I‘d pummel you with a few intriguing, semi-random
pieces of trivia highlighting the similarities and contrasts between our two hometowns—
Beantown and The Emerald City.
• Boston and Seattle are almost identical in population. Boston has a slight edge:
590,000 within the city limits, according to Census Bureau estimates for mid-2006, as
opposed to 582,000 in Seattle. (The Boston Metro area, with 4.5 million people, is also
slightly larger than the Seattle Metro area, with 3.9 million.)
• Both cities are largely built on landfill. The heart of colonial Boston, the
Shawmut peninsula, was once connected to the mainland only by a narrow isthmus, but
landfill projects more than tripled the city‘s acreage between 1630 and 1890. Downtown
Seattle originally sat in a tidal marsh, but after the city was destroyed by fire in 1889, it
was rebuilt one to two stories higher than before—partly to reduce flooding and partly to
keep the newfangled flush toilets from backing up at high tide.
• Seattle has a reputation for dampness, but Boston actually gets more
precipitation: 42.9 inches per year, according to Weather.com, compared to 37.1 inches
for Seattle.
• Seattle boasts a significantly higher median household income than Boston
($49,300, according to CityData.com, versus $42,600) and a much lower poverty rate
(12.3 percent versus Boston‘s 22.3 percent).
• Frasier Crane started off as a malcontent barfly in Boston, and ended up as a
malcontent talk-show psychologist in Seattle.
Don‘t take my paean to Seattle the wrong way—I‘m glad to be staying here in
Boston. I once saw someone comment on a message board that “If you take Boston and
extract all the bad stuff, you get Seattle.” I disagree. It‘s true that Boston can feel
crowded, dirty, and hostile compared to newer, Western cities like Seattle. But in other
ways—its outstanding cultural institutions, its tolerance for radical ideas, its commitment
to democracy as a big multiethnic brawl—Boston really does live up to its nineteenth-
century nickname as the Athens of America. And, of course, thanks to its huge student
population, Boston has a youthful cast and the sense of perpetual optimism and renewal
that goes along with it. (The median Bostonian is about four years younger than the median
Seattleite—31.1 years old versus 35.4.)
And there are two things I definitely don‘t envy about Greg and Luke‘s new
location: the cloudy weather (226 overcast days per year, compared to Boston‘s 164) and,
frankly, the proximity to Mount Rainier. While it‘s certainly pretty to look at, it‘s an
active stratovolcano that erupted as recently as 1854 and will eventually go off again,
sending a flood of boiling mud and ash directly into Tacoma and southern Seattle. (That
would make for an interesting season-ending cliffhanger on Grey‘s Anatomy.)
On the other hand, Boston was badly damaged in the Cape Ann Earthquake of
1755, and is periodically battered by hurricanes and Nor‘easters, so we definitely have
our share of natural hazards. Still, I‘m content to wander in New England for a while.
13: You’re Listening to Radio Lab—Or
You Should Be
July 11, 2008
I drove from Boston to northern Michigan last weekend to hang out with my
parents over the 4th of July. It‘s a 15-hour trek—plus another two or three hours if you
forget your passport and you have to go south around Lake Erie instead of straight
through Canada. But I didn‘t mind the drive, because I had an iPod full of Radio Lab
podcasts to catch up on.
Radio Lab, a production of New York‘s flagship NPR station, WNYC, isn‘t just the
best science and technology show on public radio. I think it‘s a contender for the best
contemporary radio show, period. I discovered it in 2006, when it was already in its
second season. But thankfully, MP3s are available at iTunes and at the show‘s website,
and because there are only five new episodes per year, I had plenty of time in the car to get
through the show‘s entire third and fourth seasons.
If you asked me to say what Radio Lab is about in one word, I would say
“perception.” Jad Abumrad, the show‘s lively host and producer, is the son of an
endocrine surgeon and a research biologist, a graduate of the music and creative writing
programs at Oberlin College, and a longtime radio journalist. Clearly, the only fate open
to a person with a background this eclectic is to invent new interviewing, storytelling, and
sound-editing techniques to explore big questions at the boundary of neuroscience,
evolution, and philosophy—questions like, Where‘s the part of my brain that‘s me? Why
do some songs get stuck in my head? Where does guilt come from? What makes placebos
work so well? Can we erase memories? Why do we find zoos so fascinating? Why are
people who deceive themselves more successful than those who don‘t? Why do we
sleep/dream/laugh/lie/age/die?
In the end, all of these questions are about how we see the world. And it doesn‘t
take a PhD to ask them—just a notebook or a microphone. Abumrad has said in
interviews that he only became interested in science a few years ago, and that he often
embarks on making an episode with only a “Time magazine-level” understanding of his
subject matter. I think that‘s actually one of the show‘s main strengths. If you‘ve studied
science too long, or spent too much time around scientists, you lose the ability—or
maybe just the courage—to ask big, silly, impertinent questions.
Part of the trademark Radio Lab approach developed by Abumrad and his jovial
and mischievous co-host, ABC science correspondent (and fellow Oberlin alum) Robert
Krulwich, is to stumble around behind a scientist in his or her lab, posing questions a
third-grader might ask, professing astonishment and disbelief at the answers, and nagging
for clarifications and simplifying analogies. Of course, it‘s all an act—Abumrad and
Krulwich know exactly what they‘re doing as they maneuver scientists into dropping
their professional reserve and showing their unedited, human passion for their subjects.
One of those passionate researchers is Diana Deutsch, a professor at the University
of California, San Diego, who studies the psychology of music. Deutsch, whose lilting
Oxford-accented voice is somehow both playful and extremely serious, has uncovered
some very strange things about the sounds of language by studying looped recordings of
human speech. It turns out that certain phrases, if you listen to them over and over, start
to sound like music, complete with rhythm and melody—which raises some big questions
about what music really is, in neurological and cultural-linguistic terms.
Is it possible, for example, that children who grow up speaking tone-based
languages like Mandarin are better equipped to become great musicians (thus accounting
for the frequency of Chinese violin prodigies)? While investigating such ideas, Abumrad
and his colleagues deftly use digital sound editing, actual music, and even, from time to
time, hired singers and actors to raise material like Deutsch‘s tape loops to the level of
performance art. If you just listen to the first few minutes of the Season 2 episode
“Musical Language,” you‘ll understand what the heck I‘m talking about.
Two more of the show‘s unofficial scientists-in-residence are neurologist Oliver
Sacks, surely one of the three or four best physicians writing in English today (along with
Sherwin Nuland, Atul Gawande, and Abraham Verghese), and theoretical physicist Brian
Greene, author of The Elegant Universe and surely the world‘s most understandable
string theorist. These folks pop in every so often to share earth-shattering yet deadpan
observations—like this one from Greene, in a Season 1 episode called “Beyond Time”:
“In quantum theory, some have suggested the so-called ‘many worlds’ interpretation—
that the universe is not a single entity, that there are many universes and each of the
choices you make is borne out in one of these copies…The fellas that believe this say, ‘I
chose vanilla [ice cream] in this world but there‘s another version of me that‘s now eating
chocolate.’”
As you listen to the show over time, you start to feel toward these guests as you
might toward that wonderful, itinerant aunt or uncle who‘s always stopping by between
their European lecture tour and their Australian scuba safari, just long enough to take you
to the planetarium and drop off their latest manuscript on neurotransmitters and quantum
teleportation at the publisher‘s office. The genius of Radio Lab is that Abumrad and
Krulwich play the role of the wide-eyed nephew/niece so convincingly while—behind
the curtain—they‘re also operating the whole glorious Wurlitzer.
A few months ago, Jesse Thorn, the host of another very good public radio show
called The Sound of Young America (which happens to share Radio Lab‘s time slot on
WNYC), interviewed Abumrad and Krulwich about their work. Abumrad said part of the
show‘s goal is to liberate science from the textbooks and the gray newspaper columns.
“Scientists are often talked about as people who know stuff—as, like, esteemed elders
who have some knowledge to bestow upon us unwashed masses,” Abumrad said. “When
really they are just people who are passionate about what they do. And they stay up really
late doing these experiments, 99 percent of which don‘t work, and they are as crazy
driven as the rest of us are. It‘s about putting your finger on the person, the humanity.”
That‘s a great thing for public radio to do—and I can‘t wait to hear how Radio Lab
keeps doing it.
14: Can Evernote Make You into a
Digital Leonardo?
July 18, 2008
Historians believe that Leonardo da Vinci—one of my biggest heroes, if you hadn‘t
already guessed by reading my columns—filled about 30,000 notebook pages with his
drawings, diagrams, discourses, and doodles. Only about 6,000 of those pages survive
today, but what wondrous pages they are. Martin Kemp, a leading Leonardo biographer
and visual historian at Oxford University, calls the notebooks Leonardo‘s “laboratory for
thinking.” No one, either before Leonardo or since, has “used paper so prodigally” or
“covered the surface of pages with such an impetuous cascade of observations, visualized
thoughts, brainstormed alternatives, theories, polemics, and debates,” Kemp asserts in his
recent book Leonardo da Vinci: Experience, Experiment and Design.
I can‘t help wondering, though, whether a modern-day Leonardo would choose
paper as the medium for externalizing his imagination. If you‘re an artist or craftsman in
a studio or an engineer at a project site, then grabbing a pencil and jotting something
down on paper may still be the most expedient way to capture a thought. But if you‘re
reading Xconomy, then chances are that, like me, you‘re some type of knowledge
worker, and that, like me, you spend much of your day sitting at an Internet-connected
computer, immersed in digitally stored ideas and information. And if you want to capture
some of that digital information for later reference, then you need a digital tool to do it.
Until recently, though, there‘s been no permanent place to organize all of the
various kinds of digital materials we can now capture, from Web pages to photos to voice
memos; no real Web 2.0 equivalent, in other words, for Leonardo‘s notebooks or the
commonplace books kept by numerous political and literary figures up to the 19th
century. Believe me, I‘ve been watching for years, and the closest approximations I‘ve
seen have been social bookmarking services such as Diigo, which lets you quickly store,
highlight, annotate, tag, and search copies of important Web pages (for a deeper
description see my April 11 column, “The Coolest Tools for Trawling & Tracking the
Web.”). But Diigo and similar services can‘t deal with e-mail, photos, or the other digital
artifacts one might like to group together in one convenient place.
But finally, I think I‘ve found a product that has the potential to evolve into a true
online notebook. It‘s called Evernote, and it‘s been around for years as a Windows-only
desktop application, but it has become vastly more useful over the past few months with
the launch of a Web-based version as well as versions for Macintosh computers,
Windows Mobile and Pocket PC devices, and iPhones. Evernote CEO Phil Libin (the
former CEO of Cambridge, MA-based security company CoreStreet) likes to call the
product “your external brain,” but that‘s a little grand—Jon Udell comes closer when he
compares it to Vannevar Bush‘s 1945 vision of the Memex or “memory extender,” a
prototype hypertext storage device, and it‘s no insult when Ars Technica describes
Evernote as a digital “shoebox,” recalling the places people used to hide all their old
snapshots, gas receipts, concert tickets, and baseball cards.
Evernote does many, many things, but basically, it remembers stuff—hence the
company‘s clever elephant logo, with its ear folded down like a bookmark. I‘ve only used
the Web and iPhone interfaces, so I can‘t tell you much about the Windows and
Macintosh desktop clients. But they all work the same way, which is to say, as a
repository for notes. A note, in Evernote, can consist of a Web page or any highlighted
portion of a Web page, an e-mail, an image, an audio clip, or a literal text note. If you
have the Windows or Mac version of Evernote installed on your computer, you can also
create notes by clipping portions of stored files such as Word, Excel, and PDF
documents. You can add text to your notes, tag them to make them easier to find later, e-
mail them to yourself or others, and (if they were clipped from the Web) jump back to the
place you originally found them. You can also create “notebooks” and drag your notes
into them; for example, I‘ve started a notebook for receipts from places like Amazon, and
another for photos, and another for Web pages that give me ideas for future World Wide
Wade columns.
The key feature that makes Evernote a must-have for digital wanderers is that it
automatically synchronizes your notes and notebooks across all platforms. That means,
for example, that you could have the Evernote client program running on your Windows
computer at work and on your Mac at home, and every note you add from one computer
will automatically be copied to the other. Likewise, every note you clip directly into
the Web-based version shows up on both the Windows and Mac clients; and all of your notes
and notebooks reside permanently online, where you can access them from any Web
browser, including phone-based browsers. Notes and notebooks are private by default,
but if you want to share the contents of a notebook with friends or with the world at large,
you can make it public, then direct visitors to it via its unique URL.
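Conceptually, this kind of multi-device syncing boils down to reconciling timestamped copies of each note. Evernote hasn't published how its engine works, so here is a deliberately naive "last write wins" sketch in Python, with every name invented for illustration:

```python
# Illustrative last-write-wins sync: NOT Evernote's actual protocol.
# Each client keeps a local dict of notes; the server holds the merged truth.

def sync(server, client):
    """Merge a client's notes into the server's (newest timestamp wins),
    then mirror the merged state back onto the client."""
    for note_id, (text, stamp) in client.items():
        if note_id not in server or stamp > server[note_id][1]:
            server[note_id] = (text, stamp)
    client.clear()
    client.update(server)  # every device ends up with every note

# Two devices edit independently...
server = {}
work_pc = {"grocery": ("milk", 1), "column-idea": ("gigapixels", 2)}
home_mac = {"grocery": ("milk, eggs", 3)}

sync(server, work_pc)
sync(server, home_mac)
sync(server, work_pc)   # second pass picks up the Mac's newer edit
# work_pc["grocery"] is now ("milk, eggs", 3) on both machines
```

A real sync engine also has to handle deletions and simultaneous edits to the same note, which this toy version simply ignores.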
Another very nifty thing about Evernote is its optical character recognition (OCR)
capability. The software can recognize words inside scanned images, snapshots, and
PDFs and locate those words when you search for them. The practical import is that you
can capture paper documents such as business cards, airline tickets, or travel receipts by
scanning them, or just snapping a picture with your webcam or your camera phone; once
you upload the images to Evernote, they‘ll be searchable, just as if they were text
documents. I‘ve put this feature to the test, and it works amazingly well.
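The recipe behind such a feature is simple in outline: run OCR over each uploaded image, then index every recognized word so later searches can match it. Here is a toy Python version of the indexing half, with the recognition step stubbed out (Evernote's actual engine is proprietary, and the sample "scans" below are invented):

```python
# Toy version of image-text search. fake_ocr stands in for a real
# recognition engine; all the names and sample data are invented.

def fake_ocr(image_name):
    # Pretend these are the words recognized in each scanned image.
    scans = {
        "boarding_pass.jpg": "Delta flight 1492 BOS to SEA",
        "business_card.jpg": "Greg Huang Xconomy Seattle",
    }
    return scans[image_name]

def build_index(image_names):
    """Map each lowercased word to the set of images containing it."""
    index = {}
    for name in image_names:
        for word in fake_ocr(name).lower().split():
            index.setdefault(word, set()).add(name)
    return index

index = build_index(["boarding_pass.jpg", "business_card.jpg"])
print(index["seattle"])   # finds the business card
print(index["sea"])       # finds the boarding pass, via its airport code
```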
But what makes Evernote into a true killer app, in my opinion, is the fact that the
company has tailored special versions of the software that make it easier to create notes
using your mobile devices. The new Evernote iPhone app, which became available last
week when Apple rolled out the App Store as part of the 2.0 release of the iPhone
firmware, is everything a mobile application should be—simple, elegant, and useful. You
can browse your existing notes, or choose from four simple buttons that let you a) create
a new text note, b) make a note by taking a snapshot using the iPhone‘s built-in camera,
c) make a note from a previously saved photo, or d) record an audio note. Each new note
is uploaded straight to your online Evernote account and synchronized to your PC. (These
uploads can take a while if you‘re using the Evernote app on a first-generation iPhone,
but they‘re quite snappy if you‘re within Wi-Fi range or if you have an iPhone 3G.)
The Evernote iPhone app makes me inordinately happy. It begins to bring to life a
vision that I (and plenty of other people) have had for some time, of an always-on
“information field” that surrounds us everywhere we go and helps us share our best ideas
and discoveries with one another. As I put it in a 2005 feature article for Technology
Review, this field would “enable people to both pull information about virtually anything
from anywhere, at any time, and push their own ideas and personalities back onto the
Internet—without ever having to sit down at a desktop computer.” I called this
phenomenon “continuous computing,” and it was clear to me even then that a service like
Evernote would have to be a key part of it.
Of course, Evernote isn‘t perfect. To my great frustration, the Mac version only
works on Leopard, and the company says it has no plans to port it back to Tiger—so if
you, like me, haven‘t yet upgraded to OS X 10.5, you‘re out of luck. You can‘t edit notes
from the iPhone, and you can‘t create notes from Web pages that you find using the
iPhone‘s Safari browser (but that‘s not Evernote‘s fault—it‘s because the iPhone doesn‘t
have a universal cut-and-paste function). And in one big respect, Evernote falls
grievously short of a Leonardo-style notebook: you can‘t alter or personalize the look of
an Evernote notebook or the overall Evernote interface in any way. I‘m sure many
Evernote users would like to customize their notebooks, especially their public ones, the
same way that some bloggers labor over the design of their blogs.
But I expect that some of these features will show up in future releases. After all,
the public Web version of Evernote is only three weeks old, and the iPhone app is even
newer than that.
Like so many other Web services these days, Evernote has a “freemium” business
model. There‘s a free, ad-supported version that allows you to store up to 40 megabytes
of information per month. But if you start to use Evernote a lot, you‘ll quickly exhaust
that 40 megabytes, and you‘ll want to upgrade to the premium version, which costs $5
per month or $45 per year and has a limit of 500 megabytes per month, with no ads to
clutter up your notebooks. That‘s enough for tens of thousands of text notes and Web
clips, hundreds of low-resolution camera-phone photos, or about 130 high-resolution
photos. With storage as cheap as it is these days, I wouldn‘t be surprised if Evernote soon
raised the premium limit to 1 gigabyte per month or more.
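Those estimates are easy to sanity-check with back-of-the-envelope arithmetic; in the Python below, the 3.8-megabyte figure for a high-resolution photo is my own assumption, not Evernote's:

```python
# Back-of-the-envelope check on the premium plan's numbers.
monthly, annual = 5, 45
print(monthly * 12 - annual)        # paying yearly saves $15

quota_mb = 500
photo_mb = 3.8                      # my guess for a 2008-era high-res JPEG
print(round(quota_mb / photo_mb))   # roughly 130 photos per month
```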
I think that if Leonardo were alive today, he‘d still work out his important ideas on
paper. But then he‘d scan and upload each page to a central repository like Evernote,
where they‘d be safe for as long as the Web itself exists, and where he could search for
what he needed without having to manually index or organize anything (always his weak
point). The Web, in a sense, is one giant laboratory for thinking—and Evernote gives us
each our own personal bench.
Author's Update, February 2010: I eventually upgraded my Mac from Tiger to
Leopard, so I now have Evernote on my everyday computer. But the funny thing is I
almost never use the desktop program, since the Web version of Evernote does everything
I need. Meanwhile, the iPhone now has a cut-and-paste function, which makes the
Evernote iPhone app much more useful. The upload limit for premium Evernote accounts
is still 500 megabytes per month.
15: Are You Ready to Give Up Cable TV
for Internet Video?
July 25, 2008
That‘s the question I‘ve been asking myself lately. Partly, it‘s because my 12-month
introductory rate from Comcast just expired, putting my yearly cable bill into the $1,000
range. That‘s a lot to stomach, especially considering that about a third of the content
coming down the co-ax is commercials.
A friend says that I just need to call the “retention specialists” at Comcast and talk
the price back down, but I‘m terrible at haggling. And it‘s not just about the cost. The
truth is that I just don‘t watch much traditional TV anymore.
I stopped watching TV news long ago; I get my daily dose of current events from
NPR and the Web. There are no good TV comedies these days, unless you count The
Daily Show. I can‘t stand reality shows. I get all my feature-length movies from Netflix.
And out of the current crop of dramatic series, there are only about eight that interest me.
(If you want to know, they‘re The Closer, Saving Grace, Heroes, Grey‘s Anatomy,
Friday Night Lights, Pushing Daisies, Terminator: The Sarah Connor Chronicles, and
Battlestar Galactica.) And half of the year or more, most of those shows aren‘t even on.
But my affection for those eight shows—and the convenience of having new
episodes show up on my DVR automatically, when they do come out—is the one thin
thread keeping me tethered to Comcast.
And now even that thread is unraveling. Much to the cable companies‘ dismay, I
imagine, the broadcast and cable TV networks—along with video sites such as Hulu,
Veoh, and the Apple iTunes Store—are now putting full episodes of all of the shows I
watch online.
Because the Internet video scene is evolving fast, you can never be sure which
shows are available where. But I took a few minutes to track down my favorite shows,
and discovered that they‘re all available from at least two different sources (and
remarkably, six of them are available at Hulu alone).
And of course, beyond these studio-produced series, there are terabytes of other
programming available from free video and movie aggregators like Joost, Miro, and
Lycos Cinema. There are also several good video search engines out there now, including
Blinkx, AOL‘s Truveo, Google Video, and Veveo‘s Vtap (for mobile devices). The point
is that content deprivation is no longer a reason to fear cutting your umbilical cord to the
cable companies.
But what if you, like me, are the proud owner of a new(ish) high-definition LCD or
plasma HDTV? Doesn‘t your beautiful screen deserve to be nourished with high-
definition cable? That‘s the last question I‘m struggling with. If I said goodbye to
Comcast, there are a couple of high-definition features I would definitely miss, including
GalleryPlayer—an on-demand slide show service that I wrote about here back in April—
and occasional Discovery Channel HD Theater specials such as When We Left Earth.
But here, too, there‘s a growing list of ways to circumvent the cable companies. A
basic one is to invest in a cable to connect your home computer to your HDTV, which
instantly turns your TV into a big external monitor. (Just be sure to change the video
settings in your computer‘s control panel so that you‘re getting full 1280×720 or
1920×1080 resolution on the HDTV.) Once you‘ve done that, you can download the PC-
based version of GalleryPlayer, which actually offers a much greater selection of images
than Comcast‘s on-demand channel does (though at a nominal price). And anything that
you can watch on your computer, you can now watch on the big screen as well.
If you‘re not feeling up to the process of connecting your PC to your TV—which
can still be a bit dicey for non-geeks—there are several convenient gadgets designed
specifically to grab video content from your computer or straight from the Web and show
it on your TV, including Apple TV, ZeeVee‘s ZvBox, and Roku‘s Netflix Player. Or you
can dispense with the big display altogether and just watch your shows on a wearable
device like the MyVu Crystal, though these devices don‘t yet feature high-definition
resolution.
So, as you can probably tell, I‘ve nearly talked myself into unbundling the phone,
Internet, and cable TV service I get from Comcast and dropping the cable part. At this
point, the company would have to offer a pretty steep discount to keep me on. (You‘ve
got my number, Comcast.) But I‘m still eager to hear readers‘ opinions on the subject.
Are you ready to cut the cord? Or have you gone cable-free already—and if so, what‘s it
like? Please vote in the poll below—and leave your detailed thoughts in our comment
section.
Author's Update, February 2010: I went ahead and cut the cord a few months after
writing this column. See Chapter 50 for my thoughts on life without cable TV (which has
been just fine—thanks in part to Netflix and the Roku Player).
16: Turn your iPhone or iPod into a
Portable University
August 1, 2008
You can‘t earn a college degree just by watching iPod videos (at least, not yet). But
if it‘s pure knowledge you‘re after, there‘s a veritable bounty of it available at the
“iTunes University” section of Apple‘s iTunes Store, tuition-free.
iTunes U isn‘t new—Apple launched it on May 30, 2007, the same day it
introduced iTunes Plus, a new category of DRM-free songs with higher-quality sound.
But the company has been adding new content to the “U” continuously, and the count of
colleges and universities contributing audio and video recordings of faculty lectures is
now up to 72, including such standouts as Carnegie Mellon, Duke, MIT, Northeastern,
Stanford, UC Berkeley, the University of Michigan, the University of Pennsylvania,
Vanderbilt, Wellesley, and Yale. There‘s also a variety of lectures and videos from
prominent non-academic institutions like American Public Media, KQED, the Museum
of Modern Art, the New York Public Library, the Smithsonian, WGBH, and the 92nd
Street Y.
The top download at iTunes U right now is one I highly recommend: “Really
Achieving Your Childhood Dreams,” a.k.a. “The Last Lecture,” by Randy Pausch, a
virtual-reality researcher at Carnegie Mellon. Pausch succumbed to pancreatic cancer on
July 25, but not before inspiring a worldwide audience with his infectious, irreverent
optimism and his stories about how he got to live out his own dreams of being in zero
gravity, working for Disney Imagineering, and the like. The unexpected viral response to
Pausch‘s lecture—more than 5 million people have viewed the YouTube version—got
him a book contract and even an appearance on Oprah, which is all pretty remarkable for
a guy who was a big software geek at heart. If you watch the video, I guarantee you‘ll
start envying the kids who were lucky enough to take his classes at CMU.
A lot of the material at iTunes U is like Pausch‘s lecture, falling somewhere
between education, as we traditionally think of it, and entertainment. But I think that‘s all
to the good. There aren‘t many other places where you can find a lecture by Stanford
neuroendocrinologist Robert Sapolsky on what baboons can teach us about stress and
coping, a conversation with the Dalai Lama about nonviolence, and a Charlie Rose
interview with Steve Martin. (Not to mention a presentation by Xconomy editor-in-chief
Bob Buderi and Xconomy Seattle editor Greg Huang about their book Guanxi.)
Apple deserves serious credit for bringing all of this content together and making it
easy to transfer it to your iPod or iPhone (or just watch on your laptop). Of course, it‘s
not an entirely philanthropic operation—the more free, high-quality content that‘s
available for iPods and iPhones, the more people will want those devices. But Apple
wouldn‘t have been able to build iTunes U at all if it weren‘t for the effort and money
that schools like Stanford and MIT have already invested over the past half-decade in
digitizing course materials, including lectures, notes, and readings, and making them
available to the public for free.
MIT‘s OpenCourseWare project, launched in 2002, was the pioneer in this area; it
offers materials for 1,800 courses from MIT alone, from Walter Lewin‘s legendary 8.01
freshman physics course to an undergraduate-led course for high school students on
Douglas Hofstadter‘s geek-cult classic Gödel, Escher, Bach. MIT‘s example is catching
on worldwide; the global OpenCourseWare Consortium now spans 26 countries and
includes 150 universities and affiliated organizations, each of which has committed to
publishing at least 10 courses online. (It‘s worth noting that most of this material is
available straight from the websites of participating universities, so you don‘t have to go
through iTunes, if you don‘t happen to be an Apple junkie.)
There‘s one downside to iTunes U. Considering the richness of the material it
contains, it‘s a real shame that the interface is so clunky, making content hard to find.
This might surprise you, since Apple‘s devices and programs are generally known for
their friendly interfaces. But the problem, as I see it, is that iTunes was originally
developed as a music store. As Apple tries to use it to distribute so many other kinds of
content—album notes, audio books, TV shows, music videos, movies, podcasts, iPod
games, iPhone apps, and lectures—it‘s showing signs of strain.
To take just one example: the iTunes Store‘s search engine is built to err on the side
of inclusiveness rather than exactitude, returning many results that represent spelling
variations on your original query. This is very helpful when you‘re looking for that band
you heard on the radio last week but you can‘t remember their exact name. But it‘s not so
great when you‘re searching iTunes U for lectures about Leonardo da Vinci and you get
back hundreds of results for Leonard Cohen (the singer-songwriter), Leonard Maltin (the
film critic), and Leonard Susskind (the closest of the three—he‘s a Stanford physics
professor).
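The loose matching at work here is easy to mimic in a few lines. Below is a toy sketch using Python‘s standard difflib module; it‘s my own illustration of a similarity-cutoff search, not Apple‘s actual ranking code, but it reproduces the Leonardo-versus-Leonard problem nicely.

```python
import difflib

catalog = ["Leonard Cohen", "Leonard Maltin",
           "Leonard Susskind", "Leonardo da Vinci"]

def fuzzy_search(query, items, cutoff=0.5):
    # Return every item whose spelling is "close enough" to the
    # query, best matches first -- inclusiveness over exactitude.
    return difflib.get_close_matches(query.lower(),
                                     [item.lower() for item in items],
                                     n=len(items), cutoff=cutoff)

hits = fuzzy_search("Leonardo", catalog)
# All four names come back, and "leonard cohen" (a shorter, closer
# string by edit similarity) actually outranks "leonardo da vinci".
```

With the cutoff loosened like this, the exact subject of the query doesn‘t even rank first, which is roughly the experience of searching iTunes U for da Vinci lectures.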
My other complaint—though it‘s a minor one, keeping in mind that the iTunes U
cornucopia is entirely free—is that you can‘t get detailed information about the individual
programs, short of actually downloading them and viewing/listening. The titles of the
recorded or videotaped lectures are often cryptic, and the ―Comment‖ area of the iTunes
interface isn‘t large enough to convey a useful synopsis. It‘s yet another awkward result
of shoehorning things like lectures into the music-store format, in which everything is a
―track‖ that has to have ―artist‖ and ―album‖ names and a ―genre.‖
But before your next trip to the gym, I urge you to stop by iTunes U and download
a lecture or two to your iPod. It‘s a great way to fatten your brain while slimming your
fanny.
***
Before I close, let me return for a moment to last week‘s column, where I agonized
over whether to give up cable TV and just watch my favorite TV shows on the Internet.
In the accompanying poll, a surprising number of people—35 percent, so far—say
they have already done just that. More than 52 percent say they could imagine giving up
their cable TV subscriptions, but simply haven‘t yet, and 10 percent say they‘d consider
abandoning cable if there were more content available online. Only 2.5 percent say they
could never give up cable.
A couple of people left comments saying they were pretty happy subsisting on a
mix of Netflix DVDs and videos from sites like Hulu and iTunes. And a few more said
they‘d quit their cable habits as soon as they found ways to get their favorite TV content,
such as English Premier League soccer, online.
Overall, I‘d say that‘s pretty bad news for the Comcasts, Coxes, and Time Warners
of the world. Thanks to readers, I‘m feeling emboldened to take the plunge myself. In a
future column, perhaps, I‘ll report back on life post-cable.
17: In Defense of the Endangered Tree
Octopus, and Other Web Myths
August 8, 2008
This March marked the 10th anniversary of the campaign to save the Pacific
Northwest Tree Octopus from extinction. If you‘re not familiar with the elusive tree
octopus, it‘s an arboreal cephalopod found in the temperate rainforests of the Olympic
National Park west of Seattle. Every spring the creatures migrate from their lairs in the
forest canopy to their ancestral spawning grounds in the Hood Canal; the rainy climate
keeps their skin moist the rest of the year. But logging and suburbanization have
decimated this gentle species‘ habitat and reduced the breeding population to critically
low numbers, leading some to argue that it should be placed on the Endangered Species
List.
Do I need to add at this point that the Pacific Northwest Tree Octopus is completely
fictional? Apparently, I do. Lyle Zapato, a Washington-based author and Web publisher,
invented the tree octopus in 1998. The creature is the star of an extensive and hilarious
parody website that has, improbably, worked its way into the center of the debate over
literacy in the Internet age.
The question is whether children raised on the Web can parse reality properly. And
every so often the educational establishment and the mainstream media—most recently,
the New York Times—drag up Zapato‘s site as an example of the kind of seemingly
authoritative material that gives the Web a bad name, by fooling unsuspecting young
Internet users into thinking it‘s for real. Edutopia, the magazine of the George Lucas
Educational Foundation, recently denounced the tree octopus site as full of
―pseudoscience‖ and ―outright lies.‖
To me, such indignation over the untrustworthiness of the Internet is both amusing
and a little sad. Yes, the Internet is a fertile breeding ground for hoaxes and
misinformation. Yes, children must be taught how to sort truth from fiction. But come
on! Without the occasional tree octopus, the Web would be a far poorer place.
The thing about hoaxes is that the best ones make you think a little harder about
why you believe what you believe. One of my favorite examples—and I have to admit
that I fell for it briefly, about five years ago—is Dog Island, an archipelago off the coast
of Florida where people supposedly send their dogs to roam free. ―Separated from the
anxieties of urban life, dogs on Dog Island live a natural, healthy, and happy life,‖ the
Dog Island site claimed. In reality, that‘s all Dog Island was—a spoof website. But it
sounded so nice, especially to a dog owner suffering from occasional guilt about keeping
his pooch cooped up inside all day. (Then I remembered that my own dog is such a
scaredy-cat about outdoor noises that when we go camping he wants to sleep in the car.)
The tree octopus‘s transformation from harmless spoof into poster child for the
hazards of the Web apparently began in 2006, when University of Connecticut researcher
Donald Leu used the site in a study of online literacy among seventh graders. Leu asked
25 students from middle schools in Connecticut to review Zapato‘s site. Interviewed
later, all of the students said they believed that the tree octopus was real. Few of the
students, Leu reported, could pinpoint the obvious clues that the site is a spoof, such as
the information that the natural predator of the tree octopus is the Sasquatch. And even
after Leu told them the site was a fake, a handful of the students continued to insist that
the tree octopus is real.
The Times cited Leu‘s findings in a hand-wringing feature article two Sundays ago
asking whether reading on the Web is really reading at all. ―Some argue that the hours
spent prowling the Internet are the enemy of reading—diminishing literacy, wrecking
attention spans and destroying a precious common culture that exists only through the
reading of books,‖ the piece said. The article‘s conclusion from Leu‘s study? ―Web
readers are persistently weak at judging whether information is trustworthy.‖
But I think there are several other interpretations for Leu‘s findings, not all of them
so troubling. One is the possibility that education professors are persistently weak at
judging whether seventh graders are pulling their legs. Another, more likely lesson is that
kids are simply open-minded, and naturally receptive to far-fetched ideas until they have
evidence to the contrary.
And isn‘t that the way kids should be, given how we‘re inundated every day by
scientific findings that are more fantastic, in their way, than Zapato‘s fiction? It‘s now
known, for example, that at the center of the Milky Way galaxy there is a colossal black
hole with a mass 4 million times that of our sun. Back here on Earth, there are fish that
catch other fish using tongue-like limbs that have evolved into fishing reels, complete
with bait. There are also microbes that thrive in the hellishly hot volcanic vents of the
ocean floor. The Department of Defense, in the course of siting a network of low-
frequency antennas for submarine communications in northern Michigan, discovered a
single underground mushroom extending across more than 3,000 acres of land.
Moreover, there is a mysterious dark energy that is apparently causing the universe to
expand at an accelerating rate. You can‘t turn on the Discovery Channel without running
into half a dozen such jaw-droppers, any one of which is harder to believe than the idea
that octopuses could live in trees. (Actually, one of the facts in this paragraph isn‘t quite
true. Can you tell which one?)
Wikipedia classifies the Pacific Northwest Tree Octopus website as an ―Internet
hoax.‖ But I prefer to think of it as an experiment with reality—a hybrid of satiric humor
and science fiction, made more piquant by the fact that, on the surface at least, it purports
to be true. Skillful hoaxsters mix and match factual references into new blends that are
just plausible enough to tweak our sense of reality—and to underscore, in the process,
how bizarre life really is.
***
I‘ll be away on vacation in Alaska next week, so my next World Wide Wade
column will appear on August 22. I‘m mainly visiting family in Fairbanks. But you‘ll be
glad to know that I‘ve also signed up for a three-day photo safari in search of the elusive
and ferocious Denali Tundra Salmon.
Author's Update: Alone among my columns, this one has attracted the attention of
quite a number of school-age children. Unfortunately, judging from the many comments
they've left, most of them totally misunderstand my argument, or haven't even tried to
follow it. "Hey this is ssooo stupidly funny! how could you actually believe this all the
photos are ssooo computerized and fake!" says one commenter named Emily. "Haha this
is soooo fake," says another named Lexi. Yes, Emily, yes Lexi, the tree octopus is a fake.
That's the whole point.
18: Pogue on the iPhone 3G: A Product
Manual You Won’t Be Able to Put Down
August 22, 2008
The Apple iPhone is easily the most powerful, multitalented phone ever marketed.
As I and many others have pointed out, it‘s really a handheld multimedia computer, with
camera and (in the iPhone 3G) GPS functions to boot. So it‘s a little baffling that the only
set of instructions you get when you buy an iPhone is a flimsy color pamphlet called
―Finger Tips‖ that looks more like an extended magazine ad. This thin document is
tucked into the box so cleverly that I didn‘t even realize it was there until after I‘d bought
a 3G and was putting my first-generation iPhone back in its original container to sell it.
It‘s true that Apple is famous for making hardware and software that‘s so user-
friendly you usually don‘t have to read a manual to get started. But the iPhone doesn‘t
work like a Mac. For one thing, it doesn‘t come with built-in help menus. And it doesn‘t
even behave like other mobile phones: its external buttons aren‘t labeled, and simply
making a phone call requires you to string together at least five non-obvious actions
(pressing the Home or Sleep/Wake buttons, flicking the ―unlock‖ slider, opening the
phone application, opening the phone keypad or contact list, and entering a phone
number or selecting a contact).
As a concession to those who need a bit of hand-holding while getting accustomed
to the iPhone‘s radically original interface, Apple has created a 14-megabyte PDF user
guide, which you can download free from the Apple website. But even that document is
rather terse, leaving you to discover many of the iPhone‘s coolest details, tricks, and
shortcuts on your own—which, I guarantee, you won‘t.
Fortunately, computer publisher O‘Reilly Media and New York Times gadget
columnist David Pogue came to the rescue last summer with iPhone: The Missing
Manual. O‘Reilly‘s tag line for the Missing Manual series is ―The book that should have
been in the box,‖ and in the case of the iPhone, it‘s absolutely accurate. Indeed, there are
even more details to master now that the iPhone 3G is out, along with the 2.0 version of
the device‘s firmware, which has the added capability of running third-party software
applications. Last week O‘Reilly published a second edition of Pogue‘s book containing
everything you need to know about the iPhone 3G and the new App Store, where users
can choose from more than 1,500 third-party programs.
O‘Reilly sent me a review copy of the book a couple of weeks ago. As time was
running out this week to write my World Wide Wade column, my first impulse was to
merely skim the book. But the farther into it I got, the more I wanted to slow down to
enjoy Pogue‘s witty writing and ensure I didn‘t miss any interesting details.
I wound up reading the book cover to cover—or at least, from the beginning of the
file to the end of the file (I was using the PDF version). In the end, I would gladly have
paid the $24.99 cover price ($16.49 at Amazon) just for the book‘s excellent collection of
time-saving tips for heavy iPhone users.
Maybe I‘m just dense, but even after owning an iPhone for 13 months, the
following tidbits were still revelations to me:
When using the device in iPod mode, you can skip ahead to the next song by
pinching the clicker on the earbud cord.
You can fast-forward or rewind through a song by pressing and holding, rather than
tapping, the onscreen ―next‖ and ―previous‖ buttons.
When using the on-screen keyboard, you can type a period by tapping the space bar
twice, without having to open the secondary punctuation/numerical keyboard.
The iPhone‘s camera doesn‘t take a picture until you remove your finger from the
onscreen shutter button. So to avoid bumping the device and blurring your photos, you
should frame your photo while holding your finger on the shutter button, lifting it only
when you‘re ready to snap the picture.
In the Safari Web browser, the keyboard includes a very convenient ―.com‖ key that
makes it easy to type new URLs in the address bar. I knew that—but what I didn‘t know
was that if you hold down the ―.com‖ key, a balloon will pop up allowing you to choose
from ―.net,‖ ―.org,‖ or ―.edu‖ as well.
In Safari, if you hold your finger on a link for a few seconds, the full URL will pop
up in a balloon, giving you more information about where the link leads—similar to
mousing over a link in a PC browser.
If you want to save a snapshot of whatever‘s showing on the iPhone screen—say, a
still from a video, or a Web page that you want to remember later—you can grab the
image and save it to the phone‘s photo album by holding down the Home button and
pressing the Sleep/Wake button once.
The book contains a wealth of additional insights, tricks, and workarounds. And
even though Pogue only had a few weeks to assemble the chapters on the iPhone‘s
newest features, like the App Store, he was able to compile some terrific
recommendations about the most useful third-party applications. I‘ve already had a good
time using one of them: AirMe, which takes snapshots using the device‘s camera and
uploads them directly to my Flickr account without stripping out the geotagging data, as
the iPhone inexplicably does if you simply e-mail your photos to Flickr using the phone‘s
built-in mail application.
If you‘ve read Pogue‘s Posts at the Times, then you know that his technology
descriptions are thoroughly researched and crystal clear, and are enlivened by a pleasant
combination of good-natured sarcasm and mildly corny jokes. The book reads the same
way—even when Pogue is wading, for completeness‘ sake, through such weighty matters
as resolving calendar conflicts and syncing the iPhone with corporate Microsoft
Exchange networks.
Amazon says it won‘t have the print version of iPhone: The Missing Manual, 2nd
Edition in stock until September 29. But you can buy a PDF version directly from
O‘Reilly right now—and if you want to get the most out of your iPhone, you really
should.
Here‘s one more tip: From the iPhone‘s mail program, you can open attachments of
all sorts, including PDFs. So if you download the electronic version of the book and e-
mail it to yourself, you can read it on your iPhone.
19: Photographing Spaces, Not Scenes,
with Microsoft’s Photosynth
August 29, 2008
Up to now, software giant Microsoft has largely missed out on the digital
photography revolution. The most popular photo editing tools come from Microsoft
competitors like Adobe and Apple. Flickr, every geek‘s favorite photo-sharing site, was
invented in Microsoft‘s backyard in Vancouver, BC, but went on to become part of
Yahoo. And Corbis, Bill Gates‘ bold early-90s experiment in licensing digital images for
high-resolution displays in consumers‘ homes, devolved into an online stock image
house.
But the hottest new twist on digital photography is, unexpectedly, a Microsoft
product. It‘s a powerful Web service called Photosynth that can analyze multiple photos
of a common object or space—say, Michelangelo‘s David, or Times Square in New
York—and intuit a 3-D model of the depicted subject, which then acts as the scaffolding
for an interactive photo tour. The creation of a small Redmond-based product group
called Live Labs, Photosynth is more than cool enough to earn Microsoft greater
mindshare among photographers, both serious and amateur.
I‘ve been playing around with the tool since Microsoft started allowing members of
the public to create their own ―synths‖ on August 21. I would call Photosynth almost
post-photographic, in the sense that it abandons any allegiance to the idea of a single,
definitive image (goodbye, Doisneau‘s ―Kiss by the Hôtel de Ville‖ or Adams‘s
―Moonrise, Hernandez, New Mexico‖) in order to exploit the abundance of images on the
Web, or, these days, on any digital photographer‘s hard drive. It assembles related images
into interactive montages that can be navigated almost as if the user were walking
through or around the photographed space or object.
For example, in early demos of Photosynth, Microsoft showed how hundreds of
images pulled from Flickr could be assembled into massive 3-D montages of Notre
Dame cathedral in Paris or the Trevi Fountain in Rome. Words don‘t really suffice to
explain Photosynth: to understand it, you should just go to the Photosynth website and
explore a few synths. I especially invite you to check out three synths I created last
weekend around Boston, representing my South End apartment, Copley Square, and the
Christian Science complex.
Embedded versions of these synths can also be found on the following pages of this
article. Sadly, the special Photosynth viewer runs on Windows machines only, inside the
Internet Explorer and Firefox browsers; this being a Microsoft project, there isn‘t yet a
version of the program that works on Macs. But at least the Live Labs people are
apologetic about that: when you try going to the Photosynth site on a Mac, you get a
message that says ―Unfortunately, we‘re not cool enough to run on your OS yet.‖
When you start exploring a synth, you‘ll notice that mousing over an individual
image brings up ghostly white outlines, indicating that the synth contains other images
presenting the same objects from different angles. Clicking on one of those outlines (or
on the arrows around the screen) will take you to those other images, but not instantly:
Photosynth provides a smooth, animated transition, as if you were merely turning your
head or approaching an object for a closer view.
The best synths—that is, those with the most convincing transitions between images
and the most complete sense of spatial unity—are those constructed from hundreds of
photos taken with Photosynth in mind, like my three synths. The software‘s matching
algorithms have more to work with when there‘s a lot of overlap between adjacent
photos. So if you want to make a synth of an object like a sculpture, for example, you
have to move gradually around the object, taking a picture every 15 degrees or so.
(Conversely, to make a 360-degree panorama, you should take at least 24 separate photos
as you spin in place.) If you‘re interested in making your own synths, the Photosynth
website has an entertaining video with more pointers.
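The arithmetic behind those capture guidelines is just a matter of dividing up the circle. A back-of-the-envelope helper (my own, not anything from the Photosynth documentation) makes the 24-shot figure drop out:

```python
def capture_angles(coverage_deg=360, step_deg=15):
    """Camera headings for photographing an arc, one shot per step."""
    return list(range(0, coverage_deg, step_deg))

# Circling a sculpture (or spinning in place for a panorama) at
# 15-degree steps takes 360 / 15 = 24 photos:
angles = capture_angles()   # [0, 15, 30, ..., 345]
shots = len(angles)         # 24
```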
Apparently I absorbed the video‘s lessons well, because when I finished uploading
the 300 photos I took of the Christian Science complex last Saturday, Photosynth
declared them to be ―99% Synthy.‖ My apartment photos were ―95% Synthy‖ and my
Copley Square photos were ―92% Synthy.‖ Before you create your own synths, however,
be aware that the program is already proving incredibly popular, and that as a result, Live
Labs‘ servers have been largely overwhelmed by uploads. Each of my synths took about
8 hours to upload and process—and each failed at least once along the way, requiring me
to start over. The Photosynth team says it‘s busy optimizing the system to cope with the
unexpected onslaught.
When you‘re exploring your finished synths in the Photosynth viewer, there‘s a
hidden shortcut—the ―P‖ button on your keyboard—that reveals something truly novel
and jarring: the ―point cloud‖ that makes up the 3-D scaffolding behind the individual
images in a synth. The points represent tiny patches of color or texture that, in
Photosynth‘s judgment, are shared across multiple images. (This is actually the key to
producing a 3-D effect: by comparing images in which the same details are depicted from
multiple angles, the software is able to infer the existence of 3-D structures and render
them in space. It‘s not so different from the way our brains create stereo, 3-D views from
the binocular images captured by our eyes. Technology Review‘s March/April issue
includes a good explanation of the whole process behind Photosynth, which was
conceived by University of Washington graduate student Noah Snavely and built into a
workable Web-based system by Snavely and Microsoft programmer Blaise Agüera y
Arcas.)
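The geometry gets hairy in the general case, but the core intuition, that a feature‘s displacement between viewpoints encodes its depth, fits in a few lines. Here is the textbook two-camera special case, a drastic simplification of Photosynth‘s full structure-from-motion pipeline, with numbers I‘ve invented for illustration:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic stereo relation: depth = f * B / d.

    focal_px     -- focal length of the camera, in pixels
    baseline_m   -- distance between the two shooting positions
    disparity_px -- how far the matched feature shifts between images
    """
    return focal_px * baseline_m / disparity_px

# A detail that shifts 40 px between two photos taken half a meter
# apart, with an 800 px focal length, sits about 10 m from the camera:
depth = depth_from_disparity(800, 0.5, 40)  # 10.0 meters
```

Photosynth solves a far messier version of this, with hundreds of cameras at unknown positions, but each point in the point cloud ultimately comes from the same kind of cross-image matching.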
The more times an object appears in a synth, the denser that object‘s point cloud
will be. For example, on the coffee table in my apartment, I have two handmade ceramic
bowls sitting on a Southwestern-style mat. I took a bunch of pictures of the table while I
was preparing my synth, and in the finished point cloud, the mat and the bowls appear
like a bright little galaxy of orange and blue stars.
The limitation of Photosynth is that once you‘ve spent some time viewing your
synths or others‘ and exploring all the pretty point clouds, it‘s not clear what to do next.
(As the Technology Review article‘s headline asked, ―It is dazzling, but what is it for?‖) It
would certainly be a useful tool for learning your way around places that you intend to
visit—why not tour one of the 200 synths already available for Paris, for example, before
your next trip to the City of Light? In fact, Live Labs leader Gary Flake told TR that
Microsoft may attempt to integrate Photosynth into the company‘s virtual-globe program,
Microsoft Virtual Earth, as a kind of shortcut to creating a 3-D metaverse.
But as a tool for individual photographers, Photosynth is still in its earliest stages.
For now, you can‘t edit or add to existing synths. You can‘t assist the program by
manually placing the orphaned photos it couldn‘t recognize into the jigsaw. You can‘t
export the 3-D data, as architects who use CAD drawings or virtual-world builders might
wish to do.
I‘m sure that there are a few artist-geeks out there dreaming up surprising and
informative ways to use their allotted 300 photos (uploading more images than that tends
to stymie the program). But I think the real significance of Photosynth is that it‘s helping
to touch off the next major shift in photography: from 2-D to 3-D. In short, it‘s now
possible to document not just scenes but spaces. And no matter what kind of image-
processing software is out there on the Web, most photographers will need more time to
get their heads around that idea.
20: What Web Journalists Can Learn
from Comics
September 5, 2008
While the tech-blog world is exhausting itself testing and writing about Google
Chrome, the new open-source Web browser released by the search giant on Tuesday, I‘m
still just having fun paging back and forth through the 38-page Scott McCloud Web
comic that Google commissioned to explain the whole project. A lot of Silicon Valley
companies, when they‘re launching big new products, will rent a hotel ballroom, erect a
glitzy set, and invite a bunch of journalists and pundits to a scripted dog-and-pony show.
Chrome‘s launch may mark the first time in history that a company simply hired a comic
book artist instead.
Google couldn‘t have found a likelier candidate than McCloud, who is the author of
Understanding Comics (1993), Reinventing Comics (2000), and Making Comics (2006),
and has written (or should I say drawn?) extensively about how the Web is expanding the
boundaries of comics as a genre. It‘s a perfect pairing to see McCloud—who has done
comics on topics as technical as the constraints imposed on digital-comics authors by
HTML tables—writing about something as fundamental to the Web as the browser itself.
If you haven‘t heard the story behind Chrome already, it‘s Google‘s attempt to
update the very notion of the Web browser—which was, after all, invented 15 years
ago—to reflect the realities of the Web 2.0 era. These days, if you‘re on the Web,
chances are you‘re interacting with an application rather than simply consuming content.
―People are watching and loading videos, chatting with each other, playing Web-based
games…all these things that didn‘t exist when browsers were first created,‖ Google
software engineer Pam Greene points out in McCloud‘s comic. (She‘s one of the many
Googlers whose words McCloud drew upon for the comic. His drawings of her remind
me a lot of the Jodie/Julie character in McCloud‘s terrific experimental Web comic, The
Right Number.) Chrome is designed to make such applications run faster and more
reliably, and to protect users and their computers in the process—in part, by separating
the activity occurring on each open browser tab into its own process, as if it were a
separate program. (As McCloud explains, current browsers like Internet Explorer and
Mozilla Firefox attend to the scripts running in each tab one at a time, moving between
them serially—which is why the more tabs you have open, the slower your browser gets.)
I won‘t go into the real details—plenty of other bloggers and journalists have done
that this week. What‘s amazing about McCloud‘s Web comic is that he‘s able to distill
some fairly high-level points about things like multi-process architecture, memory
fragmentation, rendering engines, virtual machines, hidden class transitions in Javascript,
and incremental garbage collection into a few panels in a comic, and make it all feel fun
and non-threatening. Take it from a longtime technology writer: explaining a new
technology‘s significance while getting the details right and keeping it all accessible to
your Aunt Mae is a difficult feat. But a lot of us tech journalists could take lessons from
McCloud, who doesn‘t bring in a concept unless he can clarify through a clever
combination of graphics, iconography, and text.
So, a lot of what I‘m saying here boils down to one craftsman admiring another.
Envying, even: the comic medium gives McCloud access to a lot of visual devices and
idioms that are denied to us lowly copywriters. One of McCloud‘s frequent tricks is to
make the Google engineers part of the very diagrams he uses to explain Chrome‘s new
features. Every time you open a blank tab, for example, Chrome populates it with small,
clickable tiles representing your most-visited Web pages (the program figures that you
were probably on your way to one of those pages anyway). To explain what‘s happening
on this page, McCloud puts a couple of Googlers inside the tiles, not unlike those
washed-up actors who used to appear on Hollywood Squares. In other places, the Google
guides are climbing around on flow-chart boxes or perched on the borders of the comic‘s
panels.
Given how long McCloud has been working on various forms of Web comics and
how popular his books have been, it‘s odd that his example hasn‘t caught on more
widely. It‘s true that traditional comic publishers like Marvel are finally using Flash and
other Web-based technologies to put their classic superhero comics online. And in the
non-Web world, comics and graphic novels are still in the midst of a renaissance that‘s
been underway for more than a decade now, even crossing over into film (e.g., 2003‘s
American Splendor, based on the comic books of Harvey Pekar). But I don‘t have the
sense that many comic artists are creating the kinds of new Web-based experiences
McCloud was hoping they would back in 2000-2001, when he published ―I Can‘t Stop
Thinking,‖ a series of Web comics that continued the themes in Reinventing Comics—
especially, his speculations about the future of digital comics.
In one great strip from ―I Can‘t Stop Thinking,‖ for example, McCloud examined
how the endlessly scrolling nature of a Web page—he called it the ―infinite canvas‖—
might allow comic artists to play with readers‘ expectations about the sequential nature of
comics, perhaps by connecting panels via unconventional types of lines, links, paths, or
trails. The Right Number used a unique zooming interface to get from one panel to the
next—and this idea has found an unlikely reincarnation in the form of Seadragon, an
experimental Microsoft program that uses zooming to ease the navigation of massive
amounts of graphical information. But while software engineers and information
architects may be busy experimenting in these directions, I‘m not aware of a lot of artists
who are.
Perhaps Web comics aren‘t flowering (outside of McCloud‘s opus) because
drawing well is simply harder than writing well. Or perhaps it‘s because we still equate
comics with Superman and Batman. But a blogger at the Dublin, Ireland, Web design
company iQ Content noted this week that the usual association between comics and low-
brow superhero stories is a Western thing. ―In some cultures, notably Japan, comics (or
Manga) are not only an accepted form of entertainment for people of all ages, they are
used as product instruction manuals and even on government tax forms,‖ iQ Content
senior analyst John Wood wrote. That sounds pretty smart to me. There are some cases
where you just have to RTFM, as they say—and I think we‘d all be happier if the M
stood for Manga.
Not everyone is enchanted by the McCloud comic. It has already inspired a savage
(but amusing) parody over at the website of Conde Nast‘s Portfolio magazine, which
argues that the comic simply panders to Google‘s geeky constituents, and that some
software concepts are so arcane that they don‘t lend themselves well to illustrations. And
there have been a few complaints that at 38 pages, McCloud‘s comic is too long. But I‘m
with Wood, who writes, ―Personally, I‘d rather wade through a 30+ page comic than 15
pages of technical detail, randomly salted with marketing bumpf.‖
In short, the comic leaves a stronger, clearer impression than any writeup could
have. Now we get to see whether Chrome is really as shiny as it seems in McCloud‘s
drawings.
21: ZvBox’s Unhappy Marriage of PC
and HDTV
September 12, 2008
I really wish that I could write a positive review of the ZvBox—the appliance from
Littleton, MA-based ZeeVee that taps into your house‘s TV cables, allowing you to
watch videos playing on your Windows PC from any high-definition TV in your house.
When I first profiled ZeeVee back in May, I had high hopes for the device, which finally
hit stores in early August. As you know if you‘ve been reading this column regularly, I‘m
on the edge of giving up my home cable TV subscription, and a gadget like the ZvBox
seemed to offer a perfect substitute: a way to get my favorite shows for free over the
Internet but still be able to watch them on the big screen in my living room. On top of all
that, the people at ZeeVee are super-nice: they went well beyond the call of duty as I was
doing the research for this review, loaning me not only a review unit but the extra
hardware I needed to make the system work (more on that below), and calmly fielding
several panicked calls for assistance.
Alas, I can‘t recommend this first version of the $499 ZvBox to the general home
user. The company's "localcasting" concept is great. But you can only expect the average
consumer to cope with so many kinks, adjustments, workarounds, and other snafus—and
the ZvBox just generates too many.
To be fair, most of the problems I ran into while testing the ZvBox are not
technically ZeeVee‘s fault. The issue, at its most basic, is that TVs are TVs, and
computers are computers. They were not designed to interact. In most homes, they aren‘t
even in the same room—which means that connecting them is going to be a kludge, no
matter how you slice it. And while the latest high-definition TVs come with all sorts of
ports for digital input, they‘re still programmed to expect video signals very different
from the ones generated by most PCs. (The vertical resolution of most HDTVs, for
example, is either 720 or 1,080 pixels, while many PCs are limited to a vertical resolution
of 600, 768, or 800 pixels.) When you throw your home‘s coaxial cable network and an
operating system as cumbersome as Windows into the mix—well, let‘s just say that
ZeeVee is biting into a very complicated problem, and it wouldn‘t be surprising if it took
a couple of generations of hardware experimentation to thoroughly chew it up.
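To make the resolution mismatch concrete, here is a rough sketch of what fitting a typical PC desktop onto a 720p panel involves. (This is my own illustration, not ZeeVee's actual approach, which resets the PC's output mode instead of scaling it.)

```python
def fit_to_hdtv(src_w, src_h, dst_w=1280, dst_h=720):
    """Scale a PC desktop to fit an HDTV panel without distortion,
    preserving aspect ratio; the leftover area becomes black bars."""
    scale = min(dst_w / src_w, dst_h / src_h)
    out_w, out_h = round(src_w * scale), round(src_h * scale)
    bars = ("pillarbox" if out_w < dst_w
            else "letterbox" if out_h < dst_h
            else "none")
    return out_w, out_h, bars
```

A standard 1024 x 768 desktop, for instance, shrinks to 960 x 720 with black bars on the sides, which is exactly why a box like the ZvBox prefers to force the PC into a native 1280 x 720 mode instead.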
Being the kind of person who actually enjoys sitting amidst the dust bunnies behind
the entertainment center, puzzling out the dozens of cables connecting all of my
audiovisual and gaming gear, I thought I‘d be up to the challenge of installing the ZvBox.
But my first moment of trepidation came when I opened the box and discovered a "Get Going Guide" that included 12 dense pages of diagrams, kicking off with a glossary of "fundamental technical concepts."
The only really important concept, as it turns out, is that the ZvBox takes video and
audio signals from your computer—signals that would ordinarily go to an external VGA
monitor and speakers—and transmits them instead over an empty channel on your
house‘s coaxial cable system. If you tune your TV to that channel, you‘ll see and hear
whatever is happening on your PC. ZeeVee calls this localcasting.
My first problem—and it‘s no fault of ZeeVee‘s, although it does limit the potential
market for the ZvBox—was that I live in an apartment building. I have a cable outlet in
every room, but I have no idea where cable actually enters my apartment, which you need
to know to set up the ZvBox. (You have to add a little widget called a channel filter to the
network to create that needed empty channel.) So to try out the ZvBox, I had to bypass
my apartment‘s built-in cables and connect the device directly to my HDTV. This
defeated the whole purpose of the localcasting approach—in effect, turning the box into a
very expensive VGA cable—but I didn‘t have any other way to test its other features.
My second problem was that my home PC is a Dell Inspiron 8600 Windows XP
laptop that I purchased in 2004. It came with an Nvidia GeForce 5200 video card.
Remember that resolution-mismatch issue I mentioned above? ZvBox deals with it by
adjusting your PC‘s output resolution to something that your HDTV can deal with—
namely, 1280 x 720 pixels. Unfortunately, many older graphics cards can‘t reset the
display resolution to an arbitrary number like 1280 x 720.
Again, this problem wasn‘t ZeeVee‘s responsibility. But it could obviously prevent
quite a few people from actually using the ZvBox in their homes. And when I first got the
ZvBox review unit, there wasn't a word about this potential major complication in the "Get Going Guide" or in the support section of the company's website. Since then, ZeeVee has put a warning into its online FAQ saying that "If the card won't allow 1280 x 720 output even after updating the driver, it will be necessary to replace the video card for ZvBox to work."
Needless to say, I wasn‘t about to replace the video card in a four-year-old laptop.
At that point I was close to giving up on the ZvBox, and I wrote to ZeeVee, saying I‘d
have to return the unit and cancel my review. But then the company generously offered to
ship me a spare Windows Vista desktop PC.
Once the ZvBox had an up-to-date PC to work with, I was able to complete the
process of installing the box and optimizing the PC signal for my HDTV. Finally, I‘d
have a chance to see how Web videos look and sound when the computer‘s signal was
flowing through the ZvBox. (I would have made some popcorn, but I can‘t have popcorn
again until my braces come off in November.)
And the system actually worked—once or twice. I went to Netflix.com, where
many videos are available for instant viewing via a Windows-only streaming video
player, and watched La Notte, the black-and-white Michelangelo Antonioni classic from
1961. The video and sound quality were fine—the same as what you‘d get if you were
watching on a PC screen, just bigger. Using the ZvBox‘s remote control, I was able to
pause, play, and rewind just as if I‘d been sitting at my PC.
This, then, was the holy grail; I was consuming Internet video on my big screen.
Now that so many current TV series are available as iTunes downloads or from streaming
video sites such as Hulu, the capabilities promised by ZeeVee are exactly what‘s needed
to liberate Internet content from our PCs. In theory, it allows users to watch their favorite
shows on the big-screen TVs that they‘ve all shelled out so much for, while avoiding the
extortionate prices charged by the cable TV monopolies. And by far the coolest thing
about ZvBox—if you get to this point—is its ZViewer software, a kind of video portal
with big fat buttons that make it easy to use the ZvBox remote to browse and watch
videos from Hulu, YouTube, ABC, and quite a few other sources.
Sadly, practice hasn't quite caught up with theory. I ran into a couple more serious snags with the ZvBox—one in the "annoying" category, and the other in the "I give up" category. The annoying problem was that the ZvBox could not consistently get the PC
picture lined up with the edges of my HDTV screen; part of the PC desktop was always
bleeding off the TV (invariably, a part that contained a crucial element like a button for
closing a window, or the ZvBox‘s own icon in the system tray). So every time I started
up the PC and the ZvBox, I had to go through a multi-step process to reset the alignment.
Much worse was the sound problem. Usually, there wasn‘t any. My La Notte
viewing turned out to be one of the only times when I could get the ZvBox to send sound
to my TV‘s speakers. I contacted ZeeVee about this problem, and got a return call (on a
Sunday!) from a very kind customer-support technician. Together, we determined that the
problem was, once again, not really ZeeVee‘s fault. It seems that Windows Vista is rather
single-minded about the way it assigns the audio signal from various programs to various
peripherals. Once it makes a decision, it‘s hard to undo. And about two-thirds of the time
that I started up the loaner PC, it decided that it was going to send sound from the
ZViewer to some device other than my TV (even though there were no other devices
hooked up to the computer). It looked like it was going to take a dispensation from Steve
Ballmer to fix it.
To be fair, there was probably some other workaround for the sound problem, but I
just didn't have the heart to pursue it. And I'm an eager early adopter of most new electronic gadgets—if not an "alpha geek," then at least a beta. If I can't make the ZvBox
work with all of the other finicky devices that are part of the Internet video equation, then
average computer owners probably can‘t, either.
I‘m disheartened by my experience with the ZvBox, because I know that the
ZeeVee engineers are working hard to make their technology compatible with a wide
range of setups. I‘m left with the suspicion that the only commercially viable Internet
video solution will be an Apple-style unification of hardware and software—in other
words, an HDTV with some kind of built-in Web terminal. (AppleTV is a start in this
direction, but even that device has to interface with your TV, your Mac, and your home
Wi-Fi network.)
ZeeVee says it‘s working on a Mac-compatible version of the ZvBox. Given that
the Mac universe is so much more user-friendly than the Windows world, I wouldn‘t be
surprised if that version is free of many of the snags that tripped me up this time. But for
me, for now, it‘s back to watching videos on the small screen of my laptop.
Author's Update, February 2010: ZeeVee CEO Vic Odryna reported in March 2009
that the company had abandoned the ZvBox 100 after selling fewer than 10,000 units.
ZeeVee engineers had reluctantly concluded that "the complexities of installation were
really too much for a lot of people," Odryna told me. The company is now marketing a
related hardware product to institutions such as hotels and hospitals that must manage
hundreds of in-house televisions. At the same time, ZeeVee has peeled away the best part
of the ZvBox, the ZViewer built-in video browser, and is now marketing it as a separate
PC-based program called Zinc. In my tests, Zinc has worked pretty well.
22: GPS Treasure Hunting with Your
iPhone 3G
September 19, 2008
If you had to name the companies that stand to lose the most from Apple‘s latest
smartphone, released July 11, you might say Microsoft, Nokia, Motorola, Samsung, or
Verizon. But I have a different list: Garmin, Magellan, and TomTom.
That‘s because new third-party apps available for the iPhone 3G make it into a
wholly credible GPS device, with additional features—especially, broadband Internet
connectivity—that the dedicated GPS receivers available from the leading manufacturers
can‘t match, even at far higher prices. You‘ll pay $199 or $299 for an iPhone 3G and get
all of its other phone, media-player, and gaming features in the bargain, whereas
handheld GPS receivers start around $250 and go all the way up to $900 or more.
I spent last weekend getting back into an old hobby, geocaching. For all you
muggles out there (that‘s the sport‘s term for the uninitiated), geocaching is a high-tech
cross between treasure hunting and back-country bushwhacking. The object is to find
small, hidden caches—usually waterproof plastic boxes holding logbooks and cheap
souvenirs—using just the latitude and longitude readings on your GPS device. I wanted
to see whether my iPhone, saddled up with a couple of new GPS-related programs from
the iTunes App Store, would guide me to geocaches as effectively as the dedicated GPS
receiver (GPSr) I used to own, a $600 Garmin GPSmap 60C.
My verdict: absolutely. If you‘re looking for an outdoor activity that the whole
family can enjoy, and that‘s guaranteed to take you to places you never would have
explored otherwise, then for about $12 you can download all the software you need to
turn your iPhone 3G into a full-featured navigation device and digital compass.
If you‘re planning a geocaching expedition, your first stop should always be
Geocaching.com, which lists the coordinates of some 650,000 caches on all seven
continents. (On a typical day, geocachers log visits to about 60,000 of them—though I
doubt the ones in Antarctica get many visitors.) The site is managed by a small Seattle
company called Groundspeak, whose president, Jeremy Irish, was one of the early
popularizers of geocaching about eight years ago. (In fact, the sport wasn‘t even possible
until May 2000, when the U.S. military relaxed restrictions on the accuracy of the
satellite signals that GPS receivers use to triangulate their positions—thus allowing GPS
to blossom into the hugely useful civilian navigation and recreation tool that it is today.)
At Geocaching.com, you enter your current location, browse a list of caches in your
neighborhood, pick one or more to visit, and record their coordinates. In the early days of
the sport, geocachers would print out a cache‘s Web page, head to the general vicinity,
then tromp around until the coordinates shown by their GPS receivers matched those on
the printout. By the time I got my GPSr in 2005, programmers at Olathe, KS-based
Garmin and other companies had come up with ways to download a cache‘s coordinates
to your PC, and transfer the data to the handheld device via a USB connection. The
caches would then show up as pushpins or "waypoints" on the device's map display, and
you could drive/hike/scramble directly to the cache by following the gadget‘s built-in
compass and range indicator.
But at the iTunes App Store—which, at last count, included 117 navigation-related
applications—I located a $1.99 geocaching app called Geopher Lite that condenses the whole procedure just outlined into a couple of easy steps, without the need
for a PC or, obviously, a dedicated GPSr. Geopher taps into the iPhone‘s GPS chip and
sends your current location to Geocaching.com, which sends back a list of nearby caches.
(Note that you need to be within range of AT&T's 3G or EDGE data networks for this part
to work—so if you‘re going geocaching in a data dead zone like central Idaho, you‘ll
need to stick to one of the old methods.)
The next step is to pick a cache and transfer its coordinates into your iPhone. This is
where Geopher gets a bit clunky: because Groundspeak doesn‘t allow third-party
applications to grab cache coordinates from its website directly, you have to read each
cache‘s latitude and longitude off the Geocaching.com Web page and type them into the
iPhone by hand. But Geopher provides a nice split screen that simplifies this operation
(and hey, for $1.99, what do you expect?).
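Hand-typing is also where mistakes creep in, because Geocaching.com lists positions in degrees and decimal minutes (for example, "N 42° 21.456 W 071° 03.123") rather than plain decimal degrees. The conversion itself is simple; here's a rough sketch of it (my own illustration, not part of Geopher):

```python
import re

def parse_geocache_coords(text):
    """Convert a Geocaching.com-style string like
    'N 42° 21.456 W 071° 03.123' (degrees and decimal minutes)
    into signed decimal degrees: (latitude, longitude)."""
    pattern = r"([NSEW])\s*(\d+)\D+([\d.]+)"
    values = []
    for hemi, degrees, minutes in re.findall(pattern, text):
        value = int(degrees) + float(minutes) / 60.0
        if hemi in "SW":  # south and west hemispheres are negative
            value = -value
        values.append(value)
    lat, lon = values
    return lat, lon
```

The decimal-degree form is what mapping software like Google Maps expects, which is why a GPS app has to do this arithmetic somewhere behind the scenes.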
From there, Geopher can do two things: show you a navigation screen that
combines a compass-needle pointing toward your cache with a readout of the distance to
the cache, or send you to the iPhone‘s built-in Google Maps application, where you‘ll see
a red pushpin representing your chosen cache and a pulsating blue dot representing your
current location. Keep moving until the blue dot and the red pushpin coincide, and you‘ll
be within a few meters of the cache.
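Under the hood, a compass-needle-and-range display like Geopher's comes down to two standard great-circle formulas: the haversine formula for distance and the initial-bearing formula for direction. A rough sketch (my own, not Geopher's actual code):

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters (haversine) and initial bearing
    in degrees clockwise from true north, from point 1 to point 2."""
    R = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    # Haversine distance
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    dist = 2 * R * math.asin(math.sqrt(a))
    # Initial bearing
    y = math.sin(dlmb) * math.cos(p2)
    x = (math.cos(p1) * math.sin(p2)
         - math.sin(p1) * math.cos(p2) * math.cos(dlmb))
    bearing = (math.degrees(math.atan2(y, x)) + 360) % 360
    return dist, bearing
```

Feed it your current fix and the cache's coordinates, and the two return values are the distance readout and the direction the needle should point.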
From there, you‘re on your own. Part of the fun of geocaching is that the
technology only gets you so far: even the most exact consumer-grade GPS receivers can
only fix a position to within two or three meters, and their accuracy drops further if the
satellite signals are attenuated by trees or other obstructions. And if you‘re looking for a
small plastic box that might be hidden under a rock or a log, a circle 3 meters in radius
can be a lot of ground to search.
I batted .500 last weekend, when I took my iPhone, Geopher Lite, and my dog to
Wompatuck State Park in Hingham, about 20 miles southeast of Boston. I didn‘t realize it
until I got there, but the park is built on the swampy grounds of a former munitions depot,
where the U.S. Navy once assembled and stored nuclear depth charges and other scary
items. The park is peppered with bunkers, roads, concrete pads, and other ruins that were
never properly demolished after the depot was decommissioned in 1965. Which means
there are a lot of cool places to hide caches—about 30 of them lie within the park's
boundaries, according to the maps at Geocaching.com.
Out of the six geocaches I went looking for around Wompatuck, I located three. For
enthusiasts: I found Ancient Ruins #2 [GCHT5Y], What Grade Are You? [GC15DYZ],
and George Washington Forest II [GC19TND] and was stumped by Wompatuck Multi #3
[GCNEYP], Bunker N-9 [GC1DY93], and Triphammer Overlook [GCEB32]. I blame
my own impatience, rather than Geopher, for my inability to find the last three caches; I
guess they were just hidden too cleverly for me.
You don‘t have to drive way out of town, like I did, to go geocaching. It‘s a little
harder, in cities, to find spots where muggles won‘t accidentally come across a cache, but
enthusiasts have hidden plenty of them in most major metropolises. My favorite Boston
geocache so far is located near the Gillette razor factory on Fort Point Channel (The Best
A Man Can Get [GCX7RV]). It's a "microcache," a bullet-sized capsule that contains
only a tiny scroll—the logbook, where you‘ll find my name in square #8. The capsule is
attached by magnet to—well, on second thought, I won‘t give away the secret. Another
fun and easy-to-find Boston cache (Fort Point Cable Crossing [GCMVJ4], near the
Children‘s Museum) is the subject of an entire podcast recorded a few years ago by
Boston.com.
If you want to go beyond geocaching and experiment with other aspects of GPS
technology, there‘s another excellent app at the App Store, a $9.99 program called GPS
Kit, that will give your iPhone 3G many of the features of a dedicated GPSr. For
instance, it shows your speed and direction of motion, has odometer and trip-meter
functions, lets you save waypoints and tracks, and displays beautiful road maps and
topographical maps of your current location, downloaded on the fly just like the maps in
the iPhone‘s Google Maps application. And unlike the very expensive maps that owners
of Garmin devices and other dedicated GPS receivers must buy to make their devices
work, the maps you can access using GPS Kit and other iPhone navigation apps are
totally free—another huge reason why the iPhone threatens to undermine the traditional
GPS industry.
The track function of GPS Kit, which creates an electronic breadcrumb trail
showing your movements, is particularly fun. When you‘re done with a trip, GPS Kit lets
you e-mail the track to yourself, and you can then view it on a Google Map or in Google
Earth. At right, you can see a track I recorded using GPS Kit: it‘s from my expedition last
weekend to find the two Fort Point Channel caches.
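The reason Google Earth can open such a track is that at bottom it's just a list of position fixes, and KML, Google Earth's native format, is ordinary XML. A minimal sketch of the idea (my own illustration, not GPS Kit's actual export code):

```python
def track_to_kml(points, name="GPS track"):
    """Serialize a list of (lat, lon) fixes as a minimal KML LineString
    that Google Earth or Google Maps can display."""
    # KML lists coordinates in "lon,lat" order, space-separated
    coords = " ".join(f"{lon},{lat}" for lat, lon in points)
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>{name}</name>
    <LineString><coordinates>{coords}</coordinates></LineString>
  </Placemark>
</kml>"""
```

Save the returned string with a .kml extension, and double-clicking the file draws the breadcrumb trail right onto the globe.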
TomTom, a Dutch company that specializes in automobile navigation systems, said
earlier this year that it was developing an iPhone app, but there‘s been no word on that
front since June. Even then, the company didn‘t make it clear whether its application
would provide what a lot of iPhone owners have been clamoring for: audiovisual, turn-
by-turn driving directions like those you can get from TomTom‘s dedicated GPS devices.
But I don‘t think enthusiasts need to wait around to see what TomTom and the other GPS
giants will do. While the iPhone doesn‘t outshine the expensive, dedicated GPS devices
in every detail, apps like Geopher and GPS Kit make it perfectly usable for recreational
purposes—to the point that I‘d question the sanity of anyone still thinking about laying
down $600 or more for a handheld GPSr.
So, the next time somebody asks you what the "G" stands for in "iPhone 3G," tell them it's for GPS. Or better yet, geocaching.
Author's Update, February 2010: TomTom introduced its iPhone app in the
summer of 2009. At $99, it's one of the highest-priced apps in the iTunes App Store, but
reviewers have praised it for transforming the iPhone into a reasonably dependable turn-
by-turn navigation device for driving. A November 2009 update brought the app text-to-
speech capabilities, which puts the iPhone more or less on a par with dedicated GPS
navigation devices that cost as much as the iPhone (or more) but have none of its other
software and wireless capabilities.
23: Boston Unblurred: Debunking the
Google Maps Censorship Myth
September 26, 2008
Having written an appreciative column a few weeks ago about the endangered
Pacific Northwest tree octopus, a tongue-in-cheek hoax site, I am not about to denounce
the Internet as a cesspool of misinformation. But I‘m still puzzled by the way certain
salacious memes persist on the Internet, even though they‘re easily disproved—for
example, the myth often repeated in e-mail chain letters that Barack Obama is secretly a
practicing Muslim (the most discouraging element here, of course, being that anyone
cares).
Another meme that keeps popping up and that deserves to be discounted once and
for all is the idea that Google widely and deliberately censors aerial and satellite imagery
at the behest of governments and other organizations. This idea was reinvigorated most
recently by a July IT Security feature article called "Blurred Out: 51 Things You Aren't Allowed to See on Google Maps." The article, which was picked up by Digg and widely
republished, was of special interest to readers in Boston, since six out of the 51 locations
were in Massachusetts and New Hampshire. But as one of my favorite bloggers, Stefan
Geens, pointed out on his Ogle Earth blog a couple of weeks ago, there‘s only one case
out of the 51 purported examples of "blurring out" where it can be verified that Google
itself modified an image; it was in Basra, Iraq, where imagery showing bomb damage and
military construction was replaced by older pictures, taken before the Second Gulf War.
Geens‘ post prompted me to look into the Boston-area locations listed in the IT Security
article, and as the screen shot below illustrates, the reports of alleged blurring appear to
be completely spurious.
That's not to say that all of the images in Google Maps and Google Earth are as
detailed as they could be. As Google has acknowledged in the past, there are spots, such
as the U.S. Naval Observatory—home for another 116 days to Vice President Dick
Cheney—that have been deliberately blurred or pixelated by the companies that sell
aerial imagery to Google. (See image at left. You can click on this image and all of the
images in this article to see larger versions.)
Presumably, the companies do this to make life a little harder for terrorists who
might be planning an airborne attack. Interestingly, though, the White House and the
Capitol building are crystal-clear in Google Earth‘s images. (I admit to some curiosity
about who decided that Cheney‘s house was more worthy of obscuration than President
Bush‘s. If you‘re interested, there‘s a long discussion of that particular question over at
Wired‘s Danger Room national security blog.) Since Google doesn‘t own its own fleet of
satellites, its only recourse in these cases of deliberate pixelation is to buy more imagery
from other sources, which it sometimes does.
More often, though, allegations that certain areas are "off-limits" in Google Earth
are just wrong. One rumor making its way around the Web right now is that Google
blurred out images of Wasilla, AK, after Alaska governor and former Wasilla mayor
Sarah Palin was named John McCain‘s running mate. If you look up Wasilla in Google
Earth, you‘ll see that Google‘s images of the Anchorage suburb are indeed blurry—but
only for the northern half. Google is constantly updating its imagery, and for many areas
it doesn‘t yet have the kind of super-clear pictures where you can see individual houses,
cars, and even the shadows of people (or cows). Wasilla is just one of the many places in
Google Earth where old and new datasets are juxtaposed.
No such excuse is available, however, for the writers of the IT Security article. I
remember reading the article's provocative introduction when it first came out: "Whether it's due to government restrictions, personal-privacy lawsuits or mistakes, Google Maps has slapped a 'Prohibited' sign on the following 51 places," it said. And I remember
being surprised that so many of the spots listed were in and around Boston.
But upon examining those six locations in Google Maps and Google Earth, I can
see absolutely no sign of the alleged blurring. Here are Google Earth screenshots of the
listed locations:
1. PAVE PAWS, a missile-warning and space surveillance radar maintained by the
U.S. Air Force Space Command in Cape Cod, MA.
2. Seabrook Nuclear Power Station, Seabrook, NH.
3. Research reactor, Radiation Laboratory, University of Massachusetts, Lowell, MA.
4. Oil tank farm in Braintree, MA.
5. Liquid natural gas terminal and industrial port area, Everett and Chelsea, MA.
6. MIT Lincoln Laboratory, Lexington, MA.
I am not a photo-reconnaissance expert, but none of these locations looks any less detailed to me than the surrounding areas. The locations of buildings and other details are clearly visible in each image. The light-colored areas in some of the photographs look
overexposed, especially in the PAVE PAWS and Seabrook images, but that‘s a question
of camera adjustments, not resolution.
I suppose it‘s possible that after the IT Security article came out, Google replaced
the allegedly blurred images of the six Boston-area locations mentioned with sharper
ones. But lax fact-checking is a more likely explanation. Many of the sites listed in the IT
Security article are also listed in a Wikipedia article that has been flagged by Wikipedia‘s
own editors as lacking in reliable, third-party confirmation. Given that the Wikipedia
article was created in April 2007, it seems likely that the authors of the IT Security article
were simply cribbing their list from the community-edited site.
And before anyone gets too worked up about confirmed examples of image
manipulation like Basra and the Naval Observatory, it‘s worth remembering a few things.
First of all, it‘s only in the last decade that the public has had easy access to high-
resolution aerial and satellite photos, thanks to the work of private satellite-imaging
companies such as DigitalGlobe and GeoEye and search companies like MapQuest,
Google, and Yahoo. Also, the data is shared online at such a reasonable cost—nothing—
that there isn‘t much room for complaints about inconsistencies or shoddy service.
Furthermore, if a location is pixelated on Google‘s maps, you can often find a sharper
version simply by going to Yahoo, or vice versa. If there‘s a conspiracy here, it‘s a pretty
poor one.
Even in cases where images have been deliberately degraded, it‘s a stretch to cry
censorship, at least from a constitutional perspective. Sensitive geospatial data collected
by the government is exempt by law from Freedom-of-Information-Act requests. As for
privately collected satellite images, I haven‘t done a thorough search, but I‘m not aware
of case law establishing that they‘re protected by the First Amendment. And whatever
your view of the Bush Administration‘s record on free speech, you can probably agree
that there are national-security reasons for limiting access to high-resolution images of
certain locations.
So let‘s be realistic. Even if a few military or industrial sites are hard to see on
Google Maps—and it would appear that such cases are much rarer than some outlets
report—there are far worse violations of intellectual freedom to worry about. (As this
cuddly cartoon about warrantless wiretapping might remind you.)
Addendum, February 11, 2009: Now that Dick "Undisclosed Location" Cheney is no longer Vice President, someone has apparently decided that it's okay for people to see unvarnished views of the U.S. Naval Observatory. In today's edition of "The Sightseer," Google's e-mail newsletter about Google Earth, the company writes: "On January 18, two days before Barack Obama was sworn in as our 44th president, we pushed out an imagery update for the Washington, DC area including the National Mall area. Much of the new imagery is from 2008. Part of the new imagery shows clearer imagery to the US Naval Observatory."
24: Four Ways Amazon Could Make
Kindle 2.0 a Best Seller
October 3, 2008
I wanted to love the Amazon Kindle. I‘ve been a believer in the future of e-books
ever since the late 1990s, when I briefly worked for NuvoMedia, the company that
introduced the Rocket eBook. I was thrilled when I first heard that Jeff Bezos had
decided to get serious about the technology, figuring that he was sure to have a better
understanding of what makes for a great reading experience than Sony, whose PRS-500
reader, released in 2006, was a disappointment. I was intrigued when Amazon said
Kindle would have a wireless chip, allowing free, nearly instantaneous book downloads
over a national EVDO network. But when the first version of the Kindle came out in
November 2007, it was so astonishingly ugly and expensive that I immediately soured on
the product.
Now, though, there are reports that the "Kindle 2.0" is on the way. And being an
optimist, I‘m hopeful that Amazon will work out some of the kinks in the first-generation
device. In late August Business Week‘s Peter Burrows reported, based on an interview
with an unnamed source who had seen the new device, that Amazon brought in a
consumer-electronics expert from international design house Frog Design to guide the
Kindle's overhaul, and that the new version is thinner and "more stylish," with an improved screen and user interface. "They've jumped from Generation One to Generation Four or Five. It just looks better, and feels better," the source told Burrows.
That‘s all very encouraging. But Amazon needs to change more than just the
gadget‘s look and feel. If it really hopes to catch up with slick rivals like the iPhone
(which is a credible e-book reading device in its own right) and compete with Sony‘s
expanded e-book reader line (the latest addition to which was announced this week), the
Kindle needs some basic operational improvements: fundamental design matters like the
placement of the page-forward and page-back buttons were badly flubbed the first time
around, according to many owners. Amazon also needs to think more flexibly about content
pricing. And it needs to charge less for the device itself: the current $359 price tag
probably reflects Amazon‘s actual cost (the electronic paper screen, designed by
Cambridge, MA-based E Ink, is very expensive), but I don‘t think the company will see
mass adoption at any price above $249. Dropping the price to $199, the same as the 8-
gigabyte iPhone 3G, would get people thinking seriously about the Kindle as a holiday
present.
I‘ve met Bezos, and he strikes me as a big-picture guy. I‘m sure he understands that
the Kindle is more than a reading appliance—it‘s an entire publishing platform, a system
for browsing, purchasing, and consuming books, magazines, newspapers, and other
digital media. So, just as Apple has continually revised and updated iTunes and the
iTunes Store (without which iPods and iPhones would be fairly useless), I‘m hopeful that
Amazon is looking at ways to make the whole Kindle package more appealing to readers.
But just in case they need some suggestions, here are a few:
1. Explore motion-activated scrolling or page turning. One of the biggest
complaints from Kindle customers has been that the page-forward and page-back buttons
are so large and awkwardly placed that it‘s easy to hit them accidentally. Amazon will
surely try to fix this problem in the Kindle 2.0, probably by moving the buttons around or
making them smaller. But there‘s an affordable technology—tilt activation—that could
help them get rid of the buttons altogether.
Last week I bought an app for my iPhone called Instapaper Pro that‘s quickly
becoming indispensable to me. Its main function is to copy stripped-down versions of
Web pages, then download them to your iPhone. Say you come across a long newspaper
article and you want to read it later. You just click the "read later" bookmarklet in your
browser, and the article will automatically show up, minus ads and other junk, on your
iPhone. I find this extremely useful. But what makes Instapaper even cooler is the "tilt scroll" feature, which allows you to advance through the copied Web text simply by
tilting the phone slightly backward or forward. It‘s an ingenious use of the iPhone‘s built-
in accelerometer—the same tiny chip that prompts the Web browser window to rotate by
90 degrees if you want to view it in landscape mode rather than portrait mode.
It ought to be easy to build something like this into an e-book reader. Tilting the
Kindle backward or forward might not be the most natural way to activate a page-turn,
since Web pages scroll up and down, while book pages flip from right to left. But any
movement that the accelerometer can detect is fair game. Maybe a sideways jiggle?
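To make the idea concrete, here is a rough sketch of the logic I have in mind (my own illustration, not Instapaper's or Amazon's actual code, and the threshold numbers are made up): a tilt past a threshold triggers a single page turn, and the device has to come back to near-level before it will trigger again, so holding the tilt doesn't riffle through the whole book.

```python
# A hedged sketch of tilt-activated page turning. Assumes a generic
# accelerometer that reports a tilt angle in radians (positive = forward);
# the threshold values here are illustrative, not taken from any device.

TILT_THRESHOLD = 0.35  # tilt past level that counts as a deliberate gesture
LEVEL_ZONE = 0.10      # readings this close to level re-arm the trigger

def page_turn_events(tilt_samples):
    """Map a stream of tilt angles to page deltas (+1 forward, -1 back)."""
    events = []
    armed = True
    for angle in tilt_samples:
        if armed and angle >= TILT_THRESHOLD:
            events.append(+1)   # tilt forward: next page
            armed = False
        elif armed and angle <= -TILT_THRESHOLD:
            events.append(-1)   # tilt backward: previous page
            armed = False
        elif abs(angle) <= LEVEL_ZONE:
            armed = True        # back to level: ready for the next gesture
    return events

# One sustained forward tilt, a return to level, then a backward tilt:
samples = [0.0, 0.2, 0.5, 0.6, 0.4, 0.05, -0.1, -0.5, -0.4, 0.0]
print(page_turn_events(samples))  # [1, -1]
```

The re-arming step is the important part: without it, a reader holding the Kindle at an angle would flip pages continuously, which is exactly the accidental-button-press problem all over again.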
2. Try different pricing and distribution models for e-books. Amazon charges $9.99
for the Kindle versions of new releases. That‘s less than what you‘d pay for a hardcover,
which is part of the Kindle‘s attraction. And in light of the fact that Apple does pretty
well selling albums on iTunes for $11.99 to $13.99, I‘m willing to revise my earlier
argument that new-release prices should be slashed to $5 or $6.
But I still don‘t understand why e-book publishers and device makers aren‘t
exploring more of the creative marketing possibilities opened up by digital distribution.
Commendably, Amazon gives Kindle users a try-before-you-buy option: the first
chapters of most Kindle editions can be downloaded free. But here are some more radical
ideas, any one of which would get me more interested in the Kindle platform:
Book subscriptions. The book-of-the-month club model was successful in the print
publishing world for decades. Audible.com has stayed in business for 10 years now,
charging $22.95 for a monthly subscription that gets you two audio book downloads per
month. If I were Amazon, I‘d charge $19.95 per month for three book downloads.
All-you-can-eat books for a flat fee. Charge, say, $100 for a ―Kindle Prime‖
membership, analogous to Amazon‘s free shipping option. Then let people download all
the books they want. Some people would download hundreds of books, but my bet is that
a lot more would download just a few, balancing it all out.
Send customers full book downloads on spec. If Kindle owners like a book, they
can pay for it and keep it. If they don‘t, it expires and disappears from their device‘s
memory.
Let customers name their own price. Magnatune, an independent digital music
publisher, is trying this model, and reports that when you give people an empty box and
let them fill in their own price, they often pay more than the minimum requested, to
support their favorite artists.
Bundle e-books with print books. When a customer buying a print book is checking
out at Amazon.com, ask them if they‘d also like the Kindle edition for an extra dollar or
two. And ask Kindle owners if, for a little more, they‘d like to receive the print versions
of the e-books they‘re buying. Pretty soon, even non-Kindle owners might have enough
e-books waiting in their online libraries that they‘d give in and buy a Kindle. And Kindle
owners would appreciate having both print and electronic copies of their books on hand,
allowing them to switch back and forth depending on their situation.
Don‘t forget that there is essentially zero marginal cost to selling an e-book: it‘s just
bits, so there‘s nothing to print, store, or ship. From a publisher‘s perspective, every e-
book sold is like pure profit on top of their print sales. So what‘s the harm in
experimenting?
3. Find a bricks-and-mortar retail partner. Anyone can walk into an Apple Store or
an AT&T retail location and get some hands-on time with an iPhone. But one of the
Kindle‘s huge handicaps is that you can‘t play with it. Amazon doesn‘t have any physical
stores. And there aren‘t enough Kindle owners yet for the devices to be a common
sight. (I have spotted exactly one Kindle in the wild—and that was at the Seattle-Tacoma
airport, in Amazon‘s home city.) So the only way you can see the device is in the pictures
on Amazon‘s site.
Amazon has made a half-hearted attempt to remedy this situation by creating a
customer forum called ―See a Kindle in Your City.‖ It‘s supposed to be a place where
prospective owners who want to try the device can connect with current owners who
don‘t mind showing theirs off. But the forum is long on Kindle-seekers and short on
showoffs. The forum‘s tone is forlorn: ―Kindle in Milwaukee, WI?‖ ―Is there a Kindle in
St. Augustine, FL?‖ ―Any Kindle in or near Boulder, CO?‖
It may be a heretical thing to suggest to an e-retailer—particularly one that started
out selling books—but Amazon needs to connect with a real-world bookstore chain and
shell out for a few endcaps where readers can touch a Kindle. When Sony brought out its
PRS-500 reader in 2006, it was smart enough to contract with Borders to put display units
into bookstores. I bet Barnes & Noble would take Bezos‘s call. Failing that, how about
Best Buy, Circuit City, or even Wal-Mart or Target?
4. Make the Kindle into the world‟s first mass-market talking book. I‘m not talking
about bundling conventional audio books with e-books (although that could be a cool
idea). I‘m talking about building text-to-speech capabilities into the Kindle. The
technology is getting pretty good these days; there‘s a ―Listen to story‖ link on every
story page at Xconomy that plays a speech-synthesized version of the text, courtesy of an
Israeli company called Odiogo, and I‘ve talked with Xconomy visitors who were
surprised to learn that the Odiogo voice isn‘t a real human. If a Kindle could talk, owners
could consume books, magazine articles, and newspaper articles even if they were
cooking, driving, or exercising. Maybe I‘m crazy, but I think this could be the Kindle‘s
killer app.
***
If Amazon really plans to bring out Kindle 2.0 this fall, as the Business Week report
predicted, it‘s probably too late to add features like the ones I‘m suggesting here. But
maybe they‘ve already thought of some of these ideas—or better ones. Truth be told, the
company could go a long way toward winning me over just by making the Kindle a little
more elegant, a little less expensive, and a little more connected to the real world of
reading. I‘m ready to forgive Amazon for a rough beginning—and move on to the next
chapter.
Author's Update, February 2010: Amazon introduced the Kindle 2 in February
2009, and I bought one in April 2009 (see Chapter 51). Most of the details reported in the
August 2008 Business Week article turned out to be correct; the second-generation
device was indeed far more stylish than the original. The snafu with the first-generation
Kindle's next-page buttons was corrected, and the price of the device has gradually been
reduced from $359 to $259. How closely did Amazon pay attention to my other pieces of
advice? Not terribly, which is probably just as well. The Kindle 2 does not have motion-
activated page turning (which would probably be impractical in a device larger than a
cell phone). Amazon hasn't tried any of my alternative pricing models—if anything, it's
been raising the prices of e-books, not lowering them. Which is not great for mass-market
adoption, but certainly helps authors and publishers. You still can't see a Kindle 2 in a
store, which I think is a handicap for Amazon, especially now that Sony and Barnes &
Noble, which both have their own chains of retail locations, are coming out with serious
competitors for the Kindle. But I did bat at least .250 in this column: the Kindle 2
includes a text-to-speech function that allows you to listen to a computerized audio
rendition of most books.
25: Playful vs. Preachy: Sizing Up TV’s
New Science Dramas
October 10, 2008
Crime shows generally turn me off, but for years I‘ve enjoyed CSI: Crime Scene
Investigation, because the heroes are scientists. They catch crooks not by outgunning
them, but by observing, hypothesizing, and testing. Of course, the dramatic license that
CSI and other series sometimes take with real-world science can be disturbing: no matter
how much you ―enhance‖ a still from a surveillance video, for example, you can‘t read a
license plate in the reflection on someone‘s cornea. But you‘ve got to applaud producer
Jerry Bruckheimer and the show‘s writers for bringing out the glamour in a dweeby and
meticulous profession like forensic science.
TV viewers clearly have an ongoing appetite for scientists as leading characters.
And there are two new series this season that play on that fascination. One of them—
another Bruckheimer creation called Eleventh Hour that premiered on CBS last night—
tries its best to stick to known, real-world science and its uses and abuses. The other, the
Fox series Fringe, has no such scruples. Created by J.J. Abrams, it gleefully mixes
factual science with patently impossible claptrap, yet manages to stay charming.
I felt moved to write about the two shows this week because they‘re both inspired
by science, but take almost diametrically opposite approaches to portraying it—with
wildly differing results. In short—though I wish it were the other way around—Fringe is
an enjoyable romp, while Eleventh Hour (at least in its pilot episode) is preachy and
predictable.
Fringe‘s plot will feel familiar to any fan of The X-Files or of Abrams‘ previous
series, Alias and Lost. FBI agent Olivia Dunham (played by Australian actor Anna Torv)
joins a top-secret interagency task force investigating a series of bizarre occurrences: a
planeload of bodies dissolved by a mysterious virus, a mutant baby that hits Social
Security age in under an hour, a bus full of people suffocated by instant Jell-O, a demonic
underground torpedo that surfaces every few decades. The government thinks these
events are connected: Dunham‘s boss refers to the phenomena as The Pattern.
To study the events, Dunham recruits former Harvard scientist Walter Bishop—a
mad-scientist type played to the hilt by John Noble, aka The Lord of the Rings' Lord
Denethor—and his son Peter, a genius-dropout gamely portrayed by Joshua Jackson, the
wisecracking actor who single-handedly made Dawson‟s Creek tolerable. Walter has a
childlike wonder about science, but is haunted by the memory of the defense-related
experiments he was forced to perform in the 1970s (experiments that, it is implied, may
have given rise to The Pattern). Meanwhile, Peter is on the run from some unpleasant
people who loaned him a lot of money (and they‘re not mortgage brokers).
Dunham‘s investigations frequently lead her back to Massive Dynamic, a
Microsoft-Apple-Intel-General Dynamics hybrid where all of the offices look as if they
were designed by Ayn Rand protagonists. Her contact there is Nina Sharp, lieutenant to
the company‘s reclusive founder. Sharp has a bionic arm and is played with creepy gusto
by Blair Brown (Altered States, The Days and Nights of Molly Dodd).
It‘s actually Sharp, in the pilot, who‘s given the line that sums up the series‘
intellectual premise: ―Science and technology have advanced at such an exponential rate
for so long… it may be well beyond our ability to regulate and control them.‖ We‘re
meant to infer that The Pattern is a global experiment by some dark organization (SD-6,
perhaps?) that‘s bent on transforming society but may be in over its own head. Clearly,
Abrams has been reading books like Ray Kurzweil‘s The Singularity Is Near, which posits that
genetics, nanotechnology, and robotics are about to give rise to a new species of hyper-
intelligent and virtually immortal super-humans. But whereas Kurzweil greets this
glorious future with optimism, Abrams sees the potential for boundless mischief.
I would be very surprised if Abrams ever reveals, or even knows, what The Pattern
really is—I stopped watching Lost after it became obvious that he had no intention of
clarifying who created the island or why the plane-crash survivors were brought there.
But I think I may stick with Fringe, because in the end, it isn‘t about the conspiracy. It‘s
really about Walter, who keeps the show moving forward by conceiving the brilliant (if
pseudoscientific) experiments that provide the clues to solving each episode‘s
conundrum. (Hey, if we just hook up a TV to this corpse‘s optic nerve, we‘ll get a picture
of the last thing she ever saw!) Walter is the MacGyver of neurophysics—his genius lies
in his willingness to consider how, with the aid of defibrillators, LSD, and aluminum foil,
one might approximate fringe phenomena like telepathy. The show‘s big question is
whether Walter, who‘s too eccentric to be wholly credible, will ever get Olivia and Peter
to think as imaginatively as he does. And that‘s enough to carry it for a while. (Hey, this
exact formula worked for Mulder and Scully for nine seasons.)
Fringe boils down to good old-fashioned sci-fi fun, in the same tongue-in-cheek
spirit as early horror classics like Bride of Frankenstein. I particularly enjoyed a moment
in the third episode when Walter, treating a man whose brain implants allow him to listen
in on the conspirators‘ radio communications, bursts out ―My God! With a few
modifications, I think you could get satellite TV for free!‖
My only beef with the series, and it‘s a minor one, is that it makes such a ham-
handed attempt at representing Boston, where the series is ostensibly set. The show‘s
outdoor scenes are shot in Toronto, and while that works for a few car-chase scenes—one
North American city‘s double-decker expressways are pretty much like the next‘s—real
New Englanders may balk at the blatant misrepresentations of specific locations like
South Station (which wasn‘t a one-story brick building, last time I checked) and Harvard
(whose neo-Georgian brick campus bears absolutely no resemblance to the University of
Toronto‘s Gothic stone).
Last night‘s pilot episode of Eleventh Hour was set, ironically enough, in one of
Xconomy‘s other home cities, Seattle. (Though again, it was actually filmed in Canada—
Vancouver this time.) The show‘s hero is biophysicist Jacob Hood, played by British
actor Rufus Sewell—you might remember him as the villain in The Legend of Zorro or as
Will Ladislaw in the 1994 BBC production of Middlemarch. Hood is a ―special science
advisor‖ to the FBI who seems to be on a personal mission to save people from unethical
applications of science and technology.
In the pilot, he‘s searching for ―Gepetto,‖ a mysterious scientist hired by a Seattle
billionaire to clone his deceased child. (Get it?…Gepetto…Pinocchio‘s father? Such is
this show‘s heavy-handed symbolism.) Guilt-tripping the Catholic security guard who
disposed of all the failed, aborted clones puts Hood on the trail of Gepetto‘s underlings,
who are paying a young woman to carry the latest clone to term. Hood has to find the
woman before she delivers, or she‘ll die.
Along the way, Hood gets to make lots of bombastic pronouncements like ―in
science, a negative result is as important as a positive one,‖ demonstrate somatic cell
nuclear transfer using tweezers and grapes, and lecture the grieving billionaire on why
cloning his dead son won‘t bring him back. Hood saves the young woman, of course, but
is forced to let Gepetto get away (the better to reappear as his nemesis in future episodes).
And we‘re left feeling like science is really…yukky—as if it might be better just to stop
those darned researchers from messing around with embryos and stem cells at all.
If CBS is going to spend so much money on a series about science—$30 million for
the first 13 episodes, according to the New York Post—it would be nice to see Hood
using a bit of real science to track down his quarry, the way the CSIs do, or at least to
hear him acknowledge science‘s positive uses: a nod to the therapeutic or agricultural
uses of cloning wouldn‘t have hurt. My fear is that in order to keep the show going, the
writers will have to manufacture a string of implausibly villainous scientists—people
who say things like ―great rewards bring great risks‖ as they strap their next test subject
to the gurney. I also fear that in portraying Hood as such a heavy, the show equates a
scientific mindset with humorlessness.
Fringe blows past the boundaries of science and gets away with it by taking nothing
seriously. Eleventh Hour explores the potential misuses of real science but takes itself so
seriously that it becomes overbearing. I know which one I‘m going to watch—and this
time the ―reality‖ show loses.
Author's Update, February 2010: Time ran out for Eleventh Hour in spring 2009,
when CBS canceled the show after just 18 episodes. Can't say I'm surprised. Fringe,
meanwhile, rollicks right on. I have to give Abrams credit this time for gradually raising
the veil on The Pattern, which, it turns out, is related to an interdimensional war
involving Walter's former colleague William Bell (played by Leonard Nimoy). I'm totally
hooked on the show, which I watch every week on Fox.com.
26: Is Brown the New Green? Why
Boston’s Ugly, Expensive Macallen
Condos Shouldn’t Be a Model For Green
Buildings
October 17, 2008
Along West 4th Street in Boston, just past I-93 and the MBTA train yard, there‘s a
big brown apartment building with an odd sloping roof. I live about a mile away, and I‘ve
gone past this building several times on walks and bike rides without thinking much
about it, except that it‘s unattractive in an early-1970s sort of way. It reminded me of the
work of the late Josep Lluís Sert, the architect responsible for such aging modernist
eyesores as Harvard‘s Science Center and Holyoke Center, the Peabody Terrace
apartments in Cambridge, and the George Sherman Union complex at Boston University.
I was surprised to learn this week that not only is the brown building brand new, but
it‘s being celebrated as an example of green design. It‘s called the Macallen Building,
and it‘s the subject of an independent documentary, ―The Greening of Southie,‖ that‘s
currently making the film-festival rounds; I caught the movie this Tuesday at a screening
hosted by Atlas Venture, a Boston-area venture capital firm. (Update 11/20/08: There‘s
now a video about the screening prepared by the filmmakers themselves.)
A 140-unit luxury condominium complex, the Macallen Building has garnered
warm reviews from architecture critics, including no less a figure than Pulitzer Prize-
winning Boston Globe writer Robert Campbell. It‘s also the first residential building in
Boston to win a Gold-level LEED rating, something that can only be achieved through
serious effort on the part of architects and developers. (LEED, for Leadership in Energy
and Environmental Design, is a voluntary certification system devised by the U.S. Green
Building Council to encourage sustainable building practices.)
So I‘ll probably sound like an unenlightened, anti-environmentalist crank when I
say this, but the Macallen Building strikes me as a sorry excuse for the ―greening‖ of
anything, let alone South Boston, the working-class neighborhood over which it looms. If
this project comes to be seen as a model for green development in Boston and other
cities, the green-building movement is in big trouble.
I do give the developers of the Macallen Building, Pappas Enterprises, credit for
deciding to pursue LEED certification in the first place. As the film makes clear, the
decision led to a thousand headaches that the company could have avoided by doing
things the old-fashioned way. Construction crews had to set aside scrap metal for
recycling, for example, rather than tossing all of the project‘s construction waste into
landfill-bound dumpsters. They cheerfully tried unproven but ―sustainable‖ materials—
such as the non-toxic glue holding down the condo units‘ bamboo floors—that wound up
causing costly complications. And you can‘t argue with green design‘s benefits: features
like dual-flush toilets, rainwater-trapping systems for landscape irrigation, and
extensive natural lighting through double-paned, floor-to-ceiling windows mean that the
building will save 600,000 gallons of water per year and use 30 percent less electricity
than a non-green building.
I also have no objection to the way Pappas has made the building‘s green design
into a selling point with environmentally conscious condo buyers. Because the building is
LEED-certified, the company is able to charge about 10 percent more than developers are
getting for similarly sized condos in this corner of the city, according to the real estate
review site ApartmentTherapy. That‘s fine with me. After going to so much trouble, the
company deserves to earn a bit of profit—and who‘s going to finance the green-
technology overhaul this country needs, if not capitalists? ―Green is not about
sacrifice…it is about understanding that doing good and doing well often go hand in
hand,‖ the Macallen Building‘s website intones. I couldn‘t have put it better myself.
But there are several aspects of the Macallen project that bother me. One is the
unfortunate symbolism in the fact that Boston‘s first green residential building is a luxury
condo. You have to be doing pretty well, indeed, to afford a one-bedroom, one-bath unit
for $600,000 or a three-bedroom for $2.1 million. According to this 2005 Boston Globe
article, the Pappas brothers—Tim, Andrew, and Jay—design their urban properties for
city-loving young professionals like themselves. ―We look at our peers and we look at
our friends,‖ Andrew Pappas, then 26, told the paper. Another Globe article described
Luke Peterson—a 25-year-old mortgage banker who put down $685,000 for a townhouse
at First+First, another Pappas project in South Boston—as the ideal Pappas client.
I‘m going to hazard a guess that 25-year-old mortgage-banking tycoons are in
shorter supply these days. Indeed, there are still 20 empty units at the Macallen, even
though the company briefly tried giving away a Toyota Camry Hybrid with each
purchase. But at least the penthouse may soon be occupied; at last report, Tim Pappas, the
34-year-old real estate heir who heads Pappas Enterprises and drives racecars in his spare
time, was close to persuading his girlfriend that they should move out of their Court
Square Press loft and into the $8 million, 5,600-square-foot top floor at the Macallen,
which comes complete with a retractable roof over the lap pool.
The anonymous author of the local blog Bostonia Rantida puts the question exactly
right: ―It‘s nice that it‘s a green building, but isn‘t there a way to have green buildings
for, I don‘t know, the NON-ultra rich?‖ There may be—but somehow I don‘t think you‘ll
see the Pappas brothers building green housing for the 50 percent of Boston families who
earn less than $46,000 per year.
It‘s also distressing that such a high-profile green building wound up looking so
forbidding. The architects—Monica Ponce de Leon and Nader Tehrani of the Boston firm
Office dA—gave the building an aluminum skin that was supposed to shimmer like
bronze but turned out to be a flat, unsavory shade of brown. According to Campbell‘s
review, this skin was conceived as a ―pliable curtain‖ weaving its way basket-like among
the tips of the steel trusses that hold up the building, but to my untrained eye, it looks like
it‘s simply peeling and buckling.
To passersby, the building is anything but friendly-looking. The grass-covered roof,
one of the building‘s most unusual and remarked-upon features, is invisible from the
street. Instead, pedestrians get a lovely view of the parking garage. The lobby entrance is
hidden away on a private alley between the Macallen building and Pappas‘ other
residential project, the Court Square Press building. Video cameras loom over the
sidewalk, signs warn of 24-hour surveillance, and one side of the building is protected by
a curtain of spikes that looks like the perfect place for a row of severed heads.
The Macallen is also remarkably noisy. The double-paned windows may keep the
condo units as quiet as a morgue on the inside—but if you‘re outside, the two giant
ventilating units facing West 4th Street, one at each end of the parking level, generate
more of a racket than the locomotives rumbling through the adjoining train yard.
A final criticism, based on what I learned from the movie—which is surprisingly
unbiased, considering what intimate access the filmmakers had to Pappas Enterprises and
the job site—has to do with the sometimes self-defeating logic of green design. Is it really
―sustainable‖ to use dual-flush toilets if you have to bring them all the way from
Australia, on container ships that burn huge amounts of diesel fuel? Are bamboo floors
still green if you have to bring the wood from China? How much sense is there in using
special glues that are free of volatile organic compounds if it means that those bamboo
floors buckle and have to be ripped out (and new bamboo ordered from China)?
The fault here doesn‘t lie with the architects or the developers but with the LEED
checklists, which award points if builders include certain features (e.g., locally-mixed
concrete, recycled steel, cotton-based insulation, and, you guessed it, bamboo floors). The
system almost seems set up to encourage point-mongering—often at the expense of a
project‘s actual carbon footprint. Indeed, the LEED system is under fire from some
quarters for putting too little emphasis on measures that could reduce carbon emissions
and help to arrest climate change.
Of course, Boston‘s green building movement doesn‘t begin and end at Macallen.
In August, Boston Mayor Tom Menino decreed that all new affordable housing funded
by the city‘s Department of Neighborhood Development must obtain a LEED Silver
rating. ―We want to keep the working class in the city,‖ Menino told a Globe reporter
after a press conference in Fields Corner, where the Massachusetts Technology
Collaborative had just announced a $2 million grant to help six Boston housing projects
meet the standard.
So, the same $2 million that will get you a single three-bedroom condo in the
Macallen Building will help six housing projects in the heart of Dorchester go green. I
wonder which one does more good?
Author's Update, February 2010: According to the Macallen Building's website, 20
of the building's 140 units remain unsold.
27: The Encyclopedia of Life: Can You
Build A Wikipedia for Biology Without
the Weirdos, Windbags, and Whoppers?
October 24, 2008
After 16 months in business, Xconomy has published about 3,400 pages of articles.
At this pace, we‘ll get to 1.8 million pages in about 700 years. But the Encyclopedia of
Life—a new scientific and educational website that will have one page for every species
on the planet—intends to hit that number in just 10 years. And even then, it will only be
getting started: while biologists have named, described, and catalogued some 1.8 million
critters, they estimate that another 8 million species of plants, animals, fungi, bacteria,
protists, viruses, and archaea remain undiscovered.
That‘s a seriously big website. We‘re talking Wikipedia big. (The famous free
online encyclopedia, begun in 2001, has 2.6 million articles in English alone, and over 10
million all told.) Which means the organizers of the Encyclopedia of Life (EOL for short)
are going to have to throw out the old playbook in taxonomy—the slow and meticulous
science of species classification, born 250 years ago this year with the publication of Carl
Linnaeus‘ Systema Naturae—and turn to the techniques of Web 2.0.
Specifically, they‘re going to have to rely on thousands of amateur naturalists to
collect and submit data for the encyclopedia. But that creates a fascinating problem: How
do you partake of the revolution in ―user-generated content,‖ as Wikipedia has done,
while keeping the material you publish wholly factual and stable—as it ought to be if the
Encyclopedia of Life is to be a useful resource for scientists, students, and policy makers,
and as Wikipedia manifestly is not? I‘ve been reading up on the EOL project this week,
and as far as I can tell, the organizers haven‘t yet worked out a thorough answer to that
question.
Of course, not all of the material in EOL will be user-generated. A big part of the
concept for the encyclopedia, a $40 million project funded by the John D. and Catherine
T. MacArthur Foundation and the Alfred P. Sloan Foundation, is that it will be a classic
Web 2.0 ―mashup.‖ You‘re probably familiar with RSS news readers, which assemble
headlines and stories from hundreds of separate websites; in a similar way, EOL will use
Web-based aggregation technology under development at the Marine Biological
Laboratory in Woods Hole, MA, to suck in and recompile information from existing
online species databases, such as the uBio NameBank, iSpecies, FishBase,
AmphibiaWeb, and North American Mammals.
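The aggregation pattern itself is conceptually simple, even if the MBL software is far more sophisticated than anything I could reproduce here. As a toy illustration only (the species record and database contents below are made up for the example; this is not EOL's actual code), merging records for one species from several independent databases might look like this:

```python
# Toy sketch of the "mashup" pattern: pull every source's fields for one
# species into a single combined record, keeping track of which database
# each fact came from. The data below is invented for illustration.

def build_species_page(species_name, sources):
    """Merge each source's fields for one species into one record."""
    page = {"name": species_name, "fields": {}}
    for source_name, records in sources.items():
        record = records.get(species_name, {})
        for field, value in record.items():
            # keep every value, attributed to the database it came from
            page["fields"].setdefault(field, []).append((source_name, value))
    return page

# Two hypothetical databases with overlapping coverage of the same species:
fishbase = {"Gadus morhua": {"common_name": "Atlantic cod", "habitat": "marine"}}
namebank = {"Gadus morhua": {"common_name": "cod", "authority": "Linnaeus, 1758"}}

page = build_species_page("Gadus morhua",
                          {"FishBase": fishbase, "NameBank": namebank})
print(page["fields"]["common_name"])
# [('FishBase', 'Atlantic cod'), ('NameBank', 'cod')]
```

The hard part, of course, isn't the merge; it's the fact that real databases disagree about names, spellings, and classifications, which is exactly why the scientific curation question looms so large for EOL.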
Another big part of EOL involves digitizing millions of print books and journal
articles in 10 of the world‘s leading natural history libraries, including the Harvard
University Botany Libraries and the Ernst Mayr Library of the Museum of Comparative
Zoology here in Cambridge (the full list is here). The hope is that once this information
has been scanned, run through optical-character recognition software, and automatically
tagged with the appropriate metadata, it will be possible to access passages from the
scientific literature from the relevant species pages in EOL. Say you‘re researching
Nicrophorus americanus, the American Burying Beetle—a colorful but critically
endangered species for which there‘s already a very nice page at EOL. The encyclopedia
may lead you to, among other resources, a detailed description published in the
Proceedings of the Academy of Natural Sciences of Philadelphia in 1853.
The problem is that scanning, classifying, editing, and mashing together all of that
material is going to take years, especially given that it‘s all being done on the cheap
(EOL and its digitization partner, the Biodiversity Heritage Library, have nothing like the
amount of money Google is spending on its Book Search project). But ―EOL must show
some results and value quickly‖ if it is to be taken seriously by scientists, funders, and the
public at large, as the project‘s own planning documents acknowledge.
It‘s hard to see how the current plan, spelled out in the planning documents and the
project‘s FAQ page, will accomplish that. Each species page is to have a volunteer
―curator,‖ a competent scientist responsible for authenticating information submitted by
contributors before it‘s published. Unfortunately, the world population of trained
taxonomists is only about 6,000, according to E. O. Wilson, the famed Harvard biologist
who conceived EOL and is the project‘s honorary chairman. So if you left the curating to
the real experts, they‘d each have 300 species to oversee (counting just the known
species, not the millions more that are likely to be identified as scientists attempt to fill
out the encyclopedia). Even if you spread that work around to other types of scientists,
every curator would have dozens of pages to maintain—on top of their actual, paid jobs
as university faculty or government scientists.
EOL documents acknowledge that in the long run, the experts‘ time is too
precious—in other words, that the encyclopedia will have to be ―consumer-driven.‖ And
already, the organization has made some tentative steps toward soliciting material from
the general public. There‘s an EOL group on the photo-sharing site Flickr where you can
upload your own photographs of plants and animals; if you identify the photos with the
correct machine-readable species names, they will eventually be folded into the relevant
pages of the encyclopedia, the organization says. Over time, EOL‘s planners envision a
greater role for people they call ―citizen scientists,‖ meaning, for example, amateur
horticulturists, biology students doing classroom ecology projects, and the thousands of
birdwatchers who participate in annual species censuses. The species pages, according to
EOL documents, ―will provide a framework for natural history societies and related
citizen groups to undertake local inventories of the flora and fauna in their area,‖ efforts
that will ―produce a national inventory of biodiversity which can be used in modeling a
wide range of phenomena, from climate change to human impacts.‖
That all sounds great—but I don‘t think EOL has taken the citizen scientist concept
far enough. The only way for the encyclopedia to get big fast, I submit, is to take the full
Web 2.0 plunge. This would mean opening up the site to direct involvement by amateur
enthusiasts, Wikipedia-style.
Of course, it would also mean opening up the encyclopedia to misinformation, bad
writing, spam, and revert wars (the internecine battles, all too common on Wikipedia, in
which writers and editors endlessly undo one another‘s changes). But there is cause to
hope that these problems can be minimized. While no one argues that the Wikipedia
articles are 100 percent reliable, the Wikipedia community has developed remarkably
rapid procedures for detecting and correcting errors and vandalism. Indeed, because
Wikipedia is Web-based and wired into a community of volunteer editors who are
automatically notified of every change to every page, errors there are found and fixed
much faster than those in print-based publications like the Encyclopaedia Britannica.
If EOL allowed citizen scientists to serve as curators, my guess is that they‘d be
even more vigilant than Wikipedia editors—especially given EOL‘s rather lofty and
inspiring goal, which is nothing less than to catalogue and preserve Earth‘s biodiversity.
And there‘s nothing (except lack of time) to prevent trained scientists from coming along
later and vetting the material that non-experts create.
EOL says it‘s also working on something called LifeDesk, a Drupal-based
application for building EOL pages that will be made available first to taxonomists but
may eventually be opened up to non-experts. I believe that tool should be rolled out to
amateur natural historians as soon as possible. EOL‘s steering committee and advisory
board—a collection of rather senior figures from leading natural history museums,
biology labs, botanical gardens, and foundations (San Diego Xconomist Larry Smarr is
one advisor)—have already embraced the Wikipedia-style notions of open source software
and free distribution of knowledge. Here's hoping they will also see the benefit of
harnessing nature lovers' enthusiasm for learning and sharing.
Author's Update, February 2010: It appears that since the time I wrote this column,
the Encyclopedia of Life has opened up somewhat to non-professional contributors.
"Anybody can register as an EOL member and add text, images, videos, comments or
tags to EOL pages," the site now says. But there's still a cadre of expert curators who
"ensure quality of the core collection by authenticating materials submitted by diverse
projects and individual contributors."
28: In Google Book Search Settlement,
Readers Lose
October 31, 2008
The biggest development in the digital media world this week, by far, was the
settlement of a pair of class-action copyright-infringement lawsuits brought against
Google in 2005 by the Authors Guild, the Association of American Publishers, and
several publishing houses. The compromise agreement, which was announced October 28
and now awaits approval by the federal courts, could eventually result in improved access
to books, especially the millions of books that are no longer in print but are still covered
by U.S. copyrights. It promises to free Google to move forward with its ambitious library
digitization effort, which will put a vast collection of literature at the fingertips of
students, researchers, and at least a few public library patrons. It should also placate the
Chicken Littles in the publishing industry, who have spent years using every available
means, including the Google lawsuit itself, to obstruct the sharing of knowledge enabled
by the digital revolution.
But for readers—the group whose interests are closest to my own heart, and the
only major class of stakeholders in the lawsuit whose interests weren‘t being protected by
a team of well-paid attorneys—the Book Search settlement contains some major
disappointments. I should emphasize that I am not a lawyer, and I have only spent a few
hours studying the settlement agreement. (It‘s 323 pages long, which may explain why it
took the parties more than two years to negotiate a solution.) But I‘m saddened by the
gap between the level of open access to literature that was considered possible when
Google first launched its project to digitize millions of library books and what we‘re
probably going to get as a result of this agreement.
Specifically, the settlement seems to put an end to hopes that the Google Library
Project would result in widespread free or low-cost electronic access to books that are out
of print but have not yet passed into the public domain. These books—and there are
millions of them—are in a painful state of limbo. They‘re deemed commercially non-
viable by their original publishers, so you can‘t find them in most bookstores. Yet no one
else can republish them without getting permission from the original copyright holders or
their heirs or assignees—and for many so-called ―orphan works,‖ these rightsholders
can‘t even be identified or located. So the only way to read one of these books is to find a
copy at a used bookseller, or figure out which public or academic library owns a copy,
and then physically travel there.
The hope was that Google—consistent with its stated mission to ―organize the
world‘s information and make it universally accessible and useful‖—would simplify
access to these out-of-print but still-presumptively-copyrighted books by sharing their
full text over the Internet at little or no cost to readers, the same way it does with the
public-domain books it has digitized. (Under U.S. law, the copyright on all works
published before January 1, 1923, has irrevocably expired, and Google lets readers peruse
and download these books for free. If you click here, for example, you can read my
favorite novel of all time, E.M. Forster‘s 1910 masterpiece Howards End.)
If Google had chosen to take the lawsuit to trial and prevailed, it might have been at
liberty to do this, monetizing the practice (just as it monetizes all of its other services)
through keyword-based advertising. Such a service would have been a great boon to
readers everywhere. Indeed, when I interviewed a bunch of librarians about the Google
initiative back in 2005, before the lawsuit, most of them were ecstatic: they‘d been
waiting for years for someone to come along and help them put their collections online. I
bet Google could even have charged a little something for the service—after all, nobody
else is trying to scan so many library books (7 million of them so far).
Alas, the nation‘s authors and publishers organized a campaign to stop Google.
Letting avarice run roughshod over common sense and the common good, the plaintiffs
in Authors Guild et al. v. Google and McGraw Hill et al. v. Google argued that the very
act of scanning an in-copyright book without the rightsholder‘s permission is an
egregious copyright violation. Even the short snippets of text that Google Book Search
serves up among its search results were too much for these groups to stomach. (This
despite the fact that the courts long ago ratified the inclusion of snippets in general Web
search results as an example of ―fair use‖ under copyright law.)
It quickly became clear that the plaintiffs in the lawsuits would sooner see out-of-
print books remain in limbo forever than sacrifice one penny of potential profit to
Google. No matter that these authors and publishers weren‘t even marketing the books
Google was scanning: if the rightsholders themselves couldn‘t figure out how to make
money on their out-of-print titles, no upstart search-gizmo company was going to, either.
It may surprise you that, as a writer, I‘m on Google‘s side in this dispute. But my
point of view is that decent writers can always find ways to get paid for their work. They
shouldn‘t have to leech off the people who have the vision and the expertise to bring out
the latent value in the world‘s common heritage of information. More generally, I
continue to be astonished by the hostility so many writers and publishers display toward
Google, which, to my mind, is the best thing to happen to intellectuals since the First
Amendment.
Apparently concluding that a compromise would be preferable to a risky, extended,
and costly trial, Google and its opponents negotiated the settlement proposed this week. It
is an exhaustive, labyrinthine document (and one you are free to download, seeing as
government documents aren‘t subject to copyright). The main provisions are these: The
Authors Guild, the AAP, the publishing houses, and all members of the class represented
in the action will drop the suit and waive all claims against Google. Google admits no
wrongdoing, but will make payments totaling $125 million, including $45 million for the
owners of the copyrights on the books it has already scanned—or about $60 per book,
depending on how many rightsholders file claims.
Google is authorized under the settlement to continue the library scanning project.
What‘s more, it can sell access to the full text of the books it scans, in the form of
subscriptions that will be available to institutions such as libraries and corporations, and
in the form of individual books that consumers can download or read online. But the bulk
of the revenues Google collects—63 percent, to be exact—will go to those books‘
copyright holders. Google will pay $34.5 million to set up a new, independent Book
Rights Registry to track the revenues and issue the payments. If they wish, copyright
owners can choose to exclude their books from any or all of these arrangements. (The
remaining $45.5 million of the settlement payment will apparently go to the plaintiffs‘
attorneys, though I couldn‘t find this spelled out anywhere in the document.)
The parties to the settlement were anxious to project harmony and sunlight,
repeatedly calling the agreement a win-win. ―This historic settlement is a win for
everyone,‖ Richard Sarnoff, chair of the AAP, said in a joint press release. ―The
agreement creates an innovative framework for the use of copyrighted material in a
rapidly digitizing world, serves readers by enabling broader access to a huge trove of
hard-to-find books, and benefits the publishing community by establishing an attractive
commercial model that offers both control and choice to the rightsholder.‖
―While this agreement is a real win-win for all of us, the real victors are all the
readers,‖ chimed in Google co-founder Sergey Brin. ―The tremendous wealth of
knowledge that lies within the books of the world will now be at their fingertips.‖
Perhaps—but at what cost? While the settlement does give Google the right to offer
full access to the scanned books, the problem is that it‘s likely to be at sky-high prices.
The agreement provides two mechanisms for setting the cost of online access to books:
either rightsholders can name their own price, or Google will automatically set a price
between $1.99 and $29.99 using an algorithm designed, in the words of the agreement, to
―maximize revenue for each rightsholder.‖ Neither option sounds very palatable to me.
Indeed, pricing is apparently one of the major issues that have kept several of the
libraries that were initially part of the Google Library Project from endorsing the
settlement. Harvard University, for example, opted this week to cut off Google‘s access
to its in-copyright books (though it may continue to allow the scanning of its public
domain books). ―As we understand it, the settlement contains too many potential
limitations on access to and use of the books by members of the higher education
community and by patrons of public libraries,‖ Harvard University Library director
Robert Darnton wrote this week in a letter to library staff quoted by the Harvard
Crimson. Darnton continued: ―The settlement provides no assurance that the prices
charged for access will be reasonable, especially since the subscription services will have
no real competitors [and] the scope of access to the digitized books is in various ways
both limited and uncertain.‖
And there‘s another provision of the settlement that spells out, to me, just how
parsimonious the plaintiffs‘ attitude really is. Under the agreement, the authors and
publishers give Google permission to provide every public library in the United States
with free access to the books database. That sounds great, on the surface. As Authors
Guild president Roy Blount Jr. put it in a message to members about the settlement,
―Readers wanting to view books online in their entirety for free need only reacquaint
themselves with their participating local public library: every public library building is
entitled to a free, view-only license to the collection.‖
But the devil, again, is in the details. If you read the agreement, you‘ll see that it
restricts each public library to exactly one Google terminal. Tens of millions of books
online—but at any given moment, no more than 16,543 people are allowed to read them
without paying. (That‘s how many public libraries and branches there are in the United
States, according to the American Library Association—one for every 18,500
Americans.)
That, to me, about sums it up. Even in this digital age, the organizations
representing authors and publishers are saying that free access to out-of-print books
should be restricted to people who can a) make the physical journey to a library and b)
beat their neighbors to the computer room.
There‘s something fundamentally medieval about the philosophy that seems to have
guided the plaintiffs through the entire Google lawsuit: namely, that profits can only be
protected by imposing scarcity. One gets the sense that if they could, the authors and
publishers who sued Google would do away with libraries altogether—and that the
bloody Internet would be next on their list. Fie on Google, fie!
Update, January 30, 2009: The parties to the Google Book Search settlement have
begun the process of notifying authors and publishers about their rights and options under
the settlement. I got a note from one of the firms helping to administer the settlement
asking me to update this post with a link to the court-approved website where authors can
find claim forms and the like. So: http://www.googlebooksettlement.com.
Update, April 18, 2009: O‘Reilly Radar has published an excellent blog post by
guest blogger Pamela Samuelson, a professor of law and information at the University of
California, Berkeley, and a director of the Berkeley Center for Law & Technology,
analyzing the Google Book Search settlement. She calls the proposed settlement
"galling" and "worrisome" and points out that by acceding to the Authors Guild's and the
AAP‘s claims to represent entire classes of authors and publishers, Google has gained a
monopoly on digital distribution of orphan works with ―considerable freedom to set
prices and terms‖—a monopoly which would be very difficult for any other party, even
Amazon or Microsoft, to challenge. Samuelson concludes: ―The Book Search agreement
is not really a settlement of a dispute over whether scanning books to index them is fair
use. It is a major restructuring of the book industry‘s future without meaningful
government oversight. The market for digitized orphan books could be competitive, but
will not be if this settlement is approved as is.‖ Cory Doctorow at BoingBoing has a
smart commentary on Samuelson‘s post.
Author's Update, February 2010: After more than a year of work on revisions in
response to complaints from many quarters, including the Department of Justice, Google and
the other parties to the settlement submitted a revised version of the
agreement to the federal court in November 2009. It gives oversight of orphan works and
any revenues they generate to an independent body, and leaves more room for other
companies to compete with Google by arranging separate digitization efforts. District
Court judge Denny Chin has scheduled a "fairness hearing" on the revised settlement for
February 18, 2010.
29: In the World of Total Information
Awareness, “The Last Enemy” Is Us
November 7, 2008
If you thought the notorious Total Information Awareness program went away
when Congress eliminated funding for the Pentagon‘s mass-surveillance experiment in
2003, you were misled. The program itself may have been dismantled, but as an
investigation by the Wall Street Journal detailed in March, many pieces of it were simply
transferred to other federal agencies, where they‘re now part of a massive effort to mine
U.S. residents‘ e-mail messages, bank transfers, credit-card transactions, travel records,
Web searches, and telephone records for signs of terrorist conspiracy. Suspects identified
by this mining can be targeted by the National Security Agency‘s Terrorist Surveillance
Program for wiretapping and other searches without a warrant—a practice authorized by
President Bush in 2002, first publicly exposed by the New York Times in 2005, and
legalized by Congress in 2007.
Exactly what kind of a world are we building with these domestic spying
programs—and could we unbuild it now, even if we wanted to? Those are the questions
posed by a fictional-but-realistic BBC miniseries, ―The Last Enemy,‖ that concluded this
week on PBS. I highly recommend it—and if you rush, you can still watch the whole
five-hour series at the PBS website (it‘s available online until November 9). You can also
pre-order a DVD of the series for delivery in January.
In an interesting bit of timing on PBS‘s part, the series closer aired on November 2,
just two days before Americans decisively turned away from the Bush-Cheney legacy
and its shocking assault on civil liberties in favor of a President-elect, Barack Obama,
who has worked in the Senate to rein in the Patriot Act and who promised during the
campaign that he would end warrantless wiretaps. We may not know until after January
20 where an overhaul of the nation‘s intelligence-gathering apparatus will rank on
Obama‘s priority list. But the moment is clearly ripe for a rollback of many of the abuses
perpetrated by the Bush administration in the name of national security.
What could happen if democratic societies continue to sacrifice liberty for the
appearance of security is the subject of ―The Last Enemy,‖ a depressing tale set in
London in the year 2011. Closed-circuit surveillance is ubiquitous (not much of a stretch,
given that Britain already has 5 million closed-circuit cameras) and every citizen must
carry an ID card linked to their thumbprint and iris scan (also not much of a stretch—the
British parliament passed a national identity card act in 2006, and starting in 2010
everyone who applies for a passport will be issued a card and placed in a national identity
register). In this near-future world, the government is in the final testing phases of an all-
encompassing national intelligence database called (you guessed it) Total Information
Awareness.
As the story begins, a brilliant, antisocial mathematician, Stephen Ezard, is
returning from self-imposed exile in China to attend the funeral of his brother, an
international aid worker supposedly killed in a roadside bombing in Afghanistan. Stephen
gradually learns that refugees treated in his brother‘s camp have been dying from a
tainted hepatitis vaccine, and that his brother was working to expose the government‘s
cover-up. Stephen promptly falls in love with his brother‘s widow, and is asked by the
British government to evaluate—and then assist with public relations for—TIA. We soon
begin to suspect that the government has invited Stephen into the program simply to keep
a closer eye on him. He gets a couple of steps ahead of his minders, and figures out how
to exploit the database to track down vaccine researchers who might help to untangle the
conspiracy. But that leads to some nasty surprises—and I won‘t give away any more of
the story.
The writing and acting in ―The Last Enemy‖ are a bit duller than what I usually
expect from the BBC, but the story is well-researched and chillingly plausible. If it were
shorter, I‘d say that it should be mandatory viewing for high school and college civics
classes. What‘s most disturbing about the show‘s plot is the way that Stephen‘s attempts
to evade TIA‘s web (once he begins to learn how deep the conspiracy goes) are taken as
de facto evidence that he‘s a danger to national security. How often has it been said that
surveillance programs are harmless, since innocent, law-abiding citizens have nothing to
hide? The problem with this logic, of course, is its dark corollary—that anyone who
seems to be hiding something must be guilty.
I‘ve always been amazed by the British flair for technological dystopianism—just
think of Orwell‘s 1984, Terry Gilliam‘s ―Brazil,‖ and the utterly devastating ―28 Days
Later.‖ If I had to guess at an explanation for this phenomenon, I‘d say that England had
a front-row view as her sister industrial democracy, Germany, descended into Fascism in
the 1930s and 1940s. In the aftermath, a few British authors and filmmakers have been
sufficiently honest and courageous to point out related tendencies in their own society,
like xenophobia, grandiosity, technological triumphalism, and a fetish for bureaucracy
and authority figures.
As the Bush-Cheney era finally lifts, will Americans take an equally honest look at
how 9/11 exacerbated our own none-too-latent xenophobia? Will our government come
to understand that constant electronic scrutiny is itself a violation of our privacy? Not
without some pushing. Yesterday, the American Civil Liberties Union published a
transition plan calling on Obama to ―begin repairing the damage to freedom‖ on day one
of his presidency by, among many other things, prohibiting the National Security Agency
from monitoring the communications of U.S. citizens and residents without a warrant. He
will doubtless have bigger things on his mind, like preventing a depression, exiting Iraq,
and stabilizing Afghanistan. But through his choice of an attorney general and his early
policies on issues such as implementing a civil-liberties board to oversee the Patriot Act,
Obama has the opportunity to reverse eight years of progress toward a total-surveillance
state. To push through legislation that heads off new abuses in the future, he‘ll need the
voices of concerned citizens behind him. And if, in the end, we can‘t elect leaders who
will restore and respect our liberties, then perhaps we deserve to be treated like the
enemy.
30: Attention, Startups: Move to New
England. Your Gay Employees Will
Thank You.
November 14, 2008
If you‘re trying to decide where to build your new tech startup, California obviously
has a lot of attractions. You‘ll be close to the heart of the venture capital community.
Non-compete agreements, which are said to slow innovation in states like Massachusetts,
are illegal in the Golden State. The weather is beautiful year-round. And let‘s face it, it‘s
where all the cool kids live.
But now there‘s a reason to rethink going to California. If you do, you‘ll be sending
your employees to a state where a majority of the voting population says gay people
aren‘t entitled to equal rights under the law.
On Election Day, 52 percent of California voters approved a ballot measure called
Proposition 8, which adds a single line to the state‘s constitution: ―Only marriage
between a man and a woman is valid or recognized in California.‖ The proposition
overturns a State Supreme Court decision this May that gave gay and lesbian couples full
rights to marry. As far as I know, it‘s the first time a group of citizens has fought for and
won the right to marry, only to have that right taken away.
So, despite the fact that some 18,000 gay and lesbian couples have married in
California since June without incident, residents have decided that only heterosexual
couples are entitled to have their unions recognized and protected by the state. That
means Massachusetts—and, as of this week, Connecticut—are now the only two U.S.
states where gay people have full marriage rights, forming the country‘s strongest bastion
against one of the last acceptable prejudices, homophobia.
I am gay. It hasn‘t come up here before, but it‘s no secret. For almost 10 years,
California was my adopted home state—but boy, am I happy that I came back to Boston
last year to work for Xconomy. It makes a huge difference to live in a place where I feel
welcomed—to know that if I had a life partner, all of the public and private benefits of
marriage (except those still denied under Federal law) would be available to us
automatically, just as they are to straight couples. This, after all, is the place where the
state‘s highest court, in the 2003 decision that legalized gay marriage, declared that
―without the right to marry…one is excluded from the full range of human experience‖
and that the state constitution ―forbids the creation of second-class citizens.‖
It‘s time for Massachusetts and Connecticut to turn that recognition into a
marketing advantage. As we reported a few days before the election, a group of 22
biotech executives in San Diego teamed up to urge their regional trade-industry group,
Biocom, to oppose Proposition 8 as a drag on recruiting. ―The governor of Massachusetts
has made it very clear that he recognizes this is a competitive and lucrative industry and
he‘d do everything he can to attract companies,‖ Laurent Fischer, CEO of Ocera
Therapeutics, told the San Diego Union-Tribune. ―And this is a sure opportunity for
Massachusetts to feature its benefits that are not available in California should
Proposition 8 pass.‖
Fischer was absolutely right. And now the ball is in New England‘s court. The
Commonwealth of Massachusetts‘ Office of Business Development, which does a great
job of pitching the benefits of locating in the state, should stress the state‘s gay-friendly
credentials to technology and life sciences companies in its brochures and PowerPoints,
and maybe even take out a few ads in places like the Los Angeles Times and the San
Francisco Chronicle. Governor Deval Patrick—whose 18-year-old daughter came out as
gay this summer—should go on a trade mission to California and see if he can lure a few
progressive companies away from places like San Francisco and Silicon Valley. And
Boston‘s venture capital firms and angel investors should lean on their portfolio
companies to pick a home state where all of their employees will be treated equally.
Now, one can make the argument that if technology companies left or avoided
California and the other states that discriminate on the basis of sexual orientation, it
would simply drain those states of the liberal voters who will eventually be needed to
help reverse measures like Proposition 8. But all I‘m saying is that company founders
who have a choice of locations—and this includes most of the young entrepreneurs
coming out of incubators like Y Combinator or DreamIt Ventures—should think about
what kind of environment they want to provide for their employees. If they want to send
a message of inclusion and equality, they should either set up shop in Massachusetts or
Connecticut, or join the legal and political battles to overturn same-sex marriage
prohibitions in the states where they do locate.
What makes the passage of Proposition 8 and similar gay-marriage bans in Arizona
and Florida all the more mystifying, of course, is that it came on the same day that
Americans turned the corner on centuries of racial prejudice by electing an African-
American as President.
Like many others around the world, I‘m inspired by Barack Obama‘s historic
victory. But inspiration aside, it‘s hard for me to understand how Obama himself can be
in favor of separate-but-supposedly-equal civil unions for gay people, as opposed to full
marriage rights. Obama came out against Proposition 8, but he did little to campaign
against it. What particularly puzzles me is how someone whose own parents would not
have been allowed to marry under the anti-miscegenation laws still in force in many
states in the 1960s can take the position Obama has; I can only surmise that it‘s an act of
political pragmatism.
Eventually, I have no doubt, Americans from the White House on down will stop
differentiating between civil rights for racial minorities and civil rights for gay people.
Meanwhile, Californians and people in the 28 other states with constitutional gay-
marriage bans need to learn that there‘s a price for their prejudice. Progressive
entrepreneurs and investors should vote with their feet and their dollars—and send their
startups to Massachusetts and Connecticut.
Author's Update, February 2010: Vermont joined Massachusetts and Connecticut
in the group of New England states allowing same-sex marriage in April 2009. Maine
briefly joined this group as well—a gay-marriage law was approved by the state's legislature
and governor in May 2009—but Maine voters repealed the law at the ballot box in
November.
31: Springpad Wants to Be Your Online
Home for the Holidays, And After
November 21, 2008
If you‘re like me, you go through life with the vague hope that someday,
technology will help you become a more efficient person. How often I‘ve driven to the
grocery store or the library to pick up one thing, knowing full well that there‘s some other
item I needed, but that I‘ll never be able to locate it beneath the dust bunnies of my
memory.
New tools for tidying up one's brain come along all the time, of course: the Filofax
of the 1990s gave way to the PalmPilot, which eventually gave way to online
services like Jott, Evernote, Remember the Milk, and Ta-Da List, and to the hundreds of
personal productivity applications available for platforms like the iPhone. There‘s even a
whole website, Lifehacker, devoted to tracking such technologies.
But I‘m still waiting for the über-application, the one central online repository that
will allow me to a) file away all of the noteworthy bits of information coming in every
day via e-mail, snail mail, catalogs, the blogs and websites I read, the mass media,
billboards and posters, and the like, b) curate that information—that is, organize,
annotate, tag, rearrange, and share it, and c) retrieve it when and where I really need it,
whether I‘m using my computer or my cell phone. The tool that currently comes closest
to doing all that, for me, is Evernote, created by the Sunnyvale, CA, startup of the same
name (I wrote a column about Evernote back in July). But now there‘s a promising New
England candidate, though it‘s still in its embryonic stages: Springpad, an online
notebook service launched in beta form last week by Boston-based Spring Partners.
Springpad is a system for creating customizable, task-oriented Web pages called,
logically enough, springpads. To each springpad, you can add blocks of data such as text
notes, to-do lists, contacts, calendar events, maps, and digital documents such as photos.
You can build as many springpads as you want for the various tasks in your life. The
company provides useful starting springpads designed for dozens of activities, from
planning a vacation to tracking your pet‘s medical records. There‘s a powerful personal
database system under the hood that allows you to tag, search, and share individual
blocks, and Spring Partners—a 10-person, venture-backed startup located in Boston‘s
quaint Charlestown neighborhood—is working on add-ons such as an iPhone app and a
Web clipper that will allow you to send information you find on the go or on the Web
directly into your springpads.
If you go to Springpad right now, you might get the impression that it‘s all about
holiday planning—the same way MyPunchbowl is all about party planning or Geezeo is
all about financial management (both of those life-tool startups happen to be located in
the Boston area too). But the Thanksgiving and Christmas motif at Springpad is a bit
misleading—and actually represents a marketing gamble of sorts for the startup.
As co-founder and CEO Jeff Janer explained to me when I visited the company
Wednesday, the team had to start somewhere. Spring Partners—which consists almost
entirely of transplants from Boston-based mobile advertising company Third Screen
Media, acquired by AOL in 2007—has extremely ambitious plans for Springpad. Janer
sees it as the central place for consumers, starting with the Web-savvy 25- to 35-year-old
demographic, to organize all their life activities—shopping, chores, hobbies, eating out,
exercise, travel, research, you name it. He describes it as a kind of anti-Facebook: a place
to focus not on your social network but on yourself and all the tasks and information you
have to manage.
But that‘s a lot to explain to prospective users—and historically, quite a few super-
duper personal information management tools have fallen victim to what Janer calls
―blank slate syndrome,‖ the problem of having a great tool in front of you, but not
knowing what to put into it.
So that‘s why Springpad‘s front pages are currently full of the kind of tips and
advice you might find on the cover of the December issue of Better Homes & Gardens or
Real Simple: an ―8-week Holiday Preparation List,‖ a ―Christmas Card Log,‖ a ―Holiday
Meal Planner.‖ The tips are linked to pre-built templates that guide users through the
traditional tasks related to Thanksgiving, Chanukah, Christmas, and New Year‘s
celebrations. ―The idea was to show people, in a focused way, how to survive the
holidays,‖ says Janer. ―Yes, there are all these other templates and features and
functionalities available, but the full platform is still under development, and we wanted
to get some users into the system and start gathering some feedback.‖
So one of Springpad‘s first challenges, to my mind, will be to avoid becoming
known simply as a holiday-planning site—or, come January, a wedding-planning site.
―Our notion is to roll this out on some sort of editorial calendar and look at what people‘s
needs are on a seasonal, topical basis, introducing Springpads that are very focused,‖ says
Janer. ―Now, by doing that are we creating awareness for a platform, or for a specific
solution? I don‘t quite know the answer yet.‖
Another challenge will be to show users how to take advantage of all the features
built into the platform once they feel confident enough to venture beyond the pre-built
templates like the holiday planners. From playing around a little bit with Springpad, I
have the sense that it‘s highly versatile, and that people will come up with many
interesting uses for it that Janer and his team haven‘t even imagined. But beyond the pre-
formatted springpads, the company doesn‘t yet provide much in the way of tips or
support on how to employ all its tools. (One exception is the nice introductory video,
embedded below.)
And one more key task—the one that could really differentiate Springpad from
other personal information management tools, if the company succeeds at it—will be to
provide more points of integration between Springpad and the dozens of other consumer-
oriented Web services springing up these days. Almost every Web 2.0 company worthy
of the name provides application programming interfaces (APIs) that outside developers
can use to grab and repurpose their data. Spring Partners‘ software engineers have
already taken advantage of a few of these: you can import your appointments from
Google Calendar and restaurant reviews from Yelp and make dining reservations using
OpenTable, for example. But there‘s a ton more that the company could do in this vein.
Some of the more obvious things to add would be shopping lists that link directly to
Amazon or Peapod, or calendars that alert members to concerts and other events in their
areas and link to Ticketmaster, or health and fitness planners that link to online medical
records at Google Health or Microsoft HealthVault.
A very cool feature that could become a signature of the Springpad service is the
―Springit badge.‖ You can see how this feature works by going to TheSimpleMe.com or
SpringAdvice.com, blogs where Spring Partners employees collect material from around
the Web that can be adapted into springpads. For example, there‘s a post at
springadvice.com about the website Dumb Little Man, which recently published a list
entitled ―The 9 Best Ways to Get Organized by Year‘s End.‖ Springadvice.com provides
a Springit badge that will automatically turn this top-9 list into a task list in your
Springpad account. Janer showed me how the company is working with online publishers
to embed Springit badges alongside all sorts of Web content—for example, an article at
HGTV.com on how to reorganize your garage. (When Springpad sucks in such content
from external sources, it can bring ads along with it, which Janer sees as one of the
important revenue sources for the company.)
Janer says the response to Springpad has been gratifying so far. He says the site got
a huge influx of users this week—putting some strain on the company‘s servers, in fact—
when Lifehacker published a post on it.
Those new users certainly won‘t suffer from blank slate syndrome. But my guess is
that the tool‘s real utility won‘t become apparent until the company has had time to
introduce key features like a Web clipper and a mobile application, and to get Springit
badges embedded in more places around the Web. Then we‘ll see whether it has the
potential to be the über-organizing solution that finally banishes the dust bunnies.
Author's Update, February 2010: Spring Partners rolled out a Web clipper for
Springpad in January 2009. I took an updated look at Springpad in this July 14, 2009
Xconomy profile.
32: Speak & Spell: New Apps Turn Phones into Multimedia Search Appliances
December 5, 2008
About five years ago, in a previous life at another technology publication, I wrote
that I wished I could ―Google my sock drawer.‖ I was being facetious, but my point was
that searching the Web had become so easy that it left me yearning for equally
convenient ways to search other things, like the books in my local library, the stores in
my neighborhood, the recordings in my CD or DVD collection, even the everyday stuff
in my house.
Well, the idea of searching your sock drawer isn‘t so tongue-in-cheek anymore.
You still can‘t ask Google to find the missing half of your favorite argyles—but you can
use the new Amazon Mobile app to take a picture of your sock drawer, then have
Amazon send you a link to a page where you can buy a matching pair online.
You can also use the popular Shazam app on the iPhone to capture a few seconds of
a song playing on the radio, and find out instantly what it‘s called, who recorded it, and
where to buy it. You can use the Street View feature of the new-and-improved Google
Maps application on the iPhone to take a virtual stroll down Boston‘s Newbury Street and
decide which stores you want to visit. Once you get there, you can use a location-aware
app like Urbanspoon or Yelp to find interesting restaurants. And you don‘t even have to
type in your search terms anymore: Vlingo‘s new iPhone app and the latest version of the
Google Mobile app can work with spoken-word input just as easily.
My point is that the newest search-related applications, especially those for
advanced wireless devices like the iPhone, are lending new meaning to the very concept
of search. Finding entertaining media, useful products, and interesting places no longer
requires a PC, a keyboard, a Web browser, or even a traditional search engine. On the
query side, devices like the iPhone 3G have built-in cameras and microphones that let
them capture unconventional types of input for a search, such as photos, spoken
instructions, or snippets of music. They can also fill in key pieces of context on their
own—for example, by grabbing your current location from the built-in GPS chip. On the
output side, the devices can supply links, reviews, videos, maps, even walking directions.
The end result—a new level of connectivity to the people, things, ideas, and places
around you—is, to my mind, one of the best reasons to invest in a broadband-capable
smartphone. (I admit to being an iPhone chauvinist, but similar experiences are available
on other gadgets, such as the high-end Blackberry devices and the T-Mobile G1 phone.)
I‘ve been playing around lately with three mobile search applications in particular.
Each one illustrates different strengths of the mobile platform. And together, they‘ve
brought me full circle, to the point where I wish that conventional desktop or laptop-
based search tools had some of the same capabilities as these mobile marvels.
The first is the new Google Mobile app on the iPhone, released November 14. The
app has two functions. It‘s a convenient portal to the browser-based versions of many of
Google‘s other Web services—Gmail, Google Calendar, Google Docs, Google Talk,
Google Reader, et cetera. But it‘s also a freestanding search engine, with a compelling
new keyboard-free ―voice search‖ option. All you have to do is lift the phone to your ear,
as if you were making a phone call; the iPhone‘s accelerometer takes that as the cue to
start listening for a spoken query, like ―Quantum of Solace Boston showtimes.‖ Take the
phone away from your ear, and the software sends your voice snippet to Google for
processing; within seconds, the search results show up on screen. There‘s a fun, Star Trek
quality to the whole operation, except that the phone doesn‘t talk back. (Maybe that‘s the
next improvement Google will roll out.)
Second, the new iPhone app from Cambridge, MA-based Vlingo, which came out
December 3, also lets you initiate Google searches by speaking. With Vlingo, you have to
tap the ―Press + Speak‖ button to start the process, rather than holding the phone up to
your ear, which is an annoyance once you‘ve gotten used to the Google method. But the
Vlingo app does do several cool things that the Google app doesn‘t. For example, you
can speak an address or business name and see the location on a Google map,
automatically call anyone in your contact list by speaking their name, or dictate a status
update for your Facebook or Twitter account. The Blackberry version of Vlingo‘s
speech-recognition app, which has been out since June, goes even further, letting users
dictate e-mail and text messages. I fully expect to see Vlingo‘s engineers add such
features to their iPhone app.
Both the Google Mobile app and the Vlingo app put an end to typing out search
queries. But despite my general enthusiasm for these new voice-driven mobile search
tools, I have to say that their speech-recognition algorithms still need work. Converting
speech to text seems to be one of those problems, like building a foolproof A/V system
for lecture halls, that experts are still going to be working on 30 years from now.
When I tried to speak the search term ―Rahm Emanuel‖ into the Google Mobile
app, the application came back with the transcriptions ―roman manual,‖ then ―robin
manual,‖ then ―brahmin manual.‖ (Thankfully, however, it got ―Barack Obama‖ right the
first time.) It fared a little better with ―Xconomy,‖ one of the words I always like to use to
torture speech-recognition systems, coming back first with ―astronomy‖ and ―taxonomy‖
and finally getting ―Xconomy‖ on the third try. The Vlingo app got ―Xconomy‖ right the
first time—but I have a hunch that‘s because the folks there knew I was evaluating the
software.
The third mobile search tool I‘ve been enjoying recently involves videos rather than
voice. It‘s WikiTap, an iPhone app released in September by Veveo, an Andover, MA-
based startup I‘ve covered several times. The app is a mobile-friendly mashup of
Wikipedia and YouTube. Those two information sources might, at first blush, seem to
blend about as seamlessly as Charlie Rose and Paris Hilton. But as it turns out, they go
together remarkably well.
Veveo‘s first mobile product was an ―incremental search‖ tool called vTap,
designed to make it easier to find the video you want on a mobile phone by narrowing
down the list of possible matches as you type. WikiTap works the same way. To find the
Wikipedia listing for my favorite film-score composer, Bernard Herrmann, I only had to
enter ―bernard h‖ and Herrmann popped up as the top match. But here‘s the really cool
thing about WikiTap: as soon as you click on a search result, the program brings up both
the Wikipedia listing and related videos culled from YouTube and other sources. The
videos presented alongside the Bernard Herrmann article, for example, included a
YouTube slide show featuring Kim Novak, star of Vertigo, one of the many Hitchcock
films Herrmann scored, followed by a video on the top 15 horror film themes of all time
(Herrmann‘s music for Psycho, of course, topped the list).
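Veveo's matching algorithms are proprietary, but the incremental-search behavior described above, where typing just "bernard h" is enough to surface Bernard Herrmann, can be sketched in a few lines of Python (the title list and the prefix-matching rule here are my own illustrative assumptions, not Veveo's implementation):

```python
def incremental_search(titles, query):
    """Return titles where every query token is a prefix of some word.

    'bernard h' matches 'Bernard Herrmann' because 'bernard' and 'h'
    are each prefixes of a word in the title (case-insensitive).
    """
    tokens = query.lower().split()
    results = []
    for title in titles:
        words = title.lower().split()
        if all(any(word.startswith(tok) for word in words) for tok in tokens):
            results.append(title)
    return results

composers = ["Bernard Herrmann", "Bernard Haitink",
             "John Williams", "Ennio Morricone"]

# Each new character narrows the candidate list:
print(incremental_search(composers, "bernard"))    # both Bernards
print(incremental_search(composers, "bernard h"))  # Herrmann and Haitink
print(incremental_search(composers, "bernard he")) # Herrmann only
```

The appeal on a phone keyboard is obvious: every keystroke prunes the list, so you rarely have to finish typing a name.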
I‘ve found that leaping back and forth between Wikipedia text articles and related
YouTube videos is a surprisingly fun way to kill a few hours. The pairing seems so
natural that I now feel like there‘s something missing when I visit Wikipedia on the
conventional Web. There‘s nothing new about the concept of multimedia reference
works, of course—encyclopedia publishers like Britannica have been publishing CD-
ROM and DVD-ROM versions of their content, spiced up with a few QuickTime videos,
since the mid-1990s. But in these older works, the videos (which usually turned out to be
clips from recycled 1960s educational documentaries) always felt to me like an
afterthought—they were there more because the platform could support them than
because anyone thought they were essential. WikiTap, like Wikipedia itself, is a gleefully
crowdsourced hodgepodge where you never know quite what you‘re going to find. The
whole point is to make unexpected connections, and to see and hear things that you can‘t
understand just by reading about them.
And that‘s the fun of the new mobile search applications in general: they lead you
into experiences you never would have had otherwise. Which is part of the reason I‘m
still feeling like the money I put down for my iPhone 3G is the best $299 I ever spent.
33: Former “Daily Show” Producer Karlin is Humorist Behind WonderGlen Comedy Site
December 11, 2008
Since October, the Internet has been abuzz with discussion about WonderGlen, a
fictional TV production company whose fake company intranet is a window onto the
obsessions of a staff that takes dysfunction well beyond the levels of Dunder Mifflin, the
fictional paper company in NBC‘s ―The Office.‖ Much of the buzz has focused on the
identities of the site‘s creators, unknown until now; speculation has centered at various
times on Los Angeles personalities such as director-producer Judd Apatow and on Jesse
Thorn, host of the podcast ―The Sound of Young America.‖ But according to a source
who contacted Xconomy this week, the force behind WonderGlen is Ben Karlin, the
former executive producer of ―The Daily Show with Jon Stewart‖ on Comedy Central.
Karlin, who was also executive producer of ―The Colbert Report,‖ stepped down
from the two hit shows in August 2007 to start Superego Industries, a TV, film, and new
media production company formed in partnership with HBO. The WonderGlen site is
Superego‘s first public project; Karlin‘s partner on the project, according to Xconomy‘s
source, is Will Reiser, a TV producer who has been involved in such projects as ―Da Ali
G Show,‖ a satirical interview program hosted by comedian Sacha Baron Cohen.
WonderGlen (also spelled Wonder Glen, with a space, in various locations on the
site) is an example of an increasingly common form of Internet pseudo-hoax or viral
marketing campaign—a website that purports to be real but which, upon further
examination, dissolves into an entertaining fiction or an amusing parody. I‘ve written
about such sites before—one of my favorites is one devoted to saving the endangered
Pacific Northwest Tree Octopus, created by Washington-based cartoonist and Web
publisher Lyle Zapato. But WonderGlen is one of the most elaborate examples of the
genre, and considering that the site first appeared almost three months ago and has been
widely discussed on the Internet, its creators have stayed anonymous for a surprisingly
long time. (Even lonelygirl15, the heroine of a viral YouTube series that debuted in June
2006, was ―outed‖ as a hired actor after only about three months.)
The news of Karlin and Reiser‘s involvement in WonderGlen comes via Christel
Whittier, the director of business development at a West Hollywood, CA, Web
production studio called FanRocket. Whittier, who first contacted me earlier this week,
said she did so because she is from the Boston area and is a longtime reader and
―personal fan‖ of Xconomy. I‘m not clear on the connection between FanRocket and
Superego Industries, and I have to admit I wondered a bit if I was being spoofed myself,
but so far Whittier seems legit, and tonight she forwarded me a statement with the details
on WonderGlen.
In that statement, WonderGlen Productions is described as ―a fictional film,
television and new media company with a very basic website. However, the real
discovery happens upon logging into the site‘s ‗intranet.‘‖ That intranet, which is updated
regularly and contains a smorgasbord of made-up messages between employees detailing
abortive but hilarious projects such as a home-improvement show featuring hobbit
houses, has been fascinating and irritating Web audiences since the WonderGlen site
surfaced in early October.
The site was featured on December 1 by the Internet newsletter Very Short List,
which called it a ―multifaceted, superdetailed send-up of corporate culture‖ reminiscent
of the work of Christopher Guest or Ricky Gervais (creator of the original British version
of ―The Office‖). The editors of Very Short List added: ―Half the fun consists of trying to
figure out who the site‘s actual authors and owners are. (Yes, we know—but, irritatingly,
we‘ve been sworn to secrecy.)‖
Karlin and Reiser, apparently exercising their pull in Hollywood, have enlisted
personalities such as actor James Franco (co-star of the Spider-Man movies) to help fuel
the viral campaign around WonderGlen; in a video posted on YouTube in late October,
Franco is the host of a satirical tribute to WonderGlen co-founder Aidan Weinglas, a gay
man who is never pictured without his Australian Shepherd dog and whose partner, Dr.
Dean Payne, is a New Age ―multi-modal therapist‖ who specializes in the treatment of
Gudjonsson-Payne‘s syndrome (―Pervasive fear of work, obligation, or commitment,
combined with high-risk sexual and drug-taking behavior‖).
Though the characters inhabiting the WonderGlen universe are a little too off to be
real, the WonderGlen site is full of links to absurd but apparently non-fictional content
elsewhere on the Web (such as a site about hobbit houses in Bend, OR), suggesting that
the WonderGlen project is, in part, an exercise in bending readers‘ sense of the boundary
between fiction and reality. But it is still not clear precisely why Karlin and Reiser
created the site, or whether it is the first step in promoting some larger project (an HBO
show, perhaps?).
In the statement forwarded to Xconomy, Ben Karlin had this to say: ―There is no
model for what it is we are trying. Which is exciting, but it also feels like we are in
Jamestown circa 1610. We may very well successfully colonize a new world or, we could
starve to death because we didn‘t bring enough salt.‖
Update, December 12, 2008, 1:45 pm: I‘ve spoken with more folks at FanRocket,
who confirmed all of the details above and explained that Karlin‘s company hired
FanRocket to help with the technical implementation of the WonderGlen site and with
viral outreach. I am attempting to reach Karlin and Reiser for direct comment. But either
way, I will be posting more on the story behind WonderGlen later today.
Update to the update, December 12, 2008, 2:15 pm: Jesse Thorn, host of the
(wonderful) podcast ―The Sound of Young America,‖ has published a blog post
explaining his connection to the WonderGlen project and introducing a new player into
the drama: the Kasper Hauser Comedy Group, a San Francisco comedy sketch troupe
whom Thorn names as the writers behind the WonderGlen employees‘ antics. Thorn‘s
post says in part:
―Well, the cat‘s out of the bag. The amazing, amazing virtual world of Wonderglen
Productions was created by producer Ben Karlin (long-time Daily Show showrunner) and
written by our pals The Kasper Hauser Comedy Group. I‘ve been dying to share this
information for months, but was sworn to secrecy. Now, web sleuths [meaning
Xconomy] have revealed the truth, so I can finally speak. Not only is this possibly the
most ambitious project Kasper Hauser have ever worked on, it‘s one of the most
ambitious in the history of web entertainment. This isn‘t an advertisement for something
else—it‘s an independent ecosystem of hilarity.‖
Further update, December 12, 2008, 8:25 pm: Nothing yet from Karlin, who, I‘m
told, is busy producing four real films (which makes you wonder how he has time to fool
around with a website about a fictional company producing fictional films). But I was
able to confirm with the folks at FanRocket that, just as Jesse Thorn says, WonderGlen is
not advertising or viral marketing for something else. ―This is it,‖ I was told. ―There is no
TV show to come, no movie. It has no purpose other than to entertain. There is no other
curtain to be raised.‖
Author's Update, February 2010: I eventually caught up with Karlin, and published
a full interview in late January, 2009; see Chapter 38.
34: The 3-D Graphics Revolution of 1859—and How to See in Stereo on Your iPhone
December 19, 2008
Gadget lovers and other technology enthusiasts suffer from a curious myopia about
the past. The general assumption—fostered by the admittedly blinding pace of progress
in computing and software—is that everything really cool must have been invented in the
last decade or two. Marvels like wearable virtual-reality displays with force feedback
gloves are often described as if they were without precedent.
But past generations were far cleverer than we usually imagine. It may surprise you
to learn, for example, that the first three-dimensional (stereo) images were created by
British scientist Charles Wheatstone in 1845, just a few years after the emergence of
photography itself, and that 3-D photo viewers—called stereoscopes—were common
appliances in middle-class living rooms for more than 70 years, from the time of the
American Civil War to the Great Depression.
If you came across a stereoscope or a stack of the dual-image ―stereograph‖ cards
used in them today, perhaps in an antique store or your grandparents‘ attic, you‘d
probably dismiss them as quaint curiosities or toys. That would be an understandable
reaction, given that we can now enjoy computer-animated 3-D movies like The Polar
Express and Beowulf at Imax scale. But you‘d be missing the fact that for at least three
generations, in an era before radio, television, and easy geographic mobility, the
stereoscope was many citizens‘ most important window on the world outside their
hometowns; it was their newsreel and their 3-D National Geographic, functioning as the
main medium for what we now call photojournalism.
About 15 years ago, I inherited a stereoscope and a small number of stereograph
cards from my grandfather. I treated the device mainly as a knick-knack until a couple of
months ago, when I came across a book-fair vendor who was selling a large trove of
quality stereo views. I picked through them, bought a dozen, took them home, put them
in my stereoscope—and was completely bowled over by the images‘ clarity and depth.
The cards I‘d purchased were mostly made by the Keystone View Company, the
largest and most prosperous of the 19th-century stereograph publishers, and their subject
matter, composition, and printing exhibited a refinement that had been missing from the
handful of cheaper cards I already owned. It was as if the depth in the cards I‘d viewed
before was a stage illusion; the people and objects in these more cheaply made images
could just as well have been a series of scrims or paper cutouts. But the Keystone images
had a continuous, lifelike depth similar to the perspective we enjoy in everyday life.
The revelation sent me off in search of historical information and, inevitably, more
stereograph cards. I learned, to my surprise, that the household stereoscope—a simple
contraption with a handle, two lenses, a hood to block light, and a sliding card holder—
was invented in 1859 by Oliver Wendell Holmes, Sr., the great Bostonian poet and
physician. Holmes was enchanted by stereo photography and believed that a simple,
affordable, handheld version of the heavy, awkward stereo viewers in use up until that
time would give ordinary people access to a universe of marvels.
At the Boston Public Library, I tracked down a copy of a 1949 address given by
George E. Hamilton, then president of the Keystone View Company, to the Newcomen
Society of England in North America, a club devoted to the history of engineering and
technology. Hamilton‘s address included quotations from two articles Holmes had
written about the stereoscope and stereo views for The Atlantic Monthly in 1859 and
1861. I want to excerpt those quotes here at length, because they convey the rapture
Holmes felt toward these ―painting[s] not made with hands‖:
―The first effect of looking at a good photograph through the stereoscope is a
surprise such as no painting ever produced. The mind feels its way into the very depths of
the picture. The scraggy branches of a tree in the foreground run out at us as if they
would scratch our eyes out. The elbow of a figure stands forth so as to make us almost
uncomfortable.
―…Oh, infinite volumes of poems that I treasure in this small library of glass and
pasteboard! I creep over the vast features of Rameses, on the face of his rockhewn
Nubian temple… and then I dive into some mass of foliage with my microscope, and
trace the veinings of a leaf so delicately wrought in the painting not made with hands,
that I can almost see its down and the green aphis that sucks its juices.
―…If a strange planet should happen to come within hail, and one of its
philosophers were to ask us, as it passed, to hand him the most remarkable product of
human skill, we should offer him, without a moment‘s hesitation, a stereoscope
containing an instantaneous double-view of some great thoroughfare.‖
It‘s worth noting that while Hamilton had an obvious financial interest in promoting
the stereoscope, Holmes never did. He gave his design to a Boston merchant named
Joseph L. Bates, who manufactured and sold the device from his ―fancy goods‖ shop at
132 Washington Street. (The ―Monarch‖ stereoscope I inherited, made by Keystone,
bears a 1904 patent, but its design had not changed in any important way since Holmes‘
time.)
By 1890, stereoscopes and stereo views had become a huge industry in the U.S. and
Europe, with four companies in the U.S. alone dispatching photographers to every corner
of the Earth and competing to sell stereograph cards to the public. Every summer, the
companies hired hundreds of college students to fan out across the countryside, hawking
the latest series of travel, documentary, educational, comic, burlesque, or ―sentimental‖
views. Weddings and railroad scenes were popular, as were recreations of the life of
Jesus. During World War I, stereo views captured on the battlefields of Europe gave
people back home a glimpse of that war‘s massive troop movements and its horrifying
carnage. (One of the stereographs I found at the book fair is a World War I view
appropriately entitled ―Human Wreckage.‖)
Popular interest in stereoscopes and stereo views tapered off gradually after the rise
of motion pictures and radio, although the technology continued to be used for
educational purposes in many classrooms into the 1950s. Stereo photography has been
kept alive to this day—though more as a toy than as a serious documentary or artistic
medium—by the View-Master, in which the traditional 3-by-7-inch stereograph card is
replaced by a paper disk holding seven pairs of transparencies.
Vintage stereoscopes and stereo views are plentiful on eBay, and I confess that my
book-fair adventure seems to be blooming into a binge of acquisitiveness. Waiting for me
at home, as yet unpacked, is a series of medical stereographs reportedly so grisly that the
seller wouldn‘t show them on his eBay page. Meanwhile, I‘ve scanned my initial
collection of stereo cards and posted them on Flickr.
Now, dear reader, if you‘ve read any of my previous columns, you know that I‘m
not likely to ramble on about history for 1,100 words without eventually bringing the
discussion back to modern media technology. What relevance does the stereographic
technology of the 19th and early 20th centuries have today, when we have so many other
ways to obtain information? I want to leave you with two thoughts.
First, newer is not always better. It may sound unlikely to anyone who has not taken
the time to view one of the old stereo images in a vintage stereoscope, but the 3-D effect
produced by the best stereograph cards is stunning, even vertiginous. In an age of mostly
2-D imagery viewed on newsprint or flat screens, we have forgotten the impact that the
third dimension can add. If you thought the latest Xbox video game or high-definition
plasma display was ―immersive,‖ you should see the images captured by the masters of
the genre, photographers like Benjamin Kilburn, Charles Bierstadt, and Eadweard
Muybridge.
Second, you don‘t actually need a stereoscope or even physical stereograph cards to
appreciate these old images. By ―parallel free-viewing‖ the images, you can usually see
the stereo effect even on a regular computer monitor. Free-viewing takes a bit of practice,
but it‘s worth the effort. There‘s a tutorial on it here; it‘s all about staring at the two
images, relaxing your eyes until you see the ―third‖ image that forms between the left and
right images, then focusing in on that image. All of the stereograph images that I‘ve
included in my Flickr set can be viewed in this way.
Once you‘ve mastered free-viewing on a regular computer display, here‘s some
dessert. (Non-iPhone owners: You can stop reading here unless you‘re really interested.)
I‘ve tested free-viewing on my iPhone, and it works really well. The little black phone
with its high-resolution screen turns out to be a great medium for stereo images—much
better than I would have thought, given that the iPhone‘s display is about half the size of
a traditional stereograph card.
The implications are exciting. The iPhone and the iPod Touch make it so easy to
grab images from the Internet (and/or store them in the built-in photo album) that it‘s
now feasible to think of your smartphone as a portable stereo viewer, with access to a
potentially unlimited supply of images.
Once you've got a stereo view on your iPhone screen (I recommend going to my
Flickr photoset in your iPhone browser), just tilt your iPhone to landscape (horizontal)
orientation, and enlarge a single stereograph until a matched pair of images exactly fills
the screen. Hold your iPhone about 8 inches from your nose, and try free-viewing the
image.
After a bit of practice, the images will pop right out at you—as if your iPhone had
suddenly become a window on a real location. Let me know how it works for you! I‘ll
add new stereographs (including, perhaps, the grisly medical ones) as time allows.
Meanwhile, welcome to the new old world of 3-D photography.
Addendum, January 29, 2009: Today‘s Very Short List: Web highlights a project
by blogger Joshua Heineman to turn stereographs from the collection of the New York
Public Library into animated GIF images that wiggle back and forth between the right
and left views, creating the illusion of depth without any need for a stereoscope or free
viewing. The images are quite startling in this format—check them out here.
Addendum, February 14, 2009: I‘ve scanned the series of anatomical
stereographs mentioned above and added them to my Flickr photoset. Caution: they‘re
not for the queasy or the faint of heart. In most of the views, human cadavers have been
bisected or flayed, and their individual parts meticulously numbered with tiny typewritten
numbers attached to pins. I don‘t have the key to the labels, nor do I have any idea how
these cards were produced—they appear to have been printed commercially but then
pasted by hand onto their cardboard backings for viewing in a stereoscope, and they came
to me inside a handmade wooden toolbox. If you‘ve heard of similar stereographs or have
any clues about where these might have originated, please send me a note at
wroush@xconomy.com.
Author's Update, February 2010: I've continued to collect stereograph cards and
now have around 400 of them. I'm gradually cataloguing and scanning them and posting
the digitized versions on Flickr. I discovered with the help of librarians at the Countway
Library of Medicine at Harvard Medical School that the dissected-cadaver stereographs
were created by the Rotary Photo Company, probably in the 1890s or 1900s.
35: Ditch That USB Cable: The Coolest
Apps for Sending Your Photos Around
Wirelessly
January 9, 2009
For average consumers, the big complaint about digital photography has always
been that it‘s too hard to extract the pictures you‘ve taken from your camera or phone so
you can show them to the rest of the world. But the truth is there are so many ways to
move, share, and display digital photos today that there‘s no longer any excuse for letting
your pictures languish. This week‘s column is meant to point you toward a few cool
examples of applications to help share your photos—plus one that‘s promising but,
unfortunately, not quite ready for prime time.
There‘s one word for the hurdle that keeps a lot of people from using their digital
cameras more often: cables. To get photos off your camera, you usually have to track
down the right USB cable, hook it to the computer, then find the photo-transfer program
that came with the camera when you bought it. Wouldn‘t it be great if your camera
simply sent all of your recent photos to your computer wirelessly, the moment you turned
the camera on? Well, that‘s exactly what the Eye-Fi Explore, Eye-Fi Share, and Eye-Fi
Home cards from Mountain View, CA-based Eye-Fi let you do.
The cards are regular SD memory cards that hold 2 gigabytes of pictures and fit into
the existing SD slot in your camera. But they also include tiny radios that allow the cards
to connect to Wi-Fi networks such as the one you probably have in your house. From
there, the cards can either upload your photos to your home computer or, in the case of
Eye-Fi Share and Eye-Fi Explore, send them directly to your favorite photo-sharing site,
such as Picasa, Flickr, or Photobucket.
The top-of-the-line card, the Explore, also comes with a year of free Wayport Wi-Fi
hotspot service so you can upload photos from any of Wayport‘s thousands of
member restaurants and hotels, including many McDonald‘s locations. The Explore card
also automatically geotags your photos—attaching the latitude and longitude to your
photo‘s electronic metadata, so that you can view them on a Web-based map. (To find the
location where each photo is taken, the card uses Wi-Fi-based positioning software
supplied by Boston‘s Skyhook Wireless. I wrote about the deal between Skyhook and
Eye-Fi last May).
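For the curious: the EXIF standard that cameras use for photo metadata stores GPS coordinates not as decimal degrees but as degrees/minutes/seconds triples plus a hemisphere letter (N/S or E/W). Here's a minimal sketch of that conversion in Python; the function name is my own, and a real implementation would write the result into the GPSLatitude/GPSLongitude tags with an EXIF library.

```python
def to_exif_gps(lat, lon):
    """Convert decimal-degree coordinates into the
    (degrees, minutes, seconds) triples and hemisphere reference
    letters used by the EXIF GPSLatitude/GPSLongitude tags."""
    def dms(value):
        value = abs(value)
        degrees = int(value)
        minutes = int((value - degrees) * 60)
        # remaining fraction of a minute, expressed in seconds
        seconds = round((value - degrees - minutes / 60) * 3600, 2)
        return degrees, minutes, seconds

    lat_ref = "N" if lat >= 0 else "S"
    lon_ref = "E" if lon >= 0 else "W"
    return dms(lat), lat_ref, dms(lon), lon_ref
```

For example, downtown Boston at roughly (42.3601, -71.0589) comes out as 42° 21' 36.36" N, 71° 3' 32.04" W.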
My brother and his wife gave me an Eye-Fi Explore card for my birthday last week
(thanks Jamie & Jen!). I‘ve been testing it out this week, and it works astonishingly well.
A 5-megabyte photo, taken at my camera‘s maximum 3264 x 2448-pixel resolution, takes
only a few seconds to upload to my computer, and appears in my Flickr account moments
after that. I‘ve tried setting the Eye-Fi card to upload images to my Evernote account (the
wonderful digital notebook service I wrote about last July) and it works great for that too.
The added bonus here is that Evernote can recognize words in your photos—so if one of
your pictures included a billboard, a street sign, or some text on a whiteboard, you‘d be
able to find it later by searching for that text.
There are only a couple of downsides to the Eye-Fi card. One is that it uploads
everything on your memory card indiscriminately, so you‘d better be sure that the photos
you‘ve taken are really ones that you want showing up on a public photo-sharing account
(although you can adjust the privacy settings for Web-bound photos in advance). Also,
you can‘t change the card‘s settings from the camera—you have to do it using a Web-
based management interface, which means you must be at an Internet-connected
computer to switch between uploading to different sites. That‘s a bit of an annoyance,
because I share most of my photos on Flickr, but every once in a while I‘d like to use my
camera to record something on Evernote.
(Update, January 16, 2009: The Eye-Fi card‘s popularity appears to be growing;
the device just won the ―Last Gadget Standing‖ competition at the International
Consumer Electronics Show.)
But you don‘t need a fancy digital camera or a wireless SD card to get into the
mobile photo-sharing game. If you have a camera phone, you can send your photos off to
your friends or to a Web album with no hassle. There are two mobile photo-sharing
services that I particularly like, both launched in 2008, and both with automatic
geotagging features.
One is SnapMyLife, which is accessible from any mobile Web browser, but also
offers a nice specialized app for iPhone users. The iPhone app includes a Google Maps
screen that lets you browse photos uploaded by other members. With more than 500,000
people using the service, you‘re bound to find shots from a few fellow SnapMyLife users
in any urban neighborhood. Some people use SnapMyLife to build mini-travelogues; a
case in point is a member called JD573F, who‘s at the International Consumer
Electronics Show in Las Vegas this week, uploading dozens of shots as she walks the
vast halls of the convention center.
The other mobile photo sharing app I‘ve been playing with a lot lately is called
AirMe. So far, the Colorado Springs, CO-based startup‘s service is only available as an
iPhone app, but the company says it‘s working on versions of the app for Sony and Nokia
phones. If you want to share a photo, you open the AirMe app instead of the phone‘s
regular camera application. It instantly uploads any photo you snap to the photo-sharing
service of your choice, including Flickr, Picasa, Facebook, and Twitter. This simplicity is
what makes AirMe so useful—like the Eye-Fi cards, it eliminates the laborious step of
manually selecting and uploading the photos you want to share.
Services like SnapMyLife and AirMe aren‘t meant to compete with heavy-duty
online photo communities like Flickr. They‘re mainly for sharing the more casual photos
that people capture on their cell phones. Indeed, as a longtime Flickr user, I have trouble
imagining any new photo-sharing application so cool that it would induce me to start
sharing the bulk of my photos anywhere else. But there is one new photo community,
called Fotonauts, that caught my eye recently.
The brainchild of Jean-Marie Hullot, the former chief technology officer of
Apple‘s application development division, Fotonauts has a website full of gorgeous
outdoor photos and a high-minded, Wikipedia-inspired mission to ―enable the creation of
the definitive pool of images for everyone to contribute to, discover, use and enjoy,
covering all areas of human interest.‖ Hullot says he‘s out to make photography more
social by allowing users to do things like collaborate on Web albums, build information
mashups that combine photos with maps and Wikipedia entries, and hold bulletin-board-
style discussions around each photo.
It sounds great—and I‘d definitely be interested in an application that allowed me to
do something more creative with my photos than simply plop them into a Web album.
But while the Paris-based startup has been getting a lot of fawning coverage from
TechCrunch (Keith Teare, who co-founded Edgeio with TechCrunch founder Michael
Arrington, is an employee at Fotonauts), I‘m forced to report that Fotonauts is nowhere
near the point of living up to the hype. I‘ve been testing the beta version of the Fotonauts
application, and I‘ve found it to be both buggy—repeatedly hanging my Mac—and
lonely, with little content available to browse and little discussion going on.
Fotonauts‘ one indisputably useful function, at the moment, is to automatically
synchronize whatever photos you dump into the application with your accounts on Flickr,
Picasa, Facebook, or Twitter. But as I‘ve noted above, that‘s something that many other
devices and applications can do. Fotonauts also provides a nice slide-show widget
(investor and board member Joi Ito has published a sample show on Dubai) but that, too,
is nothing unique. I get the sense that there‘s a broader technological vision behind
Fotonauts, involving tagging, the semantic Web, and better algorithms for searching
images. But little of that is visible yet. I‘m hoping that over the next few months, the
Fotonauts community will grow to something closer to critical mass, and that Hullot‘s
team will reveal more of the features that would make Fotonauts into a true ―photopedia.‖
Author's Update, February 2010: Well, speaking of “photopedia,” Fotonauts
morphed into something called Fotopedia in June 2009. It's supposed to be a cross
between Flickr and Wikipedia. It's got some gorgeous photos, but to be honest, I still
don't get it—it's basically a bunch of slide shows with related Wikipedia content pasted in
underneath.
36: Have Xtra Fun Making Movies with
Xtranormal
January 16, 2009
This week‘s column comes partly in the form of a digital cartoon about officemates
Simon and Richard. You have to watch it on the Web in order to really understand. So
stop reading now, go to the link below, then come back here.
http://bit.ly/xconxtra
Okay, welcome back. Clever, eh? Of course, it‘s just fiction. I still have a job here
at Xconomy—as far as I know. But if I ever lose it, maybe I‘ll hitchhike to Montreal and
apply for a position at Xtranormal, the startup that created the Web-based movie maker I
used to create the video.
Xtranormal, which is funded by Cambridge‘s Fairhaven Capital, was founded in
2006 and emerged from stealth mode about a year ago at the Demo 08 conference in San
Diego. (You can watch vice president Paul Nightingale‘s six-minute Demo presentation
here). The basic idea—automated synchronization of synthesized speech with animated,
3-D avatars—has been around since the early 2000s. If you don‘t remember the green-
haired virtual newscaster Ananova, here‘s a link to some archival video. But
Xtranormal‘s innovation has been to put the same technology into the hands of average
Web users—letting them produce and direct their own little 3-D movies.
It‘s a great idea that strikes the same chord as some other new consumer tools for
creating and remixing digital media—Animoto, which assembles high-energy animated
slide shows with musical backgrounds from your digital photos, being one of my current
favorites. Xtranormal‘s easy-to-use toolkit of commands and its endearing cartoon people
give it the feel of a big Lego set for adults. I made the clip above in an hour or two just by
choosing a backdrop, a couple of characters, and some basic camera angles and gestures
from Xtranormal‘s large menu of options, then writing some dialogue. The script doubles
as an interactive storyboard; Xtranormal has developed a clever drag-and-drop interface
for inserting pauses, facial expressions, and camera angle changes.
As Xtranormal puts it, ―If you can type, you can make movies.‖ And there are
people using Xtranormal in some pretty entertaining ways—for examples, check out
Deadpan Inc., Howard and Leslie, and Rejected Jokes. To quote the website Lifehacker, it‘s
―a seriously addictive sandbox for crafting miniature dramas, comedies, or whatever you
can tell your little actors to do.‖
Although Nightingale talked in his demo about using the tool to build business
presentations or daily Web talk shows, I‘m not convinced that the service, at least in its
current form, has any serious business, educational, or media applications. Those may be
coming down the road, as the company gives users access to tools for building
customized avatars and laying in their own voice tracks rather than relying on the
software‘s speech synthesizer.
For now, Xtranormal is simply a heckuva lot of fun, which is enough for me. Better
yet, it‘s free, at least for now—though there are indications that this won‘t last forever,
and that Xtranormal plans eventually to sell credits that users can apply toward
publishing movies.
Xtranormal‘s About page calls the democratization of movie-making ―a massive
business opportunity,‖ and Nightingale talked at Demo about additional revenue
possibilities for the company, such as interactive marketing—think exclusive worlds
where content owners (say, Pepsi or Honda or Universal Studios) give visitors digitized
movie characters or branded props from which to build their own movies.
But apparently, these types of marketing deals aren‘t materializing quite fast
enough. According to this report in the Vancouver, BC-based blog TechVibes, the
company was forced to lay off 36 people, or about half of its staff, back in November.
There have also been complaints from users about performance issues, including long
waits for Xtranormal‘s servers to show their previews and finished movies (though I
didn‘t experience that problem myself). And the downloadable version of the Xtranormal
movie generator promised by Nightingale last January is, so far, nowhere to be seen.
I hope the company gets through its current rough patch, because it‘s developed a
fun and intuitive tool that, with a few more features, could provide the palette for a new
generation of home movies by creative amateurs. Given the graphics-processing power of
today‘s home computers, you shouldn‘t have to be a CGI professional or a machinima
hacker to produce nice-looking animations.
Of course, I‘m not about to put down my writer‘s pen. At least, not until an avatar
pries it out of my cold, dead fingers.
(Addendum, 1/19/09: Talk about coincidences. Last night somebody from
Xtranormal left the following comment over on the YouTube page for my Richard &
Simon video: ―Great stuff. Did you know that Simon & Richard are the product
designers/managers for Xtranormal? Nuts. Nice write-up too. We are going through a
rough patch, but we‘re going to pull out of it and it‘s gonna be FUN.‖)
Author's Update, February 2010: Xtranormal is still in business—in fact, it has
released the beta version of a downloadable cartoon-making tool for Windows called
State.
37: E-Book Readers on the iPhone?
They’re Not Quite Kindle Slayers Yet
January 23, 2009
Well, this is the first time I‘ve written my weekly column while wearing a tuxedo.
No, I‘m not on my way home from an inauguration ball, or campaigning for higher style
standards among reporters. As I write this, I‘m getting ready to emcee Xconomy‘s Battle
of the Tech Bands 2. Our preparations for the event have eaten up most of the day, which
is why today‘s column will be brief (by my own wordy standards, anyway).
I try to keep an eye on the e-book world, and some interesting stuff has been
cropping up lately. First, Amazon has continued to experience surprising success with its
Kindle e-book reader. I‘ve panned the Kindle in the past, and wouldn‘t even think about
buying one myself until the company makes major design improvements. (Which it may
be about to do—a new version is supposedly due this spring.) But a lot of people seem to
like the thing, and after Oprah herself endorsed it in October, a pre-Christmas rush of
orders cleaned out Amazon‘s entire stock; there‘s now an 8- to 10-week wait for Kindles.
(And by that time the new one might be out.)
Which leaves an opening of sorts for competitors. So it‘s no surprise to see iPhone
app developers moving into that gap, given the attractions of the Apple device‘s high-
resolution display and touch-based interface.
But as much as I love my iPhone and dislike the current Kindle, I‘m not sure
Apple‘s gadget will take hold as a serious platform for e-books. The main problem, as I
see it, is that the iPhone‘s screen is too small to hold much text, meaning readers have to
turn a page every few seconds. If you want to try out the e-book experience on an iPhone,
however, I do have two apps to recommend.
First, there‘s Stanza from Lexcycle, a free app for the iPhone or the iPod Touch that
gives you immediate, over-the-air access to a very large collection of free public-domain
works (I‘m part of the way through Middlemarch) as well as new, paperback-priced
works from the Fictionwise catalog. Stanza has a well-thought-out interface, including a
Cover Flow-like title browser. What I like best about it is the way a simple tap on the
right side of the screen takes you to the next page. Using the iPhone‘s usual ―flick‖
gesture to go to the next page, the way some other apps do, is actually overkill for this
simple task, in my opinion—all that flicking will wear out your index finger surprisingly
quickly.
Then there‘s Iceberg Reader from ScrollMotion. Unlike Stanza, Iceberg isn‘t a
stand-alone application that‘s able to load many different books; rather, it comes as part
of an all-in-one package when you buy and download individual book titles from the
iTunes App Store, such as the marvelous fantasy novel The Golden Compass by Philip
Pullman. It‘s got an extremely nice look and feel. You almost get the sense that this is the
e-book reader application Apple would have designed, if it had included such a utility as
a native app on the iPhone. Iceberg Reader does use the flicking convention to scroll text
along the screen, but it‘s well-executed, without too much springiness or momentum
imparted by each flick, so I don‘t find it too annoying.
Meanwhile, there are more companies trying different takes on the much larger, E
Ink-based ―electronic paper‖ interface that‘s at the heart of both the Kindle and the earlier
Sony e-book readers. The New York Times published a nice roundup of the current
options just before Christmas. I‘m intrigued by the eSlick Reader from Foxit Software,
which seems to do almost everything the Kindle does for a lot less money ($229
compared to Amazon‘s exorbitant $359), and by the uber-minimalist Txtr, a
3G/Bluetooth/Wi-Fi device that was panned yesterday by Crunchgear but has a much
more elegant look (at least judging from the early product shots) than the other reading
devices out there.
Of course, any new e-book reading device or program is only as good as the catalog
of books that it can access. On that score, Amazon has a huge and perhaps
insurmountable advantage over all of its competitors. If the Kindle 2.0 includes the right
combination of improvements (meaning, if it‘s a lot less ugly and clunky than the first
one) it will probably cement Amazon‘s lead.
Regardless of what Amazon does with the new Kindle, we can probably look
forward to more improvements in display technology, both from E Ink and from makers
of standard LCD displays like the iPhone‘s. In a newsletter just yesterday, David Pogue
of the New York Times reported claims by LCD makers at the Consumer Electronics
Show that their technology is ―only 50 percent evolved,‖ meaning we should expect even
brighter, sharper, more energy-efficient LCDs in the relatively near future. Happy
reading!
Update, February 6, 2009: This week Google introduced an iPhone-accessible
version of its Google Book Search service, meaning iPhone owners in the U.S. now have
free access to the more than 1.5 million public-domain books Google has scanned at
major libraries. As Greg has reported, Amazon immediately responded with an
announcement that Kindle e-books will soon be available on mobile phones, presumably
including the iPhone. All the one-upsmanship, as the big players in the online book space
rush to make more content available on their preferred platforms but also free up content
so that it can be consumed on rival platforms, can only be good for readers in the long
run.
Author's Update, February 2010: As I mentioned in the update to Chapter 24, I
eventually bought a Kindle, in April 2009. As a result, I haven't been keeping up very
well with e-book options on the iPhone other than the Kindle app—of which I'll say more
in Chapter 43.
38: WonderGlen Comedy Portal Designed
to Plumb Internet’s Unreality, Says
Karlin
January 30, 2009
I outed Ben Karlin. Not that way: he‘s straight, at least judging from his mom‘s
foreword to Things I‟ve Learned from Women Who‟ve Dumped Me, the 2008 essay
collection Karlin edited. I mean I outed him as the creator of WonderGlen, a painfully
funny comedy website that appeared out of nowhere last October.
Purporting to be a real company intranet, the site chronicles a small Los Angeles
TV production studio working on such misbegotten ideas as ―Hobbit House,‖ a pilot
reality show where the homes of unsuspecting families are made over to look like Bilbo
Baggins‘s burrow. WonderGlen caused a stir among Internet literati, who diagnosed it as
some type of viral mockumentary along the lines of The Office or the notorious Web-
based video series lonelygirl15, but who couldn‘t pinpoint the fiction‘s authors.
Through no special effort on my part—I got a note out of the blue offering me a
scoop—I learned last month that WonderGlen is the work of SuperEgo Industries, the
production company Karlin formed in partnership with HBO in 2007. Karlin, 38, was
born and raised in Needham, MA, and was a writer and senior editor for the satirical
newspaper The Onion from 1993 to 1996. He jumped from print into film and television,
eventually winning eight Emmy Awards as executive producer of Comedy Central‘s The
Daily Show with Jon Stewart and The Colbert Report. He‘s now working on several big
movie projects, and says he and SuperEgo partner Will Reiser started WonderGlen as a
relatively low-cost experiment in online comedy.
Having broken the WonderGlen story, I wanted to grill Karlin about the origins and
intentions of the genre-busting project, which isn‘t a website so much as ―an independent
ecosystem of hilarity,‖ to quote radio host Jesse Thorn‘s perfect description. While the
WonderGlen intranet functions as a core repository of vacation snapshots, company
policy memos, audition videos, grousing message-board posts, and the like, the world of
WonderGlen leaches far out into the Internet, including items like a James Franco
YouTube tribute to WonderGlen founder Aidan Weinglas, the website of Aidan‘s
boyfriend Dr. Dean Payne (a ―multi-modality therapist‖), fake job ads on career sites, and
links to the real (I think) erotic furniture ordered by WonderGlen‘s employees. This is
comedy on a scale nobody‘s really tried before—an interactive smorgasbord that‘s meant
at least in part to underscore the way the Internet has ―virtually obliterated‖ the line
between fiction and reality. That‘s a quote from my interview with Karlin—which I
finally scored this Tuesday, and which is presented here in full.
Wade Roush: Thanks for making time to talk, and thanks for directing that scoop
my way. A lot of people had been speculating about who was really behind WonderGlen.
Ben Karlin: It was never the intention for it to be that big of a secret. It was more
of an experiment to see how something can develop a life, absent any traditional media
push. There wasn‘t supposed to be any big reveal or anything. But you can‘t control that
shit. You can get to a point where it seems like the point of it is to be a big mystery, but
that wasn‘t really the point. The point was more just putting something out there, this
Internet flotsam if you will, and just see what happens as it‘s going through the universe.
But it just grew into something where I thought it would be the wrong idea if we tried to
keep everything secret.
WR: But all the mystery about who was behind it certainly helped the buzz.
BK: A little bit. I would much prefer—any creative person would prefer—that the
buzz be about how good something is, rather than who‘s behind it. But you can‘t really
manage buzz.
WR: Part of your problem managing the buzz may be that it‘s so hard to categorize
what WonderGlen is. It doesn‘t fit into any existing genre. Where did the idea come
from?
BK: It‘s definitely either a terrible idea, or so ahead of its time that it might take
several generations to appreciate it—if that day ever comes, which it still may not. The
idea was kind of a weird evolution. It started with the simple idea that I was going to be
doing stuff for HBO, and I wanted to do some stuff for the Web as well. I thought, well,
what if I took some of the TV stuff I was developing and created this intranet site,
because I‘m in New York and a lot of the executives are in L.A., so they could see
samples of what I was working on—script editions, shorts, things that would serve as
mini-pilots for potential TV shows. A development platform, basically.
Then as I started thinking of it more, I thought what if the site had two purposes—
one, to show HBO all this stuff, but two, as a comedy site. Then it started to get more
complicated and layered. The thing that it started out for, to show HBO our work, ended
up getting scrapped, and we said ‗Let‘s do a comedy site, and maybe some of this stuff
we do will have a life in some capacity.‘ Then as we got further into the narrative of this
company and all these people and these fictional productions and projects, the
conversation about having it function as an actual platform for actual people kind of went
away.
WR: Do you think the original concept of a dual-purpose site really could have
worked?
BK: It might have. But those two things are at such cross purposes. One of the
things we discovered early on was that for it to be a site that had actual functionality for
internal purposes, you‘d have to have things on there that you wouldn‘t necessarily want
the general public to see. And then if we did this totally transparent thing with budgets
and advertising, you‘d open up legal problems like you wouldn‘t believe. So we doubted
we could do that and we started to look at it from a different angle. I had some experience
doing websites before, but most of it involved translating an existing thing like The
Onion or The Colbert Report, where the conversation was more about how does
something that has an existing format work on the Web. With Colbert, for example, we
came up with this idea that the website was all going to be from the point of view of an
obsessed fan.
WR: Right, but nobody is fooled by that—they get the shtick right away. Was that
also the idea with WonderGlen, or did you really set out to fool people?
BK: As we were developing the site, we started out with the idea that at the
beginning, a percentage of people were going to think it was an actual company that had
actually left its back door open, and people could get into this intranet and see this stuff
they weren‘t supposed to be seeing. But once we started getting into the content, we
realized that it was so funny that most fans of comedy that we wanted to be checking out
the site were never going to be fooled. And the type of people who would be fooled by
this type of comedy are the people who still think The Onion might be a real
newspaper—and you don‘t want those as your customers.
We had gotten pretty far down the road with the design, development, information
architecture, and all the bones of it when we realized this idea of fooling people was off
the table. But we‘d gotten so far that we couldn‘t throw out what we‘d done. We decided
to plow ahead, realizing that most good things have a 1.0 version and a 2.0 version. So
this will be version 1.0 and we‘ll live with some of the things that don‘t quite work,
knowing that we can fix it another day.
So that‘s where we are. The content has gotten out there to a certain degree, but I
don‘t really think it‘s been seen by as many people in as many places as it eventually will
be. It‘s still relatively early in its life. There‘s definitely a lot more that we want to do
with it.
WR: Are you talking about marketing the site differently, or adding to the content?
BK: I don‘t understand this whole marketing thing very well, so I‘ll stick with
content. I love the content. I think some of the strongest content, unfortunately, is not the
type of content that is typically shared virally. The most common things that people share
virally are video links, obviously. And we consciously decided not to make this a totally
video-driven site. Some of the best stuff on the site is stuff like the health insurance form
they have to fill out, because they can‘t get into an HMO. They can only get into an
―HVO‖—a Health Vector Organization. So they have this form filled with really funny,
intrusive questions. It‘s a PDF document. You‘d think that would be a really interesting
thing to pass around. But there isn‘t a culture of passing around things like that, like there
is around videos. So it‘s not as much that I would change the content, as that I‘d like to
figure out a way to make that stuff easier to access.
WR: I think when you try to use the Web as a comedy medium, you‘re up against
the fact that the Internet is not really like anything that came before, like older media
were. With television, people could say ‗Oh, that‘s like the radio, but with pictures.‘ So
the first sitcoms were basically filmed radio shows. But there‘s no model for what you‘re
trying to do.
BK: The Web is still very much an evolving medium. You see certain things that
only work in one medium, and it would be a big mistake to try and turn them into
something else. Some of my favorite websites are so simple and elegant you can‘t
imagine them working in any other context or medium. Take a look at Stuff White People
Like. That‘s really funny, and I really truly like it, because it‘s a very simple idea really
well executed. I guess they did do a book, but you don‘t turn something like that into a
TV show or a movie. It just is what it is. It‘s unique to the Web.
What we are trying to do with this site is make it something that is wholly organic
to the Web—a comedy experience, in a world where video is the shorthand that most
advertisers and most people know and are familiar with. It‘s probably going to take some
time before someone cracks the nut of a wholly immersive site that is both a destination
site and also something that lives in this very scattered way that things live on the
Internet, where 90 percent of our content is seen not on the home site but elsewhere.
WR: Our focus at Xconomy is on how to take good technology ideas and build
them into real businesses, so I have to ask you the business model question. How can you
make money with something like WonderGlen?
BK: I‘m from the school that still believes that content is king. Creating a valuable
property is still the soundest way to make money. People can figure out ways to sell loans
to other banks, and then somebody repackages the loans and sells them to other banks,
but they‘re not making anything. They‘re just figuring out ways to reorganize things.
WR: Being a journalist, I‘m compelled to agree with you that content is king. But
WonderGlen isn‘t like Stuff White People Like, where there‘s only one passionate,
unpaid blogger behind it. You‘ve got a whole production company to pay for, and behind
that you‘ve got HBO.
BK: The idea at first was just to make something that was good for relatively little
money. You‘re right, it‘s not a blog that‘s one person‘s passion project, but relative to
other Internet ventures of varying degrees of ambition, this didn‘t really cost very much.
We got a lot of people to do things at a fraction of their normal rate, or even for free,
because they just liked the idea. So it has some of that good Internet mojo that you need
to have.
But this isn‘t just one person‘s project either. We wanted to make something that
would hopefully become a valuable property that people would like and want to check
out and see more from it. Absent some kind of scary business model—‘Here‘s exactly
how we are going to make money‘—the first thing was just to make something good,
because usually when you make something good, good things come out of it.
I honestly did not know this word before I started, but there are various ideas about
how to ‗monetize‘ it. But because we are primarily dealing with HBO, which does not
have an advertising-based business model, it‘s not like HBO is going to say ‗We‘ll just
send over a phalanx of advertising people to work it out.‘ The first plan was just to make
it as interesting as possible, and when we have something worth talking about, there will
be ways to pay for it.
WR: But there are some fairly expensive-looking pieces of content on
WonderGlen, like the ―Hobbit House‖ sizzle reel.
BK: We did those videos for an amount that was, from what we understand,
comparable to if not slightly less than what decent-quality Web video costs. Some of
those videos cost as little as $1,500, and the most we spent was about $7,000. That‘s the
world we‘re living in, with digital video. Plus, a lot of the directors and writers were
excited about doing it and didn‘t need to get paid. Certainly someone like James Franco
didn‘t need to get paid for his thing. More than anything, when you work in TV and film,
if you are a creative person, you just want the opportunity to do cool work. Even if it‘s
not a huge payday, it‘s something fun you can show people.
The other thing is, I‘ve been trapped in development land, where you‘re working on
these mythical projects that may never materialize for several years. Even once we had a
bunch of interesting things going with HBO, I wasn‘t going to see anything get made for
at least six months or a year, between getting a project and a script and casting it and
making it. The great thing about this was that it was relatively easy to get something that
made us laugh and that we could put online. That ―Hobbit House‖ video has been seen by
something like 1.5 million people.
WR: Can you talk a bit about your collaborators on WonderGlen? I understand that
the Kasper Hauser comedy group in San Francisco did a lot of the writing.
BK: Kasper Hauser did almost all of the writing. Kasper Hauser has been by far the
greatest creative contributor and comedic driving force of this project. The original idea
for the site and the world it would inhabit and how it would function was basically mine,
but as far as breathing life into it, and especially the specifics and the amazing little
details and the back stories and the biographies and a lot of the show ideas—the sample
shows that WonderGlen is producing—that all came from Kasper Hauser. Those guys are
basically just geniuses, and they have such an incredible comedic voice that comes from
a weird combination of just being really smart and having interesting life experiences.
And because they‘ve performed, they have that added element of knowing how to write
for character. Those guys have just been unbelievable.
WR: What about FanRocket, the digital marketing studio in West Hollywood?
From what I understand, they‘ve been helping with the viral marketing and building out
the details of the WonderGlen world.
BK: When we originally had this idea, the question was how do you market
something like this. And one of the ideas we had was that we should create as long a tail
as possible of back stories on some of these characters, to make them feel as real as
possible—never with the idea of fooling people, but more so that people would
appreciate it. Things like the idea of posting job listings at WonderGlen on Craigslist, and
going to Comicon and interviewing people about what kind of reality shows they wanted
to see, and building basic MySpace pages. FanRocket helped tremendously with all of
that.
WR: I know artists probably hate this question, but I‘m going to ask it anyway. It‘s
about your vision for WonderGlen. It seems to me that the Internet is a place where the
boundaries between what‘s real and what‘s fiction are very easily blurred. You don‘t
have to be an actual person to have a MySpace page, for example. I‘m wondering if part
of what you are doing with WonderGlen, by starting out with this pretend company‘s
intranet and then extending it out to all this other pretend content, but putting it on the
Web right next to real content, is consciously trying to call attention to that blurring.
BK: Well, first of all, thank you for calling me an artist. But without sounding more
pretentious than I already do, which is really hard: Absolutely, the point of the site is that
those lines have been virtually obliterated online. It starts with something as simple as
communication—when you‘re chatting with someone who presents themselves as a 28-
year-old woman from Tarzana, California, there is a better than average chance that
they‘re not that person. And it goes all the way down the line, to companies going to
enormous lengths to mask their connections to a website. You may think it‘s cool, and
then you find out that Sprite is behind it. It exists elsewhere in the economy, too, but it‘s
so widespread online it‘s unbelievable.
I love the idea of a website that fluidly moves between all those worlds. You‘re on
this fake site with a fake message board and the employees are talking about furnishing a
fake office, but they have their ideas about furniture, and if you click on the links, they‘re
for real furniture that you can actually buy. That is, for me, the glory of the Web—that
you can go from a site that‘s not even based on anything real, that is a fictional
fabrication, to a corporate site with a real business model where you can buy a chair, and
they both exist in equal measure. Behind one is a billion-dollar industry, and behind the
other is a guy at home with his computer, but they are equal.
WR: What do you want to do next with WonderGlen?
BK: I want to get the site out there more. Only a fraction of the people who would
like the site have actually seen it. Some of that has to do with getting press, some of it has
to do with getting up to that terminal velocity where people start showing it to other people.
We haven‘t really had that breakthrough moment yet.
And consequently we probably need to make some tweaks to make the site a little
more friendly. If it‘s living more in that idea of having to function as an intranet for an
office, then it‘s not a comedy portal. It‘s not doing anybody any good if there is a barrier
to entry. There is no glory in people not getting it. So we‘d like to make it so that if
people don‘t like it, at least we‘ve shown them.
WR: If I could hazard a guess about why you haven‘t picked up that momentum
yet—I think one of the reasons people become fans of other pseudo-documentary-type
productions, like The Office, is because they come to like or dislike certain characters,
like Michael or Dwight. Some of the WonderGlen characters, like Aidan Weinglas,
sound really funny when they‘re writing a memo or an e-mail, but I‘m not sure whether
that makes them strong or quirky or sympathetic enough for people to care about them.
BK: Yeah, the problem is that the point of reference we have for this is things like
TV shows and things like lonelygirl15 that are focused on this one thing, video. I didn‘t
want to do something that has one flavor, where we serve up these first-person
testimonial videos, or whatever. I wanted the site to have so much breadth that it is
ultimately a commentary on the entire Internet. That‘s ridiculously ambitious, probably
too ambitious. But we really want to poke around with every single way we use the
Internet—and that‘s probably caused some distortion or dilution of the message.
Author's Update, February 2010: The WonderGlen site does not appear to have
been updated since late 2008.
39: How I Declared E-Mail Bankruptcy,
and Discovered the Bliss of an Empty
Inbox
February 6, 2009
I‘m not one of those people who thinks you can measure a person‘s power, talent,
or importance by the number of e-mails or phone calls they get every day. So it‘s not a
boast—indeed, it‘s more like an embarrassed confession—when I say that by early
January, my Gmail inbox had swelled to almost 15,000 messages. And that was before
we at Xconomy decided to tack our e-mail addresses at the bottom of every story so
readers can contact us more easily. I don‘t regret that policy—it‘s brought me quite a few
good story tips already. But it did mean that unanswered messages started to pile up even
faster, threatening to smother me in guilt and anxiety.
It was finally time to do something about my e-mail problem. For help, I turned to
two trusted sources. The first was executive coach Stever Robbins, aka the ―Get It Done
Guy,‖ who I met last July at Podcamp Boston (read the interview here). Stever records a
weekly podcast full of great advice about staying sane, and even having fun, as you strive
to be more productive and accomplish your goals. I remembered that in one of Stever‘s
early podcasts, he‘d responded to a listener who was desperate for tips about dealing with
his backlog of e-mail.
So I went back and listened again. For serious cases of e-mail constipation, Stever
suggested the radical action of ―declaring e-mail bankruptcy.‖ Specifically, he told the
listener: ―Delete it all. Then send a form letter to everyone who wrote saying, ‗My
backlog was too big to manage. To cope, I‘ve deleted everything. Please resend anything
important.‘‖
Something about this idea really scared me; it seemed awfully close to thumbing
your nose at everyone on your contact list. But it also seemed to offer me a way out of
my personal e-mail morass. There was simply no way I was ever going to work my way
through a 15,000-message backlog—not even if I devoted several weekends to the task.
The idea grew on me when I found out that some pretty distinguished figures, like
Stanford law professor and free-expression guru Lawrence Lessig and venture capitalist
Fred Wilson, had gone through e-mail bankruptcies and survived with their careers intact.
So, 11 days ago, on January 26, I took a deep breath and sent this note to my
heaviest e-mail correspondents:
I have waited far too long, but tonight I‘m going to clean out my Gmail inbox—
which has nearly 15,000 messages in it!!—by archiving everything (in other words,
moving it into the ―All Mail‖ folder). Apologies in advance, but if you sent me a note
recently that requires some immediate response, please ping me again, because all of my
old messages are going into the archive. It‘s the only way I‘m ever going to get my inbox
cleaned out.
Then I did what I was threatening to do, and archived all 15,000 messages. Of
course, as my boss, Bob, immediately pointed out, all I was really doing was changing
the way these e-mails are categorized in Gmail, not truly euthanizing them. I would never
just delete all that mail, because for better or worse, Gmail has become one of the main
storehouses of my digital life. (Which is obviously what Google wants, or they wouldn‘t
be giving me 7,292 megabytes of free online storage.) The messages are still there, still
searchable, if I need to reconstruct a conversation later. So, in a way, it‘s all semantics.
Despite the sleight-of-hand nature of my ―bankruptcy,‖ though, emptying out my
inbox brought an immediate sensation of lightness and freedom. Safely tucked away in
the archive, those messages were no longer pleading in 15,000 whiny electronic voices
for me to do something about them. I‘d discovered the sweetest three words in the
English language: ―No new mail!‖
Of course, the feeling only lasted about two minutes, until the next message popped
onto the screen. Clearly, declaring e-mail bankruptcy was only half of the solution. I also
needed a way to keep my inbox from overflowing again. And for that, I turned to another
trusted source, Mark Hurst.
Mark is the founder of Creative Good, a New York-based user interface design and
consulting firm, and the author of Bit Literacy, a primer on handling information
overload. I first talked with Mark a few years ago when I was writing about Web-based
time management tools; he has written a particularly effective one called Gootodo. He‘s
an incredibly nice guy, and even offered to personally coach me through some of the
techniques he lays out in his book. (If I‘d taken him up on the offer, I probably would
have found time to write a few books of my own by now.)
I found my copy of Bit Literacy and went to the chapter on managing incoming e-
mail, which Mark believes is the first step toward overcoming the stress created by all
those digital bits hanging over our heads all day long. ―Bits are heavy,‖ Mark writes.
They ―weigh people down, mentally and emotionally, with incessant calls for attention
and engagement.‖ The chapter‘s basic commandment is simple: get your e-mail inbox
down to zero messages at least once every work day, no matter what.
To do that, Mark recommends following a few rules. First, he suggests responding
immediately to e-mail from your family and friends, who, after all, matter most in your
life. Second, he urges people to trash all spam and ―FYI‖ e-mails. For each work-related
e-mail, he advises dealing with it—if this can be done in two minutes or less (a practice
borrowed from the original Getting Things Done guru, David Allen). All the remaining e-
mails that can‘t be handled in two minutes or less should be turned into to-do items on
task lists and saved for later. In every case, once you‘ve dealt with a message, you delete
or archive it.
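Mark's triage rules can be sketched as a simple decision function. This is purely illustrative; the message fields (sender type, spam/FYI flag, estimated handling time) are hypothetical names invented for the sketch, not anything from Bit Literacy itself:

```python
def triage(message, task_list):
    """Illustrative sketch of the Bit Literacy inbox triage rules.

    `message` is assumed to be a dict with hypothetical keys:
    "sender_type", "is_spam_or_fyi", "estimated_minutes", "subject".
    Returns the action taken; in every case the message leaves the inbox.
    """
    if message["sender_type"] == "family_or_friend":
        return "reply_now"           # rule 1: answer family and friends immediately
    if message["is_spam_or_fyi"]:
        return "trash"               # rule 2: delete spam and FYI mail
    if message["estimated_minutes"] <= 2:
        return "handle_now"          # rule 3: the two-minute rule
    task_list.append(message["subject"])
    return "defer_to_task_list"      # rule 4: everything else becomes a to-do
```

The point of writing it out this way is that every branch ends with the message gone from the inbox; the only open question is where it lands.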
The task-list step can be tricky. Not from a technical point of view—there are
plenty of good Web-based to-do lists, including Gootodo and the new Tasks feature in
Gmail itself. Plus, there‘s always the good old index card (aka the Hipster PDA, the
brainchild of Merlin Mann, publisher of the wonderful personal-productivity site 43
Folders). The issue is that if you‘re going to start storing important to-dos on a task list,
you have to be disciplined about actually doing some of them. Otherwise, you‘re just
moving bits around. And sadly, if you allowed your inbox to bloat to 15,000 messages in
the first place, it might be a sign that you lack such discipline.
I am happy to report, however, that I‘ve been able to zero out my inbox every day
since my e-mail bankruptcy, except for one day when I let myself off the hook after a
breaking news story necessitated one of those 16-hour work marathons. And my task list
hasn‘t grown too long in the process—right now there are only 13 items on it. Even if I
have to resort to clipped, elusive responses like ―Let‘s nail that down next week‖ in order
to dispose of some of my messages, the new method means I get to go home after work
feeling like my evenings belong to me, not to Gmail.
So, Stever and Mark—and all of the time-management experts on whose shoulders
you stand—a big thank you. You have made me a free man. Now, if you‘ll excuse me, I
have 48 messages waiting in my inbox.
Update, March 5, 2009 (from the Imitation-is-the-sincerest-form-of-flattery
Department): Farhad Manjoo at the New York Times has published a nice column
describing his own variation on declaring e-mail bankruptcy. Empty is the ―optimal state‖
for your inbox, he writes. ―Your goal, from now on, will be to keep this space as pristine
as possible, either empty or nearly so. To realize that goal, live by this precept: Whenever
you receive a new message, do something with it. Don‘t read your e-mail and then just let
it sit there—that‘s a recipe for chaos.‖
Author's Update, February 2010: Since writing this column, I have fallen off the
empty-inbox wagon a couple of times. My solution each time has been to repeat the
bankruptcy procedure and simply archive everything piling up in my inbox. So far, this
hasn't led to any serious repercussions from lost or neglected e-mails. The saving grace,
if you're desperate enough to try the e-mail bankruptcy method, is that everyone else you
communicate with is probably equally inundated in their own e-mail, so they won't notice
if you don't respond to a message—and if it's really important, they'll write again.
40: Public Radio for People Without
Radios
February 13, 2009
I have a bunch of wireless devices at home, but none of them are radios. And if I‘m
at all typical, then the radio business has a big problem.
For broadcasters, getting radio programming to people like me, who find most or all
of their news, information, and entertainment on the Internet, is challenging enough. But
the problem gets even more acute when you consider that more and more of us are
accessing the net using our cell phones. A lot of phones today can play podcasts and
streaming audio—but when it comes to finding a specific radio station‘s audio stream on
a mobile device, there aren‘t a lot of good tools. And that means members of the mobile
generation are increasingly cut off from their local radio stations.
Now, if we were only talking about commercial radio, with its evanescent mix of
Top 40 music, shock-jock antics, and right-wing political talk, I wouldn‘t be too worked
up about radio‘s crisis. It would be just one more old medium, like newspapers, finding
itself left behind by technological change. The problem is that public radio—one of the
country‘s key bastions of arts, culture, and independent news and analysis, not to mention
jazz, folk, and classical music—is also endangered.
Fortunately, the public radio community is awake to the problem. ―Cell phone
ownership and its many uses and applications also provide both potential and
fragmentation‖ for public radio, the Public Radio Program Directors Association
concluded from a survey it conducted last year. ―As consumers avail themselves of many
different functions on these devices, it will be imperative that Public Radio streaming
efforts, as well as related digital products, be available on these gadgets that are rapidly
becoming handheld computers.‖
This isn‘t just idle talk. Late last year, a coalition led by the Cambridge, MA-based
Public Radio Exchange (PRX) created the best tool yet for accessing live public radio
streams on a mobile device: The Public Radio Tuner, a free app for the Apple iPhone and
iPod Touch. (The effort also brought in American Public Media, National Public Radio,
Public Interactive, and Public Radio International, and was funded by the Corporation for
Public Broadcasting.)
I was ecstatic when I found the app recently. I love shows like ―On Point,‖ ―All
Things Considered,‖ ―Marketplace,‖ ―NPR: Science Friday,‖ ―Car Talk,‖ ―Fresh Air,‖
―Radio Lab,‖ and ―Wait Wait… Don‘t Tell Me!‖ But the only radio I own is the one in
my car. Since my commute to work is a disappointingly short 12 minutes—and I often
bike or walk—I only hear infrequent, short snippets of these shows.
But I‘ve always got my iPhone with me. So now I just turn on the Public Radio
Tuner, pull up my favorite local station (WBUR), and listen to my heart‘s content over
my phone‘s 3G data connection. The audio quality is perfectly adequate, and I can listen
when I‘m at home just by hooking my iPhone up to my HDTV‘s audio input jacks (using
a $6 Belkin cable splitter that I should have bought ages ago).
Perhaps the coolest thing about the tuner is that it can connect you to so many
stations around the country—more than 200 at last count. I discovered public radio as a
teenager growing up in central Michigan, so it‘s nice to be able to check in from time to
time with WKAR in East Lansing. Having lived in San Francisco and (briefly) Las
Vegas, I‘m fond of both KQED and KNPR, and my brother lives in Alaska, so it‘s also
fun to hear when the river ice is breaking up in Talkeetna on KTNA.
In the latest version of the tuner, released in January, programmers fixed some of
the app‘s early problems with frequent crashes, and added oft-requested features like
bookmarking, a search function, and the ability to find nearby stations using the iPhone‘s
GPS chip. With all these features, it doesn‘t surprise me that the Public Radio Tuner is
currently number 15 on the App Store‘s list of the most popular free apps. And if you‘re
not an iPhone owner, never fear—the team that built the app says the 2.0 version, which
is coming in May, is being built using technology that will be easier to port to the
Android operating system. Versions for Windows Mobile and Blackberry smart phones
may be coming later.
It‘s worth repeating, though, that for now the Public Radio Tuner only plays live
audio streams. If you want to time-shift your radio listening, you‘ll need to dig into the
podcast section of the iTunes Store. The good news is that a growing number of public
radio shows, including most of the shows I listed above as my favorites, are available as
free podcasts. If you subscribe to them, they‘ll show up automatically every time you
sync your iPhone or iPod. (Meanwhile, there‘s a report that the May update of the Public
Radio Tuner will let listeners hear on-demand content.)
There‘s an amusing coda to the story of the Public Radio Tuner app. If you look at
the app‘s page in the iTunes Store, it gets only two stars out of a possible five. That
mystified me, since the majority of the recent reviews are raves, giving the app four or
five stars. When I clicked all the way through to the earliest reviews, it turned out that the
Public Radio Tuner had a bit of a marketing problem: Most of the people who
downloaded it when it first appeared thought they were getting a tuner for all radio
stations, and therefore gave it one star out of disappointment. Some representative
comments: ―Horrific stations for stations! and the ones they do have are classical?
WTF!!!!????‖ ―Garbage unless ur over the age of 90.‖ ―This app needs more hip hop
stations or something I was not plzd.‖
Well, I‘m not over 90, but I‘m very plzd with the Public Radio Tuner. Now my
iPhone isn‘t just a phone, a music and video player, a camera, a Web browser, an e-mail
device, an e-book reader, a speech-driven search engine, a geocaching navigator, a fitness
tracker, and a four-holed flute; it‘s also a good old-fashioned radio.
Author's Update, February 2010: In mid-2009 the Public Radio Exchange released
a completely overhauled version of its iPhone app, now called the Public Radio Player. It
does offer on-demand programming, and a bunch of other delightful features. I published
an extensive interview with PRX's executive director, Jake Shapiro, in August 2009.
41: Plinky: The Cure for Blank Slate
Syndrome
February 20, 2009
If you feel it‘s time to share something online but can‘t think of anything to say, it
might be a sign that you‘re dull. If you try too hard to craft a bon mot for your blog or
some table talk for your Twitter stream, in other words, you might just be inflicting your
insipidness on the rest of us.
Or it could mean that you just need a little inspiration.
The folks at Lafayette, CA-based Plinky, a Web startup led by ex-Googler Jason
Shellen, have chosen the latter, more charitable interpretation. On January 22, they went
public with an online ―content encouragement‖ service designed to supply the dusty
nuclei for little snowflakes of confession, insight, or humor.
Every day, Plinky supplies a ―prompt‖—a provocative question or challenge—and
then helps users craft multimedia-enhanced answers that are posted both on the Plinky
site and on the social-media services of the user‘s choosing. (Currently, Plinky can send
posts to Blogger, Facebook, LiveJournal, Tumblr, Twitter, TypePad, WordPress, and
Xanga.) The prompt for February 16, for example, was ―Name a book that changed your
mind or opened your eyes.‖ The question elicited as many different answers as there were
answerers, from Naked Lunch, the 1959 novel by William S. Burroughs, to Harold and
the Purple Crayon, the classic children‘s book by Crockett Johnson; Plinky illustrated the
answers with a picture of each book‘s cover, grabbed from Amazon.com.
Other prompts lead to answers that might contain Google maps, Flickr photos, or
Amazon CD covers. The service is designed, in other words, to take advantage of the
Web 2.0-style open interfaces that allow data such as product thumbnails to be shared
and repackaged across many sites. It also encourages conversation, by allowing people to
subscribe to and comment upon other users‘ answers—the same way they might on
Facebook or Twitter, but with a prefabricated topic. ―People want to connect through
content,‖ Shellen told me by phone last week. (Our full interview appears below.)
Shellen was famous even before he joined Google for being part of the team at San
Francisco-based Pyra Labs that built Blogger, the first popular blogging platform.
(Another Pyra/Google alum, Evan Williams, went on to co-found Twitter.) So it‘s no
surprise that Shellen‘s seven-employee startup has pulled in seed money from big-name
investors like Waltham, MA-based Polaris Venture Partners. In fact, Polaris general
partner Sim Simeonov, who first tipped me off about Plinky, is the company‘s interim
chief technology officer.
Shellen says the company will go after more venture money soon. And it‘s safe to
say that the Plinky you see right now will evolve over time. For one thing, the company
hasn‘t rolled out any services, beyond the occasional advertisement, that it can actually
charge money for. And Shellen says users are already clamoring for more frequent and
more varied prompts—it wouldn‘t be too hard to generate prompts just for sports fans or
political junkies, for example.
I‘ve been playing around with Plinky for a few days; you can see my collected
answers here and at my personal blog. I‘m not one of those people who has a shortage of
things to say, so I‘m probably not at the center of Plinky‘s targeted user base. But even
so, I find the tool far more inviting than Twitter or Facebook, and I‘m sure it‘s already
becoming a hotspot for many interesting online conversations that wouldn‘t happen
otherwise. As Shellen and his developers find more ways to integrate Plinky with existing
publishing platforms, it will doubtless become even more useful. Personally, I think I
would be more likely to use Plinky regularly if I could view and answer each day‘s
Plinky prompt directly from my Tumblr or WordPress dashboard, from my desktop
Twitter client (Twhirl), or from an app on my iPhone.
Some of those capabilities may be on the way—but to hear Shellen tell it, the
company is even more excited about finding ways to mine the information that users
share over Plinky. As the user base grows, the answers could coalesce into a vast,
ongoing consumer survey that supplements review sites like Yelp or Angie‘s List.
Looking for a good place to meet an old friend for a drink? Just check out the answers to
yesterday‘s prompt.
Here‘s the (edited) text of my interview with Shellen.
Wade Roush: How did the idea for Plinky come about?
Jason Shellen: When I left Google I had a bunch of ideas percolating. Initially I
thought I was going to take the approach of something like IdeaLab—raise a little money
and get an incubator going, since the amount of money needed to start a company these
days is so much smaller. But as is usual with these things, one idea captivated me. It was
this idea that you could encourage people to create content in a more directed fashion—
that you could end up with a win-win where the content looks better, is easier to create, is
a little bit more inspired, and that potentially there would be a business model.
I was on the Blogger team before we sold the company to Google, in a business
development and product strategy role. We really struggled with how to make the tool
understandable to people, because at the time people didn‘t even know what blogging
was. Once we had the resources at Google to explain really well what blogging was,
people started signing up in droves. But many of them were no longer blogging—they
were doing something else like sharing stories, posting photographs. They weren‘t
blogging for blogging‘s sake—they had very directed activities in mind. But there were
still enough people signing up every day and then facing this big white text box and
realizing they didn‘t know what they were going to write. That really got me thinking.
You can look at any of the blogging or social networking services and they‘ll tell
you that the abandon rate is pretty high. You need some reason to contribute. I really felt
like the tools needed some attention again. Blogging software is great, but maybe there
can be something that other services can add as a layer, making use of all the great APIs
[application programming interfaces] out there—not trying to start another Blogger or
Wordpress. But we do see that with things like Tumblr and Twitter and a lot of Facebook
applications, people do want to connect through content, and they want to be inspired and
challenged in new and different ways.
WR: So how would you describe what Plinky does, at its core?
JS: The core of it is the prompts—that spark that drives you to create. But just as
important is the fact that you‘re not confronted with a big white text area. For instance,
today‘s prompt is ―Share the longest road trip you‘ve ever taken.‖ Now, the standalone
prompt idea has been tried before. Six Apart has a question of the day, for example. But
we decided to take a novel approach and use a lot of the open APIs out there to bring in
additional content. So in this case, we ask you what was the starting point of your road
trip and what was the end point, and we create a great-looking Google map for you. Then
we prompt you around that, and ask why you were going on that trip. That gentle
encouragement makes all the difference.
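The kind of open-API decoration Shellen describes can be sketched in a few lines. The example below builds a Google Static Maps image URL for a road-trip answer; the endpoint and parameter names (size, markers, path, key) come from Google's public Static Maps API, but the marker styling and the placeholder key are illustrative choices, not anything Plinky has disclosed about its own implementation:

```python
from urllib.parse import urlencode

def road_trip_map_url(start, end, size="400x300", api_key="YOUR_KEY"):
    """Build a Google Static Maps URL marking a road trip's endpoints.

    A sketch of how an open API lets a service decorate a text answer
    with a map image; the api_key default is a placeholder.
    """
    params = [
        ("size", size),
        ("markers", f"color:green|label:A|{start}"),  # starting point
        ("markers", f"color:red|label:B|{end}"),      # end point
        ("path", f"{start}|{end}"),                   # straight line between them
        ("key", api_key),
    ]
    # urlencode accepts a list of tuples, so the repeated "markers"
    # parameter is preserved, as the Static Maps API expects.
    return "https://maps.googleapis.com/maps/api/staticmap?" + urlencode(params)
```

Dropping the resulting URL into an `<img>` tag is all it takes to turn two place names into a "great-looking Google map."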
WR: It sounds like coming up with the daily prompts—which is currently the job
of your brother Grant Shellen—is a lot more involved than just sitting down and coming
up with a list of random questions, one for each day. You need to pick questions that lend
themselves to this multimedia enhancement.
JS: Yeah, the idea was always that the interface can change on a daily basis. It‘s
difficult—it‘s a lot of plates to keep spinning all at once. But we‘ve come up with a
templatized system behind the scenes to help with that. Today‘s prompt fits roughly into
our mapping interface. Maybe tomorrow‘s will be an offshoot of our image template.
The other part that makes this interesting is that when you look at the individual
prompts themselves, people can post their answers to their blogs or to Facebook or
Twitter. And we have an aggregated ―Most Popular‖ view, an algorithm that looks at how
many times an answer has been viewed, commented, or favorited. We are focusing quite
a bit right now on making those aggregate pages more fun, something that would keep
you contributing.
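Plinky hasn't published its ranking formula, but an aggregate view like the one Shellen describes usually reduces to a weighted combination of the available signals. The weights below are invented purely to illustrate the idea of folding views, comments, and favorites into a single sortable score:

```python
def popularity_score(views, comments, favorites,
                     w_view=1.0, w_comment=5.0, w_fav=10.0):
    """Hypothetical weighted score for ranking answers.

    The weights are illustrative guesses: a comment or a favorite
    signals more engagement than a passive view, so each counts more.
    """
    return w_view * views + w_comment * comments + w_fav * favorites

# Sort a set of answers so the highest-scoring come first:
answers = [
    {"id": 1, "views": 120, "comments": 3, "favorites": 2},
    {"id": 2, "views": 40, "comments": 10, "favorites": 8},
]
ranked = sorted(
    answers,
    key=lambda a: popularity_score(a["views"], a["comments"], a["favorites"]),
    reverse=True,
)
```

Note how the weighting lets a lightly viewed but heavily favorited answer outrank one that merely drew a lot of eyeballs.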
WR: Right now, you have to go to Plinky.com to enter your answer to a prompt.
But given your focus on integration with other Web services, can you envision having
Plinky widgets that would let you write answers directly from your blog or from Twitter
or wherever?
JS: Absolutely—we have most of that spec‘d out. The idea all along was to make
sure this is a portable interface. If you just look at the card-and-stack metaphor we use on
the site, you can see it‘s something that could easily be placed into a gadget or a widget.
The other day, we did release a Google Gadget that people can drop onto their Google
home page. But for right now, it‘s just delivering the daily prompt. The user still needs to
come to Plinky to respond.
WR: How do you think you‘ll be able to make money with Plinky?
JS: Obviously, we are a seed stage company, and we have some plans around the
business model. You can see that we‘re experimenting with some ads on the sight right
now. It‘s a fairly high-engagement site, and we find that people stay on the site and want
to read friends‘ answers, so advertising has always been a piece of the business model.
But we have a couple of other things we‘re not quite ready to talk about that we think
should be interesting for businesses and advertisers.
WR: Tell me about your relationship with Polaris Ventures and Sim Simeonov.
How did that come about?
JS: When I left Google, as I mentioned, I had a couple of different ideas
percolating, and I was pointed to Mike Hirshland at Polaris as somebody who was good
to sit down with. I told him about a few ideas, and he said they sounded like good ideas
but that they didn‘t sound fully baked. And he said ―You know who might be
interested—Sim, who happens to be in town and I‘m sure he‘d love to grab dinner with
you.‖ So we sat down over a beer. Sim thinks in business models, and he‘s incredibly
smart, and we hit it off.
Over the course of the next few weeks and months we met a few more times. I
briefly jumped into another company, but decided that was not for me and that I should
really pursue my own path. And by about March or April of 2008 I gave him a call and
said, ―Are you still thinking about this?‖ and he said, ―Absolutely.‖ So we started putting
together the business plan. Obviously, Sim is at Polaris, and he knows the process, he
knows what VCs are looking for, and we pitched a few folks inside Polaris and we got a
deal done with them. They put in $1.3 million, and we took another $300,000 from
angels and other folks. They seem to be big believers in giving people the ability to
create. They‘ve been a fun company to be involved with.
WR: $1.6 million is on the low end of things for an initial investment round. Do
you envision going out for more money at some point?
JS: It is on the lower end of things, but this was only for the seed stage. The things
we had hoped to prove by this spring are falling into place nicely, and we absolutely will
be going out for another round of funding soon.
WR: Plinky is similar in some ways to Twitter or the status update field on
Facebook. Of course, the prompt on those services is always the same: ―What are you
doing right now?‖—whereas your prompt changes every day. But how else are you
different?
JS: Early on in the press, we were characterized as ―yet another microblogging
service.‖ So some people thought that was what we were. But as my brother Grant is fond
of saying, we are only a microblogging service if you don‘t like to write very much.
There are people who will respond to a prompt like ―Defend your vice‖ and they‘ll just
say, ―Smoking. Never going to quit.‖ But there are just as many if not more people who
are religious about getting to Plinky every day and starting their day with it and writing
hundreds of words.
The other thing is that I was interested to see how the lightweight social model [the
ability to follow other users and see their answers to daily prompts] would work, and it
turns out that people have taken really well to that. Even during the private preview with
about 150 friends and testers back in November, we had users saying they were learning
things about their friends they didn‘t know. We had one prompt that asked ―What was
your first job?‖ and one team member said ―Wow, I didn‘t know that was my wife‘s first
job.‖ That, for me, hit the nail on the head—it was exactly what we wanted to see
happening, people learning things about other users and finding it compelling. That‘s
where we differ from a service like Twitter.
WR: I wonder whether, for some people, the idea of responding to a different
prompt every single day might be fatiguing. It could be like getting too many
invitations on Facebook to join this or that group or fill out this or that chain letter, to the
point where you just tune it out.
JS: Actually, one of the things that has surprised me is that there are a number of
people every day saying ―give me the next prompt.‖ We seeded the system with
something like six prompts on the first day we went public, and there were people who
were immediately asking for more. So there are lots of completists who really like this
daily inspiration and want to do it even more. But I‘d say to those people who might find
it numbing that they don‘t need to do it every day. Wait for one you like and then
contribute. As Plinky becomes more prevalent, it will be more fun to contribute when you
see your friends contributing. Maybe you‘ll get groups of people arguing about whose
road trip was better.
WR: What do you expect Plinky to look like a year from now?
JS: There is a real hope that as we drive a lot of engagement with users, the answer
pages will become a valuable tool—something people will find compelling even if
they‘re not in the mood to answer a prompt. Things like ―58 percent of Plinky
users recommend this jazz album for a rainy day.‖ Mining the data, in other words. The
other thing is that as we evolve as a service, there‘s no reason that we‘re limited to the
current set of prompts. We could classify them differently, or we could send out multiple
prompts every day. You might be more likely to answer a sports prompt, or an ―esoteric
questions‖ prompt. We should be addressing our users‘ needs. The other thing we hope to
do fairly soon is start playing around even more with the interface. We‘re planning to try
some fairly radical things.
WR: Where does the name ―Plinky‖ come from? Does it mean something, or is it
one of those catchy but meaningless Web 2.0 names like Django or Joomla or Squidoo?
JS: When you turn on a fluorescent light, it makes a ―plink‖ sound. We talk about
it as a way to describe the moment of inspiration. But it also just has a nice sound. We‘re
going to win some horrible award for it, I‘m sure. But I think it works pretty well to
describe what we‘re doing.
42: Massachusetts Technology Industry
Needs a New Deal, Not a New Brand
February 27, 2009
If Silicon Valley didn‘t exist, Boston would have to invent it in order to have
someplace to feel inferior to.
That‘s the thought that occurred to me when I read an article in the Boston Globe
last week about the Information Technology Collaborative. This new posse of industry,
government, and academic leaders met in Cambridge recently to discuss the best ways to
publicize the information technology sector in Massachusetts. The idea is to recapture
some of the luster that Route 128 used to enjoy as the East Coast‘s answer to Silicon
Valley. The group tossed around new brand names for the state, such as ―The Innovation
Hub,‖ but rather than settling on any single message, it decided to commission a
$150,000 study to demonstrate how the infotech sector contributes to the Massachusetts
economy.
Here at Xconomy, we haven‘t gotten directly involved in these kinds of discussions
and studies. Nor have we given them much ink. It‘s not because we don‘t care—on the
contrary, our mission is all about chronicling the innovation ecosystems in and around
our home cities (which also include San Diego and Seattle). Rather, it‘s because we‘re
usually too busy writing about actual innovators at actual companies—both the successes
they‘ve achieved and the challenges they face.
Boston‘s problem is not a lack of tech entrepreneurship. (If you don‘t believe me,
check out the 1,700 stories we‘ve published on our Boston site alone since our founding
20 months ago.) Indeed, there‘s so much amazing innovation going on in Massachusetts
already that it seems superfluous to worry about better marketing slogans or how much
mental real estate Boston occupies relative to Silicon Valley. It would be far better for
economic growth in the state if public-private initiatives like the Information Technology
Collaborative focused on a few substantive policy reforms targeting the all-too-numerous
obstacles to prosperity for local businesses, technology professionals, and entrepreneurs.
Don‘t get me wrong: it‘s important to show the outside world, especially businesses
considering locating in Massachusetts, that the state has an innovation-friendly
government. But elected officials are already doing a pretty good job of that—witness
Governor Deval Patrick‘s recent tour of the Cambridge Innovation Center and his West
Coast trade mission, and Boston Mayor Tom Menino‘s creation this week of Boston
World Partnerships, a group that hopes to use online social-networking tools to play up
the value of doing business in Boston.
So instead of commissioning more studies and devising advertising campaigns to
help people ―discover‖ Massachusetts, let‘s take a closer look at local problems we could
fix and local success stories that could be emulated or amplified. I‘m not a policy expert,
and other observers could probably come up with a more trenchant or realistic list. But
here are just five of the ideas officials could choose from:
1. Clone MIT inventions like the Deshpande Center at other universities in
Massachusetts. We probably didn‘t need one more report to tell us this, but MIT‘s
impact on the global economy, via companies founded by its alumni, is gigantic,
amounting to some $2 trillion a year, according to a Kauffman Foundation study released
last week. The report attributed much of MIT‘s success at churning out successful
graduates to a well-developed ―entrepreneurial ecosystem‖ in which the institute, the
local technology and venture-capital communities, and students themselves all assume
key roles.
Kauffman vice president Lesa Mitchell told me the foundation supported the study
mainly because it needed to put more data behind its campaign to nurture similar
ecosystems at other schools. The foundation just gave the University of Kansas a major
grant to set up an organization modeled on the Deshpande Center for Technological
Innovation, which matches MIT innovators with business mentors and provides seed
funding to get ideas from the lab bench to the prototype stage.
But we need more Deshpande Centers right here. Schools like Babson College,
Northeastern, Boston University, and the University of Massachusetts have plenty of
innovative faculty and students who could benefit from equal access to experienced
mentors and the venture community. Unfortunately, not every school has a wealthy and
public-spirited alumnus like Desh Deshpande, who founded Sycamore Networks, willing
to pony up the cash needed for an array of seed grants. This is where the state should step
in—either with direct grants to schools, or by funneling money through the state‘s ample
array of quasi-public technology advocacy agencies, like the Massachusetts Technology
Leadership Council, the Massachusetts Technology Transfer Center, the Massachusetts
Technology Collaborative, and the Massachusetts Technology Development Corporation.
2. Upgrade Boston’s transportation infrastructure to make commuting easier.
One obvious project—and an expensive one, though it‘s exactly the kind of infrastructure
investment that the Obama Administration says it wants to make to put people back to
work—would be to add a second set of train tracks to the MBTA‘s Fitchburg-South
Acton line. Because there‘s only a single track west of Acton, there aren‘t enough trains
to get professionals who live in Boston out to their employers‘ offices along the I-495
corridor in the morning, or to bring them home in the evening. If you live in Boston and
you work at IBM‘s new facilities in Westford and Littleton, for example, the earliest you
can get to the office is after 9:42 a.m., when the first outbound train arrives at Littleton/I-
495. That‘s a business-unfriendly transportation policy if I ever heard of one. The fact is,
so many people make the ―reverse commute‖ from the city to the suburbs today that it‘s
no longer reverse.
And while we‘re at it, how about getting serious about fixing the 103-year-old
Longfellow Bridge, which, as anyone can see, is a rusting hulk? The Department of
Conservation and Recreation recently removed lane restrictions that were in place during
a six-month, $12.5 million emergency repair project, but those measures were merely
palliative. The disruption to business in both Boston and Cambridge if this vital
automobile, rail, bicycle, and pedestrian artery had to be shut down for safety reasons
would be massive. Estimates are that rehabilitating the bridge could cost $200 million to
$400 million and take 10 years to complete. All the more reason to start now—and as
President Obama keeps reminding us, spending is stimulus.
3. Ease siting and permitting hassles for new technology projects. New England
has a promising young cleantech sector developing new ways to generate energy from
wind, sunlight, wood chips, municipal waste, cow manure, you name it—but ironically,
when it comes time to build pilot facilities and go after commercial-scale customers, most
of these companies look to places like Florida, Texas, or Michigan, where state and local
governments are far more accommodating toward new facilities.
Part of the problem is that New England towns and cities take the ―home rule‖
tradition to a ridiculous extreme. When I visited IST Energy last month to learn about its
compact waste-to-energy machine, executives told me the biggest obstacle to selling its
technology is the tangle of inspections, reviews, and permits that potential customers
face, which are different from town to town. ―The best support we could possibly get
from the state would be clearing regulatory pathways for all new energy technologies, not
just ours,‖ IST Energy‘s vice president of corporate development David Montella told
me. ―No one would characterize the Northeast as an easy locale‖ for green energy
projects, he said.
Obviously, this problem doesn‘t affect information technology companies as
severely as it does cleantech firms. Software developers bent over their terminals are a
pretty benign bunch, environmentally speaking. But even in infotech, the expense and
difficulty of building and permitting facilities like clean rooms for semiconductor
research force many companies far out of their way. SiOnyx, an innovative company
exploring applications for ―black silicon,‖ settled in an office park in Beverly, MA—
quite a hike from both downtown Boston and the core of the state‘s electronics industry
along Route 128—because that was the only place it could find an existing, unoccupied
clean room.
Legislative reforms enacted in 2006 created a system that lets Massachusetts towns
opt into a fast-track permitting system, but as of mid-2008, fewer than 50 of the state‘s
351 cities and towns had joined, according to Boston-based research and consulting firm
Mass Insight. The state government needs to do more to reward communities that adopt
fast-track permitting and penalize those that don‘t. (You can see how individual towns
are doing along several measures of tech-friendliness at masstrack.org, a site maintained
by the Massachusetts High Technology Council.)
4. Play up New England’s gay-friendly credentials. Any startup that‘s trying to
choose between, say, Cambridge and Palo Alto for its HQ needs to consider what kind of
environment it will be asking its employees to live in. Is it one that respects the rights of
all citizens equally, black or white, Hispanic or Asian, gay or straight? With the passage
of Proposition 8 in California last November, outlawing gay marriage, the place that
Apple, Google, Facebook, and eBay call home joined the ranks of states where voters
have said gay people are not entitled to equal protection under the law.
Eventually, majorities in those states will realize that they are on the wrong side of
history. But for the moment, Massachusetts and Connecticut have the right side to
themselves—and as I argued in a column in November, it‘s time to play up that fact. This
is one area where a little marketing wouldn‘t hurt. What better way to wake up California
and other anti-gay-marriage states than to launch a campaign to lure their most talented
gay and lesbian employees here, where they, their spouses, and their families will be fully
welcomed?
5. Make non-compete agreements illegal in Massachusetts. Local venture capital
leaders like Spark Capital‘s Bijan Sabet have been arguing for some time now that the
Massachusetts tradition of strict enforcement of non-compete clauses in employment
contracts stifles innovation. It prevents qualified engineers or marketers from moving
from one company where their services or ideas may not be needed to another where they
may well be. It keeps the creative professionals who are germinating ideas and potentially
building tomorrow‘s leading companies from even talking with one another, for fear of
retribution from their current employers. For the new legislative session that began in
January, State Representative Will Brownsberger has introduced a bill that would (non-
retroactively) disallow non-compete agreements, putting Massachusetts on an equal
footing with California in this area. The Patrick Administration should get behind the
Brownsberger bill.
***
Implementing a few of these business-friendly changes, or the many others being
suggested nowadays, would help Massachusetts entrepreneurs keep doing what they‘ve
always done best—innovate. Take care of that, and the branding will take care of itself.
Author's Update, February 2010: The Information Technology Collaborative has
met several more times since this February 2009 column, and is pursuing a range of
ideas for enhancing Massachusetts' power as an innovation hub. Marketing the state's
attractions more effectively is one of them, but by no means the only one. This month the
group publicized the findings of the $150,000 study I mentioned above. The study found
that the information technology sector's impact on the state economy is enormous: The 10,000-
plus IT companies doing business in Massachusetts spend $65 billion a year—equivalent
to about 18 percent of the state's GDP—and are responsible for another $29 billion in
spending by local suppliers and contractors and $19 billion in consumer spending by
employees. But the report was short on recommendations for shoring up the sector.
43: Three New Reasons To Put Off
Buying a Kindle
March 6, 2009
I titled my January 23 column ―E-Book Readers on the iPhone: They‘re Not Quite
Kindle Slayers Yet‖ (see Chapter 37). How quickly technology marches ahead. In the
weeks since then, three very compelling new options have arrived for people like me who
want to read e-books but balk at the price tag on Amazon‘s Kindle 2, the best dedicated
e-book reader on the market. As it turns out, the real Kindle killer may be Kindle itself—
the iPhone version, that is.
Option 1: Google Book Search for iPhone and Android. On February 5, Google
introduced a mobile-friendly version of its five-year-old book search utility. Google Book
Search is the public face of Google‘s massive project to scan millions of out-of-print
books held at famous libraries, make their text searchable, and show all or parts of the
books online. If you open a book in Google Book Search on a regular PC browser, you
see the actual page images that Google captured. But for the smaller screens of mobile
devices, Google came up with a way to show just the raw text, as extracted by optical
character recognition (OCR) software. For mobile subscribers inside the United States,
the new service offers access to the full text of a staggering 1.5 million public domain
books.
The huge up side to Google‘s move is that it puts so many books at the fingertips of
mobile users, wherever they may happen to be. The down side is that OCR technology is
still imperfect, so the extracted text is often garbled. The older, fancier, or more unusual
the typography in the original book, the more nonsense characters show up in the Google
interface. But to counter that problem, the Google Book Search team has built in a nifty
feature: just by tapping the screen, you can instantly download Google‘s image of the
page, to get a look at the original text.
I don‘t think lots of people are going to read entire books using Google‘s interface,
which, even apart from the OCR problem, is marred by slow downloads (even on a Wi-Fi
connection) and a tiny, non-adjustable font. But it‘s a fantastic reference tool for people
on the go.
Now, readers of this column will know that I‘ve been critical of the settlement
agreement reached last fall between Google and the Authors Guild, the Association of
American Publishers, and several publishing houses. Those organizations saw the fact
that Google was scanning copyrighted, but out-of-print books, as well as public-domain
books, as a huge copyright violation. In the settlement, Google agreed to pay damages to
the authors of books already scanned, while also setting up a way to share profits with
authors when, at some point in the future, Google gives Book Search users the ability to
purchase full-text downloads of out-of-print books. My concern is that thanks to the
concessions the authors and publishers extracted, those downloads will be a lot more
expensive than they would have been if Google had been allowed to go ahead with its
scanning project unimpeded.
But none of that affects the mobile version of Google Book Search, which is limited
(so far) to the free, public-domain books Google has scanned—generally, those published
before 1923. Obviously, that covers centuries of great literature, from Juvenal‘s Satires to
Dante‘s Inferno to The Adventures of Sherlock Holmes.
Option 2: Shortcovers. The Kindle isn‘t available in Canada—or anywhere else
outside the United States, for that matter. So on February 26, the Canadians took matters
into their own hands, launching a mobile bookstore called Shortcovers. It‘s the creation
of Indigo Books and Music, the Toronto-based bookstore chain that also owns the
Chapters chain. (If you wrapped together Barnes & Noble, Borders, Books-a-Million, and
Powell‘s Books and put them north of the border, you‘d have Indigo.)
You can buy Shortcovers e-books from the company‘s website and then read them
online or using the free Shortcovers app, which is available for the iPhone and
Blackberry phones. Having tried it out on the iPhone, here‘s what I like about
Shortcovers: You can read the first chapters of all Shortcovers books for free. Or, if you
prefer, you can buy subsequent chapters one at a time for $0.99 each, rather than
spending $9.99 for a whole book and then finding out you don‘t like it. You can access
magazine articles, blog posts, short stories, poems, and other sub-book-sized chunks of
content. And there are some neat community features built into the service that I haven‘t
seen from any other e-book vendor, such as the ability to publish your own e-books to the
Shortcovers store for free, and the ability to create ―mixes,‖ personalized compilations
that you can share with other people.
Unfortunately, Shortcovers also has a few shortcomings. The iPhone app won‘t start
up at all if you don‘t have a Wi-Fi or 3G connection—so forget using it to read on a
plane. Once you‘ve finished reading a sample chapter, Shortcovers doesn‘t give you an
easy way to buy the rest of a book: you have to navigate back to the app‘s catalog-
browsing area, find the book again, and then click the ―buy‖ button, which then shoots
you over to the Shortcovers website—it‘s all very confusing. When you‘re reading, the
app‘s title panel and control bar take up quite a bit of the screen‘s scarce real estate. This
leaves less room for text, meaning you have to spend more time scrolling. And there are
formatting snags: In the book I bought to test the service (David Denby‘s Snark), there
were numerous typos, most often missing spaces that resulted in runonwordslikethis.
Finally, the book prices can be a bit steep—many new titles are $9.99, but others are as
much as $16.
Option 3: Kindle for iPhone. When Amazon‘s Jeff Bezos unveiled the Kindle 2 a
month ago, he said the company planned to make the 240,000 books that Amazon has
converted for reading on the Kindle available for other devices as well, starting with the
iPhone. But I don‘t think anybody (except maybe Walt Mossberg) expected him to follow
through on that promise so soon. Amazon‘s ―Kindle for iPhone‖ app, introduced March
3, is a little marvel. And this probably isn‘t what Bezos wants to hear, but it came out just
in time to stop me from spending $359 on an actual Kindle.
My mouse finger hovered over the Kindle page‘s ―Add to Shopping Cart‖ button all
of last weekend. The devil on my left shoulder said ―Go ahead, buy it—you write about
gadgets, you need it for your work.‖ The angel on my right shoulder said, ―Didn‘t you
just hear the guy on NPR? You‘re supposed to be saving enough money to cover six
months of living expenses in case the economy really implodes.‖ The ambivalent guy in
the middle put off the decision.
Then the iPhone app appeared. To be clear, it‘s not a substitute for the real Kindle,
whose e-paper display is probably the most readable on the market (not to mention the
most energy-efficient—it uses so little juice that battery life is a non-issue). But the
iPhone version does include several of the other features that make the Kindle so hard to
resist, including wireless access to all 240,000 Kindle editions, a flat $9.99 price tag for
new bestsellers (and lower prices on many other books), and a beautifully stripped-down
reading interface with none of the onscreen clutter that mars the Shortcovers screen.
(Amazon‘s interface designers seem to have paid close attention to Stanza, one of the
iPhone e-book apps I reviewed in January.)
The app offers a nice selection of fonts and font sizes, a bookmarking function, and
all the other e-reading basics. Turning pages is a simple matter of flicking the current
page to the left (which is actually the gesture I tried the first time I got to play with a
Kindle 2, only to remember that it doesn‘t have a touch screen). I flicked my way through
most of Malcolm Gladwell‘s Outliers the other night and found the experience to be quite
comfortable.
In short, Kindle for iPhone has slaked my thirst for a Kindle, at least for now.
Still, no matter what app you use, the iPhone will always be sub-optimal as an e-
book reading device. Its small, backlit LCD screen can‘t hold much text, drains the
phone‘s battery relatively fast, and causes eye strain for some users. So the people who
will probably get the most pleasure out of the Kindle iPhone app are those who already
own a real Kindle: a feature called ―Whispersync‖ lets them use their iPhone as a more
mobile substitute for the Kindle in a pinch. Whispersync keeps track of where you
stopped reading a book on your Kindle and opens it at the same point on your iPhone,
and vice-versa. So you could use your iPhone to read a chapter of the latest Grisham on
the subway, then switch back to your Kindle when you get home.
Which is pretty cool. In fact, maybe I‘ll talk myself into buying that Kindle yet.
Come on, you know you want one…
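Amazon has never published Whispersync‘s internals, but the behavior described above, where every device resumes the book at the furthest position any device has reported, can be sketched in a few lines. All class and method names here are invented for illustration.

```python
# Illustrative sketch of Whispersync-style position syncing. Amazon's
# actual protocol isn't public; this just models the core idea of
# reconciling the furthest reading position across devices.

class SyncedBook:
    def __init__(self, title):
        self.title = title
        # Last-reported reading position (e.g., a location offset) per device.
        self.positions = {}

    def report_position(self, device, position):
        """A device reports where the reader stopped."""
        self.positions[device] = position

    def resume_position(self):
        """Any device opens the book at the furthest position read so far."""
        return max(self.positions.values(), default=0)

book = SyncedBook("Outliers")
book.report_position("kindle", 1450)   # read at home on the Kindle
book.report_position("iphone", 1820)   # read further on the subway
print(book.resume_position())  # 1820: the Kindle picks up where the iPhone left off
```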
Update March 6, 2009 8:10 a.m.: Another bookstore chain is getting into the e-
book game. A news item yesterday indicates that Barnes & Noble has purchased
Fictionwise, one of the longest-lived e-publishing companies around (it was selling e-
books way back in 1998-99, when I worked at NuvoMedia, the maker of the Rocket
eBook). Fictionwise has a very good e-book reading app called eReader; it works on the
iPhone as well as Pocket PC, Palm, Symbian, and Windows Mobile devices, not to
mention Windows and Macintosh computers and even the OQO handheld PC. So make
that Option 4.
Author's Update, February 2010: As mentioned in the update to Chapter 37, I did
eventually buy a Kindle. I'm very happy with it, and I've found that the Whispersync
feature works exactly as advertised. See Chapter 51.
44: Top 9 Tech Updates: Photosynth,
Geocaching, Google Earth, and More
March 13, 2009
I‘ve been writing World Wide Wade for almost a year now; this is the 44th
installment. A year is a long time in the technology world—long enough for many of the
gadgets, services, and websites I‘ve covered in the past to evolve cool new features. So I
thought I‘d revisit a few of my previous columns and fill you in about what‘s changed.
1. Beyond megapixels. In my April 4 and June 6 columns in 2008, I wrote about
the Gigapan community site, where you can upload super-high-resolution photos stitched
together from lots of regular digital shots. In January of this year, a new company called
GigaPan Systems introduced a $379 robot camera mount that puts gigapixel imaging
within the reach of hobbyists. It takes care of the tedious part of gigapixel imaging by
guiding your camera through hundreds or thousands of individually-angled shots, with
just enough overlap to give the stitching software something to work with.
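To see why automating this is worthwhile, here is a rough, back-of-the-envelope estimate of how many overlapping frames a gigapixel panorama requires. The field-of-view and overlap figures are my own illustrative assumptions, not GigaPan specifications.

```python
import math

# Rough estimate of how many shots a robotic mount must take to cover a
# panorama, given the camera's per-frame field of view and the overlap
# the stitching software needs. Figures are illustrative assumptions.

def shots_needed(pano_w_deg, pano_h_deg, fov_w_deg, fov_h_deg, overlap=0.3):
    """Number of frames in a grid covering pano_w x pano_h degrees.

    Each frame advances by its field of view minus the overlap fraction,
    so adjacent frames share enough pixels for stitching.
    """
    step_w = fov_w_deg * (1 - overlap)
    step_h = fov_h_deg * (1 - overlap)
    cols = math.ceil(pano_w_deg / step_w)
    rows = math.ceil(pano_h_deg / step_h)
    return cols * rows

# A 180 x 60 degree panorama with a zoomed lens seeing ~6 x 4 degrees per frame:
print(shots_needed(180, 60, 6, 4))  # 946 frames: tedious by hand, easy for a robot
```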
2. News aggregators on steroids. Last April 11, I wrote about my favorite news-
tracking tools on the Web, including Netvibes and Alltop. Netvibes hasn‘t changed much
in the last year, but Alltop, a cool aggregator that uses pop-up windows to squeeze a lot
of news onto a single page, has exploded beyond all bounds. It had about 55 categories of
RSS feeds when I last wrote about it; now there must be well over 500, on everything
from Atheism to Zoology. And for tech-news enthusiasts, there‘s a site called TechFuga
that recently got a nice overhaul that makes it more competitive with the uber-popular but
somewhat tired TechMeme. The new features at TechFuga include Twitter searching,
reflecting the fact that more and more people are getting their news from each other via
the red-hot microblogging service. (Speaking of Twitter, you can follow me there at
―wroush‖.)
3. Earth as you’ve never seen it. On April 18, I wrote about Google Earth 4.3,
which featured improved navigation and a larger crop of 3-D buildings. The latest version
of the world‘s most popular geo-browser, Google Earth 5.0, came out in the middle of
last month. The coolest improvements: a fantastic view of the ocean floor, the ability to
delve back in time and see aerial imagery from the 1980s and earlier, and imagery for
Mars as well as Earth and the Moon.
4. An art museum in your living room. If you‘ve got an HDTV already, there‘s
no reason to buy one of those expensive digital photo frames. My April 25 column talked
about GalleryPlayer, a company that provided software and imagery for turning your TV
into a digital art exhibit. Unfortunately, GalleryPlayer went out of business in July
(though founder Scott Lipsky, an ex-Amazon exec, hinted that it had merely been sold
and might re-emerge). Luckily, there are still plenty of ways to find and display high-
resolution images on your big screen. DeviantArt is a great place to browse and
download free HD-resolution images created by professional artists and photographers.
And if you hook up your computer to your TV, you can use software like Slickr or
FlickrFan to display those images—or your own—in the form of animated slide shows.
5. An elephant never forgets. My July 18 column was about Evernote, a fantastic
cross-platform system for storing and tracking all the info-flotsam in your life: Web
pages, photos, receipts, you name it. I still add material to my Evernote account every
day, and the company just keeps making the software better and better. There‘s now a
version for Android phones (on top of the existing Web, Windows, Mac, Windows
Mobile, and iPhone versions). In December, Evernote (whose logo is an elephant) added
a file synchronization feature, so you can use it to keep copies of important Word files,
PDFs, PowerPoints, and other electronic documents, and more recently, it rolled out a
vastly improved version of its Web Clipper, which is the tool I use most often. A feature I
plan to try soon is the recently-announced Shoeboxed, a service that will scan that pile of
business cards and receipts on your desk and put them right into Evernote. And if you
used Google Notebooks—which Google gave up on in January—you can easily import
all of your notes to Evernote and pick up where you left off.
6. Cutting the cord. In my July 25 column, I threatened to give up my cable TV
subscription and switch to watching my favorite shows online, via video aggregators like
Hulu. Well, it took me a while to gather up the courage, but last week I finally made good
on the threat, and dropped my $80 digital cable package at Comcast in favor of a $10
lineup of about 23 local channels (which I kept just in case I ever feel the need to watch
live news). While I was at it, I canceled my land line, which only telemarketers ever
called anyway. Now we‘ll see what life post-cable is really like—I‘ll let you know how
it‘s going in a future column. Fortunately, there are now convenient video-on-demand
services from both Netflix and Amazon; the Roku Player, which now taps into both
services, gets good reviews. And I‘ve got four seasons of The Wire waiting for me on
DVD.
7. Scene stealer. On August 29, I wrote about Photosynth, an amazing visualization
tool from Microsoft Live Labs. Photosynth lets you upload up to 300 photos taken in a
single location (say, Boston‘s Copley Square) and then organizes them into accurate 3-D
arrays that you can explore almost as if you were walking through the actual scene. The
big news here is that Greg Pascale, a former Microsoft intern, just finished a free
Photosynth viewer for the Apple iPhone called iSynth. It works great—in fact, it‘s better
than Microsoft‘s regular online Photosynth viewer, because the iPhone‘s multi-touch
interface provides such a natural way to interact with the images.
8. Cache is king. My September 19 column about geocaching, ―GPS Treasure
Hunting with Your iPhone 3G,‖ was pretty popular. But at the time, doing any serious
geocaching required switching back and forth between different applications—one such
as Geopher Lite for looking up geocaches and their locations, and another such as GPS
Kit for actually navigating to the specified locations. In January, the company that
invented geocaching and acts as the official clearinghouse for the sport—Groundspeak—
came out with an all-in-one geocaching application for the iPhone 3G. I‘ve tested it in the
field, and it works great. It‘s well worth the $9.99 price tag.
9. Spring forward. My November 21 column was about Springpad, an online
notebook service launched last year by Boston‘s Spring Partners. This Web application
lets you create task-oriented Web pages (called springpads) that include text notes, to-do
lists, contacts, calendar events, maps, photos, and other media. Interestingly, rather than
pitching Springpad as a general organizational tool—the way Evernote does with its
service—Spring Partners has chosen to roll out its technology in stages, starting with a
series of specialized springpads with seasonal themes. The first custom springpads were
designed to help with holiday shopping and meal planning. And last month, Spring
Partners founder Jeff Janer told me about the new springpad designed for date planning
(which appeared in early February, in time for Valentine‘s Day) and an upcoming
wedding-planner springpad.
Janer says the service has been gaining traction among four groups in particular:
productivity addicts, including devotees of David Allen‘s Getting Things Done time-
management method; cooks, who use the popular meal-planning springpad to organize
their trips to the grocery store; ―mommy bloggers,‖ a surprisingly large contingent, who
use springpads as workbooks to plan upcoming posts; and 25-to-35-year-olds, who
appreciate the date planner. ―Our core idea is to find repeatable types of activities and
events and help people get things done faster and easier,‖ Janer explains. Coming soon:
mobile versions of specific springpad features such as to-do lists.
45: Google Voice: It’s the End of the
Phone As We Know It
March 20, 2009
Brace for impact, again. Google is about to change the way you think about
telephones.
The information giant has a pattern of setting its sights on an existing technology,
moving in with overwhelming software-engineering force, and upending all of our old
expectations. We didn‘t know we needed ads alongside our search results, and Google
turned keyword-based advertising into a multi-billion-dollar industry. We all thought e-
mail was something we could only access and manage using desktop programs like
Outlook, then along came Gmail. We thought we had to go to libraries to find out-of-
print books, then Google went and created Google Book Search. We imagined cell phone
platforms would always be controlled by a few elite carriers and handset makers, then
Google started Android.
To be clear about it, Google didn‘t invent keyword-based advertising, Web mail,
book scanning, or open-source software. It just figured out how to apply such
technologies more cleverly and pervasively than anyone else. And that‘s what it has done
once more with Google Voice—the renovated version of Grand Central, the phone-
number-unification service it bought in 2007.
Grand Central was a startup that allowed users to sign up for a single phone number
for life. A call to that number would automatically ring through to any or all of the other
phones the user designated, meaning they no longer had to give their acquaintances
separate home, office, and mobile numbers. Google paid somewhere north of $50 million
for the technology, then spent more than a year and a half rebuilding it to work with its
own infrastructure. Starting March 12, Google upgraded old Grand Central‘s existing
users to Google Voice accounts, and started inviting in a few beta testers. It plans to open
up the free service to anyone in the U.S. starting ―soon‖—in a few weeks, by all
accounts.
I‘ve been testing Google Voice for the last couple of days, and I‘m impressed. I
think the service will mark a kind of tipping point in public perceptions of telephony.
Before this, it was still possible to think of the phone system as something predating the
Internet and therefore distinct from it, surrounded by its own set of customs and usage
patterns. After this, we‘ll think of phone calls more as if they were audio e-mails—
finding their way through the uber-network to their intended recipients wherever those
recipients may be located, and leaving a digital record that can be stored, searched, and
manipulated on the Web.
There are a lot of features to Google Voice, which makes the overall concept a bit
hard to explain, as I‘ve realized over the past couple of days as I‘ve talked with friends
and colleagues about it. So I‘ll try to simplify things. You start by signing up for a new
phone number in your area code of choice. Google provides a search page where you can
look for numbers that spell out mnemonics like ―617-IM2-COOL.‖ In practice, there
aren‘t that many numbers available, so you might have to search for a while before you
find one that spells out something that appeals to you, and that won‘t embarrass you five
or 10 years from now. (Google could do a better job explaining the number selection
process—and it wouldn‘t hurt if they showed a picture of a phone keypad, to remind
you of what letters go with what numbers.)
In the same way that an e-mail address doesn‘t correspond to a single computer,
your Google Voice number doesn‘t correspond to any single phone. Indeed, that‘s the
beauty of the whole system. So once you‘ve picked your number, the first thing to decide
is which actual phones should ring when someone calls it. You can tell Google Voice to
route calls to your office phone, your home land line, your mobile phone, your vacation
rental, your Aunt Minnie‘s house where you‘re staying for the weekend, or all of the
above.
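The routing model behind this is simple enough to sketch. Here is a toy Python version, purely illustrative: the class, numbers, and behavior are invented for this sketch and bear no relation to Google's actual telephony infrastructure. It just shows the core idea of a forwarding table that fans one public number out to several real phones:

```python
# Toy model of one-number call fan-out (purely illustrative; the class,
# numbers, and behavior here are invented, not Google Voice's design).

class VoiceNumber:
    """A single public phone number that forwards to many real phones."""

    def __init__(self, public_number):
        self.public_number = public_number
        self.forward_to = []  # (label, real_number) pairs that should ring

    def add_phone(self, label, real_number):
        self.forward_to.append((label, real_number))

    def ring(self, caller):
        """Return every real number to ring for an incoming call."""
        return [real for _label, real in self.forward_to]

gv = VoiceNumber("617-555-0100")
gv.add_phone("office", "617-555-0111")
gv.add_phone("mobile", "617-555-0122")
print(gv.ring(caller="212-555-0199"))
# → ['617-555-0111', '617-555-0122']
```

The point of the sketch is that the public number is just a lookup key; which physical phones ring is a setting you can change at any time, independent of any carrier.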
The next big decision is about how Google Voice should handle voicemail
messages, for those times you can‘t answer or don‘t want to. As soon as someone leaves
a message, it goes into your Google Voice inbox, which you can access by calling the
service or by directing the browser on your computer or your mobile phone to the Google
Voice website.
If you like, you can simply let messages pile up in your inbox, and check them once
in a while by calling in or visiting on the Web. Or if you want to know about new
messages right away, you can set Google Voice to notify you via e-mail or SMS text
message.
Now here‘s the really cool part. Rather than just notifying you that you got a
voicemail the way your cell phone does, Google Voice can—if you choose—send you a
text transcription of the message itself. Transcriptions are created automatically using
speech recognition software, so they aren‘t as accurate as one might like, but they get the
gist across. After just a couple of days as a Google Voice user, I can attest that reading
the transcripts of your voicemail messages is 10 times faster than listening to them. And
if you think the speech-recognition software garbled something crucial, you can always
call into Google Voice or go to the website to play the original recording, which is stored
forever, or at least until you delete it. (Thanks, by the way, to everyone who responded to
my Twitter post yesterday asking for help testing Google Voice. It was great to hear from
all of you! But you can stop now. My Google voice inbox is getting alarmingly full.)
Interestingly, after you‘ve gotten a few voicemails, your online Google Voice inbox
starts to look a lot like your Gmail inbox (see the image on the previous page). You can
star important messages, search the text transcriptions for key words or names, and even
dump unwanted voicemails from telemarketers into a spam folder. Indeed, the
resemblance to Gmail is so strong that the day when you‘ll be able to view your Gmail
messages and your Google Voice voicemails from the same interface can‘t be very far
off.
Google Voice has a bunch of other handy features: You can arrange free conference
calls just by having multiple people call your Google Voice number at the same time; you
can choose to have text messages sent to your Google Voice inbox forwarded to your
phones, while at the same time keeping them organized in your inbox right alongside
your voicemails; you can place international calls at extremely low rates by dialing into
your Google Voice account first and punching in a code to tell it you want to make an
outgoing call; the text transcriptions are cleverly shaded according to how confident
Google‘s speech-recognition algorithms are about their guesses (see image below); and you
can record whole calls, or sections of calls, and save the recordings in your inbox. (This
last feature may prove especially useful for us journalists. Alas, recorded calls aren‘t
automatically transcribed, at least not yet. Now that would be a huge plus for someone
like me, who does several phone interviews a day.)
But to me, the key advances in Google Voice—the reasons it‘s history-making—
come down to three. Only the first came from Grand Central; Google added the other
two.
1. It separates phone numbers from phones, making phone calls fungible and
redirectable. This may even herald a day when everyone will be electronically reachable
everywhere via some unique identifier like…their real name, maybe?
2. It transcribes voice messages into text and lets you receive and review that text
from any device, which is an incredible time saver. And think of the value of having
copies of all those voicemails you deleted and later wished you had saved. (Spinvox and
other services already offer voicemail transcription, but for a fee.)
3. It treats voicemail recordings and transcriptions like e-mails, allowing you to
manage them online using the same process you‘ve developed to manage your e-mail
inbox.
The rest is all bells and whistles. Indeed, a few of Google Voice‘s extra features
could prove troublesome. The recording feature may not be kosher in all states—Google
leaves it up to you to figure out whether you can legally record a phone conversation
(though the caller does get an automatic ―call recording‖ warning if you choose the
record option). And there‘s a feature called ―ListenIn,‖ a legacy of Grand Central‘s
technology, that brings back a whole world of awkwardness I thought we‘d left behind
when old-fashioned answering machines were replaced by network-based voicemail.
It‘s a screening feature that lets you listen to someone as they‘re recording a voicemail,
then break in to talk to them if you wish. Undoubtedly, we‘re in for a whole new
generation of messages that start off, ―Hey, I know you‘re listening in and screening your
calls, pick up, dammit!‖
I‘d love to hear about your own experiences with Google Voice. Just don‘t call me.
I wouldn‘t want to have to declare voicemail bankruptcy.
46: Tweets from the Edge: The Ins and
Outs (and Ups and Downs) of Twitter
March 27, 2009
If you already know all about Twitter—if you spent mid-March in Austin tweeting
away with your pals at South by Southwest, if you can explain the differences between
Twhirl and Twitterrific and Tweetdeck, and if you‘ve already mastered thinking in 140-
character fragments—this week‘s column is not for you. It‘s for all the other people, the
ones who have recently been coming up to me—their resident technology columnist—
and asking ―What is Twitter, anyway, and why should I care about it?‖
For the uninitiated, here‘s a simple way to think about Twitter. It‘s a tool for mass-
mailing postcards to everyone who cares enough about you to sign up to receive them.
These people are called your followers. At the same time, it‘s a tool for collecting
postcards from the people you care about—the people you follow. The thing is that on
Twitter, the postcards are electronic, they can carry no more than 140 characters of text,
and they‘re delivered very fast, many times a day, no postage necessary.
Now, if you look at the history of actual postcards, there are some interesting
parallels to the emergence of Twitter. Postcards were invented in the late 1860s (in the
Austro-Hungarian Empire, of all places), but for decades, only government postal
services were allowed to print and sell them. Around the turn of the century—1898 in
America, 1900 in Japan—private companies finally obtained the right to publish
postcards. This not only led to an explosion of creative postcard designs, but encouraged
millions of people to adopt this abbreviated form of communication, which was much
more convenient than putting a letter in an envelope. In 1908, Americans sent one
another 677 million postcards—more than seven per citizen.
The Internet was conceived by government scientists in the late 1960s and was
handed over to the private sector in the late 1980s. Twitter came along more or less on
schedule in 2006. Like postcards, Twitter messages—―tweets‖ in the parlance developed
by Twitter users—are brief by necessity. They‘ve caught on in part because they‘re so
much easier to generate than the alternative: e-mail messages with hundreds or thousands
of recipients in the cc: line. And as with postcards, no one really expects a response to
their tweets.
Twitter messages are breezy, entertaining, and only occasionally informative—
you‘d never use Twitter for something important like, say, inviting a friend to dinner.
Tweets are also ephemeral—they‘re usually glanced at and forgotten, which is necessary,
since if you follow lots of people, you likely get hundreds of tweets a day. (You can
check them on your page at Twitter.com, or you can use a specialized desktop Twitter
―client‖ like the aforementioned Twhirl, Twitterrific, and Tweetdeck.) So Twitter isn‘t at
all like your e-mail inbox, where each message demands some kind of action. It‘s more
like a constantly flowing stream that you can dip into at your leisure.
If tweets are so trivial, then why should you care about Twitter? The truth is that
right now, you don‘t need to, any more than you care about postcards. Sitting out Twitter
is not going to cripple your career or leave you socially isolated in the way that sitting out
e-mail or the Web might.
On the other hand, Twitter is undoubtedly the most unexpected and fast-growing
social phenomenon on the Internet of 2008-2009. It‘s probably here to stay, in one form
or another, and could turn out to be just as significant as wikis, blogs, and social
networks. (Interestingly, one of the founders of Twitter, Evan Williams, was also the co-
founder of Blogger, which made the first easy-to-use blog publishing tool.) So if you
want to understand how millions of people are experiencing cutting-edge social media
today, you should probably sign up for a Twitter account and start following a few
people. You can follow me by going to www.twitter.com/wroush and clicking the
―Follow‖ button.
What‘s not to like about Twitter? Well, for one thing, it can quickly lead to what
you might call ―communication saturation‖ or ―Twitter litter.‖ It might be nice to receive
one or two postcards from your friend who‘s vacationing in Greece. But if he sends you
eight postcards from Athens, seven from Santorini, and five more from Mykonos, you‘ll
start to think he‘s a little weird. In the same way, many people tweet too often, about
matters that—no matter how piquantly phrased—fall below the threshold of interest to
other busy humans. (A recent Current SuperNews cartoon, embedded at the end of this
column, makes wicked fun of this kind of Twitter self-absorption.)
Luckily, it‘s easy to unfollow Twitter bores. In my own experience, there are plenty
of people who tweet with more care, sharing insights and factoids that are truly likely to
engage or enlighten their followers. And if you want to witness a genuine wisdom-of-
crowds moment, wait until you have a few hundred followers, then tweet a frantic request
like ―Help, my TiVo erased Nip/Tuck, is it online somewhere?‖ or ―Quick, I need a
limerick about bowling.‖ You‘ll get a wealth of useful, or at least good-humored,
responses.
By the way, you can ignore Twitter‘s own description of what Twitter is about: ―A
service for friends, family, and co–workers to communicate and stay connected through
the exchange of quick, frequent answers to one simple question: What are you doing?‖ It
may have started out that way—and some people still use it that way—but the fact is that
there are now as many different styles of tweets as there are styles of blogs. People share
anecdotes, links, complaints, jokes, regrets, music and movie reviews, life‘s little
triumphs and defeats, even breaking news—the first pictures of the splashdown of US
Airways Flight 1549 in the Hudson River came from a camera-phone owner using
Twitpic (a Twitter-based photo-sharing service).
There is one question about Twitter I can‘t answer, and that‘s whether the company
will ever find a way to make money on the platform. (Its venture investors must hope so.
Boston‘s Spark Capital, New York‘s Union Square Ventures, Seattle‘s Bezos
Expeditions, and Japan‘s Digital Garage have together poured more than $22 million into
Twitter.) There was word in the Wall Street Journal this week that Twitter plans to
introduce paid commercial accounts that would offer more features than free accounts.
But it wasn‘t clear from the WSJ article when this might happen, or what the extra
features might be.
All I can say about this is that if Twitter does create a class of grownup, paying
customers, it had better be ready to provide them with grownup customer support. Right
now Twitter‘s help desk is essentially useless. Here at Xconomy, I‘ve been working for
more than a month to get Twitter to evict a squatter who set up a Twitter account under
the name ―Xconomy.‖ My first help ticket, submitted February 27, went unanswered for
more than three weeks. I eventually learned that the issue had been marked in Twitter‘s
help system as resolved—but when I read the accompanying note, I discovered that
Twitter had merely given up. ―Twitter Support is closing older tickets in order to get an
accurate idea of current problems,‖ said the cheery note. ―Due to a ticket backlog, Twitter
Support may‘ve been unable to respond to your request in a timely manner. Our
apologies!‖
I certainly understand the pressure of laboring under a huge queue of messages.
Heck, just a few weeks ago, I wrote a column about my own decision to declare e-mail
bankruptcy and start fresh with an empty inbox. But if you‘re a tech company, I don‘t
think that just throwing out all your old help requests is an effective way to deal with
your backlog.
Adding to my annoyance, Twitter also closed out my second help request without
actually resolving it. This time their ―solution‖ was to send me a couple of automatically
generated e-mail messages that picked up on keywords in my ticket but had nothing to do
with my actual problem. Twitter, if you‘re listening: Now would be a good time to
smooth this all out, before I get really ticked off.
The customer-support issue leads to a bigger question. Given that Twitter, the
company, has no clear path to monetization and no real record of reliability or
responsiveness, I think it‘s legitimate to wonder how long Twitter, the social
phenomenon, can keep gaining momentum. If tweeting is truly fundamental—that is, if
Internet users start to think of it as a basic feature of the Internet comparable to e-mail or
instant messaging, as I believe many already do—then it may turn out to be too important
to leave to Twitter. The Internet community has well-established ways of dealing with
such situations: either the original owner of the technology hands control over to a non-
profit standards body, or the open source community creates a non-commercial
equivalent and everyone switches. It will be interesting to see which of these happens
with Twitter.
Meanwhile, Twitter‘s millions of users will keep tweeting away. It‘s too addictive
to stop. (If you want to get serious about it, check out this blog post yesterday from Don
Dodge, about Twitter tips from Guy Kawasaki, who has 94,000 followers.) So try it
out—and send a 140-character postcard my way.
Author's Update, February 2010: You can now follow twitter posts from all
Xconomy writers at twitter.com/xconomy. We've also created a Twitter list that makes it
easy to follow the tweets of more than 120 leading innovators in the Boston area, at
twitter.com/xconomy/boston-innovators.
47: Will Hunch Help You Make
Decisions? Signs Point to Yes
April 3, 2009
Last week I wrote about Twitter, a flawed and difficult-to-grasp social media
technology that nonetheless becomes addictive once you get the hang of it—so much so
that it‘s quickly changing the way many people communicate. This week I‘m going to
write about Hunch, a flawed and difficult-to-grasp social media technology that
nonetheless becomes addictive once you get the hang of it—so much so that it‘s bound to
change the way many people make certain kinds of decisions.
The product of a New York City startup founded by Flickr co-creator Caterina
Fake, Hunch is designed to help us all cope with the problem of choice. Where should I
go on vacation? Should I drop out of college or get my degree? What cool new video
game should I buy? What‘s the best sleep aid for me? Which New York City museum
should I visit?
For almost any personal decision, chances are that someone else has already
thought it through and can list the leading possibilities. But while the Web offers plenty
of community sites where you can solicit such advice—I reviewed a bunch of them,
including Yahoo Answers, back in 2006—Hunch has added an ingenious twist. It‘s the
―decision tree,‖ an algorithm that guides you through a big choice by asking you to make
lots of smaller, easier ones.
Hunch has decision trees for roughly 1,300 topics so far. Each poses a series of
multiple-choice questions. Your answer at each point determines which branch of the tree
you‘ll follow, until you wind up at a single recommended answer. A simple decision
might involve only one or two questions, while a complicated one can have a dozen or
more. Decision trees are devised by users themselves—in fact, if you feel like you‘re an
expert on something, Hunch encourages you to build a tree yourself, or help improve
existing trees by adding new questions.
There‘s an important wrinkle, however, that makes exploring Hunch more than just
a process of clicking through a bunch of mathematically preordained decision trees. The
site remembers how you‘ve answered other questions, and over time, it builds up a
picture of your preferences. That information is factored into the final recommendation,
and might even override your answer to a specific question within a tree.
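The mechanics of such a tree are easy to sketch in code. The toy Python version below is entirely invented—Hunch has not published its engine, and the questions and answers here are made up—but it shows the basic walk: each multiple-choice answer selects one branch until you land on a leaf recommendation. (Hunch's real system additionally weighs your stored preference profile, which this sketch omits.)

```python
# Toy decision tree of the kind Hunch's topics are built from
# (entirely invented; Hunch has not published its engine).

tree = {
    "question": "Do you frequently travel with more than two books?",
    "answers": {
        "yes": {
            "question": "Are you clumsy with personal electronics?",
            "answers": {
                "yes": {"result": "Stick with paperbacks"},
                "no": {"result": "Buy the e-reader"},
            },
        },
        "no": {"result": "Stick with paperbacks"},
    },
}

def decide(node, answer_fn):
    """Follow one branch per answer until reaching a leaf recommendation."""
    while "result" not in node:
        choice = answer_fn(node["question"], list(node["answers"]))
        node = node["answers"][choice]
    return node["result"]

scripted = iter(["yes", "no"])  # travels a lot, not clumsy
print(decide(tree, lambda question, options: next(scripted)))
# → Buy the e-reader
```

Because each answer prunes everything outside the chosen branch, a tree a dozen questions deep still reaches an answer quickly—which is what makes the format feel like a quiz rather than a questionnaire.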
As an example of all this, here are the questions you‘ll see for the topic ―Should I
buy an Amazon Kindle?‖—which, as regular readers of this column know, is a decision
I‘ve been struggling with myself.
• Do you have a commute that allows you to read during it?
• Do you frequently travel with more than two books in tow?
• Do you subscribe to any major newspapers in print versions?
• Do you wish you could dynamically resize the text of print publications?
• For now, photos appear in black and white on the Kindle. Is this ok?
• Are you concerned with conserving paper in order to save trees?
• Do you get a particular sense of satisfaction from storing a book you‘ve read on a
bookcase?
• Are you clumsy with personal electronic devices like cell phones?
• Does having quick access to a dictionary/wikipedia seem valuable?
• Do you have several books at once on your nightstand?
When I went through this tree mechanically answering ―Yes‖ to every question,
Hunch told me that there was a 92 percent chance that the right answer for me is ―Yes,
you should buy a Kindle.‖ I couldn‘t find anything on the Hunch site that explains how
these percentages are calculated, but I‘m guessing that the other 8 percent represents the
room for doubt left by my ―Yes‖ answers to the seventh and eighth questions, which
would militate against buying one of the e-book reading devices. (Or maybe Hunch
knows somehow that I‘ve been trying desperately to come up with reasons not to spend
$350 on a Kindle.)
The Kindle question was constructed with a yes/no answer, but most decision trees
on Hunch can lead to a variety of results. And if the site is missing an important result,
you‘re free to add it, and to specify which questions in the tree should lead to that result.
(I couldn‘t help adding my own name to the list of results for the question ―Which
technology writer would I like?‖)
For certain questions, Hunch can hit surprisingly close to the target. When I played
through the ―Where should I go on vacation?‖ topic, Hunch guided me straight to the
answer that was already at the top of my personal list: Florence. I wasn‘t even trying to
steer the answers toward Italy, at least not consciously. When I played through the topic
―What‘s the best dog breed for me?‖ I ended up, reassuringly, with Australian
Shepherd—which is, of course, the breed I already own. I was less happy with Hunch‘s
answer to ―Which superhero am I?‖: Watchmen‘s Dr. Manhattan, who, while certainly a
hunk, is too aloof for my taste.
But at this early stage, with so few people using the site, it would be hard to portray
Hunch as a place to turn for consistently trustworthy recommendations. There just hasn‘t
been enough time for users to fill out the branches of the trees. The topic ―What‘s a good
spa in Boston?‖ for example, has only four possible outcomes—which is embarrassingly
incomplete when you consider that local review site Yelp lists more than 110 day spas
around Boston. And some of the questions users have programmed into Hunch are so
predictable and simplistic that they‘re essentially rephrasings of common knowledge. For
the topic ―Where should I live in the Bay Area?‖ the first question in the tree is ―Would
you rather have: Great weather, with a subdued, suburban lifestyle, or iffy weather, but
with an exciting, urban lifestyle?‖ To me, that‘s precisely the same as asking ―Would you
rather live in Palo Alto or San Francisco?‖
Still, the more people who use Hunch, the smarter it will get. The process may be
slow—I suspect that building a truly useful decision tree is harder than it looks, and that
it will take a while for Hunch to build up a community of volunteers with the requisite
thoughtfulness and expertise. But it‘s happened before. Just look at Wikipedia.
And Hunch‘s general model feels new and exciting. My own prediction is that
millions of users will be drawn to the site, which turns the potentially stressful process of
reaching a decision into a fun, interactive quiz. The decision-tree format may not respect
the subtlety and grayness of the real world; Hunch‘s style guide insists that the answers to
each question in a decision tree be mutually exclusive, which, in real life, they rarely are.
But the trees do offer a convenient way to navigate through a mess of possibilities, and
perhaps to reach unexpected and thought-provoking answers. And hey, it‘s got to work
better than a Magic 8-Ball.
48: Boston Can Survive, Even Thrive,
Without Today’s Globe
April 10, 2009
It‘s difficult to see how the Boston Globe can last long in its current form. Even if
its owner, the New York Times Co., extracts the entire $20 million in concessions that it
demanded this week from the paper‘s unions, the paper would still lose $65 million this
year, according to the company‘s own figures. A business hemorrhaging cash at that
rate—in a hemorrhaging industry—is unlikely to attract a buyer, or at least a palatable
one. The only thing keeping the Times from shutting down the Globe completely may be
the huge unfunded pension liabilities and severance payments it would owe to laid-off
employees.
Sensing that the 137-year-old newspaper‘s situation has grown truly dire, a
grassroots group of Boston-area bloggers, led by Paul Levy, the CEO of Beth Israel
Deaconess Medical Center, has been staging a ―blog rally to help the Boston Globe‖ this
week. In the April 6 post that started the rally, Levy called the Globe an ―important
community resource‖ and said his goal was to stimulate discussion in the blogosphere
about steps the paper could take to rebuild its revenues.
With all due respect for Levy and the dozens of bloggers who have responded to his
call, community activism can‘t save the Globe. Neither can the paper‘s unions, no matter
how much they give back in pay cuts, lower pensions, and reduced health benefits, nor
can its executives, no matter how much they give back in bonuses. Either the Globe will
save itself by hitting on a new model, or it will find a white knight, or the Times Co. will
continue to cover the paper‘s losses. But it‘s not realistic to expect it to do that forever.
After all, the Globe‘s troubles aren‘t simply the product of the recession. Revenues
from print advertising and classifieds are shrinking because technology is giving
businesses more efficient ways to reach their customers than newspapers. And as the
Globe‘s own columnist Scot Lehigh noted this week, the paper suffers from a ―self-
defeating business model…we‘re selling the paper with one hand and giving it away on
Boston.com with the other. That‘s never made any sense—the more so since website ads
aren‘t anywhere near the revenue-generator that print ads are.‖
I have nothing personal against the Globe. I haven‘t subscribed since the mid-
1990s, but I certainly believe that a city as diverse, energetic, and productive as Boston
deserves to be covered by a community of professional journalists—and for the better part
of the last two centuries, major newspapers like the Globe have been those journalists‘
most important home. All things being equal, New Englanders would be better off if the
Globe somehow survived.
But all things are not equal: the Globe doesn‘t have a sustainable business, and the
larger newspaper industry is in its death throes. The regional monopolies newspapers
used to hold over the advertising market no longer apply. Paper, printing presses, and
fleets of delivery vans are just too expensive. As NYU Internet and media guru Clay
Shirky observed in a sympathetic but clearheaded analysis last month, the fact that the
profession of journalism and the business of newspapers have been so closely intertwined
since the mid-1800s is largely a historical accident—a matter of convenience for both
sides. It‘s time to acknowledge that the two are not the same; that while newspapers may
succumb to creative destruction, journalism will likely emerge alive and well. ―When
someone demands to know how we are going to replace newspapers, they are really
demanding to be told that we are not living through a revolution,‖ Shirky writes. ―They
are demanding to be told that old systems won‘t break before new systems are in
place…They are demanding to be lied to.‖
If Bostonians aren‘t willing to cover the cost of running the Globe out of their own
pockets—which they most assuredly are not, given that the company would have to
charge readers several dollars per copy, every day of the week, to cover its real
expenses—then they should adjust to reality, and stop looking for ways to prop up a
doomed enterprise. Instead, they should be asking themselves what kinds of information
they do value.
If they value international news, they can turn to GlobalPost, a new Web-based
international news publication headquartered in Boston (and led by a former Globe
staffer). If they value a long history of editorial independence and Pulitzer Prize-winning
journalism, they can turn to the Christian Science Monitor, which has discontinued its
daily print edition and shifted resources to an expanded website, CSMonitor.com. If they
value hyper-local coverage of specific communities, they can turn to the Gatehouse
chain‘s huge network of Wicked Local websites. If they value alternative news, they can
turn to ThePhoenix.com, or if they want a smart daily survey of the local blogosphere,
they can turn to Adam Gaffin and Steve Garfield‘s Universal Hub. If they value coverage
of the Red Sox and other local teams, they can turn to Over the Monster or dozens of
other local sports blogs. And dare I say it—if they value coverage of the local
technology, business, and venture investing scene, they can turn to Xconomy.
My point is that the local Web is already teeming with great journalism. Even if you
took the Globe out of the equation, there would be plenty of local writers ready to fill the
vacuum, and plenty of outlets to fill up the Boston page of your RSS aggregator every
morning. And let‘s be honest: we‘ll still have the Boston Herald, and the Globe, or parts
of it, will probably stay around in some form. Even if the once-unthinkable comes to pass
and the Times Co. shuts down the Globe‘s print edition, there would be good reason to
keep operating its online counterpart, Boston.com, which is far less expensive to run and
is already one of the region‘s top Web destinations.
It would not be a surprise—and it would be hard to call it a tragedy—if the Globe
were to follow the same path as the Seattle Post-Intelligencer in one of Xconomy‘s other
home cities. The P-I‘s owner, Hearst Newspapers, shut down the print version of the
paper on March 17. About 20 of the newsroom‘s 150-plus staffers were kept on to run
Seattlepi.com. And while that may sound small by city-room standards, a staff of 20
would be the envy of most local Web publications—indeed, my heart skips a beat at
the thought of what Xconomy could do with that many writers.
Interestingly, Hearst Newspapers president Steven Swartz said Seattlepi.com (with
which Xconomy has a content-sharing arrangement) would not be an online newspaper.
―It‘s an effort to craft a new type of digital business with a robust, community news and
information Web site at its core,‖ he said in an announcement. While the site will feature
traditional breaking news and commentary by veteran journalists and columnists, it will
also link to community blogs, and will have ―new columns from prominent Seattle
residents; more than 150 reader blogs, community data bases and photo galleries.‖
Now, while that all sounds nice enough, it isn‘t likely that Seattlepi.com will have
the heft of the old P-I. One of the biggest reasons to regret the troubles of the P-I, the
Globe, and other big papers is that they have played an important watchdog role. Indeed,
as WBUR detailed just this morning, the Globe is largely responsible for bringing
about the current investigation into the business dealings of former Massachusetts House
Speaker Sal DiMasi‘s associates, not to mention the truth about sexual abuse by priests in
the Catholic Archdiocese of Boston. It‘s hard to say who, in the post-newspaper era, will
keep knocking on the doors that the powerful don‘t want opened.
But as Clay Shirky notes wryly, ―‗You‘re gonna miss us when we‘re gone!‘ has
never been much of a business model.‖ Journalists who want to keep working need to
join—or start—digital publications that don‘t merely cling to dying revenue streams, but
that experiment tirelessly with new ones, whether that means online display advertising,
new types of interactive ads, underwriting and sponsorships, virtual goods,
micropayments, ―freemium‖ models that put some content behind a paid firewall,
donations from community members and foundations (the model pursued by the Voice of
San Diego in Xconomy‘s third home town), or all of the above.
The era of print is ending. The era of creative journalism may be just beginning.
Author's Update, February 2010: In October, the New York Times Company said it
had decided not to sell the Globe, at least for now. Reports indicated that the offers from
suitors weren't attractive enough. Meanwhile, union concessions have lowered the
Globe's operating costs, meaning the Times Company isn't losing quite as much on the
paper as before.
49: RunKeeper’s Mad Dash to the
Marathon Finish: Of Foot Injuries, Viral
Video, and Dressing Up as an iPhone
April 17, 2009
If you‘re out watching the Boston Marathon on Monday and you see a giant iPhone
limp past, chances are it‘s Jason Jacobs inside.
Jacobs is the hyperkinetic founder and CEO of Boston-based FitnessKeeper, which
makes a highly popular run-tracking application for the Apple iPhone 3G called
RunKeeper. (He was also a panelist at Xconomy‘s recent Forum on the Future of Mobile
Innovation in New England.) As of yesterday, the free version of RunKeeper was the
17th most popular free health and fitness program in the iTunes App Store, and the $9.99
RunKeeper Pro was the 34th most popular paid fitness app.
But after Monday, the app‘s ratings may go even higher, thanks to a fascinating
publicity stunt—sorry, ―social media campaign‖—that Jacobs and his high-school pal
David Gerzof, who teaches a social media class at Boston‘s Emerson College, described
to me this week. It‘s actually a cool case study in the power of Web video, charitable
giving, a group of ambitious college students, the Twittersphere, and a guy in a funny
costume to build excitement around a piece of software. But whether Jacobs himself will
still be walking on two feet by the time the stunt is over is an open question—as this
YouTube video, published today, explains.
First, a bit about the software: The RunKeeper app uses the iPhone 3G‘s built-in
GPS chip to measure how far and how fast a jogger (or hiker or biker) has traveled on
each outing. It also creates a map, accessible on the RunKeeper website after your run is
completed, showing the exact path you followed, complete with little mile or kilometer
markers. It‘s a great tool for tracking the distance you covered on each run and the pace
you kept.
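The distance math behind an app like this is straightforward: each pair of consecutive GPS fixes can be converted to a segment length with the haversine (great-circle) formula, and the segments summed for total distance. Here‘s a minimal sketch in Python—this is my own illustration, not RunKeeper‘s actual code, and the function names and the simple pace calculation are assumptions:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two GPS fixes."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def run_stats(fixes, elapsed_minutes):
    """Total distance (km) and pace (min/km) from a list of (lat, lon) fixes."""
    km = sum(haversine_km(*a, *b) for a, b in zip(fixes, fixes[1:]))
    pace = elapsed_minutes / km if km else 0.0
    return km, pace
```

A real tracker would also filter out jittery fixes and handle lost signal, but the core bookkeeping is no more than this.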
And because the RunKeeper website makes it easy to share the data on your runs
with friends via Twitter, e-mail, or your Facebook or MySpace profile, the app also helps
you tap into a network of friends or fellow fitness enthusiasts who will, in theory, cheer
you on. (Although they might just be disgusted at how much exercise you‘re getting
while they sit at home reading Twitter.)
The paid version of RunKeeper differs from the free version in only two respects:
it‘s ad-free, and if you‘re wearing headphones on your run, a voice will tell you how far
you‘ve gone every mile, every kilometer, or every five minutes. The voice is female and
sounds dauntingly fit—like a somewhat mean aerobics instructor, which is probably the
perfect tone to strike in this context.
Jacobs, a Babson College MBA graduate and longtime runner, has been operating
FitnessKeeper on the cheap. He‘s the only full-time employee, and most of his team (he‘s
outsourced a lot of the programming work to Boston-based Raizlabs) is working for
equity rather than cash. That means he hasn‘t had a lot to spend on marketing and public
relations.
Enter Gerzof, who runs Brookline, MA-based public relations firm Big Fish
Communications, got his master‘s degree in marketing communications from Emerson,
and has been teaching two classes a year there since 2002. ―We work mainly with startup
companies, and often times I find companies way early, before they have the resources to
take on Big Fish,‖ Gerzof told me this week. ―FitnessKeeper happened to be one of those
companies.‖ Gerzof knew all about RunKeeper, since Jacobs, who‘d gone to high school
with Gerzof, had called him a couple of times for advice on marketing.
Gerzof says he recently persuaded the Marketing Communication department at
Emerson to let him start a course on social media and Web marketing. Through the
Google Grants program, he‘d obtained funding for a class project in which teams of
students helped local non-profits design free keyword-based advertising campaigns using
Google‘s AdWords service. ―But I also wanted to give them some real-world experience
with for-profit companies, since that‘s where most of them are going to go to work,‖ says
Gerzof. ―These scrappy startups are happy to get whatever help they can get. So I went
through my contacts, and Jason instantly popped off the page.‖
Jacobs agreed to let a team of five Emerson students in Gerzof‘s class—Sam
Citron, Cassie Kling, Alleigh Marre, Carly Narvez, and Greg Townsend—turn
RunKeeper into their capstone project for the spring semester.
Says Marre: ―We bounced around a bunch of ideas. Our biggest challenge wasn‘t
necessarily getting Jason a fan base, since he already has a ton of followers on Twitter,
and he does a really good job of staying in contact with users. We decided that we needed
to figure out a way to connect with brand evangelists, and get them really excited about
something that RunKeeper was doing. So we ultimately decided to do a promotion
around the Boston Marathon.‖
The team‘s idea was to have Jacobs run the marathon in an iPhone costume—while,
of course, wearing an actual iPhone with the RunKeeper app going—and to chronicle the
preparations, the race, and the Emerson project itself in a series of edgy Web videos, the
second of which is out today. [Update, April 21, 2009: the third video is now up as well.]
Jacobs committed to the project about three weeks before marathon day. Which is
where the story starts to get really wild. Jacobs says he ran in the 2007 Chicago Marathon
and finished ―in a pretty decent time,‖ despite the 90-degree weather. But he had to pull
out of the 2008 Chicago race because of an overuse injury (plantar fasciitis) in one foot,
and he hasn‘t done any serious marathon training since. His Chicago time didn‘t qualify
him for the Boston race. Luckily, he was able to obtain a coveted marathon bib (the
number is 22790, in case you want to track him on Monday) through Boston‘s Spaulding
Rehabilitation Hospital when one of the people running for its fundraising team had to
pull out. Not so luckily, he reinjured his foot during a recent 12-mile training run. So he‘s
been going to rehab and staying off the foot, and will be running the 26.2-mile race in
what could charitably be called non-peak condition. (―Delusional‖ might be another word
for it.)
―It‘s an extremely aggressive timeline,‖ Jacobs acknowledges. ―There are all kinds
of logistics that need to go into it from a marketing standpoint, and I‘m already
undertrained, and now I‘m battling this injury. I would have been worried just from an
endurance standpoint, but I can‘t even worry about that now, because I have to stay off
my foot between now and the race. There was so much drama that we made the decision
that we were going to film the process of pulling together the campaign.‖
In other words, Jacobs and the Emerson team are intentionally positioning the
RunKeeper videos in the realm of what you might call reality-show unreality. It‘s a meta-
media-land where the videos are partly about the making of the videos, complete with
hand-held shaky-cam videography, rock music in the background, and confessionals
where the students share their doubts about whether Jacobs can finish the race—or
whether, indeed, the whole project is going to blow up in their faces. The campaign
clearly and cleverly targets smartphone-owning twenty- and thirty-somethings whose
tastes in video have likely been shaped by shows like ―The Apprentice,‖ ―The Real
World,‖ and ―Behind the Music.‖
The Emerson team plans to film Jacobs‘ run from at least three locations during the
marathon and produce a wrap-up video that will appear sometime after Monday. They‘re
also helping to staff the FitnessKeeper booth at the John Hancock Sports & Fitness Expo,
a free event at Boston‘s Hynes Convention Center running today through Sunday. And
they‘re promoting the whole effort vigorously on Twitter, on the RunKeeper website, and
through e-mail newsletters distributed by Waltham, MA-based e-mail marketing
company Constant Contact.
Of course, there‘s also a charitable cause that gives the RunKeeper campaign a
humanitarian spin: Jacobs is running the marathon as part of ―Race for Rehab‖ team at
Spaulding, which also happens to be the institution treating his injured foot. As of
Thursday afternoon, Jacobs had raised $2,226 toward his goal of $10,000. (You can
donate here.)
So, will Jacobs finish the race? How much money will he raise? If his foot holds up
past Heartbreak Hill, will he survive the media attention, the hypothermia (Monday‘s
forecast calls for showers and a high of 46 degrees), and the race‘s other hazards?
Perhaps most important, will the madcap, last-minute marathon campaign boost
RunKeeper‘s brand, or just come off as goofy?
You‘ll have to watch next week‘s video and judge for yourself. ―Honestly, my
biggest concern about having Jason run 26 miles in an iPhone costume is chafing,‖ jokes
Gerzof.
Whatever happens to Jacobs—and whatever kind of buzz the campaign generates
for RunKeeper—the project has already been good experience for the Emerson team, and
it illustrates how quickly savvy marketing professionals (and those who train them) are
shifting toward Web-based communication. When these students graduate and put ―social
media consultant‖ on their resumes, they‘ll have a real example to point to.
―It was nice not to be stuck in a classroom writing a media plan, like we‘ve done in
every other class,‖ says Emerson‘s Marre. ―It‘s been really interesting to work with
somebody on a real-life timeline and put things together that have the potential to be big
for a real-life company.‖
Says Jacobs: ―These students have really stepped up. They are super into it, and
probably doing five times the work that would be required for the class, and they‘re not
even getting paid. But we‘re having fun and learning a lot. If I don‘t finish the race, it
doesn‘t affect the story that much—except that the hero might not emerge victorious.‖
Author's Update, February 2010: Jacobs did ultimately finish the marathon, with a
very respectable time of 3:55:07.
50: Cutting the Cable: It’s Easier Than
You Think
April 24, 2009
In a column published last July (see Chapter 15), I vacillated publicly about
whether it was time to stop paying extortionate rates to my local cable provider, Comcast,
for the privilege of watching 17 minutes of commercials with every hour of
programming.
Well, it took me a while, but in early March I finally cut the cord. I pared back my
cable TV lineup to the basic $10 per month level (which includes 30 local and
community-access channels) and handed back Comcast‘s set-top box/DVR. At the same
time, I canceled my land-line digital telephone service and went cellular-only, of which
I‘ll say more some other week. Of course, I kept my cable Internet service—which is
surprisingly fast, averaging 15 to 20 megabits per second.
And I am here to report that life without premium cable channels is just fine.
Now, I certainly have not given up watching TV shows. In fact, I probably consume
just as much video content now as I did before, maybe more. The difference is that these
days I‘m getting the majority of it on demand, over the Internet, with few or no
commercials. It‘s easier to do this than ever before, given the explosion of new
technologies and services around online video—a few of which I want to describe in
today‘s column.
First a quick illustration of how quickly the online video market is changing. Here‘s
a chart that I included in my July column, showing which of my favorite shows were
available online and where. (By ABC and NBC, I mean ABC.com, NBC.com, etc.):
Here is the same chart today, with the addition of two new favorites that I wasn‘t
watching last year (24 and Fringe) and one video source that hadn‘t fully emerged as of
last summer (Amazon Video on Demand):
As you can see, the chart has filled in quite a bit. All of my favorite shows are now
available on Hulu. Many of the shows that weren‘t available last year from iTunes now
are, thanks in part to last September‘s rapprochement between Apple and NBC Universal.
Veoh has also filled out its list significantly, and Amazon Video on Demand has come
out of nowhere to become a serious rival to iTunes (well, not quite nowhere, but its
predecessor service, Amazon Unbox, sucked, to be frank).
(I have to mention in passing that I no longer watch Friday Night Lights, which lost
its magic somewhere in season 2, or Heroes, which jumped the shark ages ago. But I kept
them in the chart for completeness' sake. Also, Battlestar Galactica, Pushing Daisies,
and Terminator: The Sarah Connor Chronicles have all ended or been canceled. And I
haven't listed the shows that I only ever obtained online, such as Mad Men. Also, you'll
notice that I don't watch any TV sports. I realize that the prospect of giving up the sports
broadcasts monopolized by ESPN and their ilk would be a show stopper for many sports
fans.)
―But wait,‖ you say. ―Why would I want to watch TV on my laptop or my desktop
monitor, especially when I dropped a grand last year on a new HDTV?‖ I have three
words for you: Cables to Go. This one-stop online shop has cables for connecting every
type of computer to every conceivable brand of television. I have one cable that connects
my Windows computer‘s VGA port to my TV‘s serial input, and another that connects
the mini-DVI video port on my Mac laptop to my TV‘s DVI-I input. A third handles
audio. I can just fire up Hulu or iTunes on one of my computers, plug it into my TV, and
I‘m ready to watch from across the living room, often in high-definition quality.
There are two other new technologies that have helped to smooth my defection
from cable. One is the new category of what you might call ―10-foot browsers‖: video
aggregators with big, boxy interfaces that make it easier to browse and access video
content when the computer is hooked into your TV and you‘re sitting all the way across
the room. My favorite is Boxee, a Mac program that‘s paired with a very cool iPhone app
that turns your phone into a remote control. (A Windows version of Boxee is on the way.)
The Zinc video browser, from Littleton, MA-based ZeeVee, also works well.
The other new technology is the Roku Player, a little miracle in a box that lets you
access on-demand movies and TV shows from both Netflix and Amazon. I bought one of
the $99 Roku devices last month, and it works so well that I haven‘t watched one actual
Netflix DVD since I got it. (It should be noted, however, that many Netflix movies aren‘t
yet available for viewing on the Roku.)
The Roku Player connects directly to your TV and comes with its own small
remote. To use it, you need to have either a Netflix or an Amazon account (preferably
both), and either a home Wi-Fi network or an Ethernet cable long enough to connect your
Internet modem to the player. The machine is simplicity itself: All it does is allow you to
watch on-demand videos that you‘ve already chosen online by adding them to your
Netflix ―Watch Instantly‖ queue—a free service for anyone with at least an $8.99 per
month plan—or by renting them at Amazon Video on Demand. You can pause, rewind,
fast-forward, and rate videos, and that‘s it. The video is sharp and clear, though the faster
your home network, the better your results will be. (Just yesterday, as it happens, the
Roku service got upgraded to support high-definition rental movies and TV shows from
Amazon.) I‘ve only experienced one serious glitch with the Roku box—it lost its
connection to Amazon while I was renting the final episode of Battlestar Galactica on
March 20, probably because every other sci-fi fan was watching it at the same time.
Together, all of these resources more than make up for my old cable subscription,
and I‘m saving $75 per month—meaning, from one point of view, that the Roku box paid
for itself in less than two months. I‘m hardly a pioneer in the cable-free movement, of
course: after my July column plenty of you wrote in to say you‘d already cut the cord,
and the media have been full of articles lately about how households can economize on
entertainment and communications expenses. (An April 8 New York Times article
entitled ―How to Cut the Beastly Cost of Digital Services‖ was particularly good, though
it didn‘t go into the online alternatives to the cable channels.)
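The payback arithmetic is simple enough to check. A back-of-the-envelope sketch, using the figures from my own bill (your savings will vary):

```python
monthly_savings = 75.00          # premium channels and DVR fees dropped
roku_price = 99.00               # one-time cost of the player
payback_months = roku_price / monthly_savings
annual_savings = 12 * monthly_savings

print(round(payback_months, 2))  # 1.32, i.e., less than two months
print(annual_savings)            # 900.0 saved per year
```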
The new focus on pay TV‘s exorbitant cost and how to lower it must be extremely
worrisome for the cable and satellite TV companies themselves. Even apart from the
competition they‘re facing from online entertainment, they‘re dealing with flat to
declining overall demand, largely because they‘ve done such a good job saturating the
market up to now. (The number of basic cable subscribers hit a peak of 67 million
households in 2001, and has since declined to about 64 million, according to the National
Cable & Telecommunications Association.) I bet their nightmare scenario goes
something like this: Customers cut back on premium channels out of short-term frugality,
then discover how much stuff there is to watch online, and then never add back their
premium services after the recession ends.
But I have to say that I don‘t have a lot of sympathy for the cable companies.
Where are the new services, the discount packages, the cross-media offerings that might
justify a cable bill of $1,200 a year or more? Why can‘t I bookmark Web addresses I see
on cable TV and have them sent to my e-mail address, the way Boston‘s
Backchannelmedia is helping on-air stations do? Why can‘t I start watching a show on
Comcast and finish watching it on my iPhone? The technology for such services is there,
but the cable companies aren‘t adopting it nearly fast enough to keep their restive,
technology-savvy, early-adopting customers (in other words, people like me and probably
like you) from defecting to other media sources.
There have been rumblings from the cable TV industry that the only way to prevent
an implosion like the one striking the newspaper industry may be to cut off the supply of
online video. In fact, Time Warner and Comcast are discussing initiatives to limit Internet
video to customers who also pay for cable TV. But I‘ve got news for them: While putting
up such barriers might scare some existing cable customers into paralysis, it isn‘t going to
bring back those who have already cut the cord. In my case, it would just mean that I
would stop watching the affected shows altogether, or rent them from iTunes or Amazon,
or perhaps watch them on my mobile phone over my 3G (and soon enough, 4G) wireless
connection.
The cable companies need to start adapting to the day when they will be nothing but
dumb Internet pipes—because consumers are already starting to see them that way.
Author's Update, February 2010: I haven't once regretted my decision to dump
premium cable channels. And I'm happier than ever with my Roku Player, which got a
free software upgrade in November that makes it even more useful; the box can now
access music from Pandora, photos from Flickr and Facebook, and a bunch of other cool
online resources.
51: Why Kindle 2 is the Goldilocks of E-
Book Readers
May 8, 2009
Fans of this column know that I spent months dithering over whether to buy
Amazon‘s Kindle 2 e-book reader. I had mercilessly panned the original Kindle, mainly
for its ungainly looks. And while I was much more impressed by the Kindle 2 when it
came out in February, I was put off by the $359 price tag, which left me casting about for
more excuses to resist a purchase.
Well, I finally ran out of excuses and let my inner geek take over. My new Kindle 2
showed up last Wednesday, and I‘ve been enjoying it immensely, for reasons I‘ll detail
below. But as luck would have it, my Kindle arrived exactly a week before Amazon CEO
Jeff Bezos announced another new Amazon device, the large-screen Kindle DX. So the
first question I want to tackle is whether Kindle 2 owners should feel any buyer‘s
remorse—that is, whether they would have been better off waiting until this summer,
when the DX, with its much bigger 9.7-inch screen, will start shipping. I don‘t think so.
The Kindle DX will be great for reading electronic documents where some extra
formatting aids comprehension—meaning textbooks, business documents like PDF
brochures and white papers, and maybe magazines and newspapers. But for any
document where the text is primary, meaning the vast majority of current fiction and
nonfiction literature, the DX will be overkill. And for $489, the announced price of the
DX, you could buy a very good netbook or even a basic laptop and get access to a much
broader world of digital media, and in color to boot.
Or you could spend nothing and simply read e-books on your mobile phone. The
excellent resolution of smart phones like the iPhone actually makes them credible e-book
readers. Companies like Lexcycle, Shortcovers, and Amazon itself have come out with
very nice e-book software for the iPhone, and e-books are the fastest-growing category of
applications in the iTunes App Store. But the iPhone‘s weakness—for purposes of
reading, anyway—is its limited screen size, which means you have to flick to the next
page every few seconds.
The Kindle 2 feels to me like the Goldilocks of information display devices: bigger
than a smartphone, but smaller than a tablet PC. Its electronic ink display, which
measures 6 inches diagonally, is more than twice the size of the iPhone‘s screen. It can
hold about the same amount of text as one standard paperback book page, depending on
the font size you‘ve selected. So you press the ―next page‖ button only twice as often as
you would turn the pages of a printed book (since the Kindle doesn‘t have two facing
pages, the way printed books do). But it‘s still small enough to make the device
extremely light and portable. You can read it comfortably using one hand. I can imagine
pulling out my Kindle 2 on a bus or a subway car. I‘ll be surprised if I ever see anyone do
that with a Kindle DX.
Reading on the Kindle 2 is a beautiful experience. It is no less immersive than
reading a printed book. (The first two e-books I read on the Kindle were The Guernsey
Literary and Potato Peel Pie Society and Pride and Prejudice and Zombies—The Classic
Regency Romance, Now with Ultraviolent Zombie Mayhem; I recommend both heartily.)
Of course, I didn‘t really need to be convinced on this score. I first fell in love with e-
book devices in 1999, when NuvoMedia brought out the Rocket eBook—in fact, I liked it
so much I went to work for the company for a couple of years. But I‘m still amazed by
how much displays have evolved over the past decade. The Kindle‘s electronic paper
display, made by Cambridge, MA-based E Ink, is sharp and clear. It sips electricity like a
hummingbird, meaning the battery lasts for days between rechargings. And the screen‘s
momentary flicker when you turn a page—which is needed to fully erase the previous
screen, sort of like shaking an Etch-a-Sketch—isn‘t nearly as annoying as it was on the
original Kindle, thanks to the improvements E Ink built into the Kindle 2‘s electronics. In
fact, the screen redraws itself quickly enough now to allow a fully interactive interface,
with pop-up menus for doing things like jumping around within or between books.
Far more earthshaking, however, is Whispernet, the 3G wireless network that
Amazon built for the Kindle family of devices. Even if you left out the electronic paper
screen, wirelessness would make the Kindle a huge improvement over all previous e-
book devices, because it lets you shop for books, magazines, and newspapers on the
device itself and download them instantly, from practically any location where you can
get a cellular signal.
The fact that Amazon has also released an iPhone app for reading Kindle editions
makes it clear that the company‘s long-term e-book strategy is to sell content, not
gadgets. (As David Pogue puts it, ―The Kindle is just the razor. The books are the
blades—ka-ching!‖). Going wireless was a master stroke, because it makes book-buying
frictionless. In Wednesday‘s press event introducing the Kindle DX in New York, Bezos
revealed the startling fact that for books with electronic Kindle editions, digital unit sales
amount to 35 percent of print sales. In other words, if Amazon sells 1,000 copies of a
print book, it will sell 350 copies of the Kindle version. I‘d argue that the wireless feature
alone accounts for most of this success. That said, it was something Amazon couldn‘t
afford to leave out if it wanted the Kindle to appeal to the same consumers now
accustomed to downloading songs, videos, and applications instantly to their iPhones and
other mobile devices.
I won‘t review every other feature of the Kindle 2 here, but I do want to mention
just three of the lesser-known functions that have made me even happier with my
purchase.
Clippings. Thanks to Web-based tools like Evernote and Instapaper, I have become
an inveterate clipper of articles or passages that I find on the Web and want to remember
for later. It‘s easy to do the same thing on the Kindle 2, by manually marking the
beginning and the end of a passage in a book, then saving it to the device‘s clippings file.
If you‘re reading a newspaper or magazine article, the Kindle provides a helpful menu
item that copies the entire article into the clippings file. The next time you connect the
Kindle to your PC using the provided USB cable, you can copy the clippings file to your
desktop, and if you want to save the clips to Evernote or some other tool, you can just cut
and paste from this file. What would be even better, of course, would be the ability to
send clips straight to Evernote using Whispernet. But not even the iPhone has this
function yet.
Personal document transfers. When you get a Kindle 2, you also get an e-mail
address like myname@kindle.com. You can use the address to e-mail documents to your
Kindle via Amazon‘s servers, which will reformat them to display correctly on the
device. I‘ve tested this for Word, PDF, and JPEG files and it works great. Amazon says it
also works for GIF, PNG, BMP, and ZIP files.
The company recently increased the price of these transfers slightly—they used to
be $0.10 per e-mail, and now they‘re $0.15 per megabyte, rounded up to the nearest
megabyte. But that‘s still a pretty negligible amount. And you have to remember that
Amazon charges nothing for all other Whispernet traffic. (Just try asking AT&T or
Verizon Wireless to reduce your data plan bill to zero.)
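The rounding rule matters more than the rate: a file just over a megabyte boundary is billed for the whole next megabyte. A quick sketch of the arithmetic, working in cents to sidestep floating-point fuzz:

```python
import math

def transfer_fee_cents(size_bytes, rate_cents_per_mb=15):
    """Estimated fee for one personal-document e-mail: 15 cents per
    megabyte, rounded up to the nearest whole megabyte."""
    megabytes = math.ceil(size_bytes / (1024 * 1024))
    return megabytes * rate_cents_per_mb

# A 2.3 MB PDF is billed as a 3 MB transfer:
fee = transfer_fee_cents(int(2.3 * 1024 * 1024))
print(f"${fee / 100:.2f}")  # $0.45
```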
Beyond just sending yourself documents, the fact that your Kindle has a unique e-
mail address means you can program Web-based services to send mounds of free content
to your device, where you can read it at your leisure. My favorites so far are Instapaper
(for clipping long Web pages such as magazine articles) and Kindlefeeder (for sending
any RSS feed to your Kindle).
And here‘s a sneaky trick: for works that are in the public domain, you can make
your own perfectly legal e-books. I did this last weekend after discovering that Amazon
doesn‘t yet sell any Kindle editions of poetry by William Carlos Williams. (In fact, the
Kindle store is pretty short on poetry in general.) So I tracked down a few websites that
have published Williams‘ poems, copied and pasted them into a Word file, and e-mailed
it to my Kindle.
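That last step can itself be scripted. The sketch below packages a file as an e-mail attachment and hands it to an outgoing mail server; every address, host, and credential here is a placeholder, and keep in mind that Amazon only accepts documents from senders you‘ve added to your approved list:

```python
# Sketch: e-mail a personal document to a Kindle address over SMTP.
# All addresses, hosts, and credentials below are placeholders.
import smtplib
from email.message import EmailMessage
from pathlib import Path

def build_kindle_message(path, kindle_addr, from_addr):
    """Package a file as an e-mail attachment bound for a Kindle."""
    msg = EmailMessage()
    msg["To"] = kindle_addr
    msg["From"] = from_addr
    msg["Subject"] = "Personal document"
    msg.add_attachment(Path(path).read_bytes(),
                       maintype="application", subtype="octet-stream",
                       filename=Path(path).name)
    return msg

def send_to_kindle(msg, smtp_host, user, password):
    """Deliver the message through your own outgoing mail server."""
    with smtplib.SMTP(smtp_host, 587) as server:
        server.starttls()
        server.login(user, password)
        server.send_message(msg)

# Usage (placeholders):
# msg = build_kindle_message("williams_poems.doc",
#                            "myname@kindle.com", "me@example.com")
# send_to_kindle(msg, "smtp.example.com", "me@example.com", "secret")
```

The message-building half is the part worth keeping; any SMTP library or ordinary mail client can handle the sending.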
Image viewing. Over the USB connection, you can create a pictures folder and
copy photos from your computer onto your Kindle. The Kindle 2‘s screen, thanks to
those upgraded electronics, can show 16 levels of gray, meaning it creates pretty good
black-and-white renditions of almost any picture. See the photo above for an illustration.
You probably won‘t find yourself using the Kindle to show off pictures of your kids or
pets to your friends, the way you might with your iPhone, but it‘s still a nice feature to
have.
The Kindle 2 has many other nifty features, such as a built-in dictionary, a
rudimentary Web browser that lets you search Wikipedia and other sites and even send e-
mail, and a text-to-speech engine—which makes any book into an audio book, and
consequently became the subject of a ludicrous dispute between Amazon and the
Authors Guild. (This is the same group of know-nothing dinosaurs who tried to stop
Google from scanning out-of-print library books and making them searchable—but more
on that in a future column.) Amazon hasn‘t acted on some of my other suggestions about
how to soup up the Kindle platform—by experimenting with subscription-based book
clubs or book bundles, for example, and by giving potential buyers a chance to try out the
device at bricks-and-mortar retail stores. But I have a feeling they‘re not finished
innovating.
Author's Update, February 2010: Since this column was published, public interest
in e-book reading devices has only grown. The Kindle 2 is Amazon's best-selling item;
Sony and Barnes & Noble have introduced their own wireless e-book readers, both using
the same E Ink electronic paper technology that Amazon uses in the Kindle 2; and Apple
has gotten into the game with its new iBooks app for the iPad. The e-book story
continues in Chapter 72.
52: People Doing Strange Things With
Soldering Irons: A Visit to Hackerspace
May 22, 2009
You might think that all of the engineering brainpower in cities like Boston, San
Diego, and Seattle is sucked up by high-voltage startups or by giant employers in the
software, server, or semiconductor businesses. But proof that there‘s plenty of surplus
technological creativity in these regions is popping up in odd places like Willoughby &
Baltic, a ―hackerspace‖ I visited last month in Somerville, MA.
The group‘s workshop—which was located until recently above a Subway
sandwich shop in Davis Square and is in the process of moving to a former machine shop
in Union Square—is essentially a clubhouse for geeks who like to build stuff in their off
time. The ―stuff‖ ranges from robots and other electronic toys to jewelry and interactive
art installations—and to build it, members have collected a veritable museum of castoff
equipment, from lathes, mills, kilns, and forges to soldering irons and
spectrophotometers.
Like more than 50 other hackerspaces in the U.S., Willoughby & Baltic is built
around the philosophy that it‘s more fun to share tools, equipment, and ideas than to
tinker alone in the garage or the basement. That makes it a living example of the ―maker‖
epidemic, which got underway in the San Francisco Bay area roughly five years ago. The
movement draws momentum from a burgeoning open-source hardware movement born
in Europe, and is infecting new cities at a formidable rate. (Seattle is home to at least
three hackerspaces—Hackerbot Labs, the 911 Media Arts Center, and Saturday House—
and a group called Hackerspace SD is getting organized in San Diego as well.)
The founder of Willoughby & Baltic, who gave me a tour of the Davis Square
workshop and gallery space back in April, is Meredith Garniss. Trained as an artist at
Boston‘s Northeastern University, Garniss long held various software engineering
positions in the desktop publishing industry. But she left her job at digital font maker
Bitstream in 2001 to paint, teach, and lately, hack hardware—a pastime she believes is
best pursued in groups, where people can teach one another new skills. At any given
Willoughby & Baltic gathering, a jewelry maker might end up sitting next to a hydraulics
expert, leading to all sorts of crazy projects. ―We were thinking about calling the group
The Society for Soldering Things to Other Things,‖ Garniss jokes. ―We don‘t take any of
this too seriously. We just like to have fun and build stuff.‖
The hackerspace is actually the third or fourth incarnation of the Willoughby &
Baltic brand, which started off as a fanciful name for Garniss‘s electronic typeface foundry
in the mid-1990s, then went dormant for a while, and was then re-applied to the Davis
Square garage space that Garniss turned into an art studio after leaving Bitstream. The
studio evolved into a community puppet theater; the puppets went robotic; the theater
group became the Boston chapter of the international hobbyist group Dorkbot (whose
tagline is ―People doing strange things with electricity‖); and a group of Dorkbot
members eventually decided to rent the second-floor space above the neighboring
Subway and turn it into a hackerspace.
The Wikipedia definition of ―hackerspace,‖ by the way, is ―a real (as opposed to
virtual) place where people with common interests, usually in science, technology, or
digital or electronic art, can meet, socialize and collaborate.‖ The emphasis in
hackerspaces is definitely not on the kinds of commercializable technologies that we
usually cover here at Xconomy. At a recent interactive art exhibition hosted by
Microsoft‘s Startup Labs in Cambridge as part of the Boston Cyberarts Festival, for
example, one Willoughby & Baltic member showed off a patch of artificial turf that
responded to any touch with a growl or a rumble. The piece‘s title: ―Sod Off!‖ (You can
read more about the Microsoft event in this Boston Globe article by D.C. Denison from
May 18, and Wired‘s Dylan Tweney wrote a nice piece about hackerspaces for the
magazine‘s Gadget Lab blog back in March.)
As someone who long felt stifled by her various software jobs, Garniss has a theory
about what attracts people to hackerspaces. ―A lot of the people who come here at night
or on the weekend went to work at high-tech companies thinking they were going to have
a certain level of creativity, and they‘ve come to feel over time that their creativity is
being squashed,‖ she says. ―But they still need a creative, collaborative environment—so
they come here.‖
On a typical weekend, a visitor to Willoughby & Baltic might find Garniss leading
an ―Arduino Bootcamp,‖ an introduction to the open-source Arduino electronics
prototyping platform. A group of hardware hackers in Italy founded the Arduino project
in 2005 as a way to reduce the cost of student robotics projects. While it‘s designed to
encourage hands-on experimentation in the same way as Lego‘s Mindstorms platform or
Bug Labs‘ plug-and-play hardware modules, Arduino is set apart by its open-source
philosophy. Anyone can buy an Arduino board (the basic microcontroller costs $35) and
start hacking it—or download and adapt the reference designs for their own purposes.
―Arduino is the Pagemaker of our day,‖ Garniss says, referring to the desktop
publishing software that launched a self-publishing revolution in the 1980s. ―All of a
sudden you could print your own books on your desktop. You could say what you wanted
through your design. It‘s all about having power and control over your craft. We‘re now
seeing that with hardware, and that was what was interesting to me about Arduino.‖ For
their $275 registration fee—most of which Garniss plows right back into the
hackerspace—participants in the Willoughby & Baltic Arduino bootcamp receive a
complete Arduino hardware kit worth $100.
Garniss says it‘s important to Willoughby & Baltic‘s freewheeling character that it
has evolved apart from any of the Boston area‘s big technology institutions, which might
be tempted to transform it into some kind of training organization or adult-ed course.
―Keeping the group away from any particular college or entity, especially in Boston, is
very important, so that it belongs to everybody instead of just to one small community,‖
she says. ―In fact, when Dorkbot Boston was starting up, there was a lot of conversation
about whether we should even do it, because we were so close to MIT, and MIT tends to
suck everything up into it. But the surprising thing was how many people came from MIT
to join Dorkbot—it gave them an alternative to something that was exclusively MIT-
based.‖
But at the same time, Garniss believes that the hackerspace phenomenon holds
some important lessons for established institutions. The fact that so many Willoughby &
Baltic members have day jobs at tech firms but still need other outlets for their hacker
urges is, she says, a sign that creativity is undervalued inside many companies.
―It‘s a real missed opportunity for a lot of corporations, who could be starting their
own hackerspaces for people to support the creative side of what they do,‖ says Garniss.
―If IBM repurposed some space to be a community hackerspace and opened it up to some
segment of the population, not only would they be able to provide their employees with a
creative outlet, but they might get some ownership over what was created in that space,
and they might find new people with skills they need, and see how they interact before
hiring them. It would be a really good bridge to the community.‖
On the other hand, ―Maker Faires‖ and the spread of hackerspaces can also be seen
as just the latest twist on a long American tradition of social organizing among hobbyists.
And as Richard Koolish, a Willoughby & Baltic member who happened to be doing some
soldering on an Arduino board when I visited the hackerspace, pointed out, work is
work—it isn‘t always supposed to engage your full imagination. ―It‘s good if you‘re
interested in what you‘re doing at work, but that‘s not your whole life,‖ Koolish says.
―It‘s not [your employer's] problem to solve all of your problems. Model railroaders,
astronomy clubs, RC airplane builders—these guys are always going to find their own
organizations.‖ Nowadays, thanks to the open-source hardware movement, they just have
a few new toys to play with.
53: Will Quick Hit Score Big? Behind the
Scenes with Foxborough’s Newest Team
May 29, 2009
There‘s a company in Foxborough, MA, not two miles away from the New England
Patriots‘ Gillette Stadium, where a crew of veteran online game developers is putting the
finishing touches on a potentially groundbreaking new game about football.
Now, I can tell you all about why the venture-funded startup, Quick Hit, is likely to
dazzle the sports gaming world with its genre-busting title when it debuts this fall. I can
explain how it combines elements drawn from fantasy-driven role-playing games, online
casual games, console games, and even TV sports. But I have to disclose something up
front: I don‘t know jack about football. I can‘t tell you the difference between a wide
receiver and a tight end, or between an offsides penalty and a yellow card. (Or is that
soccer?) So please listen carefully while I explain what‘s so interesting about Quick
Hit—but when it comes to the football stuff, don‘t ask me to vouch for the details.
The core team at Quick Hit—which, until January, was called Play Hard Sports—
includes CEO Jeffrey Anderson, vice president of product Aatish Salvi, producer Geoff
Scott, and general counsel Kelli O‘Donnell, who all left Westwood, MA-based Turbine
in 2008. Turbine is famous for building massively multiplayer online role-playing games
(MMORPGs) based on the Dungeons & Dragons and Lord of the Rings brands. To play a
Turbine game, you fork over $10 to $20 for the initial software download, plus a $10 per
month subscription fee.
Quick Hit‘s football game, which is in its beta-testing phase now and will be
opened to the public on September 9, is a very different animal. It‘s part of an emerging
category of ―lightweight games‖ that are less expensive, processor-intensive, and time-
consuming than console games or MMORPGs, but more immersive, socially interactive,
and graphically rich than online casual games like Bejeweled.
Anderson says he‘d been thinking about the need to lower the cost barrier to gamers
even before leaving Turbine. ―I became concerned about what the future would hold for
the MMORPG business,‖ he told me. ―The price point moved a lot of consumers out of
the space and made it difficult for the average or light gamer to get excited about what
was going on.‖ But Anderson‘s proposal to make Turbine‘s future games free, and to turn
to a combination of advertising and microtransactions for revenue, didn‘t sit well with the
company‘s board.
So he and his small crew of believers started fresh, with a game that would have
rich, high-quality interaction but a low enough price point (namely, zero) to be accessible
to millions of people. To build it, they turned to Adobe‘s browser-based Flash animation
platform and desktop-based Adobe Integrated Runtime (AIR) environment. That‘s the
same technology underlying desktop programs like the popular Twitter clients Twhirl and
Tweetdeck; it‘s become the dominant way for companies to deliver ―rich Internet
applications‖ without requiring users to buy or install new software.
I got a preview of Quick Hit‘s game during a visit with Anderson a couple of weeks
ago. Quick Hit users—let‘s call them team coaches—start out by assembling offensive and
defensive lineups. (The company doesn‘t have a license with the NFL, so the players and
teams are entirely fictional.) Coaches then enter an online gaming lobby, where they can
find other Quick Hit users to play against. Games last 20 to 25 minutes, with TV-style
commercials between quarters. Since the game is free, these ads will be one of Quick
Hit‘s primary revenue sources.
For each turn in the game, the coach controlling the ball picks an offensive play,
and the coach on the other side of the line picks an appropriate defensive formation. (I‘m
skating on the very edge of my football knowledge here.) Once all the players are lined
up, the software executes the play, while the coaches look on from a bird‘s-eye point of
view. How many yards the offense gains or loses is determined in part by the
comparative health and skill levels of the simulated players, and in part by how cannily
the coaches select their plays.
The on-field graphics in Quick Hit are far more schematic, and the selection of
plays far more limited, than what users of console-based games like Madden NFL are
probably used to. But that‘s okay, because Quick Hit really isn‘t about the violence, the
simulated sweat on each player‘s brow, or the dexterity of the person at the controls. It‘s
about the play-by-play tactics and the choices that go into building a strong team over
time.
Which is where the role-playing game elements come in. In Quick Hit, the gamer‘s
team is the equivalent of a character or an avatar in a classic MMORPG. Like a fantasy
character, a Quick Hit team gains experience in the form of ―fantasy points‖ for every
game completed. (Winning, of course, garners more points than losing.) After gaining a
certain number of points, a team advances by one experience level, at which point the
coach gets ―coaching points‖ that can be applied to individual team members, such as the
quarterback or the running backs, to improve their skills. For example, a certain number
of coaching points might earn the quarterback a ―cannon arm‖ that makes passes go
deeper. Part of the strategy in Quick Hit lies in knowing the skills of your own team and
scoping out the skills of the opposing team, and picking plays accordingly.
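That progression loop (fantasy points per game, level thresholds, coaching points spent on individual players) is a classic RPG pattern. Here‘s a toy sketch of the mechanic; every name, threshold, and point value is invented for illustration, not taken from Quick Hit:

```python
# Toy model of the leveling loop: teams earn fantasy points per game,
# level up at thresholds, and bank coaching points to spend on player
# skills. All numbers here are invented for illustration.

POINTS_PER_LEVEL = 100          # hypothetical threshold

class Team:
    def __init__(self, name):
        self.name = name
        self.fantasy_points = 0
        self.level = 1
        self.coaching_points = 0
        self.skills = {}        # position -> list of skill upgrades

    def finish_game(self, won):
        self.fantasy_points += 30 if won else 10   # winning earns more
        while self.fantasy_points >= self.level * POINTS_PER_LEVEL:
            self.level += 1
            self.coaching_points += 3              # award per level gained

    def train(self, position, skill, cost=1):
        if self.coaching_points >= cost:
            self.coaching_points -= cost
            self.skills.setdefault(position, []).append(skill)

team = Team("Foxborough Fictionals")
for won in [True, True, False, True]:
    team.finish_game(won)
team.train("quarterback", "cannon arm")
print(team.level, team.coaching_points, team.skills)
# prints: 2 2 {'quarterback': ['cannon arm']}
```

The appeal for the player lies in exactly the trade-offs this structure creates: which positions to train, and when.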
There‘s a lot more to Quick Hit than I have space to describe here. (By which I
really mean, there‘s a lot more than I could absorb from Anderson‘s demo, suffering as I
do from football-related attention deficit disorder.) But one of the neat things about the
game is that beginners can choose from broad-brush plays like ―run‖ or ―pass‖ while
more experienced users can drill down into a playbook with scores of unique formations,
each with its own complicated choreography. From playing RPGs, I can definitely
understand the appeal of nurturing a team over time, deciding where to invest coaching
points, and testing a team‘s mettle against competitors. And I can see lots of other ways
for Quick Hit to add complexity for those who want it—not to mention monetization
opportunities. (Anderson says the company is already making plans to sell branded
virtual goods—think virtual Gatorade, giving your offensive line a temporary burst of
speed.)
But I have a hard time predicting exactly how big of a hit Quick Hit will be. There‘s
no denying that casual Internet games are on the upswing—especially social games in
which users are playing against other humans rather than a computer. And the company
was wise to focus on a game that‘s already America‘s national obsession. As Anderson
puts it, ―Most people already know everything about football.‖ (To which I say: speak for
yourself.) But he‘s right—even apart from all the passion and money generated by actual
college and NFL football, the sport has a generous online following, claiming more
fantasy-league adherents, by far, than any other sport. (There are 22.5 million fantasy
football league members in the U.S., compared to 8.1 million for baseball and 5.7 million
for basketball, according to the Fantasy Sports Trade Association.)
I do wonder, though, how quickly the three most obvious audiences for a new
online sports strategy game—console game players, MMORPG players, and fantasy
league participants—will cotton to a game that bends and mixes genres the way Quick
Hit does. My concern, in fact, is that the company may be occupying whatever is the
opposite of a sweet spot. Madden NFL players may be put off by the relatively primitive
graphics and the swords-and-sorcery-style point system, MMORPG players may turn up their
noses at a game about conventional sports, and fantasy-league fans may miss the
connection to real teams and players.
But Bob (who‘s editing this piece) points out that Quick Hit might be perfect for the
many fantasy players who like the idea of console sports games but can‘t or don‘t want to
put in the time to master the controls. They‘ve already mastered the ―brain‖ part of
football video games, just not the physical part—and now they won‘t have to. And
Samantha Smith, Quick Hit‘s director of communications, says the company has done its
homework, and is confident that it‘s targeting a big audience. In a company-sponsored
survey of 1,000 football fans and gamers last summer—all male, and all between the ages
of 14 and 40—85 percent said they would ―definitely‖ or ―most likely‖ want to play
Quick Hit, Smith says. And the RPG elements that Quick Hit is incorporating, says
Anderson, ―are the same ones that have proven to be widely successful after a decade of
use in the industry…Advancement, skills, leveling are things that have been tried and
truly tested.‖
With a $13 million venture pot provided by Menlo Park, CA-based New Enterprise
Associates and Vienna, VA-based Valhalla Partners, the company should have the
resources it needs to adjust the game in response to user feedback come this September.
While its quarters in Foxborough are spacious—and even show a touch of dot-com-era
architectural enthusiasm, such as the giant gridiron on the wall of the entrance lobby—the
startup has a lean team, with only 25 employees, and isn‘t spending anything close to the
$20 million to $100 million that can go into developing a typical MMORPG or console
game these days.
―There is a blockbuster-film approach to building these titles, but the downside risk
is that you play a Madden NFL once or twice and you‘re done,‖ says Anderson. ―On the
other end of the spectrum you can do an iPhone or Nintendo DS game for $250,000, but
those games have no real replayability or depth. In our product, we‘re building in these
elements of advancement to keep people playing.‖ And having constructed a game
engine that can handle a sport as complex as football, Anderson says, Quick Hit is in
good position to apply its approach to other sports like soccer, baseball, or basketball.
You may be wondering whether Quick Hit‘s decision to locate in Foxborough
really has anything to do with the Patriots. As it happens, the team‘s former practice field
is visible out the window at Quick Hit. But the startup‘s location actually has more to do
with mental hospitals than with football. Its offices are part of the old Foxborough State
Hospital, which, as Anderson explained to me, opened in 1889 as the nation‘s first
facility for the treatment of alcoholics (then called ―dipsomaniacs‖) and later became a
treatment center for patients with psychiatric disorders.
The facility closed in 1976 and was abandoned for many years, except for the
occasional Halloween haunted house fundraiser—doubly fitting, given the nearby
cemeteries, which are the final resting place of 1,100 anonymous former patients. The
former hospital campus is now being redeveloped by Boston-based Vinco Properties
under the appropriately anodyne name Chestnut Green; Quick Hit chose the location
because Anderson lives in Foxborough.
And that, gentle readers, is my first and probably last column about football. You
can sign up to be a Quick Hit beta tester at www.quickhit.com; if you try it, I‘d love to
hear back about whether you think the formula works.
Author's Update, February 2010: Quick Hit finished its beta testing and opened up
for business in October 2009; Jeff Anderson told me in January 2010 that the site
attracted more than a million unique visitors in its first three months.
54: Are You a Victim of On Demand
Disorder?
June 5, 2009
If this column has a repeating theme, it‘s the amazing new capabilities we‘re all
gaining as a result of the digital media explosion. Yet like all revolutions, this one is
destroying old values, attitudes, and behaviors even as it creates new ones. I would never
trade the Web, mobile computing, and the instant access to digital culture that they enable
for the media universe that existed before, say, 1995—but I also think it‘s important to be
aware of what we‘re leaving behind. So this week I want to get down a few thoughts in
remembrance of a little something called going out of your way.
Do you find yourself listening only to the music you can download from iTunes?
Watching only the movies you can find in your cable provider‘s video-on-demand
lineup? Reading only the books you can order from Amazon? Going only to the
restaurants you can find on Yelp? I certainly do. And I think this is a growing tendency,
thanks to the ubiquity of cheap digital content and devices that can access it. At the risk
of being taken too seriously, I want to coin a pseudomedical term for this pattern: On
Demand Disorder, or ODD.
The main symptom of ODD is an aversion to any experience, product, or piece of
content that can‘t be obtained more or less instantaneously. And the main long-term
consequence may be a narrowing of one‘s world-view to exclude ideas and materials that
take a little more work to uncover.
I‘ll illustrate with a few examples from my own life. As a gadget freak, and as
someone whose job is to keep abreast of the latest digital technologies, I may be an edge
case. But perhaps you‘ll recognize similar patterns in your own routine.
1. In the three months since I bought a Roku Player—a $99 wireless device that lets
you view movies from Netflix and Amazon on your TV instantly—I have watched
dozens of movies and TV shows on the Roku. In the same time, I‘ve watched exactly two
physical DVDs from Netflix. My ―Instant‖ queue keeps turning over, but I haven‘t made
any progress on my regular DVD queue. This despite the fact that the selection of DVDs
at Netflix is still far greater than the selection of so-called ―Watch Instantly‖ movies. In
effect, I‘m sacrificing choice for availability.
2. I used to be a fairly regular buyer of books from Amazon. About six weeks ago, I
decided to splurge on a Kindle 2 e-book reader. Guess how many physical books I‘ve
ordered since then? One. Partly, I‘m just trying to get my money‘s worth out of the
Kindle. But now that I have the option of buying a book through the device‘s built-in
catalog and having it delivered wirelessly in under 60 seconds, instead of ordering it
online and waiting for it to arrive three to seven days later in the mail, I‘ve become far
more cognizant of my own impatience. When I get a hankering to read a book, I usually
want to read it now. By the time Amazon can ship me the physical book, the feeling of
urgency may have passed, or I may have found the information I needed elsewhere. The
one book I did buy was an out-of-print monograph from an academic press that will
likely never be made into a Kindle Edition.
3. I‘ve been an active amateur photographer ever since my grandfather gave me one
of his old Nikons when I was a teenager. My collection of thousands of photographs fits
into roughly four buckets:
a) 1980-1990: Ektachrome slides—my grandfather‘s preferred medium—now stored in carousels and in plastic sleeves in binders.
b) 1991-1997: Color prints, stored in albums.
c) 1998-2004: Digital images, taken with my first two digital cameras, stored on CD-Rs.
d) 2005-present: Digital images, taken with my various camera phones and my third and fourth digital cameras, stored on hard drives and on Flickr.
It‘s probably not hard for you to guess which pictures I view most often and least
often. The sad truth is that because I can pull up my Flickr photostream instantly on my
PC, my Mac, or my iPhone (or even, thanks to programs like Slickr and Boxee, on my
television), the Flickr images are the only ones I ever look at.
My personal media consumption habits, of course, are of no great consequence to
the larger world. What worries me is that as the amount of material available in digital,
on-demand form grows, our familiarity with the non-digital world may atrophy. That
would be a real shame, because there‘s a lot of stuff that either won‘t be digitized for a
while because of the logistical or legal hurdles, or shouldn‘t be experienced digitally
anyway because it loses so much in the translation.
In the first category, for example, is the huge class of books known as ―orphan
works.‖ These are the millions of out-of-print books that were first published after 1923
and are therefore still technically covered by copyright, but for which there is no
identifiable, living copyright holder. For most of these books, there‘s not enough demand
for anyone to bother reprinting them—and few publishers would take the risk in any case,
since you never know who might come out of the woodwork to sue you. So the only way
to read them is to go to a library or find a used copy at a bookstore.
Google is gradually scanning many of these books with the intention of making
them searchable and perhaps downloadable online. The terms under which it makes them
available are the subject of the controversial, as-yet-unapproved settlement between
Google and a group of publishers and authors. But even if the settlement goes through, it
probably won‘t be easy or cheap to access the full text of Google‘s digitized orphan
books. And ODD sufferers, by definition, don‘t go to the library. So orphan books are
likely to stay permanently off the radars of all the people who are becoming addicted to
Kindle-style book shopping convenience.
In the second category—stuff that shouldn‘t be experienced digitally, or at least not
only digitally—I‘d include things like classic cinema and fine art. There are some great
black-and-white movies (Casablanca, Rebecca, and The Third Man come to mind) that,
to be truly appreciated, have to be seen on a big screen, projected from real celluloid, in a
dark old moviehouse like the Brattle Theatre in Harvard Square. And having just been to
Philadelphia to see a massive exhibit on Paul Cézanne and his followers, I could go on
for hours about the difference between seeing a digital copy of a famous painting and
actually being in front of it. Yes, I am the same person who has written whole columns
about the Corbis CD-ROMs that first turned me on to Cézanne in the 1990s. But unless
you‘ve stood close enough to a Cézanne canvas to see the bumps and cracks in the paint,
to absorb the fact that Cézanne‘s brushstrokes have a texture and a direction and a
materiality that distinguishes them from the strokes of a Matisse or a Picasso or a Jasper
Johns, you‘re missing something fundamental about modern art.
In sum, there are certain artifacts of culture that will teach you more if you go out of
your way to experience them the way their creators intended. But the more of these things
that can be reproduced in ―close enough‖ fashion on our computers or mobile phones or
HDTVs, the more we may choose to forego the real things and settle for the easy things.
Now, I don‘t want to sound like some kind of snob or aesthete who thinks that
direct experience is always better than mediated experience. In fact, I‘d say that your
state of attention when you encounter something is far more important than whether you
experience it first-hand or second-hand. The main thing is to be ready to engage your
mind and your emotions. That‘s why I don‘t buy the idea that ―Google is making us
stupid,‖ to paraphrase the title of Nicholas Carr‘s much-discussed July 2008 cover story
in The Atlantic.
Carr‘s main argument is that different media affect our thought processes
differently, and that Internet content, because it‘s so often full of tempting links to other
content, is ―chipping away at [our] capacity for concentration and contemplation.‖ That
may be true for certain kinds of digital media—perhaps including blogs like this one,
where variety is the whole point. But I think the flavor and impact of a media experience
all depend on what you bring to it. If I am in the mood for Forster, then Howards End is
no less engrossing just because I‘m reading it on my Kindle and there are 270,000 other
books a click away. But if Howards End weren‘t available on the Kindle and I didn‘t
have the patience to find it in a bookstore, that would be a different story.
That‘s the danger I‘m thinking about today—that on-demand technology is training
us to settle for whatever is immediately available, and that we‘ll start judging the world
not by how much color is in it but by the content of our iPods.
The cure for ODD, happily, is simple. Pick something you‘re passionate about, start
exploring, and don‘t stop with the first thing that Google or Bing or Comcast or iTunes
offers you. Whether the tool of exploration is a computer screen, a cell phone, or your
own two eyes doesn‘t matter so much. Just find something that gets you thinking and
feeling. Now go!
55: German Web 2.0 Clothing Retailer
Spreadshirt Finds Boston Fits It to a T
June 12, 2009
When you‘re a technology reporter and you stumble across the same startup two or
three times in quick succession, it‘s the journalism gods telling you to write a story.
I first stumbled across Spreadshirt on Tax Day, April 15. (As I‘d later learn, the
Leipzig, Germany-based startup has been famous among Web-savvy fashionistas for
years, but I‘m an ignoramus about the clothing business.) I‘d just finished a visit to
Mocospace, a mobile social networking startup that occupies the second floor of an old
building at 186 South Street in Boston, in the funky little office district near South
Station. On my way out, I noticed the Spreadshirt logo on the window of the first-floor
space, which had the look of a former art gallery but seemed to be occupied by some
Euro-stylish people doing something related to T-shirts and the Internet. I stopped in and
left my card.
Two weeks later, I intersected with Spreadshirt CEO Jana Eggers at the Nantucket
Conference, a meeting of entrepreneurs and venture capitalists on tony Nantucket Island.
Eggers was the most outspoken member of a panel of startup leaders who all spoke very
frankly about what CEOs have to do to steer their companies through recession. What
struck me most was Eggers‘ defiant tone as she recounted how Spreadshirt was savaged
by the German press after the company laid off workers despite having wads of new
venture money in the bank. ―The press came after us, but I said, ‗I will not be torn down
for being a responsible CEO,‘‖ Eggers told the audience. ―Spending the money was not
the right thing to do for the company. And it‘s not for the press to tear down leaders who
are doing it in the right way.‖
Which brings me to what Spreadshirt actually does: it‘s an online boutique for
personalized clothing, from T-shirts to aprons to baseball caps, where every item can be
imprinted with a message or design that you create and upload. Individuals and
organizations—like the Nantucket Conference, for example—can set up their own
Spreadshirt ―partner‖ shops online and sell customized apparel. The Boston office is the
company‘s headquarters in North America, where it does about 20 percent of its business.
And as it turned out, a line I penned about the Nantucket meeting was one of 17 quotes
from the conference that Spreadshirt will emblazon on your very own Nantucket tee.
(The line was, ―One blue dot, on my way to a gold starfish.‖ You had to be there to
understand.)
Clearly, I had to do a piece on Spreadshirt. One evening about three weeks ago—it
must have been well after midnight in Leipzig—I got Eggers on the phone. She‘s been
leading the company in North America since 2006 and globally since 2007, enough time
to line up a couple of rounds of venture funding from the likes of Accel Partners and
Kennet Partners and to arrange a few highly publicized deals, like Spreadshirt‘s work
with CNN to put the network‘s headlines on T-shirts or with Adidas to put Boston
Marathon runners‘ personal times on branded shirts. For a CEO, Eggers is refreshingly
unguarded, and we had a long, freewheeling discussion about how the company is trying
to meld technology and fashion, why she thinks the Spreadshirt formula is so appealing,
and where she hopes to take the company over the next few years.
Below, I‘ve gathered up some of the best parts into a few bundles. This is Jana
Eggers in her own words.
On who Spreadshirt's customers are, and what they get:
What we‘re really selling to people is self-esteem. When you are wearing a shirt
that you came up with yourself and someone compliments you on it, your shoulders go
back. There are times when I‘m on the subway and I see people looking at my shirt, and
they either smile or look puzzled, but either way, they are noticing me. That‘s what we
call our brand promise—self-esteem. So if you ask me who our customer is, the crappy
answer is ―everyone,‖ but I‘d start with the people who are really expressing themselves
now, who are the people out there blogging and tweeting and going on Facebook.
On whether Spreadshirt is primarily a fashion company or an e-commerce
company, and how it’s different from CafePress or the local T-shirt shack:
The fashion part is important to us because we‘re not just about the boxy T-shirt
you would get at a store. We carry 21 types of T-shirts, not even counting long-sleeved
shirts and polos and hoodies and bags and visors. A personalized coffee mug is nice, but
you leave it in the sink at work. Most people don‘t leave their shirts in the sink at work.
What you wear and put on your body is a lot more important. It goes back to that self-
esteem; my clothing represents me in a very personal way. But that‘s why the e-
commerce is important, because we can offer more of that fashion. It‘s enabling those
individualized tees, but for a lot of people.
We did a Boston Marathon promotion with Adidas where people go in before the
race and get technical sports shirts to wear in the race, and afterward they could get a
personalized shirt with their marathon time on it. We believe that this was the first time
something like that was done on a massive scale. People could always go to a local print
shop and say, ―Could I have eight of these shirts for my team or my friends,‖ but not with
a brand like Adidas, in a sponsored way, with the actual Boston Athletic Association logo
on it.
On why people seem to enjoy marketing companies like CNN and Adidas on
their bodies:
Because it‘s not about the marketing message, it‘s about them. One of my favorite
accounts that we worked with was a 24-hour fitness chain. They came up with this idea
of giving away personalized shirts when you sign up. The shirts said ―I do it because…‖
and then people could go and type in their own responses. Some people would put in
things you would expect, like, ―I do it because it‘s my 20th high school reunion,‖ or ―I do
it because my grandkids can outrun me.‖ But there were also some really creative ones,
like, ―Because at the bottom of every beer there is a pork chop.‖ The fitness chain said
more member clubs participated in this T-shirt campaign than any other campaign they‘d
ever run, and 40 percent of the people who got free shirts actually bought another shirt
too. So people are not afraid to wear a marketing message when they are really wearing a
statement about themselves and their own creativity.
On Spreadshirt’s choice to make Boston its North American headquarters
when it expanded here in 2006:
Fashion, technology, and e-commerce are our three areas, and Boston is one of
those places where you can get all of that. In California, you can get a lot of technology
and e-commerce, but you are missing out, typically, on the fashion side. Boston is one of
those great places where you have a lot of creativity and the arts side and you also have
the technology side and that deep history. As a bonus, it‘s only 6 time zones from Europe
instead of 9, which makes a huge difference.
We originally went into the Cambridge Innovation Center, which is a terrific place
to get started, and I love what Tim Rowe does there, but what we really wanted—which
was exactly what we got, thanks to our agent—was an old art gallery or studio at street
level in that creative area of Boston. That‘s the vibe the company has.
On Spreadshirt’s unexpected scramble to catch up with the Web 2.0 and social
media phenomenon:
We were really early on the Web 2.0 chain. It was way before social media. We
were crowdsourcing before people even had the word crowdsourcing. The Facebooks, the
Twitters just weren‘t around. It was funny and striking to me that as such an early player
on the Web, we would be kind of slow to react to Twitter and Facebook, just as
examples. It was just two months ago that we added the ability, when you buy something
on Spreadshirt, to post that to your Facebook page. You would think that a company like
us that was progressive and early on the Web—that [social media] would be where our
natural genes are. And I think we saw it, but it was just one of those things where you‘re
so caught up in your own product that you don‘t have time to step back and take a breath.
On the company’s most important growth markets:
We‘re excited about all of them. I‘m not going to play favorites. The U.S. is a little
over 20 percent of our business, which is really unusual of course, because most
technology startups have their primary business in the U.S. We are a funny little outlier,
because we were started in Europe and that‘s where we have the majority of our business.
As far as what‘s growing, we‘re still seeing a lot of growth in Germany, which is
terrific—it tells us there is a lot of power in our model. The U.S. is obviously growing for
us, and growing quite well. Those are the strongest growth markets. Down from that, it
gets kind of interesting. In the U.K., for example, we are growing more on the shop-
partners side than on the direct sales side. Why, we don‘t know. Sometimes it‘s just who
the partners are that you catch and how they do, or maybe some great article has hit the
press. Honestly, in Europe running this business is like running 80 different businesses.
On being an entrepreneur in Germany:
Germany does have a very strong startup community, but it‘s different from the
U.S. Unfortunately, from my perspective, I‘m used to the U.S. way. It‘s not as supported
here. Here‘s how I describe it: in the U.S. you get more false positives and in Germany
you get more false negatives. Meaning that in the U.S. there is so much support and
respect for entrepreneurs that you can probably get farther, and get more money to go
farther. In Germany there is less support and less respect for entrepreneurs, so therefore
there are probably some businesses that fail that maybe could have been viable had they
had a little more support.
Speaking as an American, I would rather have more false positives, and some
Germans would probably say they would rather have more false negatives. That‘s the
mindset. There is definitely a balance—I‘m not saying the American way is the right
way. It‘s somewhere in between. People have asked me ―Would you start a company in
the U.S. or in Germany?‖ I‘m not stupid—I would do it in the U.S. because I know all of
these things about how the system works and I‘d want the best chance for my company,
and I‘m American and my network is there. But you know, when I ask myself would I
feel better if my business was a success in Germany or in the U.S., I would say Germany,
because I know it‘s harder here.
56: Boston’s Digital Entertainment
Economy Begins to Sense Its Own
Strength
June 19, 2009
Let‘s say you live in Boston and you‘ve just hit on a great concept for a cross-media
property, with all the attendant merchandising tie-ins: a special-effects-laden movie, a
console video game, a comic, a kids‘ cartoon, action figures, a novelization, a persistent
online world—in other words, the next Matrix or Transformers or Harry Potter. To make
it happen, you‘d probably need to hire filmmaking talent from Hollywood, writers and
publishers and marketers from New York, programmers and game designers and media
network providers from San Francisco and Seattle and Los Angeles, and so forth, right?
Actually, no. Most, maybe all, of the talent and technology you‘d need to build your
dream media empire is right here in New England.
While the rest of us weren‘t looking, and without consulting one another, thousands
of creative types have been flocking to the Boston area over the past decade. They‘ve
built a critical mass of game studios, film production companies, graphics software
houses, 3-D modeling companies, digital marketing agencies, online hangouts, and the
like—what amounts, in fact, to a self-sufficient digital entertainment ecosystem.
Of course, there would be no particular reason to build your media property using
only New England talent. You don‘t get green laurels or political-correctness points for
restricting yourself to creative services from within a 100-mile radius, the way you
arguably do if you buy locally farmed food. And in an age of Friedmanian flatness, your
investors will probably force you to offshore as much of the work as you can anyway.
My point is that you could find the services here if you wanted to. And that‘s something
new and remarkable.
We‘re going to explore this emerging sector in depth during a panel discussion that
I‘m moderating on June 24 as part of the Xconomy Summit on Innovation, Technology,
and Entrepreneurship. My panel is entitled ―The Digital Entertainment Cluster: Boston‘s
Best Kept Secret,‖ and I‘ve lined up participants from local companies and organizations
that represent the whole spectrum of digital media production and delivery. Not
coincidentally, these are all companies I‘ve written about for Xconomy—just follow the
links below to go deeper.
First, we‘ll have Brett Close, CEO of Maynard, MA-based 38 Studios, which was
founded by local baseball hero Curt Schilling and is building a cross-media property very
much like the hypothetical one I outlined above; it‘s based around a massively
multiplayer online environment with the cheeky code name Copernicus. Then there‘s
Chris Gardner, chief marketing officer at Newton, MA-based Extend Media, which sells
software that media companies can use to distribute a single piece of digital content to
multiple devices, including PCs, televisions, and mobile phones.
Kyle Morton, vice president of product at Cambridge, MA-based EveryZing, will
also be on hand; EveryZing is a spinoff of local engineering powerhouse BBN, and has
turned its original speech-to-text technology into the core of a universal search engine
that helps media companies catalog the digital content they own, facilitate consumer
access, and monetize it through advertising. We‘ll also hear from Brian Shin, the CEO of
Boston-based Visible Measures, who will talk about his company‘s project to index and
track all the world‘s viral videos, the better to help clients measure the success of their
marketing campaigns.
Finally, we‘ll be joined by Jason Schupbach from the Commonwealth of
Massachusetts‘ Office of Business Development, who has the coolest title of all the
panelists: ―Industry Director, Creative Economy.‖ Schupbach‘s job is to connect people
in the creative industries to the extensive resources offered by the state government. He‘s
one of the main people in the Patrick Administration promoting services like export
planning, equipment loans, affordable housing programs for artists, and the 25 percent
film tax credit. (That tax incentive, available to anyone who creates at least 70 percent of
a film or digital media project in Massachusetts, is one of the main forces behind the
state‘s sudden emergence as a film-industry outpost; no fewer than four major movie
studios are planned for construction in Massachusetts over the next two years.)
I‘m very excited (or XSITEd, as we‘ve been saying around here all month) to be
gathering these particular panelists at one event, because I think they can tell a
compelling story about why it‘s useful to have so many elements of the digital media
production and distribution pipeline available in one place; why Boston is an attractive
place to build a digital media company; how having all of this talent in one place creates
opportunities for projects that weren‘t thinkable before the sector emerged; and what the
companies in this area, as well as Governor Patrick‘s office and legislative leaders,
should be doing to ensure the sector‘s long-term growth.
These panelists can also speak about the special ability that Boston-area engineers
and entrepreneurs seem to have to invent a cool technology, connect it to a market need,
and hone it over time until customers can‘t live without it. EveryZing, for example,
started out in 2006 under the name PodZinger, and spun its technology as a way to make
the content of podcasts more understandable to search engines—a useful but not
immensely lucrative idea. Now, just three years later, the startup‘s technology has
morphed into a versatile platform that creates tags and metadata for text, images, audio,
and video, and is used by media giants like NBC Universal to index thousands of media
files across scores of allied Web properties.
That said, there are plenty of other organizations around town who could have
supplied great panelists. Just to eliminate room for skepticism about my ―critical mass‖
claim above, let me list a few of them. In what you might call the digital tools area,
there‘s Autodesk, Avid, GenArts, Parametric, Spaceclaim, Solidworks, and Z
Corporation. In virtual worlds, there‘s Hangout Industries, WeeWorld, and the North
American office of Weblin. In online communities there‘s GamerDNA, Mocospace,
Nextcat, and SnapMyLife. In media hosting and infrastructure, there‘s Extend Media,
Brightcove, Maven, Seachange, and Verivue. In the area of search, measurement, and
monetization, there‘s Echo Nest, EveryZing, Jumptap, Localytics, Third Screen, and
Visible Measures. In the area of music and technology, there are too many companies to
mention—see a list we created in 2007. In digital publishing technology, there are
companies like Zmags and E Ink. In video games, there‘s Conduit Labs, Creat Studios,
Galactic Village Games, Harmonix Music Systems, iRacing.com, Lycos Gamesville,
Muzzy Lane Software, Quick Hit, Rockstar New England, 38 Studios, Turbine, 2K
Boston, and Worldwinner, to name just a few.
And, of course, there‘s an array of supporting organizations and institutions; the
MIT Media Lab, the Singapore-MIT GAMBIT Lab, the Interactive Media and Game
Development major at Worcester Polytechnic Institute, and the Creative Industries
Initiative at Northeastern University come to mind. I haven‘t even touched on the fields
of design, architecture, marketing, and advertising, which are all becoming more digital
every day. And none of these lists are even close to comprehensive: indeed, the state says
there are over 14,000 creative-industry businesses in Massachusetts, with 80,000
employees overall and a net economic impact measured in the tens of billions of dollars.
In the end, if you‘re someone who simply enjoys the creations of the media
industry, or who cares more about whether people are fulfilling their creative potential
than about where they do it, the fact that Massachusetts has a strong digital media cluster
doesn‘t mean a whole lot. And indeed, I‘m not trying to build an argument that Boston‘s
innovators are more talented than people in other technology clusters (like Seattle or San
Diego, to take two non-random examples), or even that they offer anything that can‘t be
found elsewhere. But I am saying—in the spirit of the XSITE motto, ―The Recovery
Starts Here‖—that the local digital media sector is positioned to pull a lot of weight as
New England and the country create a new foundation for prosperity. I hope you can join
me next week as we talk about exactly how we‘re going to do that.
Author's Update, February 2010: The June panel was a big success, and I've heard
from several people who attended that they felt it was an important step toward
establishing a better-defined identity for the entertainment economy in New England.
57: The Eight (Seven…Six?) Information
Devices I Can’t Live Without
July 2, 2009
If you read Xconomy, chances are that digital information is a big part of your day.
You spend quite a bit of time absorbing, manipulating, and repackaging it. So here are a
few questions for you: How many different devices do you use to channel all those bits?
Is the number going up, or down? And if—as I suspect—it‘s going down, what‘s the
minimum set of devices that you think you could get along with?
Here‘s my current list:
1. Apple iPhone 3G
2. Apple MacBook, OS X 10.5
3. Dell Inspiron 8600 Windows XP laptop
4. Amazon Kindle 2 e-book reader
5. Sharp Aquos 32-inch HDTV
6. Microsoft Xbox 360
7. Canon PowerShot S5 IS digital camera
8. Roku digital video player
Note that I‘m not counting the key infrastructure devices, like the Comcast-
provided cable modem and my Netgear Wi-Fi router, that support several of the devices
above.
But even without those two indispensable items, there would still be 12 or 13
devices on my personal list, if it weren‘t for the Internet and the creative geniuses at
companies like Palm, Microsoft, Amazon, and Apple. I‘m betting the same thing is true
for many readers.
Here‘s my tale of the disappearing devices:
The PDA. I used a series of Palm devices to manage my calendar and contact lists
from 1998 until 2003, when Palm folded those functions into its Treo phones, allowing
me to say goodbye to the standalone organizer.
The MP3 player. In 2005 or so, I had a running debate with a fellow tech journo
named Eric Hellweg about whether there would ever be a successful music phone—
meaning a cell phone with a built-in music player. At the time, the only examples were
devices like the Motorola ROKR, which, to put it politely, was a piece of horse pucky
that could only hold 100 songs. I argued that not only was the technical problem of
building a more capacious music phone too hard (what manufacturer was going to put a
hard drive into a mobile phone?), but people didn‘t want such a device anyway, since
they already seemed perfectly happy to be carrying around separate devices for these two
purposes—an iPod for music and a cell phone for communications. Well, obviously Eric
won that debate in the end. The Apple iPhone, which came out in 2007, is arguably a
better iPod than the iPod itself, thanks to its larger screen and a multi-touch interface.
And even the low-end models can hold four times more music in their solid-state
memories than my first disk-drive-based iPod.
The DVD player. No need for it after I got the Xbox 360, which also plays DVDs.
The DVR. When I jettisoned premium cable TV back in March, I had no more need
for the Comcast set-top box, which also functioned as my DVR. I now get all of my
video entertainment through Internet video sites like Hulu, from Netflix DVDs, and through the Roku
digital player, which connects via Wi-Fi and the Internet to Netflix‘s Watch Instantly
service and Amazon Video on Demand. (Because I got the Roku box around the same
time I canceled the cable, this was technically a one-for-one swap rather than a
subtraction.)
The land line telephone. The AT&T cellular signal in the neighborhood of my
apartment recently improved to the point that I was able to cancel my Comcast digital
voice line. The signal still isn‘t great, and I can‘t rely on it for important conversations or
interviews. Luckily, there‘s always Skype—which now has a great iPhone app, in
addition to its trusty Mac and PC versions.
The death toll among information devices is likely to increase over time—with the
iPhone and similar mobile devices as the perpetrators. As Dan Shapiro, CEO of Seattle-
based Ontela, has argued in a perceptive column for our own Xconomist Forum,
smartphones are crossing what he calls the GET, the ―good-enough threshold,‖ in more
and more areas, and consequently putting older information devices in danger of
extinction. Clearly, phones have already crossed the GET as media players, and Dan
thinks point-and-shoot cameras and game consoles are next in line. I would add GPS
receivers, portable DVD players, and pocket-sized HD video cameras like the Flip.
My own list of eight devices could decrease to seven if I wanted to do without my
Kindle 2. It‘s the newest of my gadgets, and the current darling. But I can read all the
same e-books using the Kindle app on my iPhone.
And seven would shrink to six if my ancient Dell laptop expired, as it eventually
will. I keep it around for two reasons only: to store my digital photos, and to run Quicken,
where I‘ve built up about 10 years of financial and tax records. But both of these
functions are rapidly migrating to the Internet. My camera‘s Eye-Fi card sends all of my
photos to Flickr automatically, and I‘ve been experimenting with Quicken Online, which
isn‘t as powerful as the desktop version but is, in many ways, easier to use.
Six devices is probably my absolute minimum, barring some startling technological
change. As long as I work for Xconomy, I‘ll need the Mac laptop. And even the iPhone
4G or 5G or 6G isn‘t likely to duplicate the functions of my HDTV, Xbox, Canon
camera, and Roku player. (But I wouldn‘t be too surprised if my next TV had something
like the Roku built in—which would subtract one device, leaving me with five.)
So, what are your indispensable information devices? Would you like to have
fewer, or are you perfectly happy juggling a dozen or more gadgets daily? Let me know, and
I‘ll compile everyone‘s feedback into a future column.
Author's Update, February 2010: My list of crucial gadgets hasn't really changed
since the time I wrote this column. I do find that I'm using my Windows laptop less and
less, especially since switching most of my financial records over to Mint.com (Quicken
Online didn't do it for me, and it's now being subsumed into Mint anyway). When the
Windows machine dies, I won't replace it.
58: Personal Podcasting with AudioBoo,
UK’s “Twitter for Voice”
July 10, 2009
The human voice is making a comeback. For a while, it looked like e-mail, instant
messaging, blogs, RSS, and all of the Internet‘s other texty goodness might permanently
eclipse the old-fashioned phone call and other voice-driven forms of communication.
Even the spread of cell phones hasn‘t halted the tide of text—more than a third of mobile
phone owners use their phones primarily to send SMS text messages rather than making
actual calls, according to research from Cambridge, MA-based Vlingo.
But a stream of new mobile-device applications designed for voice input might be
restoring the balance. This month I‘m excited about two examples in particular: the new
Voice Memo app that showed up with Apple‘s iPhone 3.0 operating system, and
AudioBoo, a nifty audio recording app for the iPhone with a surprising origin: Channel 4,
Britain‘s publicly funded alternative television network. Along with several other
programs, these apps are turning the iPhone into a handy platform for ―personal
podcasting,‖ an emerging genre of amateur digital publishing that‘s as convenient and
spontaneous as Twitter but, because it‘s actually a person talking, feels more human.
No apologies, by the way, to non-iPhone owners. With iPhone 3G now priced at
$99 and the 3GS starting at $199, there are fewer and fewer excuses for not trying out
Apple‘s marvelously powerful uber-gadget.
First, a word about Voice Memo on the iPhone. Many mobile phones come with a
voice recording function these days, so it wasn‘t a surprise to see Apple add one when it
updated the iPhone operating system last month. It‘s fairly basic: it lets you make new
audio notes and review your old notes, all of which get copied to your iTunes library
whenever you sync. There‘s also a basic editing feature that lets you trim a voice memo
by lopping time off the beginning or the end. Best of all, there‘s a ―share‖ button that lets
you send out copies of voice memos via e-mail.
I really like the sharing feature, which is great for sending people quick voice
messages, and has two advantages over conventional voicemail. First, the sound quality
is far superior. Voice memos are monaural, but they don‘t get compressed the way your
voice does when you‘re leaving a message for someone over a cellular voice network
(compression that‘s redoubled if the recipient is retrieving their voicemail from their own
cell phone). Second, e-mailing a voice memo is a non-sneaky substitute for voicemail for
those times when you want to leave a voice message but you don‘t want to risk actually
talking to the person. (Slydial offers a similar capability by connecting you directly to
someone‘s voicemail—but it‘s not foolproof, as it sometimes makes their phone ring
anyway.)
In a pinch, you can also use the iPhone Voice Memo app to record audio for
publication on the Web. It clearly wasn‘t designed for this purpose, as the app records
memos using the relatively voluminous .m4a audio format, and doesn‘t allow you to
transfer memos over a certain size by e-mail. (I‘m not sure what the limit is, but I was
unable to send a 5-minute, 12-megabyte file.) Also, it buries the synchronized copies of
your voice memos deep in the iTunes folder of your computer, where it‘s difficult to find
them. But as a test, I located one memo—a few thoughts that I recorded on a drizzly
afternoon at the Japanese Garden at Boston‘s Museum of Fine Arts—and used iTunes to
convert it from .m4a to the more compact .mp3 format, which made it small enough to
post on my personal blog at Tumblr.
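(For readers who'd rather script that conversion than click through iTunes, a command-line tool like ffmpeg can do the same .m4a-to-.mp3 re-encode. This is just a sketch under the assumption that ffmpeg is installed; the file name is an illustrative placeholder, not the actual memo's path.)

```python
# Sketch: convert an iPhone voice memo from .m4a to .mp3 with ffmpeg.
# Assumes ffmpeg is installed; the path below is a placeholder.
import subprocess
from pathlib import Path

def build_ffmpeg_command(source: Path, bitrate: str = "64k") -> list[str]:
    """Build an ffmpeg command line for a mono spoken-word MP3."""
    target = source.with_suffix(".mp3")
    return [
        "ffmpeg",
        "-i", str(source),   # input voice memo (.m4a)
        "-ac", "1",          # keep it mono, like the original memo
        "-b:a", bitrate,     # a modest bitrate is plenty for speech
        str(target),
    ]

if __name__ == "__main__":
    cmd = build_ffmpeg_command(Path("japanese_garden_memo.m4a"))
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually convert
```

A low mono bitrate keeps the result small enough to e-mail or post, which is the whole point of escaping the voluminous .m4a format.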
But if you really want to use your iPhone as a tool for audio publishing, there are
much simpler options.
For a long time, my favorite iPhone audio recording app was iTalk, the coolest
feature of which is that it lets you transfer big audio files from your phone to your
computer very quickly over a Wi-Fi network. But with iTalk or any other iPhone
recording app (there are many), you still have to handle all the actual publishing and
distribution steps yourself: getting the file online, letting people know about it, and
providing a listening interface.
AudioBoo, which came out this spring, takes care of all that, which is why some
people are calling it ―Twitter with voice.‖ I guess you could also say that it‘s one ―k‖
short of audiobook—which gets at the amateur aspect of the project nicely. It‘s the
creation of BestBefore Media, a small group of technologists and designers in London,
and was built with financial support from 4iP (4 Innovation for the Public), a venture
fund created by Britain‘s Channel 4 to support digital media innovation.
AudioBoo isn‘t a professional audio platform by any means—it doesn‘t come with
any editing features, even a simple trimming feature like the one in Apple‘s Voice Memo.
But what it‘s really good at is sharing. The app lets you record for up to 5 minutes. When
you‘re done, you can give your recording a title and, if you want, attach a photograph. (If
you give the app permission, it will also geotag the recording with your latitude and
longitude.) Then the app automatically uploads your recording and your photo to the
AudioBoo website, which functions as a sort of community audio blog.
Each recording, or ―boo,‖ has its own Web page where other people can listen, see
the associated photo, and view the location where you recorded the boo on a map. You
can grab the HTML code that lets you embed boos in other Web page.
The iPhone app also lets you browse and hear recent boos. Right now, this list isn‘t
good for much beyond dipping at random into the vast ―boostream‖—one thing the app
lacks is a way to locate specific users and their boos. But I‘m sure the AudioBoo app will
be upgraded over time to include features like search, bookmarking, subscriptions, and
profile views that we‘re used to seeing in other group-publishing apps such as the
Tweetdeck app for Twitter, the Mobile Fotos app for Flickr, and Apple‘s own YouTube
app.
And there are two other big redeeming features to the AudioBoo platform.
1) You can link your AudioBoo account to your Twitter or Facebook account, so
that whenever you upload a new boo, AudioBoo will send an automatic notice to your
Twitter followers and post a status update on your Facebook profile. Both of these
include a link that leads your followers and your friends back to your recording‘s
AudioBoo web page.
2) You can sign up to follow your favorite AudioBoo users, just the way you would
on Twitter, except that ―follow‖ has a slightly different meaning: AudioBoo assembles
boos from everyone you‘re following into a custom podcast, to which you can subscribe
using iTunes. That means every time you sync your iPhone, there will be a new podcast
waiting for you with the latest AudioBoo updates from everyone you follow.
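The follow-to-podcast idea boils down to merging each followed user's recordings into a single RSS 2.0 feed with audio enclosures, which is all iTunes needs to treat it as a subscribable podcast. Here's a rough sketch of that mechanic in Python—the boo data and URLs are made up for illustration, and AudioBoo's real feed format may well differ:

```python
# Sketch of the "followed users -> custom podcast" idea: merge recordings
# into one RSS 2.0 feed with <enclosure> tags, the element podcast clients
# actually download and play. Data shown is illustrative, not AudioBoo's.
import xml.etree.ElementTree as ET

def build_podcast_feed(title, boos):
    """boos: list of dicts with 'title', 'audio_url', and 'pub_date' keys."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    # Newest recordings first, as a podcast client would expect.
    for boo in sorted(boos, key=lambda b: b["pub_date"], reverse=True):
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = boo["title"]
        ET.SubElement(item, "pubDate").text = boo["pub_date"]
        ET.SubElement(item, "enclosure", url=boo["audio_url"],
                      type="audio/mpeg", length="0")
    return ET.tostring(rss, encoding="unicode")

feed = build_podcast_feed("My AudioBoo follows", [
    {"title": "My First Boo", "audio_url": "http://example.com/boo1.mp3",
     "pub_date": "Fri, 17 Jul 2009 10:00:00 GMT"},
])
```

Point the generated feed's URL at iTunes and every sync pulls down whatever your followed users recorded since the last one.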
These two features—which take care of the distribution problem—are what turn
AudioBoo from a mere audio recording tool into a real audio publishing tool. Of course,
at the moment, much of the stuff being published on AudioBoo is rubbish. So many
people are trying it out for the first time that most boos have titles like ―My First Boo.‖
But that‘s to be expected, and in short order there will no doubt be an elite group of
AudioBoo artists finding clever, entertaining, and educational uses for the medium. Why
wouldn‘t there be? We‘ve seen exactly the same arc in the past with blogs, podcasts, and
Twitter.
There‘s one more twist on AudioBoo that‘s got me intrigued. Just this week, the
company struck a deal with SpinVox, another UK company that specializes in converting
voicemails and voice memos into text. According to BestBefore CEO Mark Rock, there‘s
an ―AudioBoo Pro‖ app in the works that will have Spinvox‘s service baked in, meaning
AudioBoo users will be able to upload longer recordings and get back text versions,
presumably via e-mail.
I absolutely can‘t wait for something like this, and I would be willing to pay real
money for it. If the transcriptions are any good, such a service would transform the way I
do my job, by allowing me to find more productive uses for the untold hours that I
currently spend transcribing audio recordings and/or cleaning up typewritten interview
notes.
But if all this digital-media acceleration leaves you a bit dizzy at times, I‘m with
you. Every week seems to bring a new mobile application or Web-based publishing
medium that promises to once again upset all of our expectations about communication,
journalism included. But I guess that‘s why I like my job—I get to ride the tiger by
writing about it.
59: Art Isn’t Free: The Tragedy of the
Wikimedia Commons
July 17, 2009
I came across a nice Isaac Asimov quote this week: ―No sensible decision can be
made any longer without taking into account not only the world as it is, but the world as
it will be.‖
The copyright dispute that went public this week between the UK‘s National
Portrait Gallery and the Wikimedia Commons is lodged firmly in the world as it is. Under
UK law, it seems pretty clear that the 3,000-plus high-resolution images that a Wikimedia
administrator copied from the museum‘s website and uploaded to the Commons are
copyrighted by the museum and are not, as the Wikimedia Foundation argues, in the
public domain.
But the case is making waves in the blogosphere because it‘s also about the world
as it will be. Digital technology is making it possible to share near-perfect copies of
priceless paintings and other cultural artifacts with anyone, anywhere, instantly. And
because the cost of this sharing is now practically zero, many people now believe the
information itself should also be free.
And perhaps it should. But unless we figure out a reasonable way to support the
institutions that spend lots of money to make these images—namely, museums—very
little of this material may actually be available for sharing in the future.
Free-culture activists are applauding Wikimedia for refusing to delete the disputed
images, but this isn‘t a simple Robin Hood story. If the Wikimedia Foundation prevails
and gets to keep the images, it could lead to an overall reduction in sharing. Don‘t get me
wrong—I‘m a public domain maniac. I‘d love to see as much of the world‘s heritage
digitized and freely shared as institutions can manage. But what I fear is that the episode
will prompt the National Portrait Gallery and other museums to either slow digitization
efforts or place greater restrictions on access to their digital collections in the future—or
both.
Let me back up and explain the Wikimedia case. The Wikimedia Commons is a
public file repository maintained by Wikimedia Foundation, the same non-profit
organization that runs Wikipedia. (Most of the images you see alongside Wikipedia
articles are stored in the Wikimedia Commons.) In March, a volunteer Wikimedia
administrator named Derrick Coetzee copied 3,300 high-resolution images from the
National Portrait Gallery‘s online database, some of them as large as 2400 x 3200
pixels, or about 8 megapixels. He then uploaded all of the images to the Wikimedia
Commons, where, for the moment, you can view them at your leisure—just be ready for
lots of lace and powdered wigs.
Coetzee, a U.S. citizen, hasn‘t spoken out about the case, so it isn‘t clear whether he
was merely trying to make it easier for others to see the portraits, or whether he was also
hoping to goad the National Portrait Gallery into a confrontation. But that was certainly
the effect. In April, the gallery‘s solicitors asked the Wikimedia Foundation to remove
the images. It refused, for reasons I‘ll get into momentarily. On July 10, the solicitors,
Farrer & Co. of London, turned to Coetzee himself, sending him a letter (which he
promptly posted on the Wikimedia Commons) threatening to seek injunctions and
damages unless Coetzee agrees to remove the images, delete all of his copies, and
generally keep off the Gallery‘s digital lawn.
The letter gave Coetzee until July 20 to comply. As of this writing, the images are
still online, so it‘s safe to assume Coetzee and the Wikimedia Foundation are digging in
their heels. He posted an update saying he‘s being represented in the case by Fred von
Lohmann, an intellectual property attorney for the Electronic Frontier Foundation (EFF).
The main question in the dispute is not about the portraits themselves, most of
which were painted more than 100 years ago and are indisputably in the public domain.
Rather, it‘s about who owns the digital images. Are they copyrighted by the National
Portrait Gallery, which went to the expense of hiring professional photographers to
document the original paintings, and should therefore (the solicitors argue) have the right
to control their distribution and collect licensing fees from anyone who reproduces them?
Or are they in the public domain and therefore owned by everyone—in which case the
gallery has no right to prevent sharing and reproduction of the works?
The Wikimedia Foundation has a firm stance on the question. The foundation says
its position ―has always been that faithful reproductions of two-dimensional public
domain works of art are public domain, and that claims to the contrary represent an
assault on the very concept of a public domain.‖ It‘s supported in this position by a 1999
New York District Court decision, Bridgeman Art Library v. Corel Corp., in which a
federal judge ruled that a photograph intended merely as a faithful copy of a work of art
lacks originality, and therefore isn‘t entitled to copyright protection. If the work itself is
in the public domain, under this interpretation, then the photograph is too.
The Bridgeman decision obviously doesn‘t have any force in the United Kingdom,
where the presumption is still that a photograph of a painting is copyrighted. Farrer & Co.
make this argument at length in their letter—and a coalition of UK museums called the
Museum Copyright Group goes even further, arguing that the Bridgeman ruling ―is of
doubtful authority even in the USA.‖ (District court decisions aren‘t necessarily binding
on other courts, and the Supreme Court has never ruled on the question.)
But assuming that courts in the UK, where the images were made and stored, will
find that Coetzee‘s actions are an open-and-shut case of copyright violation, it still won‘t
be simple for the National Portrait Gallery to enforce its claims, given that Coetzee is an
American citizen and that the Wikimedia Foundation‘s Web servers are in the United
States. Enforcing UK copyrights in the United States is ―possible to do, but it can be quite
expensive,‖ Struan Robertson, a copyright attorney at London-based law firm Pinsent
Masons, told the UK tech news site The Register.
So untangling all the legalities may take a while. That will give Coetzee, the
Wikimedia Foundation, and the EFF plenty of time to rally support around their
argument, which will likely be that the decision in Bridgeman should be the model for
copyright policy around the world, and that the National Portrait Gallery, by attempting
to assert its copyright in the images, is showing itself to be an enemy of the free exchange
of ideas.
But it would be unjust to paint the museum as the villain in all this. Before there are
any digital images to be exchanged, somebody has to make them—and that costs money.
The National Portrait Gallery says it has spent more than £1 million over the last five years to
digitize its collection, which now consists of more than 60,000 online images. Many
other museums are undertaking similar efforts, including Boston‘s Museum of Fine Arts
(MFA), whose collection of more than 160,000 online images is believed to be the
world‘s largest.
Having made high-resolution images of their treasured artworks, museum curators
would probably like nothing better than to give them away. But they can‘t afford to. As
you may have noticed, our non-profit cultural institutions aren‘t exactly swimming in
cash, especially now that the economic crisis has hit their usual donors so hard. For
art museums, licensing images for use on posters, T-shirts, book covers, calendars,
textbooks, and all the rest provides a vital revenue stream. If anyone could make a coffee
mug showing Van Gogh‘s Houses at Auvers without having to pay the MFA, that stream
would dry up, and the museum would have a harder time making its art available at all.
The National Portrait Gallery put the situation this way, in a statement e-mailed to
one inquirer: ―The Gallery supports Wikipedia in its aim of making knowledge widely
available and we would be happy for the site to use our low-resolution images, sufficient
for most forms of public access, subject to safeguards…The Gallery is very concerned
that potential loss of licensing income from the high-resolution files threatens its ability
to reinvest in its digitisation programme and so make further images available.‖
For the sake of argument, let‘s grant that the images Coetzee posted on the
Wikimedia Commons are in the public domain (as they would be if they‘d come from the
UK gallery‘s U.S. counterpart, the National Portrait Gallery at the Smithsonian
Institution). That still doesn‘t give Coetzee or the Wikimedia Foundation the moral
authority to copy and reproduce them at full resolution. In believing that they do have this
authority, they are likely falling into the Wikipedia mode of economic thinking.
Wikipedia works as a free global encyclopedia because it has found a way around
the ―free rider‖ problem. That‘s an economic situation in which the majority of users pay
nothing and consume far more than their fair share of a resource, while a minority do all
the work and feel insufficiently rewarded. As Chris Anderson observes in his new book
Free: The Future of a Radical Price, there is no free rider crisis on Wikipedia—in fact,
the more free riders, the better. That‘s because exposure to the huge audience that
Wikipedia provides is itself the reward for the small fraction of its users who are willing
to write and edit articles for no pay.
But high-resolution photos of museum portraits are not like Wikipedia articles.
They may lack originality, but the photographers and the institutions who make them
can‘t afford to do so for free, and the exposure that free distribution brings is not
sufficient compensation. (At least, it hasn‘t been sufficient in the past. I think there‘s a
case to be made that museums should be doing more to explore how giving away high-
resolution digital art might actually help them increase revenues in other ways. But that‘s
a topic for another column.)
By publishing thousands of National Portrait Gallery images on the Wikimedia
Commons, Coetzee has made all of us into free riders, with zero reward flowing to the
gallery. I‘m sure that he loves art and is committed to supporting the free exchange of
ideas, which ultimately leads to more art. And I‘m sure that many users of the Wikimedia
Commons will find the images he‘s uploaded enlightening. But there needs to be some
way for the National Portrait Gallery to benefit from digitization and online sharing, or
the result could be the very opposite of free exchange.
Museum curators don‘t want to be seen as the high priests of art, jealously guarding
access to their relics. They really want people to see, enjoy, and learn from the art under
their care. But one has to assume that after a few more episodes of piracy, museum
directors will either have to slap much stricter digital-rights management systems on their
online archives, or start facing harsh questions from their boards about why they‘re
spending so much money on digitization.
In the end, the shared cultural riches that all museum visitors draw upon might have
to be put behind thicker walls. They call that the tragedy of the commons, and it would be
a shame to see it affect such an important resource.
Addendum: After I finished this essay on Thursday, I discovered that Erik Moeller,
the deputy director of the Wikimedia Foundation, had just published a blog post critical
of the National Portrait Gallery‘s legal threats.
An excerpt: ―The Wikimedia Foundation sympathizes with cultural institutions‘
desire for revenue streams to help them maintain services for their audiences. And yet, if
that revenue stream requires an institution to lock up and severely limit access to its
educational materials, rather than allowing the materials to be freely available to
everyone, that strikes us as counter to those institutions‘ educational mission. It is hard to
see a plausible argument that excluding public domain content from a free, non-profit
encyclopedia serves any public interest whatsoever.‖
In my view, Moeller‘s logic is somewhat backward. It‘s clearly in the public
interest for museums to exist and to digitize their works. There would be no threat to the
revenue streams they earn from these digital materials—and therefore no need to lock
them up—if those who wished to reproduce the material observed common-sense limits,
or were willing to work with the museums to find some way for all parties to benefit.
As far as the Coetzee case itself, Moeller writes: ―The Wikimedia Foundation has
no reason to believe that the user in question has violated any applicable law, and we are
exploring ways to support the user in the event that NPG follows up on its original threat.
We are open to a compromise around the specific images, but our position on the legal
status of these images is unlikely to change.‖
Author's Update, February 2010: Coetzee and the Wikimedia Foundation did not
comply with the National Portrait Gallery's July 20 ultimatum; as of this writing, the
digital portraits are still online. In his reply to the gallery's lawyers on behalf of Coetzee,
the EFF's von Lohmann said the gallery's allegations were baseless, since the copying
took place in the United States, where the images are not protected under copyright law.
No further developments in the case have been publicized.
60: Project Tuva or Bust: How
Microsoft’s Spin on Feynman Could
Change the Way We Learn
July 24, 2009
―I don‘t know what‘s the matter with people: they don‘t learn by understanding,
they learn by some other way—by rote or something,‖ physicist Richard Feynman once
said. ―Their knowledge is so fragile!‖
Maybe Feynman‘s brain was big enough to simply ―learn by understanding‖—
sucking in and comprehending complex realities in a single glance. But what I think he
actually meant was that people should learn by exploring and investigating, rather than
just memorizing. Only then would their knowledge be useful and durable.
What makes Microsoft Research‘s new Project Tuva website so wonderful is not
just that it puts some of Feynman‘s most famous physics lectures online, but that it
invites viewers to explore the subject matter in exactly the way Feynman would have
recommended. The Caltech scientist was famous in part for his lucid way of
explaining things like gravity and quantum mechanics—so the lectures certainly stand on
their own as educational set-pieces. But the transcripts, note-taking tools, and multimedia
―extras‖ that now show up alongside the videos make the material even more
entertaining, accessible, and, well, explorable.
Project Tuva was unveiled last week. It‘s named after the central Asian country
Feynman famously and somewhat quixotically wanted to visit before he died. (He never
got permission from the Soviet Union, of which it was then a part, as his friend Ralph
Leighton chronicled in his 1991 book Tuva or Bust!) The site uses Microsoft‘s Silverlight
software, a Web-based multimedia player similar to Adobe‘s Flash platform, to showcase
a series of lectures that Feynman gave at Cornell University in 1964. The lectures were
filmed by the BBC for broadcast in the United Kingdom, and weren‘t available to Web
viewers until Microsoft chairman Bill Gates, a longtime Feynman admirer, purchased the
rights and asked Microsoft Research to find a way to host digital versions online.
―I said we could host them, but we could also do something much more interesting
with it,‖ says Curtis Wong, who leads a small division of Microsoft Research called the
Next Media Research group. I‘ve known Wong for years and I make a point of following
his work, because he‘s always got some great new idea about how to take a cultural
resource and increase its value through multimedia technology.
For the concepts behind Project Tuva, Wong told me by phone this week, he
reached back to three projects he led in the mid-1990s. The first was an interactive tour,
published on CD-ROM, of the Barnes Foundation‘s collection of Impressionist and post-
Impressionist paintings outside Philadelphia. The second was another CD-ROM about
Leonardo da Vinci, built around a digital facsimile of one of Leonardo‘s notebooks, the
Codex Leicester, which also happens to be owned by Bill Gates. (See this May 2008
column for more on those two projects.) The third was an interactive video documentary,
developed as a demonstration for PBS but never aired, in which the program‘s closed-
captioning information was interspersed with hyperlinks that led to related articles in
Microsoft‘s Encarta encyclopedia.
Each project represented a step in the development of what Wong calls his
information learning model for interactive media; it‘s also been called the ―contextual
pyramid‖ or ―ECR,‖ for engagement, context, and reference. It‘s a simple idea: first, you
hook someone—whether they‘re using a CD-ROM, watching a video, or visiting a
website or a museum—with a story or an object that produces an immediate emotional
impact. Then, at the very moment they‘re most engaged and curious, you offer them
context that broadens their understanding. Finally, you provide a deep reference layer, for
the people who get so intrigued that they want to know a lot more.
I‘d love to explain all the lovingly crafted ways in which the Barnes and Leonardo
CD-ROMs and the PBS demo implemented this model, but it would take too long. Jump
back to 2008 or so: as soon as Wong found out about Bill Gates‘ quest to put the
Feynman lectures online, he realized that they cried out for the same treatment. ―As you
can tell from Bill‘s opening video, he‘s really passionate about things like this that have
the potential to inspire a lot of kids about science,‖ Wong says. ―But if you watch the
lectures and you don‘t know anything about the particular topic, it can be a challenge,
especially if you don‘t recognize the names of the people Feynman is talking about.‖
In the first lecture alone, these include Tycho Brahe, Johannes Kepler, Galileo
Galilei, Isaac Newton, Henry Cavendish, and Albert Einstein, among others. ―I wanted to
think about taking those same ideas that we had earlier [for the interactive CD-ROM and
video projects] and putting them together with this idea for putting the Feynman lectures
online,‖ Wong says.
That meant finding material that complemented each section of Feynman‘s talks.
For help with that task, Wong turned to University of Washington physicist Stephen Ellis
and a group of undergraduates belonging to the UW Society of Physics Students. ―We sat
down with them and watched the lectures and I had them take notes of all the things that
could use classification, and also where I should look on the Web for good resources that
would help you as a student to understand. That formed the kernel of the ‗extras,‘‖ Wong
says.
Using Silverlight, he spread each of Feynman‘s lectures out against a digital
timeline. At the appropriate moments in this timeline, extras pop up on the right side of
the screen—they might be photographs, links to websites or Wikipedia articles, or special
text notes, penned by Ellis, that expand on some of Feynman‘s points. If you click on one
of the extras, the video automatically pauses while the feature opens in a pop-up window.
In the first lecture, which is all about gravity, some of the coolest extras take you to
Microsoft‘s WorldWide Telescope, another Curtis Wong production. It‘s an interactive
virtual planetarium with embedded multimedia tours revealing details about heavenly
objects such as the M81 galaxy or the Horsehead Nebula. (See this column for all the
details.) WorldWide Telescope was also built on the Silverlight platform, which means it
opens up right inside the Project Tuva player. ―I wanted to show the power of the kind of
simulation that you can bring into the narrative, and WorldWide Telescope was the thing
I had handy,‖ says Wong.
Another nifty feature of Project Tuva is a note-taking area on the left side of the
screen where you can type your own observations about the lectures, which are then
saved locally on your PC. Your notes get pegged to the timeline the same way the extras
are; the next time you watch the lecture, they‘ll pop up at the appropriate time. Your
notes, as well as the lecture transcript, are searchable.
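The timeline mechanics behind both the extras and the notes—annotations keyed to timestamps, surfacing as playback reaches them—can be sketched in a few lines. This is a generic illustration of the idea, not Microsoft's Silverlight code:

```python
# Generic sketch of timeline-pegged annotations like Project Tuva's "extras"
# and user notes: each is keyed to a time offset in seconds, and as playback
# advances we surface whichever ones fall inside the elapsed interval.
import bisect

class AnnotatedTimeline:
    def __init__(self):
        self._times = []   # sorted timestamps, kept parallel to _items
        self._items = []

    def add(self, seconds, payload):
        i = bisect.bisect(self._times, seconds)
        self._times.insert(i, seconds)
        self._items.insert(i, payload)

    def between(self, start, end):
        """Annotations due as playback moves from `start` to `end` seconds."""
        lo = bisect.bisect_left(self._times, start)
        hi = bisect.bisect_right(self._times, end)
        return self._items[lo:hi]

timeline = AnnotatedTimeline()
timeline.add(95, {"kind": "extra", "text": "Wikipedia: Tycho Brahe"})
timeline.add(312, {"kind": "note", "text": "Re-watch this derivation"})
due = timeline.between(90, 100)   # the Tycho Brahe extra comes due here
```

Because the same structure holds both kinds of annotation, your own notes pop up on replay exactly the way the curated extras do.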
The whole thing adds up to a platform for interactive learning that could obviously
be used to soup up almost any kind of content. MIT‘s OpenCourseware site, for example,
includes the complete lecture videos for dozens of undergraduate courses at MIT, and
would be a fantastic source of material for an expanded Project Tuva. I have no idea
whether the MIT videos are Silverlight-friendly, and it would be a big undertaking to
seed them with the appropriate extras. But that‘s the sort of thing you could probably get
a few undergrads to do for extra credit. Alternatively, you could crowdsource the task,
Wikipedia-style. ―We‘ve gotten a lot of feedback already,‖ Wong says, ―and that‘s what a
lot of people seem to want—they‘re saying ‗I want to see my stuff in there.‘‖
Alas, here comes the inevitable caveat: it‘s not clear whether or how Project Tuva
might be transformed into a general tool for educational video publishing. Bill Gates and
Microsoft are modern-day Medicis; they have so much cash that they can afford to spend
some on rare and remarkable productions like Project Tuva and WorldWide Telescope.
(The Barnes and Leonardo CD-ROMs were also artifacts of Gates‘s largesse, through
Corbis, the image archive he founded in 1989.) But Microsoft is, at bottom, a very
focused business organization, and Project Tuva doesn‘t fit with any of the company‘s
existing products. It‘s not that such ideas can‘t be commercialized—it‘s that Microsoft, as
an organization, often doesn‘t seem to have the breadth of mind to figure out how.
University of Washington computer science professor Ed Lazowska summed it up
well in a comment on Greg‘s April story about the downsizing of Microsoft‘s Live Labs,
which had been working on some amazingly ahead-of-their-time user-interface advances
like Seadragon and Photosynth. ―A drawback of Microsoft‘s ‗product group‘ structure,‖
Lazowska said, ―is that if something doesn‘t fit directly within the domain of a specific
product group, its value may not be recognized.‖ I‘m afraid that‘s exactly what will
happen to Project Tuva. In the end, it seems that the role of Wong and his colleagues at
Microsoft Research is merely to propose, while the product groups dispose.
But Wong, for his part, sounds optimistic: ―I‘m hoping that some of these ideas will
inspire the product groups to think about new markets that they might present to them,‖
he says. In that spirit, I‘ll close, as I opened, with a quote from Feynman: ―I don‘t know
anything, but I do know that everything is interesting if you go into it deeply enough.‖
Author's Update, February 2010: As I feared, it appears as if Project Tuva was
another Microsoft one-off. The company hasn't fed any new content into the platform
since the Feynman lectures went online.
61: Shareaholic Becomes the Link-
Sharing Tool of Choice—And Builds a
Vast Database on Social Media Behavior
July 31, 2009
Blogging is about active sharing. I‘ve known this on an intellectual level for years,
but working for Xconomy has made the idea very real to me. My stories reach far more
readers if I take a few extra minutes every day to share the items with my e-mail contacts
and Twitter followers, and to submit links to places like Slashdot and Y Combinator‘s
Hacker News. And after all, if nobody is aware that you posted something, what was the
point of writing it?
Of course, it‘s not just my own stories and other Xconomy articles that I share. I
find loads of cool stuff across the Web every day, and Twitter is a great vehicle for
sharing the joy with people who share my tastes.
This week I started using a browser plugin called Shareaholic that makes all of this
active sharing much easier, by providing a single button that connects me instantly to
more than 60 sharing services including social bookmarking, blogging, publishing, and
other tools. Shareaholic isn‘t new—in fact, it‘s the most widely distributed browser
plugin for sharing, with over a million downloads so far. It‘s been trendy among the
digerati at least since February 2008, when its inventor, Jay Meattle, was one of three
grand prize winners in a Mozilla-sponsored contest designed to highlight the coolest new
Firefox extensions. But somewhat embarrassingly, I only learned about it recently, when
Meattle gave a presentation at the July Web Innovators Group meeting in Cambridge,
MA.
Meattle came by Xconomy‘s palatial new offices recently to tell me more about
Shareaholic, which has grown from a plugin into a full-fledged startup based in
Cambridge. He showed me how easy it is to configure the free tool to submit whatever
Web page you‘re looking at to Digg, Facebook, Reddit, Twitter, Techmeme, Delicious,
StumbleUpon, and about three dozen other social networking and bookmarking sites and
news aggregators. You can also use it to share your discoveries with yourself, by sending
them to online notebook services like Posterous or Evernote (my personal favorite) or
your blog on LiveJournal, Blogger, or Tumblr.
And, of course, you can e-mail links to yourself or to others via Gmail, Hotmail, or
your default e-mail client. In fact, Meattle says the whole idea for Shareaholic came from
conversations with a colleague named David Cancel who, like Meattle, was tired of
having to copy URLs from the browser address bar and paste them into e-mails when the
pair was sharing Web materials with one another. Cancel is the co-founder and CTO of
San Francisco- and Cambridge-based Lookery, a targeting service for online ads where
Meattle was, until four months ago, the vice president of products. The pair also worked
together on the founding team of Compete.com, a Boston-based Web traffic analysis firm
sold last year to marketing giant TNS.
For Meattle, it was a simple matter to write some software that would automatically
grab a link from the Firefox URL bar and dump it into a new outgoing e-mail message.
And over time (meaning, working nights and weekends until a few months ago) Meattle
has been able to make Shareaholic work on multiple browsers—Firefox, IE, Safari,
Chrome, Flock, and even Songbird, the Mozilla-based open-source answer to iTunes—and
communicate with practically every Web 2.0-era sharing service that has a public API, or
application programming interface.
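At its core, this kind of sharing tool grabs the current page's title and address and hands them off to a service via a URL. A minimal sketch of that handoff in Python—the endpoints shown are illustrative patterns, not Shareaholic's actual integration code:

```python
# Minimal sketch of link-sharing via URL construction: take a page's title
# and address and build a handoff URL per service. Endpoints here are
# illustrative patterns, not Shareaholic's actual integrations.
from urllib.parse import quote, urlencode

def share_links(title, url):
    return {
        # mailto: is defined by RFC 6068; subject and body ride in the query.
        "email": "mailto:?" + urlencode(
            {"subject": title, "body": url}, quote_via=quote),
        # A web-intent-style endpoint that pre-fills a post for the user.
        "twitter": "https://twitter.com/intent/tweet?" + urlencode(
            {"text": f"{title} {url}"}, quote_via=quote),
    }

links = share_links("Pixel Nation", "http://xconomy.com/pixel-nation")
```

The browser plugin's job on top of this is mostly plumbing: reading the live tab's title and URL, and opening the chosen handoff link (or calling the service's API directly when one exists).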
But why go to all this trouble to provide a free tool that generates no direct
revenue? To answer that, all you have to do is look at Meattle‘s recent history as an
entrepreneur. At bottom, both Compete.com and Lookery are about collecting and selling
data that helps other companies understand and predict consumers‘ behavior online. And
as it turns out, every time a Shareaholic user activates the tool, Meattle collects a few bits
of (anonymous) data about what Web content people think is worth sharing, where it
came from, and where it was shared to. At a million sharing actions per month, that data
is piling up quickly—and it‘s just begging to be monetized.
―At Compete, it took us millions of dollars to get to this point, and here we‘ve
already built a huge data set‖ on a bootstrapped budget, Meattle says. (Shareaholic is just
raising its first round of angel funding now.) ―It was a happy accident,‖ he says. ―We
didn‘t do this to get the data.‖ But now that the company has it, it could do any number of
things with it.
Meattle predicts, for example, that the new field of ―social media optimization‖—
services that help publishers and advertisers make the best use of social media tools like
Twitter and Facebook—has the potential to become just as big and important as search
engine optimization and search engine marketing. (In that sense, Shareaholic joins a
growing cluster of firms in the Boston area that specialize in new forms of online
marketing, including Hubspot and Crimson Hexagon.) Getting an inside look at
Shareaholic‘s data about what content is being shared most often, on what platforms,
would be any social media marketer‘s dream.
Up to now, Meattle says he has focused on making Shareaholic powerful yet easy to
use. The next six months, he says, will be spent experimenting with various business
models. ―It could be any of five different things,‖ he says. ―But it comes back to the
philosophy that if you build a good product, and keep your users number one, good
things are going to happen.‖
Shareaholic does have some competition. There‘s ShareThis, a social bookmarking
tool used by many online publications, including Xconomy. And as Greg wrote a couple
of weeks ago, a Bellevue, WA, startup called Sharein.com is entering the same territory.
But as far as I can tell, Shareaholic offers connections to far more sharing services than
any of the competing tools.
If it has a drawback, it‘s that it‘s a bit impersonal, and only works as well as the
services that it connects you to. If you want to tweet about something using the tool, for
example, it connects you to TwitThat, which automatically formats the headline of the
article you‘re reading and shortens the URL, but doesn‘t let you add commentary. When I
tweet, I like to give my followers a bit of insight beyond what‘s in a headline, so for
important stuff I‘ll probably keep tweeting manually using Tweetdeck.
But Shareaholic is so convenient that I‘ve already gotten rid of the row of separate
browser bookmarklets that I used to use to connect to Tumblr, Evernote, and the like. In
fact, it‘s so much fun using the tool that I now have to remind myself, once in a while, to
stop spreading the news and go write something worth sharing.
62: Startups Give E-mail a Big Boost on
the iPhone with ReMail and GPush
August 14, 2009
As a device for managing your e-mail, the Apple iPhone isn‘t bad, but it does have
a few quirks and limitations. This week, I want to write about two brand-new applications
that work around those failings, making the iPhone into a far more powerful tool for
staying connected.
The first app grabbed my attention because of my recent brush with almost-literal
highway robbery. My drive to Michigan last week to visit my parents took me through
southern Ontario. Soon after I crossed over Buffalo‘s Peace Bridge into Fort Erie, this
astonishing little SMS message popped up on my iPhone: ―AT&T Free Message:
International data rate of $15.00/MB applies. Unlimited domestic data rate plan does
NOT apply outside the U.S.‖
I immediately put my phone into airplane mode, fearful of receiving any more SMS
messages or e-mails, which, at $15 per megabyte, would have cost me more than the gas
I was burning. That meant I was effectively off the grid during the four hours it took to
cross this little corner of Canada. I survived the hardship—but the experience did
highlight the problem that outrageous roaming charges can pose for travelers who use
mobile e-mail a lot.
As it happens, a new app called reMail can take some of the sting out of this
dilemma. It went live in the iTunes App Store yesterday, and I learned about it from
Jessica Livingston at Y Combinator, the California venture incubator where reMail got its
start. ReMail stores your entire e-mail archive on your iPhone, which means you can read
your messages without ever having to go online. You can‘t do that with the iPhone‘s
built-in mail application, which only keeps the last 50 messages. ReMail also lets you
search the full text of all your messages—which, again, the built-in mail app can‘t do. (In
a recent update, Apple added a search function to the mail app that can scan older
messages stored in the cloud, but it‘s limited to the subject line and the sender and
recipient addresses.)
―I live in e-mail while I‘m traveling—all my meetings are scheduled via e-mail,‖
says Gabor Cselle, the founder of San Francisco-based NextMail, the one-man startup
behind reMail. ―So I need access to my e-mails, all the time. Building an app which
would let me take all my e-mail with me seemed like a good idea. And it‘s saving me
money.‖
I‘ve been testing reMail, and so far it‘s working exactly as advertised. The app
connects to your Web-based e-mail account—it works with Gmail and any IMAP-
enabled e-mail service—and sucks down your entire e-mail archive. That process can
take a while (reMail spent about eight hours downloading the 78,000 messages in my
Gmail archive) but the upside is that you only have to do it once. After that, each time
you start the app, it just grabs your most recent messages.
What‘s amazing about reMail is that it uses a relatively small amount of your
iPhone‘s memory. My 78,000 Gmail messages are taking up about 4.3 gigabytes of space
on Google‘s servers. But the reMail database on my iPhone is about one-tenth that size:
432 megabytes. ―Compressing your e-mails down to a size that people would find
acceptable‖ was one of the three biggest technical hurdles to making reMail work, Cselle
says. Exactly how he pulled that off is ―a state secret,‖ he jokes, but part of the solution
was to grab just the text of each message, not attachments, which take up about 70
percent of the storage space at Gmail, according to Cselle.
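Cselle keeps his compression scheme secret, but the text-versus-attachments split he describes is easy to illustrate. Here is a minimal Python sketch (my own illustration, not reMail‘s code) that keeps just the plain-text parts of a message and skips attachments, using the standard library‘s email package:

```python
# Illustration only: extract just the text of a message, skipping attachments.
from email import message_from_bytes
from email.message import EmailMessage

def text_only(raw_bytes):
    """Return the plain-text parts of a raw message, ignoring attachments."""
    msg = message_from_bytes(raw_bytes)
    chunks = []
    for part in msg.walk():
        if part.get_content_disposition() == "attachment":
            continue  # skip PDFs, JPGs, and other attached files
        if part.get_content_type() == "text/plain":
            payload = part.get_payload(decode=True)
            if payload:
                charset = part.get_content_charset() or "utf-8"
                chunks.append(payload.decode(charset, errors="replace"))
    return "\n".join(chunks)

# Build a demo message with a text body and a (fake) image attachment.
demo = EmailMessage()
demo["Subject"] = "Trip photos"
demo.set_content("Here are the photos from Fort Erie.")
demo.add_attachment(b"\x89PNG...", maintype="image", subtype="png",
                    filename="bridge.png")

body = text_only(demo.as_bytes())  # the attachment bytes are left behind
```

Storing only what a function like this returns, then fetching attachments lazily on first tap, would yield roughly the kind of savings Cselle describes.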
―We ‗lazy load‘ attachments,‖ he says, adding, ―We download them to your iPhone
when you first click on them, and then keep it there permanently. Once open, you can be
confident that you‘ll have that PDF or JPG with you wherever you go.‖ Of course, the
more attachments you download, the more space reMail will take up on your phone.
The only problem I‘ve experienced with reMail is that it sometimes fails to connect
with Gmail, but I suspect the problem is on Google‘s side—lately I‘ve been seeing all
sorts of server errors and delays with Gmail on the Web, too. (What‘s up with that,
Google?)
Cselle says he got the idea for reMail because his parents live in Switzerland, and
every time he visits them, he gets the same AT&T text message about data roaming
rates—except that the rates are even higher in Europe, at $19.97 per megabyte. ―AT&T
must be printing money with this,‖ he says.
Cselle should know a bit about printing money: he‘s a former Google software
engineer who worked alongside Paul Buchheit and Sanjeev Singh, the inventors of
Gmail. In fact, Buchheit and Singh—who went on to co-found FriendFeed, which was
acquired by Facebook this week—are NextMail‘s primary angel investors. After Google,
Cselle spent some time as vice president of engineering at Xobni, the San Francisco
startup that built a search utility for Microsoft‘s Outlook e-mail program. He started
building the reMail app while participating in Y Combinator‘s venture incubator program
in Mountain View, CA, last winter.
The other two big technical hurdles Cselle had to overcome, by the way, were
programming reMail to quickly search the full text of e-mail archives on the iPhone, and
making the app work with all types of e-mail servers, which can use different versions of
the IMAP Internet mail protocol. ―There‘s a lot of technology in this product,‖ Cselle
says.
Which is what makes him comfortable about the app‘s relatively high price: $4.99
right now, going up to $9.99 on September 1. That‘s a lot more expensive than most apps
in the iTunes App Store, and I asked Cselle why he decided against a lower price, or
going the ―freemium‖ route with a free basic version and a full-featured premium
version. ―We beta tested this app with about 100 people,‖ he says. ―We asked them what
they would pay and got responses that ranged much higher than the $4.99 we‘re pricing
this at…So we‘re comfortable pricing reMail like this.‖
Surprisingly, given Apple‘s recent reputation for rejecting apps that perform
functions that compete with (or highlight the shortcomings of) the iPhone‘s built-in
capabilities, Cselle says reMail ―sailed through‖ the iTunes App Store approval process
in only two weeks. Josh Lowensohn over at CNET speculates that Apple is getting ready
to launch its own version of full-text e-mail search, which would eventually make reMail
unnecessary and may explain why Apple doesn‘t regard the app as a competitor.
But it‘s even more surprising that Apple approved GPush, the second iPhone app
that I want to tell you about. GPush makes up for one of the inherent flaws in the
iPhone‘s built-in e-mail system, which is that it can‘t ―push‖ e-mail to your phone if
Gmail is your primary e-mail service. Gmail messages are ―fetched‖ rather than
pushed—meaning they sit on Gmail‘s servers for 15 minutes or more before the iPhone
grabs them. You can get push e-mail on the iPhone if you switch to Apple‘s $99-per-year
MobileMe service or if your company has an Exchange server. But until GPush came
along, Gmail users were out of luck.
The creation of Cambridge, MA-based startup Tiverias Apps, GPush costs just
$0.99 and actually went live in the iTunes App Store on August 8. But the company had
to withdraw the app from the store almost immediately after an unexpected crush of users
brought its servers to a near-halt and exposed an architectural flaw. Co-founder Yoni
Gontownik tells me the company is fixing the problems, and plans to make the app
available again today. [Update, 10:00 a.m., August 17, 2009: The release was delayed
over the weekend, but the app is now available in the App Store.]
GPush makes use of the new push notification functions included in the 3.0 version
of the iPhone‘s operating system. Whenever someone sends a new message to your
Gmail account, a little note resembling a text message pops up on your iPhone‘s unlock
screen, showing the sender and subject of the message. You can then open the regular
mail app to read the full message. That‘s all there is to it—once you‘ve entered your
Gmail login information into the GPush app, you never have to open it again. (For geeks
only: Behind the scenes, Tiverias‘s servers are creating a persistent IMAP connection
with Google‘s Gmail servers. When you get a new message, Gmail notifies Tiverias,
which notifies Apple, which pings your iPhone.)
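The key ingredient in that pipeline is the IMAP IDLE command, which lets a client hold a connection open while the server announces new mail with untagged ―EXISTS‖ responses. Here is a toy Python sketch of just that watching step; it is my illustration of the general technique, not Tiverias‘s actual code, and a real service would follow it by calling Apple‘s push notification service:

```python
# Illustration: detect new-mail announcements in raw IMAP IDLE responses.
import re

EXISTS = re.compile(rb"^\* (\d+) EXISTS\r?$")

def new_message_count(idle_lines, last_count):
    """Scan lines read during IDLE; return how many new messages arrived."""
    count = last_count
    for line in idle_lines:
        m = EXISTS.match(line)
        if m:
            count = int(m.group(1))  # server's latest total message count
    return count - last_count

# The mailbox held 42 messages at login; during IDLE the server announces 44.
arrivals = new_message_count([b"+ idling\r", b"* 44 EXISTS\r"], last_count=42)
```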
The recent rejections of the native iPhone apps that Google developed for location-
sharing and voice-mail handling (Latitude and Google Voice, respectively) have
contributed to the impression that Apple won‘t allow any application onto the iPhone that
competes with built-in apps or impinges on businesses Apple may one day develop. And
these are far from the only examples: Apple also blocks mobile browsers like Opera that
would compete with Safari. Given that GPush gives Gmail users a way to get push
notifications without paying $99 a year for Apple‘s MobileMe service, the approval of
GPush is more than a bit confounding.
Gontownik speculates that after the storm of protest generated by the Google Voice
rejection, ―there‘s a lot of pressure on [Apple] to accept any app that could be considered
controversial.‖ My own theory—and I hope it‘s wrong—is that whoever approved the
app at Apple didn‘t get the memo about competing apps, and that once GPush has gotten
more publicity, Apple will decide that it doesn‘t like the app after all.
But for the moment, GPush is in Apple‘s good graces. Gontownik and co-founder
Eliran Sapir assure me that the overload issues that brought down Tiverias‘s servers last
week are being fixed, using software called Scaler that moves most of the work to cloud
servers at Amazon. So visit the App Store and give GPush a try—before Apple changes
its mind.
63: Why It’s Crazy for Authors to Keep
Their Books Off the Kindle
August 21, 2009
In June, I wrote a column about the problem of ―On Demand Disorder‖—my name
for the narrowing of vision that can occur when people get addicted to the instant
experiences available over the Internet and other digital media. If you only listen to the
music you can find on iTunes or Pandora or Last.fm, if you only watch movies from
Netflix, if you only buy books listed at Amazon, or if you only go to restaurants included
on Yelp or UrbanSpoon or OpenTable, I argued, you‘re probably suffering from ODD—
and missing out on a lot of great non-digital culture.
So it was a little hypocritical of me to get into a snit one weekend in July, when I
discovered that a new book I wanted to read, Ellen Ruppel Shell‘s Cheap: The High Cost
of Discount Culture, was not available for download on my Amazon Kindle 2 e-book
device. In frustration, I banged out the following Twitter post:
It‘s come to this: I want to read Ellen Ruppel Shell‘s ‗Cheap,‘ but there is no Kindle
edition. Wait 3-5 days? Buy at store? Fail.
More or less instantly, one of my Twitter followers, Siva Vaidhyanathan, called me
on it. Vaidhyanathan is a cultural historian and media scholar at the University of
Virginia who has written two books about copyright, and is working on another called
The Googlization of Everything: How One Company Is Disrupting Commerce, Culture,
and Community…And Why We Should Worry. He replied:
@sivavaid to @wroush: wow. That‘s sure disrespectful to people who spend years
writing books and oppose DRM. I hope impatience is working for you.
Over the course of the next few hours, Vaidhyanathan and I engaged in the
following Twitter conversation:
@wroush to @sivavaid: No disrespect intended to authors. When books are print-
only, it impedes the flow of ideas. How does that help anyone?
@sivavaid to @wroush: yet somehow we got monotheism, reformation, scientific
revolution — all without Kindle! Amazing!
@sivavaid to @wroush: besides, only rich old people have Kindles.
@wroush to @sivavaid: It‘s bad business. Publishers who bypass Kindle are
turning away sales & opting not to engage with their most valuable readers.
@sivavaid to @wroush: How do you know how many sales are lost? Amazon won‘t
say. Do you think publishers are dumb? Negotiations with Amazon are brutal.
@wroush to @sivavaid: But Amazon *does* say: When a book is available on
Kindle, 35% of buyers choose that format. I think publishers are scared, not dumb
@sivavaid to @wroush: stat says nothing about mkt penetration of Kindle. Secret
because inconveniently small. BTW, not 35% of every book. Sly.
@sivavaid to @wroush: publishers not scared of Kindle. Pubs can‘t get a good deal
from Amazon. Scared NOT to bow to Amazon strong-arm tactics. Talk 2 them
At that point, I began to sense that neither of us was having much luck winning the
other around to his position. We wound down with:
@wroush to @sivavaid: Clearly we won‘t agree on this. I will expect to see a
chapter in your book on the Amazonization of everything.
@sivavaid to @wroush: I guarantee no one will ever write a book exposing
Amazon‘s machinations!
Now, I‘m willing to admit that my original tweet was glib. My use of ―Fail,‖ in
particular, implied more derision than I really felt. (The New York Times published a
great column two weeks ago about the etymology of this peculiar interjection). And it
probably wasn‘t fair to pick on Shell‘s book; Ellen is actually an old acquaintance, and I
have no idea why her book wasn‘t initially available for the Kindle. In any case, it is now.
But many books still aren‘t. And if Vaidhyanathan really thought I was being
disrespectful toward authors, he had me all wrong. If anything, I was trying to help
authors by pointing out that there is now a population of prospective readers, myself
included, who are conditioned to look for the digital version of a book first, and who are
far more likely to buy it if it is available at the moment they need it—and, conversely, far
less likely to buy it if they have to drive to a bookstore or a library or wait for the postal
service to deliver it. Indeed, as I‘ve said before, the genius of the Kindle is not its e-paper
screen (although that‘s cool, and is the product of some serious technical innovation)—
it‘s the ability to download books and newspapers almost instantaneously via Amazon‘s
Whispernet wireless network. The Kindle is making it far easier to indulge my reading
habit, and I know I‘m buying more books now than I did before I got it.
Nonetheless, there are some understandable reasons why authors and publishers
might be wary of the Kindle. One, as Vaidhyanathan mentioned, is Amazon‘s approach
to digital rights management (DRM). Following Apple‘s lead with the iPod, Amazon has
chosen to use a proprietary file format for the Kindle, meaning that Kindle editions can‘t
be read on other devices—the exception being the iPhone, for which Amazon has
released a Kindle app. Nor can e-books formatted using popular open standards like epub
be read on the Kindle without tortuous manual preparation. Also like Apple, Amazon
makes sure that it is the sole conduit to the device: you can only buy Kindle editions
through Amazon, and while it‘s possible to transfer your own Word, PDF, or HTML files
to the Kindle, you have to do so by e-mailing them to Amazon‘s servers, which encode
them for the Kindle and transmit them back to you via e-mail or directly to the device
over Whispernet for $0.15 per megabyte.
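That transfer workflow is just ordinary e-mail with the document attached. Here is a minimal Python sketch, with placeholder addresses (your actual Kindle address comes from your Amazon account settings, and the SMTP call is left commented out):

```python
# Illustration: build the e-mail that carries one personal document to a Kindle.
import smtplib  # only needed for the (commented-out) actual send
from email.message import EmailMessage

def kindle_message(sender, kindle_addr, filename, data):
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = kindle_addr  # e.g. yourname@kindle.com (placeholder)
    msg["Subject"] = "Personal document"
    msg.set_content("Sent to my Kindle.")
    msg.add_attachment(data, maintype="application", subtype="octet-stream",
                       filename=filename)
    return msg

msg = kindle_message("me@example.com", "me@kindle.com",
                     "draft.html", b"<h1>Chapter One</h1>")
# with smtplib.SMTP("smtp.example.com") as s:  # your own outgoing mail server
#     s.send_message(msg)
```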
Then there‘s the pricing issue. Most Kindle books cost $9.99, which is often $3 to
$8 below Amazon‘s already heavily discounted prices. Vaidhyanathan is correct that
Amazon doesn‘t share information about how e-books are priced or how many are sold—
so it‘s actually hard to tell how much of the Kindle discount is coming out of the pockets
of authors and publishers, and how much is being absorbed by Amazon in the form of
lower profits (or even losses) on Kindle editions.
But I have a hard time buying Vaidhyanathan‘s contention that publishers are
―scared not to bow to Amazon‘s strong-arm tactics.‖ When authors and publishers
demanded that Amazon give them the option to turn off the text-to-speech feature on the
Kindle 2, Amazon folded virtually overnight. (Don‘t even get me started about the
monumental foolishness of the Authors Guild‘s contention that readers should not be
allowed to hear their books read aloud by a computer voice unless authors get a cut. The
inanity of this idea has almost caused me to part company with the otherwise entertaining
Roy Blount Jr., president of the guild.)
If Vaidhyanathan and I were having our little Twitter debate now, instead of early
July, he would probably also mention the now-famous 1984 incident, in which Amazon
remotely deleted copies of George Orwell‘s Animal Farm and 1984 from customers‘
Kindles after it discovered that the publisher did not have the rights to the titles. I don‘t
think the episode is worth harping on, given that Amazon‘s Jeff Bezos has apologized
profusely for what he called the company‘s ―stupid‖ and ―thoughtless‖ decision to handle
the problem by meddling with books people had already purchased. But that hasn‘t
stopped the Free Software Foundation from adding Amazon to its Defective By Design
campaign, which targets media companies that use DRM, and calling for Bezos‘s
―impeachment.‖
Authors and publishers are free to take a stand on any of the issues above by
excluding their books from the Kindle platform. What I‘m saying is that doing so
amounts to cutting off one‘s nose to spite one‘s face. Sure, you can quibble with the
details of the Kindle publishing system. But if you are an author and your book is not
available for the Kindle or the other existing and emerging e-book platforms, you are in
effect telling your readers that their convenience is of no import to you; that you would
rather your book not be read at all than that you should have to suffer at the greedy hands
of the e-retailers.
You‘re also forgoing real earnings for the sake of—what, exactly? Perhaps you are
waiting for someone else to build a convenient, scalable, affordable system for getting e-
books to hundreds of thousands of readers, and then offer you a larger cut of the
proceeds. Who‘s going to do that—Google? Microsoft? Hearst? Rupert Murdoch? I don‘t
think so.
It‘s important to keep up the pressure on Amazon to make the Kindle as open as
possible. But I think it‘s also important to be realistic about the economic implications of
the larger digital revolution that the Kindle embodies. There is no reason for a digital
book to cost as much as a print book. (Even the $9.99 level is unsustainably high, in my
opinion.) And as Chris Anderson and others have been pointing out for years now, the
old pricing and distribution models are breaking down across the world of consumer
goods and services; novelists, journalists, musicians, and other creators can‘t expect to be
compensated in the same old ways they‘re accustomed to. The way forward is not to
withdraw your work from circulation. It‘s to figure out what people want and need, and
then decide how you can uniquely meet that need.
P.S. Closely related to the Kindle question is the debate over the proposed legal
settlement between Google, the Authors Guild, and the Association of American
Publishers over the Google Book Search project, and in particular, whether authors
should participate in the settlement or withdraw while they still can. Amazon, Microsoft,
the Internet Archive, and other organizations have come out against the settlement, which
I‘ve also criticized in the past. I will revisit that subject in a future column.
64: A Manifesto for Speed
August 28, 2009
My favorite limerick of all time came printed on the bottom of a coffee cup:
All hail the goddess Caffeina!
She hangs out by the coffee machina.
We‘re all on the run
But we get more work done
Since coffee came onto the scena!
Yes, this anonymous ditty breaks the rules of limericks, principally by mangling the
meter and using made-up words like ―machina‖ and ―scena.‖ But it‘s the sentiment that
appeals to me. I do get more work done because of coffee. If the sprightly elixir was good
for Voltaire, who is said to have consumed 50 cups a day, I figure it must be good for me.
I also get more work done because of e-mail. And because of the Web, and RSS
feeds, and Google, and Twitter, and my iPhone and my MacBook and my Kindle—all of
the tools, in short, that are melting our brains and impoverishing our communications,
according to a circle of naysayers who have been very busy lately publishing books and
articles with titles like Digital Barbarism and The Cult of the Amateur and ―Is Google
Making Us Stupid?‖ Technology criticism is an invaluable strain in our culture that
stretches back to such brilliant writers as Lewis Mumford, Rachel Carson, Marshall
McLuhan, and Jane Jacobs. But to tell the truth, I don‘t give much more credence to the
recent anti-digital jeremiads than I do to the periodic warnings—always swiftly
overturned by medical authorities—that caffeine is bad for your health.
The latest addition to the curmudgeon‘s club is John Freeman, the acting editor of
the UK-based literary quarterly Granta, who published a so-called ―manifesto for slow
communication‖ in the August 21 Wall Street Journal. The essay, which was adapted
from Freeman‘s forthcoming book The Tyranny of E-Mail, argues that living in such
close and constant proximity to our e-mail inboxes stresses us out, cuts us off from the
physical world, and undermines our communication skills. Freeman thinks that spending
all day writing and answering e-mail amounts to ―simulated busyness‖ rather than
genuine productivity. And he believes that the only way to restore sanity is to ―step off
this hurtling machine,‖ jabber less, and think more. ―We need to learn to use [e-mail] far
more sparingly, with far less dependency, if we are to gain control of our lives,‖ Freeman
writes.
There are certainly days when I‘d love to ignore my e-mail. Thursdays, for
example, when I‘m supposed to be writing this column. As Freeman rightly notes, ―We
need time to shape and design and filter our words so that we say exactly what we mean,‖
and it would be wonderful, on those days, to have a few uninterrupted hours to take his
advice. But I know that closing the e-mail tab in my browser would be as unwise as
hitting the snooze button on my alarm clock. I‘d just have to deal with the consequences
later, in the form of a larger stack of urgent, unanswered messages. It‘s true, as Stephen
Covey observed, that what‘s urgent is not always the same as what‘s important—but
some e-mails are both.
So I don‘t think Freeman‘s exhortation to use e-mail more sparingly is very
practical. But I‘m having trouble dismissing his essay from my mind. As far as I can
figure out, it‘s stuck there for three reasons.
1. Freeman’s critique of e-mail amounts to an attack on a whole way of life
(mine—and probably yours).
I don‘t merely depend on broadband, e-mail, content management systems, search
engines, cell phones, and the like—I write about them. In fact, I believe they are
emblematic of the exponential technological advances that are making knowledge more
accessible, improving health, extending lifespans, and freeing more people than ever
before from manual drudgery and allowing them to engage in creative work. This
exponential change is where we got the ―X‖ in Xconomy, and it is ultimately the main
force that will lift us out of recession into yet another cycle of innovation and growth.
So when Freeman writes that ―only two things grow indefinitely…cancer and the
corporation,‖ I sputter with incredulity. It‘s clear that industrial societies have hit on at
least two other wonderful things that have sustained indefinite growth, at least for decades
now: semiconductor technology and, even more important, a system of organizing people
and capital around ideas and transforming them into products and services that the market
wants—that is to say, industrial R&D and the venture-funded startup model. This system
works as well as it does in part because digital communications have reduced the delays,
frictions, and inefficiencies built into any cooperative endeavor. All I can say is that
without e-mail, and lots of it, there‘s no way that most Web companies—including
Xconomy, with our small staff and three bureaus located thousands of miles apart—could
operate.
2. Freeman throws the baby out with the bathwater.
―Efficiency may be good for business and governments but does not always lead to
mindfulness and sustainable, rewarding relationships,‖ Freeman writes. Fair enough. But
what if being more efficient about our unavoidable work tasks is the very thing that gives
us time later to be mindful and to invest in real relationships? What I‘m suggesting is that
e-mail itself isn‘t the problem. The problem is that most people are never taught how to
handle it efficiently.
In his book Bit Literacy, which I cited back in February in a column on e-mail
overload, user experience consultant Mark Hurst acknowledges that ―Bits are
heavy…[they] weigh people down, mentally and emotionally, with incessant calls for
attention and engagement.‖ But unplugging from e-mail and making ―Don‘t send‖ your
mantra, as Freeman advises, is an entirely unwarranted response to this problem. There‘s
no need to stop using e-mail, or even significantly ration its use, when there are relatively
simple ways to manage your inbox so that you don‘t feel like you‘re constantly
overwhelmed.
Hurst advocates a ―zero-inbox‖ strategy: dealing with every e-mail in your inbox
and emptying it out at least once a day. To briefly repeat Hurst‘s method for doing this
(the details are in the February column), he suggests responding first to e-mail from
family and friends, then deleting all spam and ―FYI‖ e-mails, then acting on every work-
related e-mail request that can be dispatched in two minutes or less, and finally
transferring all of the remaining requests onto a to-do list and addressing them later.
While this may leave you with a swelling to-do list, it will at least give you the daily
experience of what I called ―the bliss of an empty inbox.‖
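Hurst‘s four rules are mechanical enough to write down as a tiny sorting function. This sketch is my own illustration (the message fields are invented), but the ordering follows the method above:

```python
# Illustration: the zero-inbox triage order, applied to one message at a time.
def triage(msg, friends):
    if msg["sender"] in friends:
        return "reply now"                 # 1. family and friends first
    if msg["kind"] in ("spam", "fyi"):
        return "delete"                    # 2. clear the noise
    if msg["minutes_to_handle"] <= 2:
        return "do now"                    # 3. the two-minute rule
    return "to-do list"                    # 4. defer everything else

friends = {"mom@example.com"}
inbox = [
    {"sender": "mom@example.com", "kind": "personal", "minutes_to_handle": 5},
    {"sender": "list@example.com", "kind": "fyi", "minutes_to_handle": 1},
    {"sender": "boss@example.com", "kind": "work", "minutes_to_handle": 1},
    {"sender": "pr@example.com", "kind": "work", "minutes_to_handle": 30},
]
actions = [triage(m, friends) for m in inbox]
```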
3. Slower isn’t necessarily better. Sometimes, it’s just slower.
The society that abandons itself to e-mail and other forms of instant communication
may indeed fetishize speed and falsely equate it with efficiency, as Freeman charges. But
to my eyes, Freeman and other critics of modern Internet technologies romanticize
slowness and falsely equate it with wisdom and fulfillment.
―The difference between typing an e-mail and writing a letter or memo out by hand
is akin to walking on concrete versus strolling on grass,‖ Freeman soliloquizes. I would
like to see him hand-write all of the notes I type to people every day while trying to keep
my little part of Xconomy running. If he tried, I think he would conclude that the
difference is more akin to getting a massage versus getting arthritis.
Novelist Mark Helprin starts out his recent non-fiction screed Digital Barbarism: A
Writer‘s Manifesto with an entertaining little scenario contrasting the life of a fictional
California software executive in 2028 with the life of a fictional English politician in
1908. The software executive is, of course, continuously immersed in a stream of digital
data served up over the global network, even while he‘s on vacation in the (by then
tropical) Aleutians, while the politician, who‘s on holiday at Lake Como, Italy, is
completely out of touch with London, dependent on week-old newspapers and pen-and-
ink letters carried by freighter. For the software executive, ―The world flows at
increasingly faster and faster speeds. You must match them…You love the pace, the
giddy, continual acceleration. Though what is new might not be beautiful, it is
marvelously compelling.‖ The politician, by contrast, makes do with his books, his
memories, and his writing desk; he has ―learned to enjoy the attribute of patience itself,
for it slows time, embraces tranquility, and lets you savor a world in which you are
clearly aware that your passage is but a brief candle.‖
It‘s obvious long before he acknowledges it that Helprin would rather be the
politician than the software executive. To which I say: If Helprin has found a time
machine that will take him back to 1908, he is welcome to go. I hope he‘s had his shots.
Personally, having tasted what it‘s like to have instant access to nearly everyone I know
and to nearly boundless stored knowledge, I could never choose a life of isolation. Nor do
I think it would be a very good idea for politicians and businesspeople to return to an era
in which progress was limited by the speed of steamships and locomotives.
It‘s so obvious that it shouldn‘t have to be spelled out, but if we can communicate
faster, we can also air questions, reach agreements, finish transactions, resolve disputes,
and make plans faster. Think about it: who would you rather have in the White House
during an economic or political crisis, a president with a Blackberry on his belt or one
with a telegraph operator in the basement?
Freeman and Helprin can stroll on their grass by the lake and use all the postage
stamps they want. Just give me my coffee, my computer, and my cable Internet service.
65: Seven Projects to Stretch Your Digital
Wings: Part One
September 4, 2009
I love September. There‘s a back-to-school crispness in the air that always gets me
jazzed to learn something new, even though I‘ve been out of school for 15 years. Maybe
you feel it too. And with a long holiday weekend coming up, perhaps you‘ve got a few
hours free to experiment with a new tool or craft—something that will help you express a
bit of your own creativity. The question is, where to begin?
Well, if you‘re like me and you‘ve got a weakness for gadgets, software, and Web
tools, you may find something of interest in the following list of easy digital projects.
This is just a smattering of the options popping up every day for people who want to use
new media to explore the world around them and express and share their own ideas. Even
if you don‘t think of yourself as a creative type, I urge you to give these new tools a try.
Everyone has something unique, valuable, and personal to say about their life
experiences, and in many ways, the new digital technologies make it easier than ever to
say it.
In this week‘s column, I cover three projects in the areas of visual art and Web
publishing; I‘ll outline four more ideas involving different media next week. [Update
9/18/09: Actually, this turned into a three-part column. Be sure to check out part two and
part three.] Some of these items involve technologies I haven‘t written about before, and
others are things I‘ve introduced in past columns. Most of them require a bit of basic
equipment, such as an Internet-connected computer, a digital camera, or smartphone—
but the Web-based tools that I list are all free.
Pick one and have fun! I encourage you to post your results online and share a link
in the comment section here. And if you have your own favorite tools for digital self-
expression, let us know about them.
1. Make a Digital Painting with Brushes
There are plenty of powerful programs for creating computer art, like Adobe
Photoshop Elements and Corel Painter. Creative professionals often put these programs
to work using high-end gadgets like Wacom‘s Intuos pen tablets and Cintiq pen displays.
But using inexpensive software from the iTunes App Store, anybody with an iPhone or
iPod Touch can try their hand, literally, at painting digitally.
My favorite iPhone painting app is Brushes, a $4.99 program created by
independent developer Steve Sprang. It sprang to fame this summer when The New
Yorker published a Brushes painting by New York artist Jorge Colombo on its cover. The
program is extremely easy to use—you just point and draw with your finger—but its
features, like a color picker, a transparency adjuster, zooming, layers, and undo buttons,
make it surprisingly flexible.
I‘m amazed by some of the art Brushes users have created: they‘ve used the
software to evoke styles ranging from hard-edged, Mondrian-style modernism to a misty
softness that reminds me of Japanese scroll paintings. (You can see more than 7,000
Brushes paintings uploaded by more than 1,400 Flickr members here.) But the program is
also great for plain old doodling.
And there‘s an extremely cool feature that allows you to share not just your finished
Brushes paintings, but animations documenting your work, brushstroke by brushstroke.
You just log into the app‘s built-in Web server from your Mac‘s browser, copy the
special ―.brushes‖ file, then open it using the free Brushes program for the Macintosh.
Here‘s a video showing how I made my first Brushes painting—it‘s amateurish,
obviously, but I had fun with it.
2. Start Lifestreaming with Friendfeed or Posterous
Blogging is so 2006. All the cool kids, like Steve Rubel of Edelman Digital, have
moved on to lifestreaming. Definitions of lifestreaming vary, but I‘d say it comes down
to having a central online clearinghouse for everything you share and store online:
writings, photos, videos, audio, documents, bookmarks, tweets, Facebook status updates,
and the like. If you‘re a creator, setting up a lifestream can be a great way to gather all
your digital creations in one place, and to make sure that all of your online friends know
what you‘ve been producing.
At the moment, there are two approaches to lifestreaming that roughly mirror each
other; my guess is that they‘ll soon merge, but for now you need to use different tools to
experience both types. The first type of lifestreaming tool, which I‘ll call an aggregator,
basically pulls things in from elsewhere. It automatically watches for the content you post
to other online services, and gathers them into a central stream or feed. The second type,
which I‘ll call a broadcaster, ingests your content directly and then pushes it out to
everywhere else.
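For readers who think in code, the aggregator idea boils down to polling your other services‘ RSS feeds and merging the items into one reverse-chronological stream. Here‘s a toy sketch of that in Python; the feed XML is invented sample data standing in for real Flickr and Twitter feeds, not anything Friendfeed actually exposes.

```python
# Toy sketch of "aggregator"-style lifestreaming: parse several RSS feeds
# (hard-coded samples here) and merge their items into one stream, newest
# first. The feed contents are illustrative stand-ins, not real services.
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

FLICKR_FEED = """<rss><channel>
  <item><title>Sunset photo</title>
    <pubDate>Mon, 07 Sep 2009 18:00:00 +0000</pubDate></item>
</channel></rss>"""

TWITTER_FEED = """<rss><channel>
  <item><title>Trying out Brushes</title>
    <pubDate>Tue, 08 Sep 2009 09:30:00 +0000</pubDate></item>
</channel></rss>"""

def aggregate(*feeds):
    """Parse each RSS document and return (date, title) pairs, newest first."""
    stream = []
    for xml_text in feeds:
        for item in ET.fromstring(xml_text).iter("item"):
            when = parsedate_to_datetime(item.findtext("pubDate"))
            stream.append((when, item.findtext("title")))
    return sorted(stream, reverse=True)

lifestream = aggregate(FLICKR_FEED, TWITTER_FEED)
for when, title in lifestream:
    print(when.date(), title)
```

A broadcaster like Posterous simply inverts the arrows: one inbound post, many outbound copies.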
Friendfeed, built by a group of former Google engineers who now work at
Facebook, is the leading example of an aggregator. Once you‘ve set up a free Friendfeed
account and linked it to your other online accounts, it will suck in everything you publish
to your favorite social media services, including your Tumblr or LiveJournal blog posts,
your Flickr or Picasa photos, your Delicious or StumbleUpon bookmarks, your Facebook
or Google Talk or Twitter status updates, your YouTube videos, and your Digg or Google
Reader or Reddit news feeds. In other words, you never need to post anything directly to
Friendfeed. Your Friendfeed personal page is just a convenient place for you and your
friends to see all of your other online activities. (It can even track which movies you rent
from Netflix, so be careful!)
Posterous, the creation of former Apple programmer Sachin Agarwal, is the leading
example of a broadcast lifestreaming service. When you send material to Posterous via e-
mail, it gets added to your lifestream (e.g. http://waderoush.posterous.com) and
simultaneously auto-posted to the social media services of your choice, including
Blogger, Facebook, Flickr, LiveJournal, Movable Type, Tumblr, Twitter, Typepad,
Wordpress, and Xanga. Every Posterous lifestream also has an associated RSS feed,
which is useful, among other things, for podcasting: if your friends set up iTunes
subscriptions, they‘ll automatically get the MP3 files that you e-mail to Posterous.
Rubel, who tracks new-media trends for a large public relations firm, says he gave
up blogging for lifestreaming because he was attracted to the simplicity and informality
of Posterous, which he calls ―something in between Twitter and a blog.‖ He also cites
Posterous‘s ability to handle multimedia content, which is impressive. I think a lot of
other busy users of social media services will feel the same attraction—but Posterous is
also great just for sharing the occasional photo or essay. (Your latest Brushes digital
painting, for example.)
3. Document a Space with Photosynth
If you‘re used to viewing your digital images in old-fashioned file folders on your
computer, or on photo-sharing sites like Flickr or Photobucket, then you‘re in for a shock
with Photosynth. When Microsoft first rolled out the technology a year ago this week, I
was so bowled over that I wrote a whole breathless column about it. As I explained then,
Photosynth is like a cross between collage and virtual reality; it analyzes a large number
of close-up photos of an object or a place and assembles them into a common 3-D
environment, a kind of 3-D jigsaw puzzle that you can explore using the Web-based
Photosynth viewer. If you‘re familiar with the famous David Hockney photocollage
Pearblossom Highway #2, you already have some idea of the effect Photosynth
produces—except that in the Photosynth version of Hockney‘s work, you‘d be able to
move into the space, rather than simply glancing around it.
Creating your own ―synths‖ on Photosynth can be a fun way to stretch your
photographic skills—and I guarantee that it will give you a totally new way to think
about the things you photograph. Be warned: the ―synther,‖ the program that actually
uploads your photos to Photosynth, only runs on Windows PCs. (You can download that
here.) But assuming you‘ve got access to one of those, you can get started by picking a
Photosynth-friendly subject. That could be an interior space like your living room or an
art gallery, where you‘re basically going to stand in the middle and take lots of photos
looking in every direction; it could be an outdoor object such as a building, where you‘re
going to walk around it gradually, shooting pictures as you go; or it could be a small
object like a sculpture or a vase, which you‘re going to place on a table (or even a
turntable) and shoot from every angle.
There are a few keys to creating a photoset that can be assembled into a compelling
synth. They‘re detailed in Microsoft‘s excellent Photosynth Photography Guide, but I‘ll
run through them here anyway: Use a wide-angle lens. Make sure that each feature (say,
the door of a cathedral) appears in at least three photos. When panning across a scene
with your camera, make sure that each photo overlaps the previous one by at least half.
When moving around a 90-degree corner, take at least 9 photos—every 10 degrees or so.
Limit yourself to about 300 photos; if you try uploading more than that, in my
experience, the synther tends to crash. And avoid uniform or repetitive expanses like blue
sky, white walls, or the glass grids of skyscrapers—they don‘t have enough detail for
Photosynth to match adjacent photos. (As the Photosynth team puts it, Photosynth loves
Venice, and it hates the Seattle Public Library.)
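The overlap and corner rules above are easy to turn into back-of-the-envelope arithmetic when you‘re planning a shoot. This little Python sketch does that math; the 60-degree field of view is just an assumed example value for a wide-angle lens.

```python
# Shot planning for a synth, from the rules of thumb above: each photo
# should overlap the last by at least half, and circling an object calls
# for a photo roughly every 10 degrees.
import math

def pan_shots(horizontal_fov_degrees, overlap=0.5):
    """Photos needed for a full 360-degree pan from one spot.

    With overlap o, each new photo adds only fov * (1 - o) degrees of
    fresh coverage, so divide 360 by that step and round up."""
    step = horizontal_fov_degrees * (1 - overlap)
    return math.ceil(360 / step)

def orbit_shots(degrees_per_shot=10):
    """Photos needed to walk all the way around an object."""
    return math.ceil(360 / degrees_per_shot)

# An assumed wide-angle lens with a 60-degree field of view:
print(pan_shots(60))   # -> 12
print(orbit_shots())   # -> 36
```

Both counts sit comfortably under the 300-photo ceiling, which leaves plenty of room for detail shots of individual features.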
Photosynth isn‘t a fully supported Microsoft product, and it‘s not clear how it will
evolve as a technology. There‘s talk at Microsoft about integrating Photosynth into
Virtual Earth and Bing Maps, perhaps adding Everyscape-like interior spaces to
Microsoft‘s Web mapping tools. But in any case, the team behind it has made some nice
improvements and additions over the last year. One is a Silverlight-based viewer that lets
you explore synths on Mac computers, not just Windows machines. (The synther, alas, is
still Windows-only.) Another is iSynth, a very cool Photosynth app for the iPhone,
which, thanks to its multi-touch interface, is actually a better platform for exploring
synths than a regular computer.
* * *
Next time I‘ll tell you about four more cool digital projects that will take you
beyond words and images to areas like podcasting, animation, mapmaking, and 3-D
virtual worlds. Meanwhile, I can‘t resist plugging a recent book that‘s tailor-made for
creative souls who are interested in expressing themselves through various (digital and
non-digital) media. It‘s called A Creative Guide to Exploring Your Life: Self-Reflection
Using Photography, Art and Writing, by my friends Graham Gordon Ramsay and Holly
Barlow Sweet. (Full disclosure: I helped Graham and Holly with the editing on the book.)
It‘s a fantastic source of ideas, guidance, and inspiration for anyone who wants to
cultivate both their creative skills and their self-understanding.
66: Seven Projects to Stretch Your Digital
Wings: Part Two
September 11, 2009
Whether the fall is back-to-school season for you or not, there‘s always more to
learn. In last week‘s column I outlined three fun weekend projects involving new
technologies for digital self-expression. My suggestions covered art (digital ―finger
painting‖ with an iPhone app called Brushes), writing (―lifestreaming‖ with Posterous
and Friendfeed), and photography (building three-dimensional photographic spaces with
Photosynth). This week I‘ve got two more digital projects in mind for you, this time in
the areas of podcasting and computer animation. Next week, I‘ll finish up with maps and
virtual worlds.
I‘m writing this three-part column because I think it‘s an exciting time for anyone
who‘s interested in consumer-level digital media tools. Not only are we seeing a
profusion of inexpensive new gadgets for capturing media—witness Apple‘s
announcement Wednesday that the new iPod Nano will have a built-in digital video
camera—but there are also many new Web-based services where creators can edit,
enhance, share, and promote their media creations. The only way to keep up with all
these new technologies is just to jump in and try them. So let‘s get back to it:
4. Become an Amateur Podcaster with AudioBoo
When podcasting first took off four or five years ago, most podcasters tried to
emulate radio hosts, kitting out their podcasts with fancy musical intros and outros and
other audio goodies. Just to experiment with podcasting, you needed a pricey microphone
and recording rig, audio editing software, and a working knowledge of RSS, iTunes, and
other distribution methods. But thanks to a bit of good old technological progress, the
barriers are now much lower. In fact, producing a podcast these days can be just about as
easy as making a phone call. Which means that dictating a few off-the-cuff thoughts on
your mobile device and uploading them to the Web is becoming a realistic alternative to
blogging and other more familiar forms of Web-based communication.
This is precisely the point of AudioBoo, a UK-based service that I profiled in July.
If you live in the UK (or if you‘re willing to splurge on an international phone call), you
can call AudioBoo from any phone and record some thoughts, then publish the
recording straight to AudioBoo.fm, which is basically a giant community audio blog
featuring recordings or ―boos‖ from all AudioBoo users.
But if you have an iPhone, you can use the nifty AudioBoo app to do the same
thing, without the phone calls or the attendant charges. The app has a voice recording
function that lets you talk for up to five minutes. It then uses your wireless data
connection to upload the finished boo to AudioBoo.fm, along with a photograph and a
map of your location, if you wish. Fans can listen to your boos at the site, or they can
subscribe and get new boos delivered via RSS or iTunes. The AudioBoo site also
provides some handy code that you can use to embed your boos in your blog.
In fact, by doing a bit of social media marketing to promote your boos, you could
turn AudioBoo into your own personal audio publishing empire. Somewhat to my
surprise, I haven‘t come across anyone who‘s doing this yet; and when the amateur
podcasting phenomenon really hits the mainstream, it may be some other tool, rather than
AudioBoo, that people fall in love with. But you can definitely see from AudioBoo where
the technology is going.
You can listen to my first boo here. It‘s worth mentioning that there‘s another
super-easy way to publish audio on the Web. Remember the lifestreaming site Posterous,
which I talked about last week? You can publish a mini-podcast to your Posterous site
simply by making a recording on your mobile device (for example, using the iPhone‘s
built-in Voice Memo app), then e-mailing it to post@posterous.com. Within minutes, the
clip will show up in your lifestream inside a nice little audio player widget.
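If you‘d rather script that e-mail step than tap it out by hand, the message itself is simple to build. Here‘s a Python sketch; the sender address, SMTP server, and MP3 bytes are placeholders you‘d swap for your own, and the actual send (via smtplib) is left commented out since it depends on your mail account.

```python
# Sketch of the Posterous e-mail trick: build a message with an MP3
# attached, addressed to post@posterous.com. The addresses, server, and
# audio bytes below are placeholders, not working values.
from email.message import EmailMessage
# import smtplib

msg = EmailMessage()
msg["To"] = "post@posterous.com"
msg["From"] = "you@example.com"        # your registered Posterous address
msg["Subject"] = "Morning voice memo"  # becomes the post title
msg.set_content("A quick thought recorded on my phone.")

fake_mp3 = b"ID3..."                   # stand-in for a real recording
msg.add_attachment(fake_mp3, maintype="audio", subtype="mpeg",
                   filename="memo.mp3")

# with smtplib.SMTP("smtp.example.com") as s:
#     s.send_message(msg)
```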
5. Create a Short Animated Film with Xtranormal
From audio to the visual: In January, I wrote about a Canadian startup called
Xtranormal that‘s pioneering an intriguing idea that it calls ―text-to-movie.‖ If you write
some dialogue and insert some stage directions, Xtranormal will generate a whole
computer-animated cartoon enacting your story. You can check out my first attempt at an
Xtranormal movie here:
Starting your own cinematic career at Xtranormal is easy. You first have to pick
one of six ―worlds‖ or settings. They range from the sterile, black-and-white office
environment I used in my movie (it‘s great for sarcastic comedy sketches) to a retro-
futuristic cartoon world called Robotz. Next, you decide whether you want one or two
―actors‖ in your movie.
Xtranormal then opens a script-writing interface where you input some dialogue
and click on buttons to insert basic instructions for the characters, such as when they
should smile or grimace or wave their arms. You can also control the pacing of your
scene by inserting pauses, deciding when the virtual ―camera‖ should zoom in on a
character‘s face, and the like. You can run through the movie as you build it, listening to
the characters‘ computer-generated voices to see if you‘ve got the timing right.
When you‘ve got everything looking and sounding just the way you want it, you
can tell Xtranormal to render a final copy, which you can then publish to YouTube or
embed in your blog. It‘s a great way to make an animated movie without having to learn
anything about computer graphics, stop-motion filming, or the like. Of course, your
finished episode will only be as good as your scriptwriting. And this first generation of
text-to-movie tools does have its limitations—notably the synthesized voices, which lack
much human inflection. But the technology is already good enough that, again, it‘s easy
to imagine where it might be going in the future. For an example of an Xtranormal
episode that‘s pretty well written and makes amusing use of the existing tools, check out
―Kung Fu Flick.‖
When I first wrote about Xtranormal, the animation platform was entirely free, and
the company still lets you make two-character movies in the six basic worlds for free. But
since then, the company has introduced a new premium membership level that, for $5 a
month or $40 a year, will let you make longer movies and select from more characters
and worlds. Xtranormal says it‘s also working on a downloadable moviemaking program
for Windows and Mac computers, called State, that will really put you in the director‘s
chair. You‘ll be able to include up to three characters in a scene and make them walk
around the world, and you‘ll be able to add your own music and control the placement of
the camera. Calling the next Hitchcock!
67: Put Yourself On the Map, Build a
Virtual House: Seven Projects to Stretch
Your Digital Wings, Part Three
September 18, 2009
When I set out to write ―Seven Projects to Stretch Your Digital Wings‖ two weeks
ago, I really meant to put all seven projects into one column. But I‘m famous around
Xconomy for my inability to say anything briefly. If 800 words are good, then 1,600
words are even better—that‘s my motto.
The point being that I only got through three projects in that first column—on art,
writing, and photography—before I ran out of time and space. Last week, I finished two
more, on audio self-publishing and computer animation. In today‘s third and last
installment, I want to suggest two final projects that will give you a chance to express
yourself in digital media that may be a little less familiar: maps and 3-D virtual worlds.
6. Put Yourself on the Map with Platial
Mapmaking hasn‘t traditionally been seen as a craft open to amateurs, or even one
where self-expression is encouraged. A map, after all, is a public resource, and is
supposed to be objective and accurate, right? Well, maybe in theory. In practice, the
digital revolution is transforming the meaning of maps just as drastically as it‘s changing
the way we think about music and news and other forms of communication.
Platial is a website where average users can try a new form of storytelling that
combines maps, photos, and writing. Once you‘ve signed up for an account, you can
create your own themed maps for other Platial visitors to browse. Each map consists of a
set of locations that you designate on an underlying Google map; for each location, you
can add a title, a written description, photos, and Web links.
One way to use Platial would be as a kind of personal photo-travelogue, uploading
pictures from your trips across the country or around the world. But a lot of people seem
to employ Platial to document personal interests or obsessions. For example, a user
named ―Barnaclebarnes‖ has created a map of famous film locations, like the house in
suburban Tujunga, CA, where Steven Spielberg filmed E.T. And I‘m working on my own
Platial map showing locations around San Francisco used in one specific film,
Hitchcock‘s Vertigo.
You can designate a map on Platial as closed—meaning it‘s for your own personal
doodling—or open, meaning anyone can contribute to it. One cool open map is ―Where I
Was When I Heard Obama Won,‖ where you can join the more than 15,000 people who
have marked the spots where they learned of President Obama‘s historic election. For
people on the go, the folks at Platial have also built an iPhone app called Nearby that
figures out where you are and shows you nearby Platial locations created by other users.
The app also lets you create and document new locations directly from your phone.
To me, the intriguing thing about Platial is the way it melds the personal and the
public—allowing users to anchor their inner visions and insights by attaching them to
maps representing our shared landscape. And Platial is just one example of a worldwide
explosion of Web-mediated geographical expression and exploration. The phenomenon
goes by fancy names like ―neogeography‖ and ―locative media,‖ but it boils down to
connecting digital media with specific locations through various forms of geotagging and
online publishing. If you‘re interested in this kind of thing, there‘s a lot more to explore,
from Panoramio to Schmap and from geocaching to WikiMapia. One of my recent
favorites is Atlas Obscura, a compendium of bizarre and curious locations contributed by
readers.
7. Become a Virtual Architect in Second Life
A couple of years ago, I went on an extended journalistic assignment inside the
virtual world Second Life, doing research for a Technology Review feature story about
the growing overlap between digital mapping and 3-D virtual worlds. (That story,
―Second Earth,‖ came out in July 2007.) Second Life is one of the most successful online
worlds, and probably the most popular non-gaming world. Online role-playing games
like World of Warcraft may have more people online at any given time, but Second Life
is like the world‘s biggest city square; users go there to socialize or do business or build
things, not to kill dragons and battle for treasure and glory. (Though perhaps it‘s all the
same thing.)
Part of my research involved learning how to use Second Life‘s built-in object-
creation tools, which are much more extensive than those available in any other online
world. In fact, it‘s no exaggeration to say that Second Life was built by its citizens. The
San Francisco-based company that runs the world, Linden Lab, merely creates the land
underneath everything, and users colonize that territory, creating custom-built structures
and communities, even whole economies.
But my ―research‖ got a little out of hand. As a kid I had some basic drafting tools,
and I spent quite a bit of time fooling around with designs for futuristic buildings and
cities. As an adult, I‘d long been intrigued by CAD-CAM software for computerized
drawing, but it always seemed too expensive and complex to learn. But once I realized
how easy it is to create virtual objects inside Second Life, my long-dormant architecture
bug came back to bite me, and I wound up spending a few solid weeks building things—
mainly a pair of houses, one starter model and one rather elaborate mansion.
For anyone who shares my interest in drawing or architecture but thought 3-D
modeling was the exclusive province of professional designers, I strongly recommend a
trip into Second Life. Basic accounts are free, and there are plentiful ―sandbox‖ areas
where anyone can use the building tools. (If you want to build anything permanent,
though, you have to buy some virtual land to put it on. Second Life‘s ―land use fees‖—
which are really server storage fees—start at $5 per month for up to 512 square meters of
land, which is enough to build a simple house, and range up to $195 per month for a
whole 16-acre ―region.‖)
I won‘t try to describe building methods in detail—there‘s an excellent walk-
through of the basic concepts at the ―Ivory Tower Library of Primitives‖ inside Second
Life, and there are plenty of video tutorials on YouTube. But to boil it down, every 3-D
object inside Second Life is made from basic shapes called primitives or ―prims.‖
you create—or ―rez‖—a prim, you decide whether it should start off as a cube, a sphere,
a cylinder, a pyramid, or the like—there are 15 basic prims to choose from. Once you‘ve
rezzed a starting shape, you can move and rotate it within the world, stretch it along
different axes, remove slices from it, and apply various ―textures‖ or surface patterns, all
using your mouse pointer and some basic dialog boxes.
To build the floor of a house, for example, you‘d rez a cube, flatten it, and stretch it
out horizontally. To add the first wall, you‘d rez another cube, flatten that vertically, and
then move it into position against the floor. And so forth. From these primitive
beginnings, Second Life users have built entire castles and spaceships, casinos and
cathedrals, Zen temples and shopping malls.
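The floor-and-walls recipe can be sketched as plain geometry, too. To be clear, this Python model is not Second Life‘s actual building interface (in-world you do all of this with the mouse and dialog boxes); it just treats each prim as a box with a center and a size, and the room dimensions are arbitrary examples.

```python
# A conceptual model of the floor-and-walls recipe above: rez one
# flattened cube as a floor, then four more as walls. Not the Second
# Life API; dimensions in meters are illustrative.
from dataclasses import dataclass

@dataclass
class Prim:
    name: str
    center: tuple  # (x, y, z)
    size: tuple    # (width, depth, height)

def simple_room(width=10.0, depth=8.0, height=3.0, thickness=0.2):
    floor = Prim("floor", (0, 0, thickness / 2), (width, depth, thickness))
    z = thickness + height / 2          # walls sit on top of the floor
    walls = [
        Prim("north", (0,  depth / 2, z), (width, thickness, height)),
        Prim("south", (0, -depth / 2, z), (width, thickness, height)),
        Prim("east",  ( width / 2, 0, z), (thickness, depth, height)),
        Prim("west",  (-width / 2, 0, z), (thickness, depth, height)),
    ]
    return [floor] + walls

for prim in simple_room():
    print(prim.name, prim.center, prim.size)
```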
After a few weeks of fiddling with the tools and building a little shoebox of a house
for my own Second Life avatar, I got serious and decided to build my virtual dream
home, a two-story affair with lots of glass, stone, and balconies. (I thought of it as a sort
of cross between the Ahwahnee Hotel in Yosemite Valley and the Vandamm House in
North by Northwest.) It doesn‘t exist inside Second Life anymore—I got tired of paying
the land use fees—but in the pictures at left you can see what it looked like at various
phases of construction. Everything you see was made from the basic prims, mostly cubes,
cylinders, and cones.
Building the house, for me, was the fun part. I didn‘t spend much time in the
finished structure, except when I was having guests over. And once my Technology
Review article was finished, I pretty much left my Second Life behind and moved on to
exploring other technologies. But for many, Second Life is a canvas for serious art. To
get a sense of what a really ambitious user can achieve in the realm of Second Life
architecture, check out the in-world Frank Lloyd Wright Museum, which includes a full-
scale reproduction of the Robie House in Chicago. I‘ve been to the real Robie House, and
the virtual version is uncannily accurate.
So, that‘s my tour of seven digital-media projects that anyone with a laptop, a
smartphone, and an Internet connection can try out for himself—moving from some of
the simplest and most familiar art forms, like finger painting, to some of the newest and
most immersive, like 3-D design. I hope you‘ll try some of these tools yourself, and
report back (in the comment section) on what you created.
68: Ansel Adams Meets Apple: The
Camera Phone Craze in Photography
September 25, 2009
Seattle-based commercial photographer Chase Jarvis is known for his arresting,
color-saturated images of people in motion—skiing, swimming, somersaulting. He‘s also
known for (literally) trademarking the phrase ―the best camera is the one you have with
you.‖ His point is that you don‘t need an expensive SLR to take great pictures. You can do a
lot with the camera in your pocket or purse—which more likely than not is a camera
phone.
This week, Jarvis took his slogan to the next level, launching a trio of products—a
book, an iPhone application, and a photo-sharing community on the Web—intended to
encourage all photographers, pro and amateur alike, to get more creative with their
camera phones. This cross-media campaign is a brilliant concept—both as a digital-arts-
education project and as a piece of self-promotion for Jarvis and his studio—and it also
happens to fit in really well with the theme I‘ve been writing about in this space
throughout September in ―Seven Projects to Stretch your Digital Wings,‖ Parts 1, 2, and
3. So, if you‘ve got an iPhone, go spend $2.99 on Jarvis‘s app, called ―Best Camera,‖ and
consider today‘s column Project #8.
There are more than 1,300 photography-related apps in the iTunes App Store, but as
far as I know, Best Camera is the only one that comes with a dedicated community of
other iPhone users. The app allows you to take a picture with the iPhone‘s built-in
camera, apply a range of cool digital filters and effects, and then upload your finished
photo to a gallery that‘s constantly being updated, in real time, with new photos from
other Best Camera users. You can give the photos you like best a thumbs-up, and browse
photos either by popularity or recency.
In addition to introducing you to a bunch of other creative souls, Best Camera will
let you play with your own images and perhaps invent your own new styles. That‘s
thanks to a surprisingly flexible interface for applying various filters to your raw images
and changing the order in which the filters are ―stacked.‖ The filters themselves go well
beyond the typical gray-scaling, contrast-enhancing, or redeye-reducing algorithms you‘ll
see in other iPhone image editing apps: working with Übermind, a Seattle software
development firm that specializes in photography-related applications for desktops and
mobile phones, Jarvis dreamed up a dozen effects altogether, including four ―signature
filters‖ inspired by his own photographic styles.
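Under the hood, a filter stack is just function composition: each filter takes an image and returns a new one, and reordering the stack changes the result. The two filters in this Python sketch are invented stand-ins, not Best Camera‘s actual algorithms, and the ―image‖ is just a list of brightness values from 0 to 255.

```python
# Filter "stacking" as function composition. The filters are invented
# examples; the image is a toy list of brightness values (0-255).

def brighten(image):
    return [min(255, p + 40) for p in image]

def contrast(image):
    """Push values away from middle gray."""
    return [max(0, min(255, 128 + 2 * (p - 128))) for p in image]

def apply_stack(image, stack):
    for f in stack:
        image = f(image)
    return image

photo = [100, 128, 200]
print(apply_stack(photo, [brighten, contrast]))  # -> [152, 208, 255]
print(apply_stack(photo, [contrast, brighten]))  # -> [112, 168, 255]
```

The two orderings give different pictures from the same pixels, which is exactly why the app lets you shuffle the stack.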
It‘s hard to describe the signature effects in words, but one filter, called ―Jewel,‖
gives photos a warm, rich, almost antique look, while another called ―Candy‖ creates an
intense, high-contrast, caffeinated feeling reminiscent of Jarvis‘s advertising
photography.
As someone who loves to spend time looking at other people‘s photos and trying to
understand their styles—I could spend hours using the ―Explore‖ feature at Flickr—I
think the community feature of Best Camera is especially fun. It‘s a nice feeling to
upload a picture and then see it appear in the public gallery, which is accessible right
from the app. You can browse the gallery from a desktop browser, too, at
www.thebestcamera.com; the bonus, if you go there, is that the ―recipe‖ used for each
photo—that is, the combination and order of digital effects the photographer chose—
shows up right alongside the image. (You can see all of my Best Camera photos here.)
Jarvis certainly isn‘t the only professional photographer singing the praises of
camera phones. Shawn Rocco, a staff photojournalist at the News & Observer in Raleigh,
NC, shoots with a long-since-obsolete Motorola E815 mobile phone. In fact, the
American art world seems to be developing a bit of a fetish for mobile-phone
photography. ―The Relentless Eye,‖ a two-month juried exhibit of hundreds of mobile-
phone photos launching today at the Helen Day Art Center in Stowe, VT, is just the latest
tribute to the craft.
Amidst all this fuss, it needs to be said that mobile phone cameras have their
limitations. They usually have tiny sensors that gather less light, and thus produce
noisier images, than dedicated cameras. And they have small, fixed lenses that don‘t let in very much
light, so it‘s hard to capture moving objects or to get clear images in low-light conditions.
There are times when quality and performance really do count; if the best camera is the
one you have with you, then I‘m really glad the Apollo astronauts took Hasselblads to the
moon, and not iPhones.
But if you spend some time looking through the iPhone photos that Jarvis and other
users of his app have snapped, you quickly realize that art is often about turning
limitations into inspirations. In my personal experience, the iPhone camera produces
pictures that are relatively grainy and splotchy; bright light sources have a tendency to
bleed across images, and you get glows and haloes where none existed in real life. But
many of Jarvis‘s own shots use these odd effects to beautiful advantage. I can‘t show any
of them here due to copyright restrictions, but there‘s a cool gallery at his site, and Jarvis
has collected a whole bunch of his iPhone shots into a 256-page, $20 softcover book
entitled, naturally, The Best Camera Is The One That‘s With You. (You can order it from
Amazon or Barnes & Noble; ads for the volume are built into the iPhone app and the
community site, which is part of what makes the whole campaign so clever. But be
warned as you explore Jarvis‘s photos, writings, and videos: he isn‘t exactly short on
confidence or ego.)
The newest iPhone model, the 3GS, has video recording capabilities as well as a
still camera, so a whole culture of iPhone videographers is now sprouting up. But I‘m
stuck with a 3G for now (AT&T won‘t let me upgrade until December), so I‘ll have to
wait for a while to start hacking around in that community. By the way, I‘m aware that
this column may sometimes sound like it‘s ―all iPhone, all the time‖—but the truth is that
the iPhone is simply the best consumer-level platform these days for creative digital
experimentation, so I can‘t help myself. If the rumors about an Apple tablet device are
true, I‘m going to be spending a lot of time writing about that in 2010. Next week,
though, I promise to write about something non-Apple-related. Probably.
69: How to Launch a Professional-
Looking Blog on a Shoestring
October 2, 2009
Maybe you‘d like to have a sleek, attractive blog or website for yourself or your
business. Maybe you‘ve looked around at some of the free blogging or lifestreaming
platforms like Blogger, Posterous, Tumblr, TypePad, and WordPress.com and you‘ve
been underwhelmed by the cookie-cutter sameness of the sites you see there. If either of
those things is true, today‘s column is for you.
The free platforms used to be the only way for a beginning blogger to take
advantage of Web publishing technology. But it‘s now possible to set up a good-looking,
full-featured, highly personalized blog, simply by buying a customizable site template
and setting it up on an independent hosting service. It‘s much easier and cheaper than it
sounds. In fact, I did it last weekend, and I‘m going to walk you through it.
First, though, a word about the pluses and minuses of the free platforms. I‘ve used
quite a few of them. What‘s great about them, of course, is that they‘re free, and that they
let you set up an account and start blogging instantly. Blogger, Posterous, Tumblr, and
TypePad all make it extremely easy to create posts—in most cases all you have to do is
write an e-mail. And they let you post several kinds of material, including text, photos,
videos, and audio.
What‘s most dismaying to me about the free blogging platforms, though, is that all
of their blogs tend to look alike, with a style that‘s curiously Web 1.0. Blogger, TypePad,
and WordPress.com are the worst offenders: you can pick from a range of templates or
―themes,‖ but most of them look like they‘re straight out of 2004. Innovation is much
more alive at Posterous and especially Tumblr, which allow more customization, but
those platforms lack many of the extra features—such as integration with photo-sharing
or messaging tools—that bloggers need to keep up with today‘s social media explosion.
If you want a full-featured blog with a spiffy, up-to-date design, the truth is that you
need a professionally designed theme running on top of a powerful content management
system like WordPress. The good news is that you can get these things quickly and
easily. I saw a bumper sticker on I-93 yesterday that said ―Websites designed for $500.‖
Buying a WordPress theme and setting it up on a hosting service yourself will cost you
far less than that.
A quick but important distinction: WordPress is a free, customizable, open-source
Web publishing software system, created by San Francisco-based Automattic, that
anyone can download from WordPress.org and run on their own Web server (that‘s what
Xconomy does); WordPress.com is Automattic‘s hosting service, where you can start a
bare-bones WordPress blog and the company will host it on their servers for free.
Xconomy, FYI, is built on a WordPress theme that we designed from scratch.
Last weekend I relaunched my personal blog, Travels with Rhody, using a ―store-
bought‖ WordPress theme and an independent hosting service. The whole process took
less than 12 hours and cost me $70 (plus moderate hosting fees down the road). Here are
the simple steps I followed.
1. I went shopping at WooThemes. Stumbling across this super-cool South
African Web design company a few weeks ago was what started me thinking about
replacing my old Tumblr blog. The specialty of the house at WooThemes is premium
WordPress themes. They‘ve got dozens to choose from, serving a range of Web
publishing needs: straight personal blogs, photo or art portfolios, small-business sites,
even full-on news magazine sites. When you buy a WooThemes theme, you get not only
the template that dictates where posts will show up on your home page and how to
navigate between them, but also a variety of custom plugins (software scripts compatible
with WordPress) that you can use to add extra functionality.
I fell in love with one of WooThemes‘ newest creations, a personal blogging theme
called Antisocial. Contrary to its name, it‘s designed for people like me who basically
live online and make extensive use of social-media tools. (Although maybe that is being
antisocial, on some level.) Among its features is a column of colorful buttons down the
left side of the page that lead blog visitors to your Twitter feed, your Facebook page, your
Flickr photostream, and the like. There‘s also a built-in VCF business card so that visitors
can download your contact information directly into their address books.
Cost: $70. (WooThemes has a two-for-one sale going on, so Antisocial‘s real cost
was only $35.)
2. I set up a Web hosting account at Fused Network. This Toronto, Ontario-
based Web hosting company came highly recommended by the folks at WooThemes.
Like most hosting providers, they provide shared space on their servers, free access to
basic Web publishing tools like Linux, Apache, MySQL, PHP, and WordPress, and up to
10, 20, or 30 gigabytes of transfers (i.e., Web traffic) per month, depending on the
package you choose. Unlike most hosting providers, Fused is inexpensive. You can get
started there for $9.95 per month, compared to $20 per month at MediaTemple. Other
services provide higher monthly transfer limits, but unless you think your blog is going to
be inundated with traffic, 10 gigabytes a month is plenty.
One thing I like about Fused is that—in contrast to some hosting providers I won‘t
name here—it seems to be a fairly small and responsive company run by real people. I
signed up for an account on a Sunday morning. I got unreasonably impatient after a
couple of hours went by and I still hadn‘t received my account details. I sent Fused a
support request asking what was going on. I got back a charming note saying that
everyone had been away at church. (My account details followed soon after.)
Cost: $9.95 per month. (If you buy a theme at WooThemes, you‘ll get a coupon
code that makes your first month of hosting at Fused free. So my real cost here was zero,
at least until next month.)
3. I installed WordPress at Fused and uploaded the WooThemes Antisocial
theme. The tools at Fused include a program called Installatron that lets you install free,
open-source software like WordPress on your shared server with just a few clicks. The
next step is to replace the basic theme that comes with WordPress with the custom one
you purchased. For this, you need an FTP program—I downloaded a free one called
FileZilla. This allows you to plop all of the PHP scripts, stylesheets, functions, and
images that came with your custom theme into the ―public-html‖ or ―www‖ directory on
your new Web server. WooThemes has a helpful video that walks you through the whole
theme installation process.
Cost: $0.
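For the command-line-inclined, the FTP transfer in step 3 can be sketched in Python using nothing but the standard library‘s ftplib. The host name, credentials, and directory layout below are placeholders, not anything specific to Fused; on a typical WordPress install the theme folder ultimately belongs under wp-content/themes inside the Web root:

```python
import os
import posixpath
from ftplib import FTP

def theme_upload_plan(local_theme_dir, remote_theme_dir):
    """Map every file in the unzipped theme folder to its remote path.

    remote_theme_dir would be something like
    'public_html/wp-content/themes/antisocial' on a typical host.
    """
    local_theme_dir = os.path.abspath(local_theme_dir)
    plan = []
    for dirpath, _dirnames, filenames in os.walk(local_theme_dir):
        for name in sorted(filenames):
            local_path = os.path.join(dirpath, name)
            rel = os.path.relpath(local_path, local_theme_dir)
            remote_path = posixpath.join(remote_theme_dir, *rel.split(os.sep))
            plan.append((local_path, remote_path))
    return plan

def upload_theme(host, user, password, plan):
    """Push the planned files over FTP, creating directories as we go."""
    with FTP(host) as ftp:
        ftp.login(user, password)
        made = set()
        for local_path, remote_path in plan:
            # Create each directory level once; most servers report an
            # error if it already exists, which we can safely ignore.
            parts = posixpath.dirname(remote_path).split("/")
            for i in range(1, len(parts) + 1):
                d = "/".join(parts[:i])
                if d and d not in made:
                    try:
                        ftp.mkd(d)
                    except Exception:
                        pass
                    made.add(d)
            with open(local_path, "rb") as f:
                ftp.storbinary("STOR " + remote_path, f)
```

A graphical client like FileZilla does essentially the same thing under the hood; the script just makes the transfer explicit.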
4. I fiddled with the theme options to give my new blog a personal spin. Most
WooThemes themes come with a variety of options for customizing the layout, color
scheme, and behavior of your blog. WordPress makes it easy to select your favorites
using the ―Theme Options‖ panel. I chose a nice burnt-orange color scheme for Travels
with Rhody. I set up the navigation scheme so that visitors can browse my posts by
category. I added a few free WordPress plugins, including one that shows the latest
photos I‘ve uploaded to Flickr, and another that shows my most recent tweets at Twitter.
(Adding plugins to a WordPress blog is easy: you just download them from the free
WordPress plugin directory, FTP them to the plugin directory on your server, activate
them in the WordPress administrative dashboard, and use a drag and drop interface to
arrange them inside the ―widgetized‖ areas of your WooThemes theme.) I also took
advantage of several of the custom widgets that came with the Antisocial theme,
including the social-media widget that handles the aforementioned button column, as well
as the tagging and calendar widgets.
Cost: $0.
5. I created a logo. Every professionally designed blog needs a slick logo.
Fortunately you don‘t have to hire a Web designer to make one—you can do it yourself
using any number of free graphics programs, as long as you‘re willing to learn a few
tricks. I used GIMP, the GNU Image Manipulation Program, which you can download
here. I liked the look of the generic logo that came with the Antisocial theme, so I fiddled
around with GIMP‘s text, fuzzy selection, gradient, drop-shadow, and rotation tools until
I had something similar that I liked, and then uploaded it to WordPress using the Custom
Logo area of the Theme Options panel. The key thing when making a logo is to do it on a
transparent background and save it in a file format that supports transparency, such as
PNG or GIF. I found some useful tutorials on creating logos in GIMP here and here.
Cost: $0.
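To see concretely what ―a file format that supports transparency‖ means, here is a stdlib-only Python sketch that writes a minimal PNG whose pixels carry an alpha channel (PNG color type 6, truecolor with alpha); that per-pixel alpha is what lets a logo float cleanly over any page background. It is an illustration of the format, not a substitute for GIMP:

```python
import struct
import zlib

def png_chunk(chunk_type, data):
    """A PNG chunk: 4-byte length, type, data, then a CRC over type+data."""
    return (struct.pack(">I", len(data)) + chunk_type + data
            + struct.pack(">I", zlib.crc32(chunk_type + data) & 0xFFFFFFFF))

def write_rgba_png(pixels):
    """Encode rows of (r, g, b, a) tuples as a PNG with an alpha channel."""
    height = len(pixels)
    width = len(pixels[0])
    # IHDR: width, height, bit depth 8, color type 6 (truecolor + alpha),
    # then compression, filter, and interlace methods, all 0.
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 6, 0, 0, 0)
    raw = b""
    for row in pixels:
        raw += b"\x00"  # filter type 0 (None) prefixes each scanline
        for r, g, b, a in row:
            raw += bytes((r, g, b, a))
    return (b"\x89PNG\r\n\x1a\n"
            + png_chunk(b"IHDR", ihdr)
            + png_chunk(b"IDAT", zlib.compress(raw))
            + png_chunk(b"IEND", b""))

# A 2x1 image: one opaque burnt-orange pixel, one fully transparent pixel.
data = write_rgba_png([[(204, 85, 0, 255), (0, 0, 0, 0)]])
```

A JPEG, by contrast, has no alpha channel at all, which is why a logo saved as JPEG always arrives with an opaque rectangle around it.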
6. I started blogging. This, obviously, is the hard part. Once you‘ve got a nifty
personal blog, it helps if you have something to say. I plan to use Travels with Rhody just
as I always have—as a place to collect and share article links, photographs, random
discoveries, and thoughts about journalism, technology, and my other passions. For
example, I just blogged about my participation in a recent Web Innovators Group panel
on how early-stage startups can handle their own public relations. The panel generated
quite a bit of heated discussion among the actual public relations professionals who were
in the audience, and I wanted to respond, but it‘s the sort of inside-baseball stuff that isn‘t
really appropriate for Xconomy. Whatever you decide to write about, I guarantee that the
stylish, sophisticated themes available from WooThemes will make you feel like a pro.
Cost: Priceless.
70: Facing Up to Facebook
October 9, 2009
My friend Brad King, a journalism professor at Ball State University, makes fun of
me for being such a Web and gadget geek while at the same time shunning social
networking tools like Facebook. He‘s got a point. I‘ve written a lot about Facebook,
MySpace, and their predecessors, but I‘ve never wholeheartedly joined in, the way I have
with most of the other digital media technologies that are the loose theme of this column.
I guess I never quite saw the point. Also, though it‘s probably a sign that I‘m growing
prematurely crotchety, I keep telling myself that social networking is a fad, like some
fashionable night club that will empty out as soon as something new opens up down the
street.
Well, Facebook may still be a fad, but with 300 million users and growing, it‘s a
remarkably enduring one. It‘s probably time for me to get used to it. On top of that, I‘ve
had some experiences over the last couple of weeks that have started to change my
attitude about the site.
It started with my iPhone. Two weeks ago, as you might remember, I wrote a
column about ―The Best Camera.‖ It‘s an iPhone app created by Seattle photographer
Chase Jarvis as part of a cross-media campaign promoting his message that ―the best
camera is the one that‘s with you.‖ The app lets you apply some intriguing digital effects
to the photos you snap with the iPhone‘s built-in camera. It also lets you upload your
processed images directly to Facebook, where every new shot will show up on your Wall
and in your friends‘ news feeds.
I‘ve sent a few of my Best Camera shots to my Facebook photo albums, and a truly
surprising thing has happened. People have been commenting on the photos. Not a huge
crowd of people, but enough to make me realize that there are Facebook users who
actually pay attention to the new stuff they see every day, and that some of them care
enough to leave feedback.
I don‘t mean to sound naive—I know that posting and reading updates and
commenting on other people‘s updates are the main order of business at Facebook. The
wake-up call for me was the realization that Facebook has now become what Flickr was
originally supposed to be.
I‘ve been a Flickr user since ancient times—back before it was part of Yahoo, when
it was a funky little startup based in Vancouver and was mainly a place where people
could comment on each other‘s photos by decorating them with little thought-balloon
captions. I‘ve got thousands of photos there, and it‘s going to remain my default online
photo storage location. But nobody ever comments on my photos at Flickr anymore. At
Facebook, by contrast, I can upload a camera-phone shot and get five comments within
an hour.
What‘s up with that? I thought at first that the sheer volume of photos at Flickr
might be one explanation. There are so many new ones every day that my shots might
just be getting lost in the crowd. But from what I‘ve read, the world‘s largest photo-
sharing site these days is not Flickr or Photobucket or Snapfish, but Facebook itself. So
there‘s something else going on. And it‘s probably not so hard to understand.
Your photos on Facebook attract attention and inspire your friends to comment
because they come with a built-in context: you. They show up in the same feed with your
status updates, your tweets, your links, your comments on other people‘s stuff, your
Mafia Wars hits. In other words, your photos are only part of what you‘re sharing about your
life. The images gain value, rather than losing it, by being part of the big social-media
mix. Photos on Flickr, by contrast, inhabit a social vacuum. There‘s very little context on
a Flickr page; a picture posted there might as well have been taken by anyone.
So the genius of Facebook, I‘m belatedly realizing, is in the racket—the same
confusion of seemingly disconnected chatter that used to be exactly what bothered me
about the site. If you‘re browsing your Facebook news feed and you see a photo your
friend posted right alongside another friend‘s music recommendation and yet another
friend‘s account of last night‘s office party, you‘re probably in ―networking mode‖
already. That means you‘re much more likely to stop and comment on the photo than if
you‘re dutifully clicking through someone‘s vacation photos on Flickr. Also, to give
credit where it‘s due, Facebook makes it very easy to leave comments, and to comment
on comments, and to let everyone know you commented, and around and around.
Facebook has a few other new things in its favor, too. After the most recent
redesign, which got rid of much of the site‘s old clutter, it seems much more functional and
attractive. I like the way the Wall puts updates front and center, Twitter-style,
highlighting what was always Facebook‘s most interesting feature. The pandemic of
annoying viral vampire-and-zombie apps seems to have ebbed. On top of all that, the
service‘s population of 300 million now seems to include everyone I‘ve ever known since
elementary school. So it‘s truly the best place to keep track of all the friends and
relatives I rarely get to see in person. (I have an uncle in Connecticut whom I haven‘t
seen since I returned to Boston from California two years ago, but we‘ve communicated
several times on Facebook.)
So I‘m getting over my Facebook aversion. But I still feel a bit lost there. I have a
feeling that I could be using my time there much more effectively—but I‘m not sure how.
What I‘d really love is to see a few examples of people who are using their Facebook
presence in creative, constructive ways.
So, I want to close with an invitation: send me links to your favorite friends on
Facebook, the ones who always seem to be doing cool stuff or sending you cool links,
and I‘ll feature them (assuming their profiles are public) in a future column on the stars of
Facebook. For personal and professional reasons, I‘d be most interested in people in the
Web, software, or digital media worlds, and those who are based in Xconomy‘s home
regions—New England, Southern California, and the Pacific Northwest. But everyone is
fair game. Send your suggestions to wroush@xconomy.com.
71: The Kauffman Foundation: Bringing
Entrepreneurship Up to Date in Kansas
City
October 16, 2009
Before I came to Xconomy, I didn‘t know much about the Kansas City-based
Kauffman Foundation beyond what I heard on the radio. If you‘re an NPR listener like
me, you‘ve probably heard their underwriting plug that says it‘s the ―Foundation of
Entrepreneurship.‖
But then I started covering stories like the foundation‘s annual New Economy
Index, in which Massachusetts and Washington nabbed the top two spots last year, and its
sponsorship of projects like Ed Roberts‘ study last February showing that companies
founded by MIT graduates have an annual economic output of more than $2 trillion. I
began to see that the Kauffman Foundation, to a far greater extent than any of the
116,000 other private foundations in the U.S., is focused on the relationship between
company formation and general economic growth—and not just in a theoretical or
academic way, but on a very practical level.
It‘s a focus that happens to overlap quite a bit with our own obsessions here at
Xconomy. In fact, we‘re now partnering directly with the foundation, which has become
one of the underwriters of our new Startups Channel, a gathering place for startup-related
stories from all of the cities in Xconomy‘s network. And last week I had an opportunity
to visit the foundation‘s campus near the University of Missouri, Kansas City, to attend a
reception and dinner honoring the inaugural class of Kauffman Entrepreneur Postdoctoral
Fellows. The fellows are a group of 13 scientific researchers singled out for their
ambitions to commercialize their laboratory discoveries. (Full disclosure: the foundation
picked up the costs of my trip.)
The foundation was established in 1966 by Ewing Kauffman, an entrepreneurial
pharmaceutical salesman who founded Marion Laboratories in 1950 and built it into a
multi-billion dollar company. It‘s now one of the 30 largest foundations in the U.S., with
roughly $2 billion in assets.
Under the leadership of Carl Schramm, its president and CEO since 2002, the
Kauffman Foundation has been aggressively searching for new ways to help
entrepreneurs build high-growth companies. In remarks at last week‘s dinner, Schramm
noted that roughly one-third of the nation‘s gross domestic product comes from
companies that are less than 30 years old. Research by the foundation, he said, has shown
that just 300 to 1,000 new startups each year account for most of that one-third. So the
question the foundation has set out to answer, through a new initiative called the
Kauffman Labs for Enterprise Creation, is ―how can we create another 300 to 1,000
high-growth firms per year,‖ Schramm said.
The postdoctoral fellowship program is part of the Kauffman Labs project. Sandra
Miller, a senior fellow at the foundation who was brought in from Stanford to run the
fellowship program, says one premise is that the existing structure of American graduate
and postgraduate training means that technical founders for new science- and technology-
driven companies are always in short supply. ―You have these people who are so
incredibly trained, and are at the forefront of pioneering research, and yet their hands are
almost tied behind their backs, just because of the responsibilities they have in the lab,‖
Miller says. The fellowship program seeks to remedy that problem by giving promising
postdoctoral researchers who are already leaning toward entrepreneurship the freedom to
develop their business ideas.
The program pays the fellows‘ usual salary and benefits for a year; in return, the
principal investigator in each postdoc‘s lab is required to release them for at least 20
hours per week to work on launching new ventures around their scientific or engineering
findings. The fellows are also paired with local businesses for hands-on internships, and
they gather four times per year for intensive week-long workshops at the Kauffman
Foundation. (The first such workshop was held last week.)
Two members of the first class of Kauffman Entrepreneur Postdoctoral Fellows are
based in Cambridge, MA, Xconomy‘s first hometown, and I got to meet them at the
reception. Both, not too surprisingly, are from the family of MIT and Harvard
laboratories sometimes collectively referred to as the ―Langer Lab,‖ after Robert Langer,
the prolific MIT biomedical engineer (and Xconomist) who has trained so many of the
area‘s leading researchers and entrepreneurs.
Carolina Salvador Morales, an immunoengineering expert, is a postdoc in Langer‘s
lab at MIT and is developing ways to stimulate the complement system, part of the
human immune system, using physically and chemically manipulated nanoparticles.
Praveen Kumar Vemula, a postdoc in the laboratory of Langer protege Jeffrey Karp in
the Harvard-MIT Division of Health Science and Technology at Harvard Medical School,
is investigating the use of drug-infused polymers called hydrogel to treat inflammatory
conditions and brain tumors.
Both researchers, before the year-long Kauffman fellowships are over, could find
themselves as chief scientific officers at new pharmaceutical startups—but they‘ll need
help doing it. ―The Langer Lab has a pretty good track record at commercialization and
entrepreneurship, but the thing is, the postdocs are in there cranking things out, and they
still need help with how to take the next steps‖ toward creating companies, says Miller.
―Bob Langer and Jeffrey Karp have been really supportive, giving Carolina and Praveen
this time away from the bench. Both Praveen and Carolina have really proven their
scientific chops and have made some fascinating discoveries, and have such a strong
passion to see their research become real and ultimately to help patients.‖
The whole idea of the fellowships is to help the researchers past the practical
hurdles they‘ll face as they pursue their passions—the better to ensure that fast-growing,
job-generating companies come out the other end. But while the majority of the fellows
in the inaugural class already have specific technologies they want to pursue, Miller says
she doesn‘t expect that every single one of those ideas will lead to a company. ―A few of
them will go through the process and determine that the idea they were working on,
thinking that they were going to start a company, might not make the most sense,‖ she
says. ―But the fact that we‘ve helped them to get there is huge—we call it ‗failing fast.‘
They‘ll then have a better understanding of how to evaluate their next idea.‖
On my way to Kansas City, I also happened to meet Eddie Martucci, a senior
analyst at Boston-based PureTech Ventures who is one of the first two Kauffman
Entrepreneur Fellows. Unlike the postdoctoral program, this part of the Kauffman Labs
initiative focuses on budding entrepreneurs who are already part of a commercial
technology incubator or accelerator such as PureTech or the California-based medical
device incubators ExploraMed and The Foundry, which are also hosting fellows.
Martucci has a PhD from the Departments of Pharmacology and Molecular Biophysics &
Biochemistry at Yale, where he did proof-of-concept work on new chemical scaffolds for
antiparasitic drugs. Whether that‘s still his focus at PureTech is hush-hush—but Miller
says ―I‘m sure he‘ll be coming out of this in the next year or two with a viable company.‖
Learning from existing centers of entrepreneurship expertise like PureTech is part
of Kauffman Labs‘ mission, Miller says. ―There are a lot of incubators and
accelerators…and they all come at this activity differently, but at the end of the day they
have all been successful in forming companies, so we thought it was important to partner
with a few of these organizations and document that process,‖ she says. The foundation is
selecting Entrepreneur Fellows both for their business savvy and for their skill at
observation and self-reflection, Miller says. ―I have seen a couple of Eddie‘s reports from
his experiences at PureTech so far, and from what he is capturing and the observations
he‘s making as he goes through this process with an amazingly sophisticated group
around him, we are learning a ton.‖
The foundation is already debriefing all of its new fellows regularly, and will
eventually start to share its findings at places like Entrepreneurship.org and
Growthology.org, two startup-oriented blogs published by the foundation. One point of
the Kauffman Labs initiative, after all, is to establish what Schramm calls ―the science of
startups‖—an understanding of the factors that really account for the success (and failure)
of high-growth firms. ―Our job is to describe and develop a science so successful that it is
copied everywhere,‖ Schramm said at last week‘s dinner. ―It‘s not like we know what
we‘re doing—but we have enormous hopes, and we have resources, drive, partners, and
stick-to-it-iveness.‖ For the sake of the entrepreneurs Xconomy covers, and the economy
in general, I hope the Kauffman folks succeed—and I‘m looking forward to writing more
about their work as it progresses.
72: Sony, Google Point the Way Toward a
More Open Future for E-Books
October 30, 2009
In a presentation at the Boston Book Festival last weekend, Jon Orwant, a Google
engineer involved in the company‘s Book Search project, made a memorable and, I
thought, quite perceptive remark about the e-book business.
―Think about the books you have at home and how you organize them,‖ Orwant
said. ―Some of you may not organize them at all. Some of you may organize them based
on the person who reads them—Mom‘s books, Dad‘s books, the kids‘ books. Some may
organize by subject or genre. I‘ll tell you one way you don‘t organize them: you don‘t
say, ‗Here are the books I bought from Barnes & Noble, here are the books I bought from
Amazon, and here are the books that were given to me as gifts.‘ We need to be very
careful to make sure that we don‘t create an environment in which digital books end up
that way.‖
What Orwant was talking about, of course, is the siloing going on in the nascent e-
book industry—the fact that if you buy an e-book for your Amazon Kindle, you can‘t
read it on a competing e-book device such as Barnes & Noble‘s new Nook, or vice-versa.
That‘s because book publishers, who are understandably spooked by the music industry‘s
implosion, are worried about losing revenue if people can copy, transfer, and share their
digital content too easily. It‘s also because many of the companies getting into the e-book
market aren‘t happy just selling you a gadget or a couple of megabytes of digital
content—they want you to buy into a whole ecosystem (i.e., the Kindle family of devices
and the 360,000 books formatted for them, or the Nook and its claimed one million
titles).
And so far that plan is working, at least on early adopters like me. I bought a Kindle
2 in May, and since then I‘ve purchased about $120 worth of books for the device, plus
subscriptions to The Atlantic and The New Yorker, and multiple Sunday editions of the
New York Times. All of this content is protected by digital rights management (DRM)
technology that would prevent me from opening it on, say, a Nook or a Sony Reader
device—and that quite likely will prevent me from reading my books 10 or 20 years
down the road, when my Kindle will be dead or obsolete and reading technologies and
content formats will undoubtedly be completely different. But those restrictions haven‘t
kept me from scarfing up more e-books: since I became a Kindle user I‘ve bought about
20 Kindle editions and exactly four physical books (two that weren‘t available as Kindle
editions, and two that were gifts for other people).
But while I‘m not particularly concerned about the fact that my Amazon e-books
are tied to my Amazon hardware (hey, I‘ve also bought hundreds of songs and videos
from Apple‘s iTunes Store that only play on my Apple MacBook and my Apple iPhone),
a lot of people are more skeptical toward the Amazon model. As e-books gradually catch
up to and surpass physical books as the main way many people access book-length
content—which they will, mark my words—continued reliance on proprietary formats
and DRM could wind up fragmenting our common literary inheritance in exactly the way
that Orwant warned about.
But I have a feeling the story isn‘t over, and that market pressures may eventually
push all of the big players in the still-young e-book business toward a more open future.
The day before the Boston Book Festival, I had a long conversation with Steve Haber,
president of the Digital Reading Division at Sony, and I got an earful about his
company‘s commitment over the last couple of years to the idea of open access to digital
content. As Sony fires back at the Kindle and the Nook with its own souped-up e-reading
gadgets, it‘s setting an example that other hardware makers and content providers would
do well to study.
Haber, whose division is based in San Diego, formerly ran sales and marketing for
Sony‘s entire imaging and audio business. (He happens to be the brother of Stu Haber,
president of IST Energy, a waste-gasification startup I profiled in January.) He was in
town for the same Boston Book Festival session on ―The Future of Reading‖ where
Orwant was speaking. (The other speakers on the impressive lineup included Mary Lou
Jepsen of Pixel Qi, Neil Jones of Interead, and Brewster Kahle of the Internet Archive.
As I reported on Saturday, Kahle used the session as the occasion to announce that the
Archive will make 1.6 million free public-domain books available to children around the
world who have XO Laptops from the Cambridge, MA-based One Laptop Per Child
Foundation.)
Haber had brought along Sony‘s entire lineup of e-reading devices: the compact,
$200 Pocket Edition, which has a 5-inch e-paper display (it‘s the same E Ink technology
used for the Kindle, the Nook, and Sony‘s previous PRS-500 and PRS-505 reading
devices); the $300 Touch Edition, with a 7-inch screen and a nifty touch screen overlay
that allows you to turn pages with an iPhone-like flick of a finger; and the forthcoming
Daily Edition, which has a tall 10-inch screen and a built-in 3G wireless modem that
allows instant book and periodical downloads. Expected to hit stores in December, the
Daily Edition is the first Sony reader to get wireless connectivity, a hugely important
feature that Amazon pioneered with the Kindle and Barnes & Noble copied with the
Nook.
After letting me play with the gadgets, Haber explained that all three are designed
to display content in a range of formats, including the open, XML-based EPUB and
ACS4 formats. That means device owners will be able to read any title published in those
formats, even those not sold by Sony. ―Customers want access to content, number one, so
it‘s important, from our perspective, that they not be tied to one store,‖ Haber told me.
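The openness Haber is describing is quite concrete: an EPUB file is essentially a zip archive of XHTML content plus XML metadata, which is why any vendor‘s software can read it. The Python sketch below, using only the standard library, builds a stripped-down version of such an archive in memory and reads the book title back out of its Dublin Core metadata. (It is a simplification; a fully valid EPUB also needs a META-INF/container.xml entry pointing at the package file, and the mimetype entry must be stored uncompressed.)

```python
import io
import zipfile
import xml.etree.ElementTree as ET

OPF_NS = "http://www.idpf.org/2007/opf"
DC_NS = "http://purl.org/dc/elements/1.1/"

def make_minimal_epub(title):
    """Build a stripped-down EPUB-style zip in memory."""
    opf = (
        '<?xml version="1.0"?>'
        '<package xmlns="%s" version="2.0">'
        '<metadata xmlns:dc="%s"><dc:title>%s</dc:title></metadata>'
        '</package>' % (OPF_NS, DC_NS, title)
    )
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as z:
        z.writestr("mimetype", "application/epub+zip")
        z.writestr("OEBPS/content.opf", opf)
        z.writestr("OEBPS/chapter1.xhtml",
                   "<html><body><p>Chapter one.</p></body></html>")
    return buf.getvalue()

def read_epub_title(data):
    """Pull the Dublin Core title out of the package metadata."""
    with zipfile.ZipFile(io.BytesIO(data)) as z:
        root = ET.fromstring(z.read("OEBPS/content.opf"))
    return root.find(".//{%s}title" % DC_NS).text
```

The point is that nothing here is proprietary: any reader, from any company, can unpack the same archive and parse the same XML.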
He said Sony is also in the process of overhauling its own e-book store; by the end
of the year, every title it sells will have been converted from Sony‘s old proprietary
format, called BBeB, to EPUB and ACS4. These formats still support DRM restrictions,
if book publishers request them, but they are inherently more flexible than the older
formats. For example, customers of Sony‘s e-book store will be able to download titles
purchased at the Sony e-book store to their computers and to multiple Sony devices—in
fact, they‘ll be able to share a single e-book across up to 6 PCs, 6 tethered e-book devices
(i.e., the Pocket and Touch Editions), and 6 wireless e-book devices (i.e., the Daily
Edition).
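Those per-class limits amount to a simple authorization policy. As a purely hypothetical sketch (the class names and interface here are invented for illustration, not Sony‘s actual DRM machinery), the bookkeeping for one purchased title might look like this:

```python
# Per-purchase device limits as described above: up to six devices
# in each class may be authorized for a single e-book.
LIMITS = {"pc": 6, "tethered": 6, "wireless": 6}

class EbookLicense:
    def __init__(self):
        self.devices = {cls: set() for cls in LIMITS}

    def authorize(self, device_class, device_id):
        """Register a device; refuse once its class is full."""
        registered = self.devices[device_class]
        if device_id in registered:
            return True  # already authorized; uses no new slot
        if len(registered) >= LIMITS[device_class]:
            return False
        registered.add(device_id)
        return True
```

Even this toy version shows how much looser the policy is than a strict one-device tether.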
On the Daily Edition, there‘s also a cool ―Library Finder‖ feature that will let users
see instantly whether their local libraries own copies of e-books they‘d like to read; if
they do, and they‘re not already loaned out, they can check them out instantly for up to
21 days. Try doing that on a Kindle.
I asked Haber how Sony could afford to stay in the e-book business, if it wasn‘t
focused on making consumers buy Sony-provided content. (The consensus among
industry watchers is that Amazon sells the Kindle at a loss and hopes to make money
back by selling e-books, which obviously have a much lower marginal cost.) ―Our
competition will do what they are going to do, but our point is continually to allow
access, as long as it‘s within the rights specified by the publishers,‖ Haber answered.
Consumer pressure will force more e-book distributors to move to open formats
over time, Haber believes. He uses an interesting analogy for the current situation in the
e-book market: ―It‘s like going to the mall with your friends, and one of your friends
says, ‗I can only go into this one store, but you guys can go everywhere.‘ It‘s illogical.
We don‘t think that to be successful in this space you have to cause the customer to be
locked in.‖ Sony has all the latest best-sellers, at competitive prices, Haber says. ―But if
you have another bookstore that you like to buy your books from, please help yourself, or
borrow books from the library. Our value equation is about choice. As long as we make
our e-book store a great experience, and customers enjoy using our devices, then we will
do well.‖
Haber talks a good game—but Sony has such a dubious track record on issues
around DRM that I‘ll probably withhold judgment for a bit longer on how serious the
company really is about access. Meanwhile, Google is not-so-subtly pushing for some of
the same kinds of openness Haber is talking about. In August it made a million of the
public-domain books that it has scanned as part of the Book Search project available in
the EPUB format. And just a couple of weeks ago, at the Frankfurt Book Fair, Google
clarified its plans for selling ―Google Editions‖ starting in 2010; these digital versions of
in-print books will be readable on a range of platforms, including computers, phones, and
dedicated reading devices.
―Here‘s how we could really screw things up‖ with the Google Editions program,
Orwant said at the book festival: ―If we released a hardware device, and it was only able
to let people read Google Editions. That might be something we could do if we simply
wanted to maximize short-term sales. So we‘re not going to do that. Rather, we want to
create an environment in which you buy a book—whether it‘s from Amazon or Barnes &
Noble or a public-domain book—and are able to read it on any device. And I don‘t even
like that word, ‗device.‘ It could be an e-reader or also a browser on a computer, a
netbook, or a mobile phone, or it could mean print-on-demand—purchasing it on the
cloud and getting a printed copy mailed to you.‖
So far, I‘ve been holding up Amazon and Barnes & Noble as the standard-bearers
for the closed e-book model, but the truth is the Sony and Google models may be picking
up some traction at both companies. The Nook can display EPUB titles. In fact, the
reason Barnes & Noble is able to claim that a million titles are available for the Nook—
nearly three times as many titles as are available for the Kindle—is that it‘s counting
500,000 of Google‘s public domain books. And B&N is introducing some other
interesting openness-oriented enticements, such as the ability to lend an e-book you‘ve
purchased to anyone with a B&N e-reading app on their PC, Mac, Blackberry, or iPhone.
Even more amazingly, Nook owners who visit Barnes & Noble stores will be able to read
entire e-books for free (I guess they plan to make back the costs of that giveaway on
coffee and bear-claws).
Even Amazon is loosening up a bit. If you have a Kindle, you can also read the
books you buy on your iPhone, and recently Amazon introduced a PC-based Kindle
application as well. You can have the same Kindle book open simultaneously on up to six
devices, if they‘re all registered to the same account. There are third-party programs such
as Calibre for converting EPUB titles to the Kindle format, which you can then upload to
your device via USB. And earlier this year Amazon bought Lexcycle, the maker of
popular mobile e-reader program Stanza, which supports EPUB books; this could be a
sign that Amazon will eventually embrace the open format (though one cynical blogger
comments that this embrace could well take the form of ―hands around the neck‖).
So, how long will it be before e-books are just like CDs or DVDs, where every disc
works in every player? A few years, minimum. Publishers will need to get more
comfortable with the whole idea of selling content digitally before they let down their
guard on DRM. And the Googles, Sonys, Amazons, and Barnes & Nobles of the world
will need to find reliable ways to attract readers to their platforms rather than resorting to
trapping them there. Who knows—perhaps we might even see a new golden age of
publishing, where e-book distributors add value to their own branded editions by
supplementing them with scholarly introductions, entertaining footnotes, interviews, or
multimedia content. (This is exactly how publishers like Penguin get away with charging
$16 for paperbacks of classic works like Wuthering Heights that have long since entered
the public domain and should, by all rights, be free.)
Our physical bookshelves may look a lot emptier in the near future—but I think our
online ones are likely to get richer and richer.
73: Is it Real or Is It High Dynamic Range? How Software Is Changing the Way We Look at Photographs
November 6, 2009
You know how listening to music on a friend‘s pricey Bose headphones makes it
harder to tolerate your tinny little speakers at home, or watching your favorite show on a
high-definition screen spoils you for regular TV? I‘m at a moment like that in the way I
look at photographs. For the last few weeks, I‘ve been playing around with a new
computerized technique called high dynamic range (HDR) photography, which can lend a
stunning level of brightness, contrast, and detail to digital images. And now every
traditional non-HDR image that I see looks flat and dull by comparison.
It‘s a dilemma, actually, because the HDR ―look‖ can be peculiar, artificial, even
surreal. If you lived in a world where every photograph was made this way, you‘d have a
constant migraine. But for now, I‘m a little bit addicted to HDR. And at the risk of
getting you addicted, too, I want to talk this week about how the technique works, what
you can do with it, and how it can help all of us question some of the conventions and
expectations we‘ve built up around the art of photography, and around the related art of
looking at photographs.
HDR images are unusual because they don‘t represent a single moment in time, like
most photos, but rather are digital fusions of several images of the same scene, taken at
different exposure levels. (In photography, the longer the exposure time, the more light
gets captured by a camera‘s film or digital sensor, and the brighter the resulting image.)
To collect raw material for an HDR image, photographers generally take at least three
pictures: one that‘s underexposed, one that‘s overexposed, and one at a normal exposure.
This is called exposure bracketing.
The easiest way to understand all this is to go and look at the Web version of this
column, which includes several demonstration images of a forested hill against a blue sky
filled with puffy clouds; it's at http://www.xconomy.com/national/2009/11/06/is-it-real-
or-is-it-high-dynamic-range-how-software-is-changing-the-way-we-look-at-
photographs/.
Digital cameras have come a long way in the last 10 years, but the sensors inside
them are still nowhere near as good as the human eye at handling the huge variations in
luminance that occur in the natural world. (Photographers call this variation dynamic
range.) As you can see from the first demonstration image—the one taken at the
standard exposure level that my camera chose automatically—the trees look okay, but the
sky is pretty washed out. That‘s because the camera, in choosing an exposure that would
capture some detail in the hills and leaves, wound up gathering too much light from the
much brighter sky above.
The HDR process offers a way to compensate for this technological limitation. If
you examine the second, underexposed demonstration image, you‘ll notice that the
landscape is pretty dark, but there‘s a lot more detail in the clouds—you can actually see
how shapely they are. Conversely, in the third, overexposed image, the sky is a
featureless white blur, but you can see a lot more stuff happening in the trees—detail that
was largely lost in the shadows in the normal exposure.
Software for creating HDR images uses some fancy math to merge all three
exposure-bracketed photos into a single image that preserves the detail from both the
brightest areas of the underexposed image and the darkest areas of the overexposed
image.
Pretty cool, huh? Of course, you wind up with an image that has far more dynamic
range than photographic paper or standard LCD monitors are capable of conveying. So
most HDR software also goes through a second step called ―tone mapping,‖ which
compresses the fused image into a final version that still has a high contrast ratio and
color saturation, but also looks good when printed or displayed. (Let me forestall
criticism from traditional film photographers right now by acknowledging that certain
kinds of film have a very high dynamic range, and that good photographers can deal with
huge luminance variations without resorting to software tricks. Just look at Ansel
Adams.)
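To make the merge step concrete, here is a toy exposure-fusion sketch in Python. It only illustrates the core idea—weight each pixel by how well-exposed it is, so the sky detail comes from the short exposure and the shadow detail from the long one—and is nothing like the far fancier math inside Photomatix or Photoshop. The pixel values are made-up grayscale numbers, not data from my photos.

```python
import math

# Toy exposure fusion: merge three bracketed "exposures" of one scene.
# Each "image" is a flat list of grayscale pixel values (0-255). Real HDR
# tools work on full-color images with much more sophisticated weighting
# and a separate tone-mapping pass; this only illustrates the core idea.

def well_exposedness(value, mid=128.0, spread=64.0):
    """Weight a pixel by its closeness to mid-gray, so blown-out or
    crushed pixels get little say in the merged result."""
    return math.exp(-((value - mid) ** 2) / (2 * spread ** 2))

def fuse(exposures):
    """Per-pixel weighted average across the bracketed exposures."""
    fused = []
    for pixels in zip(*exposures):
        weights = [well_exposedness(p) for p in pixels]
        fused.append(sum(w * p for w, p in zip(weights, pixels)) / sum(weights))
    return fused

# A four-pixel scene: bright sky (first two pixels), shaded trees (last two).
under  = [140, 120,  10,   5]   # short exposure: cloud detail, black trees
normal = [250, 230,  60,  40]   # auto exposure: sky mostly blown out
over   = [255, 255, 150, 110]   # long exposure: tree detail, white sky

merged = fuse([under, normal, over])
# Sky pixels pull back toward the detailed short exposure; shadow pixels
# brighten toward the detailed long exposure.
```

In the merged result, the sky pixels end up far darker than in the auto exposure and the shadow pixels far brighter—exactly the "detail everywhere" effect described above.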
To make the images above, I simply snapped three pictures of the original subject—
the foothills of the Green Mountains in Vermont—using the auto exposure bracketing
setting on my camera. (The trick here is to hold your camera steady. It‘s best to use a
tripod.) Then I merged the photos on my Mac using Photomatix Pro, a $99 program made
by a small company in France called HDRSoft. There are other standalone programs for
making HDR images, including a few free ones, but Photomatix produces the best results,
in my experience. If you‘re lucky enough to own a recent version of Adobe Photoshop, it
has a built-in function called ―Merge to HDR‖ that does the same thing.
Since buying Photomatix, I‘ve been going a bit wild with the HDR technique. You
can see more of the results by checking out this photoset on Flickr or the slide show at the
end of this story. And don‘t stop with my images—I‘m a rank amateur at this, and to
appreciate the full possibilities of HDR photography you should also look at what
photographers like Jared Earle are doing (I‘m especially struck by this Earle image).
I have to warn you—most of these photos look strange. It‘s not that they‘re
unrealistic, exactly; in point of fact, HDR images are much closer than normal photos to
what the human eye can see in real life, given its amazing sensitivity to wide variations in
luminance. It‘s just that traditional technologies have trained us to expect something very
different from a photographic reproduction. We aren‘t used to seeing so much detail in
both the foreground and the background of a reproduction, and in both the light and the
dark areas.
It‘s because of all this eye candy vying for your attention that some HDR images
almost look staged, as if someone lit the scene with Klieg lights. HDR pictures
sometimes remind me of wall calendars from the 1950s, or of the sparkling, saccharine
worlds created by visually minded directors like Tim Burton or Bryan Fuller (Charlie
and the Chocolate Factory, ―Pushing Daisies‖).
Now, whether HDR photos make good art is another question. A friend of mine,
composer/photographer/author Graham Ramsay, feels that most HDR images look
unreal, and that they assault the viewer with a confusion of detail. He‘s an evangelist for
the idea of intentionality in photography—the careful composition of an image, in both
the shooting and the processing phases, in order to capture an emotion or tell a story. If
every square inch of an image is equally busy with texture and variation, Graham argues,
the viewer‘s eye doesn‘t know where to go; such pictures may convey a lot of
information, but they don‘t convey ideas. An HDR image, from this point of view, is no
closer to art than a photograph snapped by a robotic probe on the surface of Mars.
Another professional photographer, Michael Reichmann, anticipated some of these
criticisms back in 2005, when Adobe first added the ―Merge to HDR‖ function to
Photoshop. Reichmann called the feature ―the holy grail of dynamic range,‖ since it
allows photographers ―to easily create images that were previously impossible, or at least
very difficult to accomplish.‖ But like guns and nuclear power, he warned, photo-editing
software ―can be a force for evil as well as good,‖ and he predicted that many
photographers would use the new feature to create ―some really silly if not downright
ugly images.‖
And perhaps that‘s what I‘ve done. I certainly wouldn‘t argue that my HDR photos
are great art. But I also think that our appetites and cravings as consumers of images can
vary. Sometimes you just want to be enveloped in the artistic sensibilities of an individual
photographer or photojournalist—an Alfred Stieglitz or an Ansel Adams or a Margaret
Bourke-White or a Peter Menzel. And sometimes you may want to immerse yourself in
an image where detail and fidelity, rather than emotion or storyline, are the intention. (I
happen to be the kind of person who enjoys poring over the stunning images that the
Sojourner, Spirit, and Opportunity rovers have sent back from Mars.)
In his essay on HDR, Reichmann allowed for the possibility that ―in the hands of
sensitive artists and competent craftsmen…we will start to be shown the world in new
and exciting ways.‖ It may be a while yet before serious photographers figure out how to
use the HDR technique to make their images more expressive. But already, the
technology is forcing us to reassess what we mean when we say a photograph looks real;
it‘s exposing the fact that the kinds of photos we used to call realistic are actually
attenuated versions of what our eyes really see. (You might even call conventional photos
LDR, for low dynamic range.) HDR is just another tool in the toolbox—and of course it
would be silly to use it too often. But for now, my camera is an HDR hammer, and
everything is looking like a nail.
74: Using Google’s Building Maker to Change the Face of Boston
November 20, 2009
When I was in fifth grade, I wanted to be an architect. (I also wanted to be a
geneticist, a meteorologist, and an astronaut. I guess I wound up doing the next best thing
to all of those sci/tech careers—writing about them.) I loved my junior builder kit, a
collection of little plastic columns and I-beams and snap-on windows that was perfect for
constructing models of International-style skyscrapers like the Sears Tower in Chicago.
The only problem with the kit was that once you‘d finished your perfect modernist
creation, you had to tear it all down before you could build something else.
Now there‘s an easy way to build as many model buildings as you want—and put
them on display for millions of people to see. It‘s Google‘s Building Maker tool, released
last month. The Web-based software lets you easily create beautifully textured 3-D
models of real buildings by matching up simple digital shapes with information from
Google‘s aerial photographs of major cities. You can store your finished models in
Google‘s 3-D Warehouse and submit them to Google for ―publication.‖ If a model is
well-constructed and no one else has built a better version, Google will insert it into
Google Earth itself.
Google made Building Maker available for about 50 world cities when it introduced
the tool on October 13. This Tuesday, it added eight new cities to the list: Boston;
Brussels, Belgium; Cologne and Dortmund in Germany; Las Vegas; Los Angeles;
Rotterdam in the Netherlands; and San Jose, CA. Once I heard Boston had been added to
the list, I couldn‘t resist diving in and playing around with the tool, starting with a model
of my own apartment building in Boston‘s South End.
After a couple of days of experimenting, I can tell that Building Maker is going to
provide some addictive fun for a lot of mapping and modeling freaks like me. But just as
important, I think it will provide a rewarding way for people who aren‘t professional
architects or cartographers to contribute to the ―geoweb.‖ Today, we can explore this
expanding digital replica of the real world through 2-D interfaces like Google Maps,
Google Earth, and Microsoft Virtual Earth. But as it gains fidelity, the geoweb could
eventually blossom into the immersive, geographically accurate 3-D online world that
futurists have called the Metaverse.
If the Metaverse does come into being someday, it will be in large part thanks to
Google, which is on a mission to ―create a three-dimensional model of every built
structure on Earth,‖ according to an October blog post by Google product manager Mark
Limber. But even a company as wealthy as Google doesn‘t have the resources to model
all the world‘s buildings on its own. So in classic Tom Sawyer fashion, it came up with
Building Maker, which makes the work so enjoyable that thousands of Google users will
be glad to pitch in.
From talking with Limber himself yesterday, I‘m convinced that this strategy is
only one part shrewdness and about three parts sheer enthusiasm. ―The world is really
big, and there are an awful lot of buildings, so I do think everybody will have to get
involved‖ to fill out the 3-D world, Limber says. ―But on a personal level, it‘s really fun
to be able to drop a couple of blocks, move them around a bit, add a texture, and voila!
There is a little bit of magic there that we hope will draw people into this whole world of
3-D, and be a little more informed about it because they participated in it.‖
Like all good pastimes, Building Maker starts out simple, but goes very deep. What
makes the tool possible in the first place is the fact that Google has deployed aerial
photographers to fly over scores of cities at low altitude, taking pictures of each
neighborhood from many angles. For any given building in these well-documented cities,
Google is likely to have photos snapped from at least six different angles. Once you
decide which building you want to model, Building Maker starts out by presenting you
with a picture from one of these angles. In the first step in the model-building process, the
program places an outline of a 3-D box over the photo, and your job is simply to drag the
corners of the box until they match up with the corners of the building in the image.
In the easiest case—a building that‘s a simple rhomboid, with a flat roof and no
wings or protrusions—placing that one box and aligning the corners is almost all you
need to do. The only further step is to examine and adjust your model from other angles.
Because you‘re working from 2-D images, you have to make sure the corners in the
digital model match the corners in at least two different images before Google can know
exactly where the model should go in the ―three-space‖ of Google Earth, and what
sections of the images should be applied to the sides of your model to make it look real.
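The reason two views are enough is basic triangulation: each photo pins the corner to a sight line running from the camera, and the corner sits where the two lines (nearly) cross. Here's a toy sketch of that geometry in Python—a simplified illustration with made-up camera positions, not Google's actual photogrammetry pipeline:

```python
# Toy triangulation: why matching a corner in two photos is enough to
# place it in 3-D. Each photo constrains the corner to a sight line from
# the camera; we recover the corner as the midpoint of the two lines'
# closest approach. Camera positions and directions here are invented.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triangulate(p1, d1, p2, d2):
    """Midpoint of closest approach between two sight lines, each given
    as a camera position p and a viewing direction d."""
    w0 = [a - b for a, b in zip(p1, p2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b              # zero only if the lines are parallel
    t = (b * e - c * d) / denom        # parameter of closest point on line 1
    s = (a * e - b * d) / denom        # parameter of closest point on line 2
    q1 = [p + t * v for p, v in zip(p1, d1)]
    q2 = [p + s * v for p, v in zip(p2, d2)]
    return [(u + v) / 2 for u, v in zip(q1, q2)]

# Two hypothetical aerial cameras both sighting the same rooftop corner:
corner = triangulate([0, 0, 0], [2, 3, 5], [10, 0, 0], [-8, 3, 5])
```

Each additional photo adds another sight line, which is why aligning the corners in four, five, or six views pins the model down more accurately.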
Practically speaking, I found that you need to do the alignment from four, five, or
six angles to get everything just right. While this may sound tricky, it‘s actually quite
straightforward, and will be especially easy for anyone who‘s used other 3-D modeling
tools such as Google Sketchup (Building Maker‘s grown-up cousin) or the object creation
tools in the virtual world Second Life.
Things start to get complicated—and much more interesting—when you‘re
modeling a more complex building. It was a lucky stroke for me that the place where I
live, a building called James Court that was constructed in 2005 by the Boston-based
real estate developer Kenney Development, is both absent from Google Earth (meaning I could
be the first to build a model of it) and fairly simple in shape: it‘s a seven-story building
that forms an L-shape on the corner of Newton and Harrison Streets, near the Washington
Street corridor in Boston‘s South End. But it has just enough irregular features, including
a step-back roof on one wing and an overhanging brow on the Newton Street facade, that
I had to learn a few of Building Maker‘s more esoteric tricks, such as the technique for
creating new boxes and attaching them to existing ones, to make the model come out
right.
As a historical aside, James Court is in a neighborhood that has gone through an
incredible transformation over the last two decades. Washington Street runs along the
narrow neck of land that, for centuries, was Boston‘s only connection to the mainland. In
the 1800s, the tidal flats on either side were gradually filled in to create room for rows of
fashionable brownstones. But by the 1980s the Washington Street area was so decrepit
that the creators of the NBC series St. Elsewhere (1982-1988) chose Franklin Square
House, the building across the street from the James Court site, as the exterior for St.
Eligius, the show‘s benighted urban hospital.
Things started to turn around in the late 1980s. The elevated railway where Orange
Line trains can be seen rumbling past the hospital in St. Elsewhere‘s opening sequence
was torn down; the street‘s parks and sidewalks were rebuilt; many new condo buildings
went up, and many historic buildings were renovated; and environmentally friendly
Silver Line buses replaced the old elevated. Nearly $600 million was poured into the
area‘s revitalization all told, and in 2008 the street won a ―Great Places in America‖ prize
from the American Planning Association.
I feel that modeling my building for Google Earth helps to extend this story in at
least a small way, by adding to the digital environment that other people can now use to
explore and navigate the reborn neighborhood. You can go to Google‘s 3-D Warehouse
to view or download my finished model of James Court—and if you click on the ―View
in Google Earth‖ button you can preview what the building will look like inside Google
Earth, if and when Google approves it. For my next project, I think I‘ll try modeling ―St.
Eligius‖ itself, a building to which James Court pays architectural homage in many ways.
(Its curving mansard roof will pose an interesting geometry challenge). [Update
11/21/09: I've now finished a first draft of the "St. Eligius" building, which, as I just
learned, started out as the St. James Hotel in 1867. President Ulysses S. Grant stayed
there in 1869.]
The reason I‘m so excited about Building Maker—and about digital mapping and
modeling tools in general—is that I think they can foster a deeper sense of connection to
the real world. Even before Building Maker, there was a burgeoning community of
volunteer geo-modelers contributing their Sketchup creations to Google Earth, but now
many more people can have the experience of literally putting something on the map. As
Limber says: ―If we put things in the hands of users, they can keep things fresh, put time
and love into their creations, and frankly build out the world in ways Google can‘t or
won‘t for a long time. We‘re trying to demonstrate with tools like Building Maker and
Sketchup and Map Maker and My Maps that maps are very dynamic things and that the
world can help to create them and keep them up to date.‖
So even if your city isn‘t one of those covered by Building Maker yet, I encourage
you to pick a location and try creating something. It‘ll nourish your inner architect. And
because the models are stored in open formats that can be imported into many different
digital environments (not just Google Earth), you‘ll be doing a favor to every citizen of
the emerging Metaverse.
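Concretely, the open formats in question are COLLADA mesh files (.dae) wrapped in KML—both published specifications; a 3-D Warehouse model is essentially a zip (KMZ) of the two. A minimal, hypothetical KML wrapper might look like this (the file name and coordinates below are made-up placeholders, not my actual James Court data):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Example building model</name>
    <Model>
      <!-- Approximate, made-up coordinates for illustration only -->
      <Location>
        <longitude>-71.07</longitude>
        <latitude>42.34</latitude>
        <altitude>0</altitude>
      </Location>
      <Link>
        <!-- The COLLADA mesh carrying the geometry and photo textures -->
        <href>models/building.dae</href>
      </Link>
    </Model>
  </Placemark>
</kml>
```

Any program that can parse KML and COLLADA—not just Google Earth—can place and render the same model.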
75: Digital Magazines Emerge—But Glossy Paper Publishers Haven’t Turned the Page on the Past
December 18, 2009
With all the drama this year around newspapers, including the Boston Globe‘s near-
death experience and the actual demise of several other papers such as the Seattle P-I and
the Rocky Mountain News, there‘s been slightly less hoopla over the fate of magazines.
They‘re dealing with many of the same problems as newspapers, including a falloff in
advertising, competition from online-only media, and the ever-rising cost of paper,
printing, and distribution. One difference is that it‘s happening on a timeline that allows a
bit more breathing room, given that most magazine publishers aren‘t saddled with the
same kind of debt that‘s crushing the big newspaper chains.
That means magazines have more leeway to experiment with new publishing and
business strategies that could help them through the digital transition. For most
newspapers, it‘s already too late. When you‘re losing tens of millions of dollars a year, as
the Globe still is, you‘re fixated on cost-cutting to keep the doors open a couple more
quarters, not creative ideas for the long-term future.
So, what use have magazines been making of this time? What delightful and
innovative digital creations have they unleashed? It‘s a question that matters to me
personally, given my past experience at magazines like Science and Technology Review,
and my natural concern for the future of journalism.
Sadly, the answer so far is: not many. If I had to pick a word for most of the e-
magazine experiments I‘ve been seeing lately, it would be ―unimaginative.‖ Magazine
publishers seem to hope that they can get away with transplanting their existing print
layouts onto the electronic screen—as if it were enough to take the finished publication
files, export them to PDF, and be done with it. This way, publishers wouldn‘t have to do
the hard work of rethinking the kinds of work they commission, the ways different types
of content fit together, or what makes magazines special in the first place.
Take a look at Zmags, a Boston company that works with magazines such as
SmartCEO. Zmags has a tool called Publicator that takes print magazine spreads and
frames them inside a PC browser window. Of course, cramming a whole spread onto a
computer screen means making the text pretty small, so Zmags provides a handy
magnifying-glass icon that lets you zoom in on a particular article or advertisement. It‘s a
lot like another e-magazine interface made by San Francisco-based Zinio, the main
difference being that Zinio magazines are displayed inside a standalone e-reader program
rather than a Web browser. (Technology Review experimented with Zinio while I was an
editor there.)
Both Zinio and Zmags generate a nifty little page-turn animation when you want to
look at the next spread. And both companies seem to have concluded that what readers
want from digital magazines is absolute fidelity to the print product, right down to the
familiar experience of turning the page. (Well, that‘s a little unfair. What they‘ve actually
concluded is that to sell their software to publishers, they have to make it fit with the
existing print-magazine workflow, which revolves entirely around tools like
QuarkXPress and Adobe InDesign that were developed for laying out print pages.)
Things aren‘t much better in the mobile world. Zinio is reportedly developing an
iPhone version of its reader that will give mobile readers access to the same Zinio digital
editions they‘ve purchased for their PCs, and vice versa; users will be able to skim
through magazines using the now-familiar flicking gesture. Gentlemen‘s Quarterly is
already trying something like that with the $2.99 iPhone version of its December 2009
―Man of the Year‖ issue—or rather, part of the issue, as most of the articles seem to have
been omitted in favor of photos (including timeless ones of Paul Rudd in a pink bathrobe,
Twitter‘s Evan Williams and Biz Stone tweeting behind each other‘s backs, and the new
Captain Kirk flying a paper Starship Enterprise).
The only interesting twist in the GQ app is that if you hold the phone vertically, you
get a scrolling table of contents and Web-style article pages, and if you hold it
horizontally, you get tiny facsimiles of the corresponding pages in the print magazine.
You can double-tap the screen to zoom and navigate between pages by flicking. Alas, I
couldn‘t get all the way through the magazine, as the app kept crashing on me.
A few digital publications are getting slightly more creative. Jettison Quarterly, a
Flash-based online periodical focused on the Chicago arts and culture scene, still has the
goofy Zmags-style page turn animations, but it veers away from the literal magazine
metaphor in a few respects. For example, it uses fonts that are sufficiently large that you
don‘t have to zoom in to read the text. And it usually plasters words and images right
over the gutter, the area around the fold in a traditional magazine spread.
However, Jettison is inconsistent on this score—and the page turn animation is built
around the premise that there is a fold, meaning the readers are never quite sure what
they‘re meant to be looking at. Is it a print spread? A Web page? A billboard? It‘s odd to
see Jettison‘s designers confining themselves to the old paper metaphors when the
magazine doesn‘t even have a print edition.
Flyp, a multimedia publication based in New York, is taking more chances. It‘s still
guided by the magazine spread metaphor (and it‘s still got the goofy page-turn
animations!) but its tagline—―More than a magazine‖—is accurate. Flyp‘s creators aren‘t
just churning out the standard text and photos. There‘s also audio, video, and animated
Flash infographics, and the feature packages come with Hollywood-grade video
introductions—the intro to Flyp's piece on end-of-life care is a nice example.
I like what Flyp‘s designers are doing so far because they‘re not slavishly imitating
print magazines. Rather, the publication uses new media to carve out a space analogous
to, not the same as, the one that magazines inhabit in the print world.
What do I mean by that? I think there are a few fundamental things that set
magazines apart from newspapers. One is the tone and intent of the articles: A little less
rushed and ephemeral, a little more synthetic, analytical, and writerly. Another is the
imagery—especially large-format photography and things like charts and maps. Then
there‘s the design, meaning the way text and images are juxtaposed, and the way
typography itself is used as a graphical element. In the hands of good editors and
designers, these things can be brought together to create an immersive experience that
makes full use of the possibilities of print—the ―affordances‖ of the paper medium, as an
interaction designer might put it.
But if you just transplant that same experience from paper onto a screen, the way
Zmags and Zinio do, you create something that immediately feels stunted and
incomplete, because digital environments provide different affordances from paper. (You
can scroll an online article up or down infinitely without ever having to ―turn‖ a page, to
name just one.)
Flyp‘s articles are well written and serious, and they integrate story material with
photos, videos, and animation in a way that feels inviting, not imposing or forced. (This
piece about Liz Diller and Ric Scofidio, the married architect/designer couple behind the
new Institute of Contemporary Art building in Boston, is especially good.) With Flyp,
you get to explore a subject at your own pace in a guided setting. You aren‘t
overwhelmed with data, but since the whole thing is running inside a browser window,
Google is only a click away. The publication hasn‘t moved completely beyond the
metaphors of paper—but perhaps you can only stretch readers‘ sensibilities so far before
you have to stop and let them catch up.
Unfortunately, it‘s not clear whether Flyp is a real business or just an experiment.
Its parent company, Flyp Media, is financed by Alfonso Romo, the mogul behind
Mexico‘s Indigo Media, and so far the publication hasn‘t been selling advertising or
producing other visible forms of revenue. Flyp‘s editor-in-chief, longtime magazine
journalist and editor Jim Gaines, calls the publication ―a proof-of-concept experiment in
terms of multimedia story telling‖ rather than a commercial product. I think the concept
has been proved; I hope the company can find a way to monetize it.
If you really want a sense of what I mean by the unique affordances of digital
media, and how magazine designers might use them, take a look at this concept video
produced by Bonnier, the Swedish holding company that owns magazines such as Field
& Stream, Popular Science, and Popular Photography. Take the video with a grain of
salt; it‘s just a demo, mocked up by a design consultancy in London called Berg, and it
will be years before interfaces like the ones shown are working on real devices. But
what the video demonstrates is that someone, at least, is thinking deeply about the
―geography‖ of magazine content, as the Berg designer in the video puts it.
For example, even though text and images are, at some level, at odds with each
other—one is there to induce an immersive reading experience, and the other is there to
provoke amazement—they don‘t have to compete. Instead, the video shows how each can
be literally brought into focus when needed. (I love the Berg designer‘s observation that
the page-turn animations in most e-magazine readers are ―not terribly believable‖ and
that they ―don‘t feel very honest to the format of the screen.‖)
There‘s been a flurry of online discussion in the last couple of weeks about e-
magazines, especially with the announcement by a consortium of publishers, including
Condé Nast, Time Inc., Hearst, Meredith, and News Corp., that they‘re working on joint
standards for some kind of digital magazine storefront. The details are still vague, but the
consortium members no doubt feel that they can‘t afford to let Amazon continue to make
the rules in the e-publishing world. (About 40 mainstream magazines are available so far
for the Kindle 2 and the Kindle DX, which actually make very credible e-magazine
readers.) And they probably want to do what they can to pre-empt Apple, which—unless
Steve Jobs has completely lost his touch—will try to use its rumored tablet device to
disrupt the publishing industry in the same way that the iPod and the iPhone have
disrupted the worlds of music and mobile applications.
Magazine publishers may finally be realizing that they need to greet the digital
future proactively, or risk going the way of the newspapers. Let‘s hope they also realize
that this may require moving beyond familiar concepts like pages, and thinking instead
about how to use the new tools at hand to tell more compelling stories.
76: Tablet Fever: How Apple Could Go
Where No Computer Maker Has Gone
Before
January 8, 2010
After a steady crescendo over the last several years, the talk in the mediasphere
about a new tablet computer from Apple has reached deafening proportions. With an
actual product announcement now expected on January 27 (at least, according to the Wall
Street Journal, which cites ―sources in a position to know‖), Apple may finally be on the
verge of providing some official data to quell the many and oft-conflicting rumors.
I‘m as curious as all of my tech-journalist colleagues about what Apple will reveal.
And my inner gadget freak is impatient, too. Speaking purely with my consumer hat on,
I‘ve long been budgeting mentally for an ―iSlate‖ purchase sometime in 2010. There‘s
only one company where I‘d commit sight unseen, years in advance, to dropping a grand
on the next new thing, and it‘s Apple.
But what‘s really been catching my interest, as we wait for news from the horse‘s
mouth, is the apparent strength of the market pull for Apple‘s hypothetical tablet.
Everybody, it seems, desperately wants the iSlate rumors to be true: bloggers, journalists,
publishers, mobile application developers, generic geeks, and even average consumers.
Indeed, the expectations have built up to such a pitch that if the January 27 event doesn‘t
materialize, or if it‘s not about a tablet device, Apple‘s PR team will have global-scale
disappointment to deal with.
The details don‘t seem to matter. Whether the device is called the iSlate or the iPad
or the MacBook Touch; whether its screen measures 7 inches diagonally or 9 or 11;
whether it costs $600 or $1,000; whether it‘s primarily designed as an e-reader or a
gaming pad or keyboardless netbook—most observers seem to agree that the Apple tablet
will be über-cool, that the company will sell millions of units, and that 2010 will be the
year of the tablet.
Whether or not you buy into that consensus (and I do, more or less, though there are
also a few dissenters), you have to admit that all this enthusiasm is a little strange, given
that the market has shown so little interest in tablet computers up to now.
Tablets are a very old idea—in fact, the first computer that can rightly be called a
PC, Alan Kay‘s 1968 Dynabook, was a tablet device. (The Dynabook concept evolved
into the Xerox Alto, which inspired the Apple Lisa and the Apple Macintosh, which
eventually spawned the Apple iPhone, which paved the way for the alleged iSlate—so in
a way, personal computing is now coming full circle.) But it‘s a product category that has
never quite matched up with an identifiable consumer need.
Apple‘s Newton was essentially a small tablet, and Steve Jobs himself killed the
product in 1997 after disappointing sales and embarrassments over the device‘s
suboptimal handwriting recognition capabilities. Full PCs with touchscreens and pen
interfaces have been on the market since 2001, when Microsoft introduced a tablet
version of Windows, but they‘ve never sold more than a few hundred thousand units a
year, and have never caught on outside a few specialized habitats, such as hospitals,
shipping and logistics operations, surveying and mapping, and the military.
So, what accounts for the dissonance here? Why are the same consumers who have
been so apathetic about the tablet form-factor in the past suddenly so excited about a
possible Apple version? I think there are several things going on.
First, as Pen Computing Magazine founder Conrad Blickenstorfer has pointed out,
most of the tablets built to date have suffered from the same set of fatal drawbacks. On
the input side, if you‘re going to dispense with a physical keyboard, then you‘d better
have either perfect handwriting recognition, an efficient virtual keyboard, or highly
accurate voice recognition—but tablet PCs have had none of these to date. On the
output/display side, pen and gesture-based interfaces allow users to interact with data in
all sorts of interesting new ways, yet Microsoft never fully explored these possibilities,
settling instead for an operating system (Windows) that had been designed for use with a
mouse. Above all, there‘s the cost issue: most tablet PCs have been priced in the same
league with premium laptops, which is a real show-stopper, given that most tablets are
less powerful and harder to use than standard PCs.
Second, there is now strong proof that the input/output problems plaguing tablets in
the past can be solved. That proof is the iPhone. The phone‘s virtual keyboard works
well, at least for entering short stretches of text such as search keywords, Tweets, or brief
e-mails. The device supports high-accuracy voice recognition, as apps from companies
like Google and Cambridge, MA-based Vlingo demonstrate. And most importantly,
Apple has finally figured out what touchscreens are really good for. The iPhone OS was
designed from the ground up to support now-familiar gestures like flicking with one
finger to move content around the screen or spreading and pinching with two fingers to
zoom in or out on a photo or web page. Apple‘s Cocoa Touch application programming
interface makes it easy for developers to build apps around such gestures.
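Under the hood, a gesture like pinch-to-zoom reduces to simple geometry: the zoom factor is just the ratio of the current distance between the two touch points to their distance when the gesture began. This sketch is an illustration of that idea only, not Apple‘s actual implementation—Cocoa Touch‘s gesture recognizers handle all of this for the developer:

```python
import math

def pinch_scale(start_touches, current_touches):
    """Zoom factor implied by two fingers moving apart (>1) or together (<1)."""
    def distance(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])
    return distance(*current_touches) / distance(*start_touches)

# Fingers start 100 px apart and spread to 250 px: zoom in 2.5x
print(pinch_scale([(100, 200), (200, 200)], [(25, 200), (275, 200)]))  # 2.5
```

The same ratio, applied continuously as the touches move, is what makes a photo appear to stretch and shrink under your fingertips.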
These developers have built so many amazing iPhone apps (with some of the
coolest ones coming, ironically, from Microsoft—witness Photosynth and Seadragon)
that you can‘t help salivating over what they might create if they had more screen real
estate to work with. (An iSlate with a 10.5-inch screen would have seven times as much
touchable surface area as the iPhone‘s 3.5-inch screen, according to calculations by
Apple news site iLounge.) As Blickenstorfer opined to the New York Times, ―The sole
reason for the renewed interest [in tablet computing] is that with the iPhone, Apple has
shown that touch can work elegantly, effortlessly and beautifully.‖
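The screen-area arithmetic behind iLounge‘s comparison is worth spelling out, because area grows with the square of the diagonal. The aspect ratios below are my own assumptions, not anything Apple or iLounge has confirmed—which is also why the multiple comes out closer to nine here, while iLounge‘s seven-fold figure presumably reflects different assumptions:

```python
def screen_area(diagonal, aspect):
    """Area of a rectangular screen given its diagonal and width:height aspect ratio."""
    # width = aspect * height, and diagonal^2 = width^2 + height^2
    height = diagonal / (aspect ** 2 + 1) ** 0.5
    width = aspect * height
    return width * height

iphone = screen_area(3.5, 3 / 2)   # original iPhone, 3:2 aspect (assumed)
tablet = screen_area(10.5, 4 / 3)  # hypothetical 10.5-inch slate, 4:3 (assumed)
print(f"{tablet / iphone:.1f}x the touchable surface area")  # 9.4x
```

Whatever the exact multiple, the point stands: tripling the diagonal gives developers roughly an order of magnitude more surface to design for.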
But I believe there‘s also a third force at work here, separate from all of the specific
workings of tablet interfaces. The astonishing versatility of the iPhone—which is a cell
phone, a media player, a Web terminal, an e-mail and instant messaging device, a
camera, a GPS navigator, an e-reader, an audio recorder, a game pad, a remote control, a
drawing pad, and much more—has awakened consumers to the idea that a computer that
goes out into the world with you can be much more powerful than a computer that just
sits on your lap or on your desk, even if it doesn‘t pack quite as many gigahertz of
processing speed or megabits per second of connectivity.
The big picture is that the applications of computing have gone way beyond basic
number-crunching to encompass everyday communications—including both data
presentation (e.g., YouTube) and data capture and manipulation (e.g., camera phones). A
device that you can take with you everywhere, and that can both supply you with content
on demand and help you create and publish new content, can be a huge boon to personal
learning and creativity. It can make you into a universal student, an expert navigator, a
24/7 social networker, or a walking video/podcasting studio.
But today‘s tablet PCs aren‘t really portable enough to take everywhere. Most of
them are laptop-sized and weigh several pounds at a minimum. And the iPhone, as smart
as it is, is still just a phone. The small size of its screen limits the amount of data that you
can see or manipulate at any one time.
We need something in between: a device that is small and light enough to take
anywhere, but has a screen big enough to let you edit a complex video, watch a high-
definition movie, view a whole book or magazine page, or paint on a virtual canvas—
and, ideally, use multiple applications at once.
Right now, that sweet spot is still empty. It‘s as if there‘s a black hole there,
exerting a huge gravitational pull on our imaginations. And that‘s the hole where
consumers hope the Apple iSlate will fit.
If you‘ll indulge me, I‘d like to take a brief closing detour into the world of that key
cultural touchstone, Star Trek. On the U.S.S. Enterprise, there are only three kinds of
computing devices. On the big end of the size scale, there‘s the ship‘s computer, which
has a huge, immobile core hidden somewhere deep inside the vessel, and which interacts
with crew members primarily through spoken conversation. On the other end, there are
the mobile devices: tricorders—scientific sensing-recording devices used mainly on away
missions—and PADDs or ―personal access display devices,‖ used aboard ship as portable
reading devices or clipboards. Captain Picard‘s desk, in Star Trek: The Next Generation,
was usually cluttered with several of these gadgets.
You don‘t see anything in between these extremes: no desktop PCs, no laptops. I
think that‘s because the Star Trek writers were on to something important—a truth that‘s
only now becoming evident in real life. (Chalk up one more accurate prediction to
Roddenberry and company.) It‘s that big, important number-crunching jobs like aiming
the photon torpedoes or predicting the weather on Titan are best assigned to invisible, far-
away computing resources: the ship‘s computer, or what we call ―the cloud‖ in today‘s
world. More personal communications tasks, like reading the crew manifest or
composing an e-mail to Starfleet or editing a photo from your shore leave on Rigel, take
far less computing power and can be handled locally, on mobile devices.
In our world, the number of jobs that can‘t be accomplished in one of these two
ways—in other words, the number of tasks where you truly need a desktop or a laptop
PC—is rapidly dwindling. I‘ve been an iPhone convert since the beginning because I see
the device as a step toward the universal mobile computing device—part tricorder, part
PADD—that I think most people will be carrying around (and using as their main
computer) a decade or two from now. Unless all of the prognosticators are wrong, the
iSlate will be even closer to this vision.
At this point, of course, it‘s easy to project almost any hope or dream you want onto
the rumored Apple project. As John Murrell pointed out at SiliconValley.com this week,
the Apple tablet is ―still unseen and therefore perfect,‖ while other entries in the tablet
category—such as the Windows 7-powered Hewlett-Packard slate that Microsoft CEO
Steve Ballmer showed off at the Consumer Electronics Show on Wednesday—must
contend with the harsh light of reality. But we only have to wait a few more weeks to find
out what the future really looks like to Apple.
77: Entrepreneurship May Work Like A
Clock, But It Still Needs Winding:
Exploring the Kauffman Study on New
Firm Formation
January 15, 2010
Like others in the tech-journalism business, we here at Xconomy tend to pore over
the latest statistics about the entrepreneurial economy pretty obsessively: how much
money venture firms are raising and investing from quarter to quarter; how much they
dole out to each new startup in their portfolios; how much these portfolio companies
eventually return to their investors through mergers, acquisitions, or public offerings.
But what if none of this really matters? What if it turned out that the number of new
companies created by entrepreneurs is pretty much the same every year—and that things
like how much money venture firms are handing out, or how many companies are
achieving lucrative exits, or how many students are graduating from business school, or
how many startup incubator programs are springing up, make no difference whatsoever to
the nation‘s overall levels of entrepreneurial activity? Would this mean that all the
conferences and white papers and blog posts about the best ways to boost innovation and
entrepreneurship are, in the end, pointless?
Well, that‘s a serious question now—because the last three decades of data,
according to a new study from the Ewing Marion Kauffman Foundation in Kansas City,
MO, show that the number of new businesses incorporated in the United States holds
steady at about 700,000 per year, give or take 50,000. It‘s as regular as clockwork. In
fact, it‘s as if American entrepreneurs were programmed to start 700,000 new ventures
every year—in the same way that, say, American parents pass on the genes for red-
headedness to roughly 170,000 newborns every year.
You can read all about it in Exploring Firm Formation: Why Is the Number of New
Firms Constant?, by Kauffman Foundation senior analyst Dane Stangler and senior
fellow Paul Kedrosky. (Kedrosky, a San Diego-based investor, entrepreneur, and
essayist, is also an Xconomist, and to complete the disclosures, the Kauffman Foundation
is an underwriter of Xconomy‘s Startups Channel.) When I first met Stangler at a
Kauffman Foundation function last October, he and Kedrosky were still puzzling over the
numbers they‘d been digging up from places like the Census Bureau, the Small Business
Administration, and the Bureau of Labor Statistics, which all seemed to show the same
thing: Americans start the same number of businesses every year, come hell or high
water.
That is a remarkable and, at least on the surface, counterintuitive finding. As
Stangler and Kedrosky point out in their final report, which was published Wednesday, a
casual observer might guess that the number of new firms would fluctuate from year to
year in response to such major forces as economic recessions or expansions,
technological change, and the availability of capital and credit. (We certainly hear the
howls of local technology innovators every time venture firms scale back the number or
size of Series A rounds.) But these things don‘t seem to make any difference in the big
picture.
―It‘s a real puzzle, and it didn‘t appear as if anyone else had noticed it or written
about it,‖ Stangler told me by phone yesterday.
He and Kedrosky might not have noticed the phenomenon themselves if they hadn‘t
already been examining, for a different study, the question of survival rates for new
companies. The percentage of the companies founded in 1990 that were still in business
in 1995, they‘d found, is almost exactly the same as the percentage of companies founded
in 2002 that were still around in 2007. (It‘s about 50 percent.) ―That was interesting,‖
Stangler says, ―and one of the possible inputs to that is that the number of new companies
founded each year is remarkably similar‖—which turned out to be the case.
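That 50 percent figure lends itself to a quick back-of-the-envelope calculation. If you assume a constant year-over-year survival rate—my simplifying assumption, not a claim the Kauffman researchers make—then half a cohort disappearing over five years implies a rate s with s**5 = 0.5:

```python
# Implied constant annual survival rate for a startup cohort,
# given that roughly 50% of firms are still alive after 5 years.
five_year_survival = 0.50
annual_survival = five_year_survival ** (1 / 5)
print(f"{annual_survival:.1%} of firms survive each year")  # about 87.1%
```

In other words, the steady-state picture is one where roughly one firm in eight winks out every year, replaced by the next wave of the 700,000.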
Nobody had noticed this fact before, Stangler speculates, because it‘s about
constancy, not change: ―You don‘t stop to think, ‗Why is there not a trend here?‘ You
have to recognize the absence of something.‖
Being good scientists, Kedrosky and Stangler first checked to see whether there
might be something wrong with their instruments—that is, that the data might be wrong
or incomplete. But all the datasets they checked showed the same level of consistency in
firm formation over time. And even if the incorporation records were undercounting
some firms, and thus perhaps missing some level of fluctuation within the uncounted
ones, it wouldn‘t explain the clockwork consistency in the number of new firms that were
counted.
The two researchers also considered the possibility that the period for which they
had the best data, 1977 to 2005, was anomalous in some way, and that other eras of U.S.
history might show more tumult. But when they examined Census Bureau data covering
the period 1944 to 1959, they found that even though the sheer number of firms created
then was lower because of the smaller population, there was still the same eerie
consistency.
The single exception was 1946, when more than 600,000 new firms were founded,
compared to the base level for the period of 400,000 per year. (Annual firm formation
grew between 1960 and 1977 to the 700,000 level and has stayed steady since then.) In
fact, 1946—when so many veterans were returning from overseas, and the wartime
economy was being converted back to consumer production—was the only year, out of
all the periods Kedrosky and Stangler examined, where they thought they could detect the
influence of an ―exogenous factor‖ on the steady drone of new company formation.
So the phenomenon seems real. But what could possibly explain it? In their paper,
Kedrosky and Stangler run through about half a dozen hypotheses; I won‘t detail most of
them here. In the end, Stangler tells me, only two of the explanations seemed compelling
to him.
One is demographic stability. Between 1950 and 1975, the share of the overall U.S.
population that was of working age, meaning between the ages of 15 and 64, fluctuated
quite a bit. But around 1977, this number settled down, and stayed steady (at about 66
percent) throughout the entire 30-year period the Kauffman researchers examined. If a
country has a stable population of relatively young, working-age people, then you might
expect the number of new businesses they start every year to be roughly constant. (If
there‘s any truth to this hypothesis, Stangler points out, then we may be in for some big
changes on the entrepreneurial scene, since the approaching retirement of the Baby Boom
generation means the non-working population is about to get a lot larger.)
The other explanation that Stangler likes has to do with how we define startups. For
most Xconomy readers, the word ―startup‖ probably brings to mind a young company
innovating in some area like information technology, energy, or biotechnology. And the
rate of formation of those types of startups may indeed be sensitive to factors like
whether we‘re in the midst of a technological revolution (e.g., the PC or mobile
revolutions) or how flush venture capitalists and their limited partner investors are
feeling. But in fact, the vast majority of new companies formed every year may be much
more prosaic: restaurants, law offices, retail stores, bookkeepers, medical clinics. The
demand for such service-oriented businesses may be more or less consistent, tracking
only with population levels—which might, one could imagine, induce a consistent
number of entrepreneurs each year to act to meet the demand.
―If you just read the glossy magazines, you‘d think that all startups are software
companies, and if you just read the economics research, you‘d think that all startups are
in manufacturing, because that‘s the sector with the best data,‖ says Stangler. ―But there
is this huge gap between those two things and the reality, which is that there are a lot of
quote-unquote ‗normal‘ companies where people are taking a chance and pursuing an
opportunity. The research has not seen them as real entrepreneurs, but they are still going
to create jobs.‖
If this theory is right, you wouldn‘t think that it would be very difficult to test it.
But in fact, Stangler says the data on what kinds of companies get created every year is
very fuzzy. The Census Bureau breaks companies down into nine huge ―super-sectors,‖
with everything from healthcare to education to R&D getting lumped into one big sector
called ―Services.‖ So Stangler says he and Kedrosky will need to do more research to
disentangle technology-driven startups from others, and to figure out whether exogenous
factors have more impact on entrepreneurship in some sectors than in others.
There‘s a big caveat to all of this: The detailed firm formation data that Stangler and
Kedrosky located only covers the period 1977-2005, meaning we don‘t know yet what
effect the Great Recession of 2007-2009 had, or may still be having, on levels of
entrepreneurship. ―This may be an inflection point where we are going to see a
permanent reduction in new firm starts, or maybe a permanent increase as people decide
not to go back to big firms,‖ says Stangler. ―That‘s a question that won‘t be answered for
a long time.‖
And here‘s an even bigger caveat: The fact that new firm formation is so consistent
may be irrelevant to the nation‘s overall economic health. If other research going on at
the Kauffman Foundation is correct, then what really matters for economic growth is how
many of the firms spawned each year mature into ―high-growth‖ companies that hire lots
of people and change their industries. In any given year, just 5 percent of companies
account for two-thirds of the new jobs created, Stangler and Bob Litan, the Kauffman
Foundation‘s vice president of research and policy, found in a 2009 paper. For an
example of this kind of stratification, you need look no further than the search-engine
business: Silicon Valley entrepreneurs started scores of search-related startups back in the
1990s, but today only one, Google, really counts. In other words, it could be that ―the
process rather than the input is what matters,‖ as Kedrosky and Stangler write.
If all this is true, and if the number of new firms competing to become high-growth
firms stays constant no matter what, then how should we answer my original question?
(Which was, roughly, whether we really need to sweat the details—things like quarterly
swings in venture activity, or the resources going toward the promotion of
entrepreneurship on college campuses or through incubator programs like TechStars.) If
you want my opinion—and Stangler‘s—the answer is yes.
Look at it this way: the United States is doing something right, even though we‘re
not exactly sure what it is. Year in and year out, in good times and bad, Americans start
700,000 new companies, which, if you think about it, is sort of amazing. It may testify to
a certain level of resilience and drive in the American character, or it may have more to
do with social policies and cultural factors that encourage and reward risk-taking.
Whatever the answer, this is one tendency we don‘t want to mess with.
As Stangler puts it, ―It‘s difficult to prospectively tell which companies will
succeed/survive or which ones will be high-growth, so it‘s highly important that we have
thousands of people trying to do it each year.‖ So until we understand entrepreneurship
better, we can‘t afford to stop obsessing over it.
78: The Apple Paradox: How a Company
That’s So Closed Can Foster So Much
Open Innovation
January 25, 2010
Come Wednesday, we‘ll learn a lot more about Apple‘s presumed slate device.
What we know right now, first hand, is a big fat nothing. Apple keeps a famously tight lid
on its employees, suppliers, and partners, the only exception being the occasional
strategic leak designed to spur excitement around its product launches. Even after
products come out, the company controls who gets to see and monkey with them; I
remember my frustration back in the spring of 2008, in the months between the
announcement of the iTunes App Store and the actual launch, when I knew that dozens of
local developers were writing apps for the iPhone but none of them were allowed to show
their apps to journalists, on pain of ejection from the program. To this day, there‘s still a
rigorous and unpredictable process for getting an app into the store (though there are
signs of relaxation in that department).
And yet millions of designers, artists, musicians, writers, programmers, and other
creative professionals love their Apple products, myself included. The Apple brand is
almost synonymous with free-thinking creativity. The programs people are inspired to
write for the Mac OS X operating system are routinely more elegant and useful and less
annoying than their Windows counterparts. And the advent of the App Store, which
allowed thousands of third-party developers to exploit the iPhone‘s exceptional
capabilities, has fostered a stunning amount of experimentation in software design,
dramatically increasing the expectations we place on our mobile computing devices.
In short, there‘s a big gap between the way Apple sees the world and the way most
of its customers see things. This is especially true when it comes to the relationship
between power and knowledge. To all outward appearances, Steve Jobs believes that
knowledge and information confer power only if they are carefully guarded. But for most
of the creative types who use Apple products, the big rewards in life—the opportunity to
gain reputation, advance professionally, and earn money—come from sharing
knowledge. The reason I use Apple hardware all day long is not so that I can be like
Steve, but because the company makes the best technology I‘ve found for staying
informed, synthesizing what I learn, and passing it along to others.
A blog post this month by photographer, designer, and career coach Tasra Mar, who
spent a year working at Apple, puts the attitude gap in stark, visual terms. Mar shares
several photographs of a simple length of rope. In one picture, the rope is tightly coiled;
in another, one end of the coil is unfurled; in a third, the coil has been loosened into a
spiral, opening a path to the center.
The tight coil, for Mar, represents the belief many people hold ―that there is scarcity
of knowledge or that they will be harmed or impacted by sharing that knowledge.‖
Having worked at Apple, Mar writes, ―I know firsthand about the tight hold that is placed
on knowledge and information—basically everything is on a need to know basis. No
open discussions, forums or free conversations.‖
Not that there‘s anything wrong with that, Mar hastens to add: there are times, she
says, when guarding information is appropriate. That‘s why we have NDAs and laws
protecting trade secrets. Mar is absolutely right when she points out that this closed
philosophy has ―paid off handsomely‖ for Apple.
The paradox—and it may be one that goes to the heart of digital-age capitalism—is
that Apple‘s style of closed innovation results in technology that is so conducive to open
innovation. Even more conducive, in fact, than its makers may have intended. Shortly
after the iPhone was announced in January 2007, Steve Jobs told the New York Times:
―We define everything that is on the phone. You don‘t want your phone to be like a PC.
The last thing you want is to have loaded three apps on your phone and then you go to
make a call and it doesn‘t work anymore. These are more like iPods than they are like
computers.‖ By 2008, though, Jobs had apparently realized that in its quest to ―define
everything,‖ the company was leaving a lot of money on the table. The 120,000 apps
you‘ll now find in the iTunes App Store—with Apple collecting 30 percent of every paid-
app sale—are testimony to the wisdom of the shift.
Given the smashing success of the App Store, you have to wonder why Apple has
reverted to its black-ops secrecy culture for the iSlate (or the iPad, or whatever it‘s going
to be called). Presumably, Apple wants the device to be part of the larger ecosystem it‘s
building around digital content—music, movies, TV shows, apps, and soon books and
magazines, if all the reports of Apple‘s talks with publishers are to be believed. Wouldn‘t
the company have been better off working with its existing community of developers to
figure out what features a tablet-style device should have? Couldn‘t it have given iPhone
developers a few hints about how the iSlate will work, allowing them to start designing
apps that work well on both platforms? Instead, if the usual pattern applies, the iSlate will
emerge this week as if from the head of Zeus, and only then will Apple release a software
development kit, sending programmers scrambling off to see what they can come up with
in the scant months before the tablet‘s ship date.
On the other hand, it‘s hard to argue with success. The iPhone was closed when it
launched—leading Jonathan Zittrain, co-founder of Harvard Law School‘s Berkman
Center for Internet & Society, to decry it as one of the products threatening the
―generative quality‖ of the Internet—but that changed, and now we have a world with
120,000 iPhone apps. It‘s conceivable, though it‘s not very palatable to the ―open
culture‖ crowd, that a closed creative process, driven by a guiding genius like Jobs, is the
only way to build products as coherent and compelling as the iPhone. I‘m sure this would
be Jobs‘ own argument. After all, without the solid foundation provided by the phone and
its core features—the multitouch interface, the camera, the accelerometer, the GPS
chip—most iPhone apps would be nothing special.
Certainly, the opposite extreme of completely open development has yet to prove
itself in the mobile computing world. Google‘s Android mobile operating system is built
on a Linux kernel—and if you ask me, that‘s why the market penetration of Android
phones is somewhere around 2 percent, while Blackberry devices account for 40 percent
of the market, the iPhone for 30 percent, and Palm devices for 7 percent, according to
September 2009 data from ChangeWave Research.
Richard Stallman, creator of the GNU Project and founder of the Free Software
Foundation, says his current computer is a Lemote YeeLoong8089 netbook. The device is
billed by its Chinese manufacturer as ―the world‘s first laptop which contains completely
free software,‖ from the BIOS to the Linux operating system to the open-source drivers
and applications. But for all its free-ness, the device has an aura that might best be
summarized as rinky-dink. As Antonio Rodriguez, chief technology officer of the
consumer printing group at Hewlett-Packard, commented on Twitter this weekend, the
image of a programmer of Stallman‘s fame bent over the 10-inch-wide YeeLoong ―is like
Leonardo with crayons.‖
So I am left feeling queasy. Apple products are both beautiful and functional, a rare
combination. I love my Mac and my iPhone, and in a few months you‘ll probably find me
in the line to buy an iSlate. But with every Apple purchase, there‘s a part of me that
rebels at handing my money over to a company that‘s so fanatically controlling. I can‘t
help wondering what Apple‘s customers and developers would do if another company
came along with a solid, elegant, open computing platform and a less suspicious, more
cooperative disposition toward its community. (Google, are you listening?) The next few
months, as we watch how Apple manages the iSlate‘s rollout, will be telling. I‘m hopeful,
but wary.
Author's Update, February 2010: Obviously, the "iSlate" is actually the iPad—this
column was written a couple of days before Steve Jobs unveiled the device at a press
event in San Francisco.
79: What’s So Magical About an
Oversized iPhone? Plenty—And There’s
More to Come
January 29, 2010
The Apple iPad is one of the most eagerly anticipated computing devices in history.
With all the heat and hype that preceded Wednesday‘s public debut of the device, it was
inevitable that the backlash from skeptical bloggers and Twitterers would be equally
ferocious. Still, even after you filter out all the bozos who keep repeating ―It‘s just a giant
iPhone,‖ or who dismiss all Apple customers as fey elitists, or who have a sophomoric
fascination with the hygienic overtones of the name ―iPad,‖ you‘re still left with a
surprising number of critics who seem inconsolably disappointed over inconsequential
details like the width of the iPad‘s bezel, or whether the device has USB ports, or Flash,
or multitasking, or cameras, or windshield wipers.
These cranky commentators are missing the point. They can‘t see the screen for the
stuff around its edges, as it were. There was never any chance that Version 1 of the iPad
would have all of the features that fanboys want, or even all of the features that Apple
wants. (More on that in a moment.) But it‘s already got the three things that really count:
1) a huge touchscreen, 2) an operating system designed around multitouch gestures, and
3) a development kit that will allow thousands of software builders to do amazing things
with #1 and #2.
Amidst the dozens of iPad reviews I've read this week, two sentences have struck
me as particularly insightful. One was from David Pogue, writing for his New York Times
blog: "Like the iPhone, the iPad is really a vessel, a tool, a 1.5-pound sack of potential."
The other was from writer and blogger Rory Marinich: "The product is, simply put, a
magical screen that can do anything you ever want it to, no matter what that is."
"Magical" is a word so often abused by technology marketers that someone should
call Amnesty International. It's the word Apple itself is using in its central pitch for the
iPad: "Our most advanced technology in a magical and revolutionary device at an
unbelievable price." It's practically the first word out of designer Jonathan Ive's mouth in
Apple's propaganda video for the iPad.
Nonetheless, I think it's a pretty good word for the feeling I got the first time I
played with an iPhone. The fact that the phone really did all the things that I had seen it
doing in the TV commercials astonished me. I couldn't believe that the little icons on the
home screen could be so bright and crisp; that they could so instantly respond to my
touch; that I could flick my way through a photo album or zoom in on a picture simply by
spreading my thumb and index finger.
Don't get me wrong. I'd read about the basics of capacitive sensors and multitouch
interfaces, so I didn't think anything supernatural was going on. In fact, I violently
disagree with Ive's argument, in the Apple video, that something has to "exceed your
ability to understand how it works" before it can seem magical. What impressed me was
that Apple had brought the technologies together in such a beautiful, graceful, and
convincing way, and made the package affordable to so many consumers (42 million so
far).
But the thunderbolt that was the iPhone hit three whole years ago. We grow jaded
quickly nowadays. As comedian Louis C.K. remarked so accurately to Conan O'Brien,
"Everything is amazing right now, and nobody's happy." (C.K. continued with an
obvious-but-stunning-when-you-really-think-about-it reminder for airline passengers:
"You're sitting in a chair in the sky." Believe me, that's on my mind every time I fly.)
So yes, the iPad is a big honkin' iPhone (or iPod Touch, more precisely, since it
won't actually function as a cell phone). But that's exactly why it's amazing. The iPhone
gave us a taste of what multitouch can do, and broke open the first small fissure in the
WIMP paradigm (for Windows, Icons, Menus, Pointing device)—the dominant approach
to user interfaces since the 1970s. But on a phone's little screen, you don't have enough
runway to make really intricate or dramatic touch gestures. On the larger screen of the
iPad—45.2 square inches, by my calculations, compared to the iPhone's 5.9 square
inches—multitouch will find much fuller expression, and the fissure will become a
serious crack.
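For the curious, the arithmetic behind those figures is easy to reproduce. Here is a quick sketch that derives a screen's area from its diagonal measurement and aspect ratio, assuming a 9.7-inch, 4:3 iPad display and a 3.5-inch, 3:2 iPhone display (the iPhone number comes out a touch lower by this method, depending on the exact dimensions you plug in):

```python
import math

def screen_area(diagonal, aspect_w, aspect_h):
    """Area of a rectangular screen, given its diagonal and aspect ratio.

    The diagonal of an (aspect_w x aspect_h) rectangle has length
    hypot(aspect_w, aspect_h), so scaling that rectangle to the actual
    diagonal gives the real width and height.
    """
    scale = diagonal / math.hypot(aspect_w, aspect_h)
    width = aspect_w * scale
    height = aspect_h * scale
    return width * height

ipad = screen_area(9.7, 4, 3)    # about 45.2 square inches
iphone = screen_area(3.5, 3, 2)  # about 5.7 square inches
print(f"iPad: {ipad:.1f} sq in, iPhone: {iphone:.1f} sq in, "
      f"ratio: {ipad/iphone:.1f}x")
```

However you round it, the iPad offers roughly eight times the touch surface of the iPhone.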
It's so much easier to manipulate graphical content through touch that even
three-year-olds understand the iPhone. And the multitouch-intensive applications that Apple
showed off at this week's iPad event, like the photo album and the Brushes drawing app,
provided only a faint preview of the types of touch-driven apps that programmers will
brainstorm over the coming months. If recent history is any indication, we should plan on
being amazed.
After all, who could have foreseen back in 2007 that developers would come up
with iPhone apps like Panolab, in which multitouch gestures are used to rotate and align
multiple photos into huge panoramas and collages, or Ocarina, which makes the iPhone
into a four-holed flute, or Autodesk's Fluid, which lets you draw swirling smoke patterns
on your screen? Apple, which knows a good thing when it sees it, has already added
multitouch support to the glass trackpads on the latest MacBook laptops, and now it has
designers and illustrators and even its own competitors drooling publicly over the iPad.
My guess is that within a few years, we'll be wondering why all personal computers
don't work this way.
Now, it's true that Apple could have loaded more features into the iPad 1.0. Every
tech person I've spoken with in the past couple of days has expressed surprise over one
omission or another—for example, the absence of a front or back camera. (The word I'm
hearing is that this was a concession to AT&T, which doesn't want users clogging its
already-strained data network by uploading lots of high-resolution pictures or videos or
running iChat all day long.)
But I can answer all of the missing-feature complaints with a single word: Pro. You
can bet your iFanny that sometime in 2011, Apple will introduce the iPad Pro, and that it
will have cameras, more memory, a faster processor, and just enough other sexy features
to get diehard fans to put their first-generation iPads on eBay and re-up.
When I laid out this prediction to Chuck Goldman, the founder and CEO of Boston-
based iPhone development house Apperian, his reaction was, "Of course. That's what
Apple always does, so why would this product be anything different?" He should know—
he spent eight years inside Apple, running the professional services division, and was
actually in meetings at Apple in Cupertino when I first reached him.
With so much competition in the computer business these days, Goldman says,
Apple is forced to get products to market faster and faster, which means they have to lock
in each machine's feature set before the technology is fully baked. "I'm sure that Steve's
edict for the iPad was that, 'This thing absolutely has to launch in January,'" Goldman
told me. "There are 400 things that Apple wants to do, but they can only do four in the
time allowed, so they have got to decide what feature set is going to ship with Version 1.
And they usually do a pretty good job of getting a product to market with enough features
for the Apple fanboys and the early adopters to want the thing. But you have to know that
someone in Cupertino has got the roadmap for this product pretty much planned out.
What they do is, they listen to customers, and they are really good at aggregating that
customer feedback and working it into the roadmap, and that's how they create versions 2
and 3 and 4 and 5."
But Apple has already gathered the most important piece of customer feedback: that
people love touch-based computing. That's why the 2nd-generation iPod (the one where
the capacitive track wheel replaced the moving scroll wheel) eventually evolved into the
iPhone, and that's why the iPhone has now evolved into the iPad. And no matter how
many new features the company adds to the iPad in the future, that magical screen will
still have the starring role.
80: Kindle Conniptions: How I Published
My First E-Book
February 5, 2010
This is the 80th edition of World Wide Wade since this weekly column on
technology trends began in April 2008. Since we have so much material piling up in the
archive, we at Xconomy decided to do something a bit different: We're collecting all of
the columns in the form of an e-book. It's called Pixel Nation: 80 Weeks of World Wide
Wade. Not only does the e-book bring all of the columns together in one easy-to-read
place, but it includes a new introduction and updates on many of the early columns,
which, frankly, can benefit from some updating in the fast-moving world of high-tech.
We're offering both a free PDF version that you can read on your PC, and a $4.99 Kindle
version that you can read on your Amazon Kindle or your iPhone.
I really hope you'll check out Pixel Nation, because it took me a boatload of work to
assemble it. I'm not complaining—I learned a ton, and I took on the project mainly for
the experience. But what an experience it's been! I've discovered that it's damnably
difficult to publish your own e-book, at least if you want to get it onto Amazon's Kindle,
the dominant digital book platform (for the moment). The whole ordeal has given me
some new empathy for authors who have been complaining about the Kindle for years.
It all came as a bit of a surprise, considering that we're more than a decade into the
era of electronic book publishing, and new e-reading devices are popping up left and
right. Did you ever hear the phrase "write once, run anywhere"? It's the slogan Sun
invented to describe the idea behind Java, a computer language that's supposed to work
on any device or operating system. I had figured that by now, publishers too would be in
a "write once, read everywhere" world. Books and articles obviously start out in all sorts
of formats (Word documents, Web pages, etc.), but you'd think that there would be some
easy-to-use software capable of reformatting this material for any e-book device, right?
No such luck. Instead, there's still a welter of incompatible e-publishing formats,
each championed by different factions of the publishing world with conflicting business
interests, and you have to customize your book for each one by hand. If you had to
compare the current situation with e-books to the historical evolution of the Web, it's as if
we were stuck in 1996 or so, back when Netscape and Internet Explorer displayed Web
pages differently, and you couldn't publish a website without learning HTML and
learning how to tweak the code to make sure your pages looked the same in both
browsers.
If this is what the future looks like, I can understand why the big New York
publishing houses aren't dancing with joy about the e-book revolution. In addition to all
the traditional design and typesetting work that goes into creating the print versions of
their books, publishers who want to distribute their books digitally must now hire
production lackeys to pore through each book, paragraph by paragraph, reformatting
them for Amazon's digital bookstore—and Sony's, and Barnes & Noble's, and soon
Apple's.
It seemed only fitting to sum up my self-publishing experience in today's column,
which is also Chapter 80 in the book. It's a cautionary tale. E-publishing may be great for
independent authors from a financial point of view—especially once Amazon starts
offering 70 percent royalties this summer—but it's still a nightmare from a technical one.
My first step toward creating Pixel Nation was simply to gather up all of my old
columns, which meant copying and pasting them from the Web pages on Xconomy into a
Word document. I would never have attempted this task before November 2009, when
we added a single-page view option that lets you see an entire article on one page. Many
of my columns are fairly long, so they get broken into two, three, or four pages on the
site, and it would have taken forever to stitch them all together from these separate pages.
Next I deleted most of the pictures, as photos tend to add greatly to the file size of
an e-book. Then I went through all the old columns and added updates and wrote an
introduction. Using a graphics program and a photo that I staged on my dining room
table, I whipped together the cover image you see here and inserted it into the Word file.
That was the fun part.
Now I was left with a big, long Word file. On a Mac, it's easy to export a Word file
to PDF, so creating that version of the e-book was child's play. It was the Kindle version
that really gave me fits.
Now, I am very fond of my Kindle. I got it in May 2009, and use it every day. I
love the fact that it comes with an email address (like "barneyfife@kindle.com") that you
can use to e-mail Word and PDF files to Amazon; for only 15 cents per megabyte,
Amazon will then convert the files into the Kindle format (called AZW) and transmit
them wirelessly to your device.
In an ideal world, publishing an e-book—that is, getting it converted to AZW and
listed in the Kindle Store and the Amazon.com website—would be just as easy.
Unfortunately, Amazon's conversion software doesn't have much of a sense of style. The
Word files that you mail to yourself never look as nice as the e-books that you can
purchase and download. The conversion process tends to leave ridiculously large gaps
between paragraphs, for example. And the files lack all the pleasant conventions of
professionally published books, such as consistent chapter headings or a hyperlinked
table of contents.
It turns out that if you want that stuff in your e-book, you have to build it all
yourself. Did I learn this from Amazon? No, the company actually shares very little
information about how to format books for the Kindle. The meager scraps of information
that are available from the Help section of Amazon's Digital Text Platform, the site where
authors and publishers submit books for the Kindle Store, are cryptic and poorly
organized. Almost everything I now know about this subject, I learned from Kindle
Formatting: The Complete Guide to Formatting Books for the Amazon Kindle, by Joshua
Tallent.
Kindle Formatting is itself a $9.99 Kindle e-book, although you can also order a
paperback for $19.95. It's a worthwhile purchase either way, because it's chock full of
arcane little details that Amazon doesn't tell you about and that you'd never figure out on
your own.
For example, I learned from Mr. Tallent (a digital publishing consultant with a firm
called eBook Architects) that the only format that really looks right once it's converted to
AZW—and the only one that gives you the control you need over the book's final
appearance and behavior—is plain old HTML. And the easiest way to create a Kindle-
ready HTML file is to make sure that your initial Word file uses consistent styles for
elements like body text and chapter headings.
Since I use Word all day every day, I had thought I understood the program. But
Tallent's book forced me to figure out previously ignored features such as the styles
palette, so that I could then spend a couple of hours going back through the book's 80
chapters and making sure that every headline was in "Heading 1" style, every dateline
was in "Heading 5" style, and so forth.
Once that was done, I could save the Word file as a Web page (being sure to click
the option for "Save only display information"—another arcane but crucial detail) and be
reasonably sure that the formatting would be consistent once the book was on a Kindle.
The next step, however, was to use a text editor to go back and remove the unfathomable
amount of cruft that Word leaves behind whenever it saves a file in HTML. This is pretty
much a manual process, though decent text editors—I used one called TextWrangler—
have global search-and-replace functions that can speed it up. I was able to complete this
part of my e-book project while I was stuck on a plane and had nothing better to do. But
it's a good thing it was a 10-hour flight to Alaska, or I would never have finished.
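To give you a flavor of what that cleanup involves, here's a minimal Python sketch of the kind of global search-and-replace passes I ran by hand in TextWrangler. (The patterns are illustrative, not a complete recipe; Word's HTML export varies from version to version, and the result still needs eyeballing afterward.)

```python
import re

def strip_word_cruft(html):
    """Strip some of the proprietary markup Word leaves behind when it
    saves a document as HTML. Illustrative passes, not an exhaustive list."""
    # Office-only conditional comments: <!--[if gte mso 9]> ... <![endif]-->
    html = re.sub(r"<!--\[if.*?<!\[endif\]-->", "", html, flags=re.DOTALL)
    # Word's class attributes (MsoNormal, MsoTitle, and so on)
    html = re.sub(r"\s+class=\"Mso[^\"]*\"", "", html)
    # Inline mso-* style declarations, e.g. mso-bidi-font-size:12.0pt;
    html = re.sub(r"mso-[^:;\"]+:[^;\"]+;?", "", html)
    # Style attributes left empty by the pass above
    html = re.sub(r"\s+style=\"\s*\"", "", html)
    return html

sample = '<p class="MsoNormal" style="mso-bidi-font-size:12.0pt">Hello</p>'
print(strip_word_cruft(sample))  # <p>Hello</p>
```

A script like this gets you most of the way, but with Word-generated HTML there is always one more oddity hiding in the file.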
Oh, did I mention niceties like a hyperlinked table of contents? If you want one of
those in your e-book, it's a good idea to create it in Word before you convert it to HTML.
The best reason to make sure that all of your chapter headings are in the same style is
that this makes it far easier to locate them when you're searching for the anchor text for
the internal hyperlinks—another tip from the talented Mr. Tallent.
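For what it's worth, the hyperlinked table of contents Tallent describes boils down to ordinary HTML anchors. A minimal fragment might look like this (the chapter titles are real, but the "ch79"/"ch80" id values are my own invented examples; I include the older `<a name>` form as well, since earlier Kindle conversion tools favored it):

```html
<!-- Table of contents entries, each linking to a chapter anchor below -->
<p><a href="#ch79">79: What's So Magical About an Oversized iPhone?</a></p>
<p><a href="#ch80">80: Kindle Conniptions: How I Published My First E-Book</a></p>

<!-- Chapter headings, carrying the anchors the entries point to -->
<h1 id="ch79"><a name="ch79"></a>79: What's So Magical About an Oversized iPhone?</h1>
<h1 id="ch80"><a name="ch80"></a>80: Kindle Conniptions: How I Published My First E-Book</h1>
```

Because every heading uses the same "Heading 1" style in Word, each one becomes an h1 element in the exported HTML, which is what makes the anchors easy to place consistently.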
Let's fast forward to the end—assuming you're still with me. What you get, with
enough endurance, is a stripped-down HTML file that will look nice almost anywhere,
whether in a Web browser window or on an e-reader. Amazon's Digital Text Platform
lets you upload this HTML file to Amazon's servers and specify a title, a price, and other
details. If you do everything correctly, your e-book shows up in the Kindle Store and on
Amazon.com within three days or so, and you can start your career as a best-selling e-
book author.
I'm a geek who is comfortable with, though not totally fluent in, HTML, and this
project severely tested my patience. I spent an estimated 10 hours trying to figure out
how to do this, and then another 30 or 40 hours doing the grunt work to make it happen.
So I think it's safe to say that until somebody comes up with a way to automate e-book
production, we won't see a huge flood of authors self-publishing for the Kindle or the
other e-book platforms. Perhaps I should have hired a consultant. Tallent charges a
reasonable $60 per hour, and says he can convert a typical novel to Kindle-ready HTML
in 1 to 3 hours and a longer non-fiction title in 3 to 6 hours. But then I wouldn't have had
this wonderful story to tell.
The situation is unfortunate, because part of the promise of the digital publishing
revolution was that it would finally give authors a way to bypass the old literary
establishment—the agents and editors and publishers whose job is to make sure that trees
get sacrificed, and costly marketing campaigns get mounted, only for the most
commercially viable titles. The way things are now, only big publishers, or maybe
authors with some money to burn (and how many of those do you know?), are going to
go to the trouble of getting their books onto the major e-book platforms.
Of course, all that's needed to solve this problem is for one clever programmer to
come along and build a usable e-book editing program. That's roughly what happened on
the Web back in 1995, when Cambridge, MA-based Vermeer Technologies introduced
FrontPage, the first successful WYSIWYG HTML editor. Microsoft eventually bought
Vermeer and FrontPage for $133 million, and I bet that the first company to build a
decent e-book editor would get snapped up by Amazon or Apple. Entrepreneurs, are you
listening?