Thursday, June 30, 2011

6/30 GigaOM — Tech News, Analysis and Trends
Where to watch the Tour de France 2011 online
June 30, 2011 at 3:01 AM
 

The Tour de France may be one of the most underrated sporting events in the U.S., especially given the track record of Team USA. Still, cycling fans can hardly wait for the famous multi-stage race to start this weekend. The tour will be shown live on Versus, the Comcast-owned sports network, with repeats and highlights airing on NBC.

But if you don't have cable or can't watch TV at work, don't worry; the entire event is also available online. NBC is selling a Tour de France all-access pass for $29.95, which will get you live HD video of every stage of the race, all the way through the final leg, when the cyclists reach Paris on July 24.

The network is also promising paying subscribers access to highlight video clips, interviews and other exclusive online content. All-access subscribers can even personalize the race by tracking their favorite riders, both via live GPS on Google Maps and through a stats dashboard. All of this will also be available through dedicated iPhone, iPad and Android apps, which NBC says will be ready in time for the race.

Speaking of timing: Versus will begin airing some of the pre-race activities this Thursday, but all-access subscribers will get their first live video feed Saturday, July 2, at 7 a.m. EDT.

Predictions: Gazing into the online video crystal ball
June 29, 2011 at 8:30 PM
 

An analyst recently asked me to prognosticate about the next decade of TV and online video. It was flattering and slightly bewildering, as I'm not exactly a visionary, but I have been “in the biz” for a while. While the discussion was free-form, in retrospect it focused on the major trends in the video landscape and the fallout from them:

The explosion of content

Clearly there has been an explosion of content over the past five years — a trend that shows no signs of abating. In the land of a million channels, the filter will be king. Value will accrue to those that aggregate and filter programming.

  • As with traditional television, there will be a handful of new video aggregators that emerge with sustainable businesses. The fact is that aggregating video content today is an expensive proposition: one must have deep pockets to buy the rights and the distribution scale to justify the expenditure. In the U.S., we're seeing this play out with Netflix, Amazon, YouTube, Apple, AOL, Yahoo and Hulu contending for online rights alongside the cable companies. It will be increasingly difficult for new entrants to make inroads here. There is no shortage of startups trying to be the EPG of online and mobile video, but the best filters rely on scale and leverage network economies (Amazon reviews, Netflix, Pandora), and so it will be a “winner take all” (or, at least, “most”) outcome.
  • YouTube will be spun out. Google will realize it can get more value from YouTube by spinning it out. YouTube is acting more and more like a traditional programmer of content (buying up rights, funding original programming and so on), and getting more “media DNA” will be as important for it as technical talent.
  • The plethora of available content will, paradoxically, mean that live events, especially premium sports with broad appeal (F1, the World Cup and Euro Cup, the Super Bowl, the Olympics, the IPL, major golf and tennis), will grow in stature and wealth. They will benefit from the scarcity of events with mass appeal, given the time-shifted nature of video consumption. This lack of “supply” will result in concerted efforts to create more “tent-pole” events — there's too much money at stake not to try. The IPL is the best recent example of this, but look for more here — World Cup basketball, anyone?

The emergence of the social graph

We are still coming to terms with the power and implications of the social graph. While Facebook was first seen as a pure social networking and communication utility platform, it is increasingly becoming a place to consume media. So I predict that Facebook will overtake YouTube as a video consumption destination in the next five years.

Facebook is already a major media consumption platform, given all of the social gaming that takes place there. Moving into other content categories such as music and video is not a big stretch. In fact, Facebook just appointed Reed Hastings to its board, a signal of its media ambitions that shouldn't be ignored. Moreover, the company has a music strategy afoot (which I think will be big).

Video content owners today program channels on Facebook but there is no aggregation across channels. This represents a market opportunity for Facebook or another aggregator that would take advantage of their social graph.

Mobile and the cloud

Media consumption on smartphones and tablets is increasing exponentially. At the same time, the "cloud" is enabling on-demand access to software and media and obviating the need to store and sync files locally. Given these two trends, it seems a smart bet that the smartphone or tablet will become the hub for accessing content, with “dumb screens” such as TVs and computer monitors getting their signal from it. Smartphone docks are already being built into car dashboards, which could make the radio tuner redundant.

New Players on the World Stage

We will see a challenge to the dominance of U.S. and Western European media companies, coinciding with the growing economic power of emerging-market economies. Players in these so-called emerging markets are already making a splash, and that will only continue. Abu Dhabi Media, Naspers, Al Jazeera, Globo, Televisa, Reliance, Mail.ru, CCTV and others will assert more influence on the world stage, on par with the Disneys, News Corps and Universals of the world. Look for a major U.S. or Western European network to be bought by an emerging-market player; it wouldn't surprise me to see one of them make a play for Hulu.

Finally, there's the “wild card.” The above predictions aren't necessarily big leaps of faith to make. More significant will be the wild cards that aren't even on the radar. After all, YouTube, Facebook and the explosion of social networking and UGC were mere glimmers in the eye 10 years ago. It will be fun to watch.

Rags Gupta is currently a VP at Brightcove, based out of London. He can be found at www.twitter.com/ragsgupta and www.ragsgupta.com. All of the opinions expressed are his own and not those of any companies he is affiliated with.

Image courtesy Flickr user islandguy

Google+: A targeted response to Facebook's shotgun approach
June 29, 2011 at 7:30 PM
 

This week's news from Google was the most exciting thing I've heard from the company in years. After languishing in social, with utter failures in both Google Wave and Google Buzz, the company is putting an intriguing product to market: a social network.

For years, people across the tech industry have beaten the same drum, almost taunting Google about the strategic value social could bring to the company if only it had the right answer to Facebook. And now that it seems Google might just have the firepower to take on its Palo Alto rival, we all want to know: Will it work?

The broadcast social network

Facebook's strongest asset, in my opinion, is the social graph. And Facebook's biggest challenge is the social graph. Facebook is aware of this. Users around the world are massively friending each other, thus spamming their social graphs and making them weaker and more diluted.

Look at a typical Facebook user — a college student. On his friends list he has his dorm mates, his high school friends, his mother, his girlfriend and old teachers. He likely doesn't want to share the same updates, photos and videos with each group of people (say, for example, his silver-medal performance in the Beer Olympics). So he is either forced to block certain people from seeing his updates (sorry, mom!) or censor himself by not posting at all. While that might be wise for our young collegian, it's not good business for Facebook.

Facebook tried to solve this problem by pushing Facebook Groups, which was designed to allow sharing among smaller groups of friends, but it just didn't catch on with users. Why not? Facebook has built its brand, from the ground up, as a one-to-many social network. Facebook has features for sharing among smaller groups of people, but the status update is by far the most popular and most commonly used way to share. It has always been about broadcasting to the world, much the same as Twitter.

Many large, successful companies, including Facebook, refuse to accept that it is very hard to change your brand position in users' minds. Just as users don't see Facebook as a place to find jobs, it will be very hard, if not impossible, to expect users to see Facebook as the place for them to create circles of connections.

Keep it to your inner circle

The need for Google+ is obvious. Just as with real-life interactions, sometimes we don't want to broadcast to the world. Sometimes we want to show different personas to different groups of people. Google, in a shrewd move, is finally addressing a need that Facebook has been unable to fulfill. It is building its social brand — from the ground up — as a social network that allows users to place barriers between their friendship groups, so users can feel more comfortable sharing with those groups.

I think that when the Google brains sat down in a conference room for the kick-off meeting on their new social strategy, the first question they asked was: What's Facebook's weakest point? Google is focusing its social network around the concept of "circles" because it wants to position the network as a more organized one-to-many social network from the get-go. Users will still need to drag and drop friends into groups, but here they might actually be willing to do so.

Taking it one step further, I think an even more innovative approach to group social networking will ultimately emerge. Companies like Katango, which applies algorithms that add structure to social data, will eliminate nearly all of the friction associated with manually dropping friends into groups.
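To make that concrete, here is a toy sketch in Python of how a service might auto-group friends by looking at mutual connections: people who share a large fraction of their friends probably belong in the same circle. The names, the threshold and the sample data are illustrative assumptions on my part, not Katango's actual algorithm.

    # Toy sketch: suggest "circles" by grouping friends who share many
    # mutual connections. Purely illustrative; not Katango's real algorithm.

    def mutual_overlap(a, b, friends_of):
        """Jaccard overlap between two people's friend lists."""
        fa, fb = friends_of[a], friends_of[b]
        return len(fa & fb) / len(fa | fb) if fa | fb else 0.0

    def suggest_circles(friends_of, threshold=0.3):
        """Greedy single-link grouping over the mutual-friend graph."""
        circles = []
        for person in friends_of:
            for circle in circles:
                if any(mutual_overlap(person, member, friends_of) >= threshold
                       for member in circle):
                    circle.add(person)
                    break
            else:
                circles.append({person})
        return circles

    friends_of = {
        "ann": {"bob", "cat", "dan"},
        "bob": {"ann", "cat", "dan"},
        "cat": {"ann", "bob", "dan"},
        "zoe": {"yul", "max"},
        "yul": {"zoe", "max"},
    }
    print(suggest_circles(friends_of))  # two circles: {ann, bob, cat} and {zoe, yul}

A real system would presumably use richer signals (shared schools, employers, interaction frequency), but the point stands: the grouping users find tedious to do by hand is largely computable from data the network already has.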

Clearly Google has a long way ahead of it. A good product and massive reach are not enough to overcome Facebook's dominant position. But Google is addressing persistent problems in the social graph, and I do think it's on to something by challenging Facebook's shotgun approach to social networking. Where Google's two previous social endeavors failed, the third time might just be the charm.

Rooly Eliezerov co-founded Gigya and leads product there. The SaaS company provides plug-ins, such as Social Login, that help make websites social. Follow him on Twitter: @rooly.

Image courtesy of Flickr user Bogdan Suditu.

Twitter to launch site for platform developers
June 29, 2011 at 6:55 PM
 

Twitter, the San Francisco-based social messaging service, is working to develop and launch a dedicated site for developers working on the Twitter platform. The new site could be launched sometime in July, company officials said.

The fast-growing company has had a hot-and-cold relationship with its developer community, to put it mildly. Many early developers built products such as Twitter clients and helped the network grow, but they later found themselves in competition with the company itself, which ended up buying clients, including the popular TweetDeck. Over the past year and a half, the company has come under considerable criticism for having a fractious relationship with its community.

Whether that relationship is genuinely fractious or just mistakenly perceived that way remains a topic of some debate. Twitter, for its part, says it is trying to offer as much information as possible to its developer community and partners, according to Ryan Sarver, the company's director of platform.

The integration of Twitter into Apple's iOS platform is going to open up many opportunities for the company and will bring many new developers to the service. But Twitter needs to communicate better with these developers, and this new site will (the company hopes) help do that. The site will offer developer tips and tricks, a dedicated blog focused on platform- and developer-related topics, and a robust forum where the Twitter team can engage directly with the developer community, the company said.

Larry Summers to join VC firm Andreessen Horowitz
June 29, 2011 at 6:45 PM
 

Former U.S. Treasury Secretary Larry Summers is joining Andreessen Horowitz as a part-time special advisor, the Silicon Valley venture capital firm announced this afternoon.

One of the nation’s most respected policymakers, Summers will both advise the firm and work directly with its portfolio companies — from Foursquare to Rockmelt — as they seek to restructure existing markets and expand globally.

“Companies today tend to want to operate globally, much more so than 10 or 15 years ago, in both the developed and developing world,” said firm co-founder Marc Andreessen. “We are fortunate enough to be working with companies that are seeking to transform individual markets here and abroad in telecommunications, advertising, entertainment, real estate and health care. And Larry’s deep insight into global economics and geopolitics will be highly useful.”

Summers was director of the National Economic Council under President Obama, and before that, president of Harvard University and President Clinton’s Treasury Secretary. This is the second high-profile Silicon Valley job Summers has taken on since saying goodbye to Washington, D.C. in late 2010. Just last week, Summers announced he’d joined the board of directors at mobile payments company Square, which just today received a $100 million boost.

So why is he doing this, exactly?

For Summers, it’s all about being where the action is. “Anybody who studies the American economy, when the history of our times is written, technology and what’s going on here right now, it’s going to be the major story,” he said. But Summers isn’t trading in his New England tweed for Silicon Valley hoodies; he’ll stay Boston-based. Though, “judging by all the emails I’ve been getting from Marc every night before bed, I feel virtually present” in Silicon Valley, he said.

Speaking with Summers over the phone this afternoon, I couldn’t help but ask the economist to weigh in on all the bubble talk in social media. Ever the politician, he started by expressing total confidence in his new employer’s “financial prudence” in all its investments. As for the industry as a whole, he was a bit noncommittal but erred on the side of “no bubble.”

“Some of the statements that are made conjuring up images of the late 1990s are rather misleading,” he said. Most importantly, the scale, intensity and magnitude of Internet connectivity had changed so much since the days of Pets.com, he said. “I’m surprised when people treat it as an obvious judgment of bubbles in what's present today,” he said. “But I’ve seen too much to ever preclude possibilities.”

After all, he says, the four most dangerous words in venture capital are: “It’s different this time.”

Google+ has great features — now it just needs people
June 29, 2011 at 6:31 PM
 

Unless you’ve been living under a rock for the past few days, you know that Google on Tuesday rolled out its biggest effort yet to crack the social-networking market. Google+ is an ambitious collection of social features and tools, all bundled into something the company is taking great care not to describe as a Facebook competitor. Whether Google wants to admit it or not, however, that’s exactly what Google+ is — and the biggest hurdle for the web giant is that a collection of cool features doesn’t make a network. People do.

In a nutshell, Google+ (which the company says is just the beginning of a gradual rollout of related social elements across its network) is focused on the “stream,” a very Facebook-like collection of posts, comments, photos and other content from your social “circles,” as Google calls them. There’s also a separate news-based stream called Sparks, and a video-chat feature called Hangout (which Om says should have Skype worried more than Facebook), but the circles and your interaction with them are the core of the experience.

The silo problem

As Marco Arment of Instapaper notes in a blog post, this is where the big problem (or challenge) lies for Google: At the moment, the only people in its social network are those who already use a lot of Google services, and only those with a Google profile can really participate fully. As far as I can tell, there’s no easy way to pull contacts in from Twitter, and there certainly isn’t any easy way to connect to Facebook — which isn’t surprising given the history between Google and the Zuckerberg empire. As a result, your Google+ life feels a little like you’re living in a silo.

This is a point I tried to make in a previous post about Google and its social efforts: If you can’t extend your online activity to a broad selection of your actual social graph, it won’t be useful enough to keep you coming back and will ultimately fail (FriendFeed, a service I liked very much, was heading down this road until it was bought by Facebook).

If you’re a social network, you live or die based on the network effects you either create or take advantage of. It’s nice to have cool features, but features — broadly speaking — aren’t what make people keep using a social service (yes, we’re looking at you, Ping). To use a retail analogy, features and user interface can get people in the front door, but they can’t turn them into die-hard customers. What really makes them keep coming back are the people they can connect with, share with, comment to, and otherwise interact with.

Good design is nice, but not enough

That’s not to say Google+ isn’t good-looking, because it is. In fact, the user interface is extremely well done, as others have noted, and is substantially better than many other Google services, which usually go for what could charitably be called the “utilitarian” look. For Google+, the company apparently turned to former Apple designer Andy Hertzfeld, and it shows. But the truth is most people don’t give a damn about good design, or at least not enough to choose a specific network based on looks (although Myspace is arguably an example of how bad design can drive users away).

Google+ also does a lot of things right when it comes to the structure of the network. One of those is the whole idea of “circles,” or specific groups that you can add people to (the defaults are Family, Friends, Acquaintances and Following, but you can also add your own). If there’s a killer feature in Google+, it’s probably this: the idea that users don’t necessarily want to share everything with all the people in their network, but want to share specific things with certain groups. Facebook has lists, but they are cumbersome to set up and use, while Google makes creating “circles” incredibly easy.

Circles are great, but who’s in them?

Ironically, this idea of having different groups with which users can share different content originally came from Paul Adams, a former Google designer who published a highly regarded Slideshare presentation about the concept last year, describing it as the missing element of most social networks (i.e., Facebook). It’s ironic because Adams is now working at Facebook (he said in a comment on Twitter that seeing Google+ being used in public was “like bumping into an ex-girlfriend”).

Unfortunately for Google, adding people to circles is still kind of kludgy, because if they don’t already have a Google profile, you can only add them as an email contact, which hardly seems appealing. Google may have millions of users, but that doesn’t help if I don’t know many of them — I checked when I got Google+, and only a handful of my friends have a profile.

The bottom line is that Facebook has the one killer feature of a social network: namely, almost all of my friends and family are using it. That’s a mountain for Google to climb, and it could mean that Google+ becomes a grown-up version of FriendFeed: a niche network for techie types. Of course, Myspace — which used to be the world’s leading social network, which News Corp bought for $580 million and which was just sold for $35 million — is a great example of how even network effects can turn against you if your business model is flawed. So even Facebook probably shouldn’t get too complacent.

XKCD cartoon used by permission

Next up in Amazon's tax war: California
June 29, 2011 at 5:41 PM
 

Amazon is prepared to take its sales tax war to one of the biggest arenas of all: California.

In an email sent Wednesday, Amazon warned that it will shut down its Amazon Associates Program for California-based participants if the state passes a proposed bill that would impose new taxes on online retail sales.

As online shopping has ticked up, more and more states have begun passing laws that require online retailers to charge sales tax. Amazon has been battling with state legislatures over whether its transactions should include the same sales taxes required of brick-and-mortar retailers. Due to that battle, it has already terminated its affiliate programs in states including Arkansas, Colorado, Connecticut, Illinois, North Carolina, Rhode Island and Texas.

The warning to California is in line with Amazon CEO Jeff Bezos’ reported remarks at a ShopSmart Shopping Summit held in New York in May: “We will continue to drop states who pass those affiliate laws, from the affiliate program.”

If nothing else, Amazon’s showdown in California proves that Bezos is a man of his word. But California is well known to be a major — and uniquely important — economy in and of itself. Time will tell whether the decision to go to the mat with the Golden State will turn out in Amazon’s favor.

Here is the email sent by Amazon to its California associates program affiliates:

Hello,

For well over a decade, the Amazon Associates Program has worked with thousands of California residents. Unfortunately, a potential new law that may be signed by Governor Brown compels us to terminate this program for California-based participants. It specifically imposes the collection of taxes from consumers on sales by online retailers – including but not limited to those referred by California-based marketing affiliates like you – even if those retailers have no physical presence in the state.

We oppose this bill because it is unconstitutional and counterproductive. It is supported by big-box retailers, most of which are based outside California, that seek to harm the affiliate advertising programs of their competitors. Similar legislation in other states has led to job and income losses, and little, if any, new tax revenue. We deeply regret that we must take this action.

As a result, we will terminate contracts with all California residents that are participants in the Amazon Associates Program as of the date (if any) that the California law becomes effective. We will send a follow-up notice to you confirming the termination date if the California law is enacted. In the event that the California law does not become effective before September 30, 2011, we withdraw this notice. As of the termination date, California residents will no longer receive advertising fees for sales referred to Amazon.com, Endless.com, MYHABIT.COM or SmallParts.com. Please be assured that all qualifying advertising fees earned on or before the termination date will be processed and paid in full in accordance with the regular payment schedule.

You are receiving this email because our records indicate that you are a resident of California. If you are not currently a resident of California, or if you are relocating to another state in the near future, you can manage the details of your Associates account here. And if you relocate to another state in the near future please contact us for reinstatement into the Amazon Associates Program.

To avoid confusion, we would like to clarify that this development will only impact our ability to offer the Associates Program to California residents and will not affect their ability to purchase from Amazon.com, Endless.com, MYHABIT.COM or SmallParts.com.

We have enjoyed working with you and other California-based participants in the Amazon Associates Program and, if this situation is rectified, would very much welcome the opportunity to re-open our Associates Program to California residents. We are also working on alternative ways to help California residents monetize their websites and we will be sure to contact you when these become available.

Regards,

The Amazon Associates Team

Photo courtesy of Flickr user Ken Lund.

Pottermore: Future of publishing or Club Penguin for Potter fans?
June 29, 2011 at 5:30 PM
 

Last week, Harry Potter author J.K. Rowling announced Pottermore, an ambitious new online property. While details are still scant, the site is guaranteed to be successful in at least one respect: As the exclusive retailer for Potter e-books, Pottermore will no doubt do a mean business as an e-book storefront.

But Pottermore, at least on the surface, looks to aim much higher than a glorified digital bookstore. While a virtual world based on an existing media property is by no means new, this combination of social elements, avatars and interactive reading "experiences" looks like it has the potential to be something unique.

So, will Pottermore be something revolutionary for the book world, or will it simply be a Potterized version of Club Penguin, a site aimed mainly at hawking Potter merchandise?

It may be too soon to tell, but a Harry Potter virtual world that allows fans to undergo experiences similar to those of the characters in the books — being sorted into a house, learning spells and competing for a "house cup," to name a few — could be the start of a new shared social, transmedia world for books: a living, breathing online experience that goes significantly beyond an individual's reading experience.

And it's exactly this new world that many in publishing may have a problem with. The publishing industry is already full of people bemoaning the arrival of enhanced e-books, digital books that combine text with multimedia elements. And if some write off enhanced e-books as not being books but something akin to CD-ROMs, how will those parties view something like Pottermore, which takes a book's world, puts it online and makes it a fully interactive experience that may — or may not — require a whole lot of actual reading?

The reality is that the book industry is in the throes of a huge transformational shift, one in which the transition to digital is moving at a much faster pace than many anticipated, headed toward an uncertain future where traditional distribution models are being dismantled and new, more efficient ones are being erected. New players are rising up by creating offerings like Pottermore, and J.K. Rowling may be giving them a roadmap to do so.

But as many have already said, not everyone can create a Pottermore. That may be the case, but who's to stop a publisher, or a collective of publishers, from creating their own immersive online experiences where their authors' books can come alive, and where they can engage directly with fans? For example, why couldn't a publisher with a big mystery imprint make a gritty online watering hole for its readers? Or why couldn't Harlequin, say, make a "romance-ville"?

Some would argue that the reading demographic is too old, that 50-year-old women wouldn't want to spend time in a Harlequin online world. But that type of thinking assumes that these same books won't ever make the generational leap to 20- or 30-something readers.

Instead, maybe what publishers need to do to bring in new readers is go to where future ones are spending their time nowadays: online. And maybe, just maybe, Pottermore shows a way they can do just that.

To read more on the implications of Pottermore and what it means for the future of publishing, see my weekly update at GigaOM Pro (subscription required).

Image courtesy of flickr user KitAy

The afterlife: When founders leave
June 29, 2011 at 4:30 PM
 

Twitter co-founder Biz Stone officially announced Tuesday his plans to step away from Twitter’s day-to-day operations. While it’s a move that many folks in the industry have seen coming for months now — Stone has spent more time outside the office as Twitter’s public face in press interviews and vodka commercials — his official departure has still garnered its fair share of headlines and a few raised eyebrows.

That’s partly because of how important founders are. As GigaOM’s Katie Fehrenbacher has put it, the “founder” title is an especially weighty one here in the tech industry. I talk daily with startup executives who refer to their companies as “my baby.” Naturally, a lot of people see walking away from a startup as a bit of an abandonment, especially before the company has seen an exit.

But Stone’s decision is not at all unique among his peers. Twitter’s other co-founders, Evan Williams and Jack Dorsey, have scaled back their involvement with the site to focus on their own respective projects. And the guys at Twitter are certainly not alone in taking a step back from the company they’ve helped build.

A founder’s nature is essentially creative. He or she often enjoys the fast-paced life of a tiny, bootstrapped, pizza-and-beer startup. But when employee numbers start doubling, more money gets raised, and things like HR and 401(k)s become necessities, founders oftentimes yearn to jump ship for the more creative early-stage life again. At the same time, there is the rare founder who can make the transition to lead the bigger company as its CEO.

Here’s a look at where founders from some of the tech industry’s biggest players have ended up:

  • Apple
    Apple was co-founded in 1976 by Steve Jobs, Steve Wozniak, and Ronald Wayne. Wayne reportedly gave up his Apple shares for $2,300 less than two weeks after the company was founded. Wozniak ended his full-time work with Apple in 1987. Jobs resigned from Apple in 1985 amid corporate power struggles and rejoined the company 11 years later.
  • Google
    Google was co-founded in 1998 by Larry Page and Sergey Brin. Both continue to hold full-time executive and board roles at the company.
  • Groupon
    Groupon was launched in 2008 as a spin-out of ThePoint, a website co-founded in 2007 by Andrew Mason and Eric Lefkofsky. Mason continues to serve as Groupon’s CEO; Lefkofsky, who has several other ventures, serves as the company’s chairman. Both founders have been criticized for using funding rounds to cash out portions of their stakes in the company.
  • Facebook
    Facebook was co-founded in 2004 by Mark Zuckerberg, Eduardo Saverin, Dustin Moskovitz and Chris Hughes. Zuckerberg has remained at the company as CEO; Saverin stopped working day-to-day with the company in 2004; Moskovitz left Facebook in October 2008 to co-found Asana, and Hughes left Facebook in 2007 to lead Barack Obama’s online campaign efforts.
  • LinkedIn
    LinkedIn was co-founded in 2003 by Reid Hoffman, Konstantin Guericke, Allen Blue, Eric Ly, and Jean-Luc Vaillant. Hoffman stepped down from the CEO role in 2007 and remains the company’s executive chairman. Guericke and Ly both left their full-time roles at LinkedIn in 2006. Blue works full-time as LinkedIn’s VP of product management, and Vaillant has retained his role as the company’s CTO.
  • Microsoft
    Microsoft was co-founded in 1975 by Paul Allen and Bill Gates. Allen stepped away from day-to-day operations at the company in 1982 after being diagnosed with Hodgkin’s lymphoma, which he treated with radiation therapy. Allen stepped down from Microsoft’s board of directors in 2000. Gates stopped being CEO in 2000, and announced plans to step down from day-to-day operations in 2006; he remains Microsoft’s chairman.
  • Yahoo
    Yahoo was co-founded in 1995 by Jerry Yang and David Filo. Yang announced he would step down from a short stint as CEO in 2008, but remains on the executive roster as a “Chief Yahoo” and has a spot on the company’s board. Filo also continues to work with the company as a “Chief Yahoo.”

Do you have any thoughts on whether a founder’s presence is crucial to a company’s success? If so, let us hear them in the comments.

Image courtesy of Flickr user clicksense.

We have smart phones, but do we want dumb screens?
June 29, 2011 at 3:49 PM
 

Almost two-thirds of Americans are using more than one computing device — defined as a smartphone, tablet, computer or netbook — according to a poll released this week. Unsurprisingly, the poll, which surveyed 2,000 Americans, found that 83 percent of people want access to their documents in the cloud. Of course they do. When 63 percent of the population has multiple computing devices and one-third has more than three, keeping them synced is a pain best left back in the late ’90s and early ’00s, where it belongs.

The survey, conducted by Harris Interactive on behalf of a company that provides presentation software in the cloud, helped crystallize a question: Do we only want dumb screens? By dumb screens I mean devices that pull whatever content and services you want over the web, as opposed to storing them on a hard drive or locking them to the device. So far the answer is that we want it both ways, but in the future I lean toward dumb terminals, with one exception: the smartphone.

Right now, the high cost of mobile broadband access and slow speeds (plus intermittent Wi-Fi) make the idea of dumb terminals impractical for most people. But as Wi-Fi becomes more pervasive and LTE networks roll out, I think we’ll see those barriers drop. So it makes sense to think about what should be dumb and exactly how dumb it should be. I think a television makes a great dumb screen. For my laptop, I’d give a hearty plug for a dumb screen (look at the Chromebook, for example), and tablets are an area where I lean toward dumb screens as well. Smartphones are the big outlier.

Most of my interaction with my Android handset centers on the web, email and a few apps. On occasion I take photos and share them from my phone, and yes, I still use it for voice calls. So today my smartphone isn’t a dumb screen, and here’s why it never will be:

App Stores. I wish this particular reason would disappear, but I doubt it will. Thanks to Apple’s ability to get people to buy into apps, we now have a $14 billion app economy. As someone with iOS and Android devices, as well as a general worldview that favors a unified platform, I wish HTML5 apps would get going in a major way so I could just get what I need on the web instead of downloading apps from OS-specific or device-specific app stores. It drives me crazy that I can’t get some apps on my Android handset that I use on the iPad, and that when they are offered, I have to buy them twice. So I’d love for apps to stick around, but I want the barriers to installing them on any device to fall, thanks to HTML5 and permissions to access device hardware.

Smartphones as your link to the digital world. As the most portable and soon-to-be most ubiquitous of the computers people own, the smartphone is increasingly becoming the sensor that connects the real world to my digital one. Thus, I want it packed with sensors, cameras and enough intelligence to ensure these things all work together to upload not just files but also context about my day-to-day wanderings back up to the web.

There’s still a strong argument for different dumb screens having different interfaces depending on their size and perhaps their position in the home. Smaller screens require touch, while larger ones should use gesture. As a writer, I need a keyboard on my laptop, while my tablet and phone don’t. The debate between smart and dumb screens used to have a component linked to how one would interact with them, but increasingly I think what makes something “smart” is less the keyboard than the kind of information it needs to store and process. Thanks to web services, there’s little we’ll want to store and process on televisions, laptops and even tablets, but smartphones will still need more brains than just screen real estate and a good set of radios, to handle image processing, sensor interaction and, yes, those darn apps.

As we overload our homes with computers and connected gadgets — 15 percent of Americans use four or more a week, according to the Harris poll — the idea of dumbing down the device and relying on web services has strong appeal. Sure, offline access to documents and other services is a stumbling block, but that’s becoming less and less of a problem for those willing to pay for mobile broadband access. How dumb should our devices get?

LinkedIn's post-IPO offerings and opportunities
June 29, 2011 at 2:45 PM
 

A month after LinkedIn’s debut as a public company, Wall Street seems to love it, as evidenced by analysts’ predictions this week. Nor is the professional social network resting on its laurels post-IPO. The company has introduced several new initiatives over the past six months, and in June it made moves that include testing a new social ad format, hiring a journalism pro to help with content initiatives and adding syndication features to its jobs platform.

These moves are geared to increase member usage and value for advertisers, and here's what each might bring to LinkedIn's overall growth strategy:

  • Social ad format: Borrowing an idea from Facebook ads, LinkedIn is testing a new format that uses information from a user's network connections and activities to serve more targeted ads. An example might be a recruiting ad that includes contacts at the hiring company who could refer the user. With this initiative, the company is smartly tapping its professional contact data to raise its advertising value — and presumably its prices. LinkedIn is, however, being fairly sensitive about potential privacy concerns here: it lets users opt out of these ads and it hides the underlying data from the advertiser.
  • News and content: LinkedIn hired Daniel Roth, Fortune's digital editor, to work on LinkedIn Today as well as other content projects. A few savvy editors can advance LinkedIn's news curation and aggregation strategy more efficiently than hiring a lot of original content creators. Similarly, the company is adding sharing integration with SlideShare and creating a SlideShare section for Today, hoping to leverage user-generated content and professional connections to increase usage frequency and duration.
  • Job application plug-in: Sites will soon be able to add an "Apply with LinkedIn" button to job postings on their websites. This is a much more logical LinkedIn platform plug-in than product or content recommendations. It could encourage wider adoption of LinkedIn's sign-in service and contribute to LinkedIn's objective of housing a user's professional identity: the LinkedIn profile as identity and resume.

Despite these new offerings, LinkedIn nonetheless faces fierce competition from Facebook, Google and Twitter when it comes to leading in the social media space. Those sites have better access to information about people's interests, which is more widely useful for advertising and shopping. But LinkedIn's best chance to establish its APIs and technology platform is as a professional-identity authenticator, meaning LinkedIn could give its members the digital equivalent of a business card, but with the trusted authority of the LinkedIn name behind it. Besides professional relationships — hiring, of course, but also buying, selling and sales leads — such an authenticated identity could play a role in contracts and payments systems.

LinkedIn has its work cut out for it in this competition. It will have a hard time adding yet another recommendation button to sites, so its sign-in feature could be the most appealing part of its platform, especially for companies offering information about business content and services, travel or technology. I go deeper into LinkedIn's post-IPO strategies in a new research note at GigaOM Pro (subscription required).

Image courtesy of flickr user nan palmero

Zynga, Groupon to the rescue for cleantech bets
June 29, 2011 at 2:15 PM
 

The future of clean power and greener transportation is being carried by FarmVille and online coupons — at least in terms of venture capital portfolios.

Earlier this month, I wondered if huge returns for VCs from social media IPOs could help make up for large bets on capital-intensive greentech companies that haven’t (yet?) delivered returns. My example was NEA and its potentially massive Groupon win (NEA’s Groupon stake could be worth $2 billion on paper, a 135x multiple), in contrast to some of its capital-intensive green investments like fuel cell maker Bloom Energy, solar company HelioVolt, smart grid company GridPoint and solar firm Konarka.

Connie Loizos at peHUB digs into this theme, too, in a column Tuesday night pointing out how a potentially giant return for Kleiner Perkins from an IPO for game startup Zynga could help make up for its tied-up greentech investments.

“[T]here's no question that Kleiner was losing its footing, and that it's about to be saved by Zynga,” writes Loizos.

Zynga reportedly could raise up to $2 billion in an IPO as soon as this week, valuing the company at as much as $20 billion. That’s twice Zynga’s valuation when it raised its most recent round of $300 million last summer. If Kleiner owns even 3 percent of Zynga, its stake could be valued at $600 million, writes Loizos.

The last time Kleiner saw a return that solid was its nameplate Google win, when it put in $25 million in 1999 for a 20-percent stake. And we all know the end of that story, or the start of Kleiner’s fame: an eventual $2 billion return for Kleiner’s investors.

Kleiner has some weak, some OK, and some decent greentech bets. Smart grid company Silver Spring Networks was supposed to IPO last year, but still hasn’t, though I’ve been hearing that it might go sometime soon. Electric car maker Fisker Automotive is also supposed to IPO shortly, though hasn’t yet started selling its first car, the Karma. Kleiner made money on biofuel company Amyris’ IPO, and will make money on the IPO of solar inverter maker Enphase Energy.

Oh, and those were some of the more promising ones. Bloom Energy is one of the most capital-intensive companies in the Valley and recently started offering an energy-as-a-service model, which seems like it would require even more money (likely from banks) for installations. AltaRock has struggled with its geothermal projects, and I haven’t heard anything from carbon capture and recycling company GreatPoint Energy in a couple of years. Biofuel company Mascoma delayed its commercial-scale cellulosic ethanol plant, though it finally got an investment from Valero.

As Loizos puts it:

“The reality is that Zynga represents the first — and potentially only — tangible opportunity for Kleiner to hit the cover off the ball as it did with Google.”

Samsung asks ITC to ban the import of Apple devices
June 29, 2011 at 1:45 PM
 

The back-and-forth in the patent dispute between Samsung and Apple continues, with Samsung filing a request for a U.S. import ban against the iPhone, iPad and iPod, FOSS Patents reports. The complaint was filed with the International Trade Commission (ITC) on Tuesday.

The ITC is a government regulatory body, which acts independently of the courts. Apple seems to be gearing up for a preliminary injunction request in its legal case against Samsung in the U.S. District Court for the Northern District of California, but it hasn’t sought any action from the ITC, unlike in previous cases against competing smartphone manufacturers such as HTC. Samsung’s move is a clever way to beat Apple to a potentially hobbling import ban, since the ITC’s decision is independent of the ongoing court case, and a final decision is reached within a fairly set time frame of 16 to 18 months, once the ITC agrees to investigate.

Apple is likely to respond with a complaint of its own, according to FOSS Patents’ Florian Mueller. Mueller says that Apple has to do so, “because otherwise Samsung might obtain an import ban against Apple long before Apple wins an injunction against Samsung.” The only reason that Apple hasn’t filed with the ITC already, Mueller says, is because doing so in its initial complaint with the U.S. courts “raises legal issues that go beyond the scope of an ITC investigation.”

Even though this action appears to raise the stakes in the ongoing patent dispute between the two companies, Mueller says it shouldn’t affect the likelihood of a settlement being reached. It’s a common step in this kind of confrontation, and one Apple is probably prepared for. Still, even though it isn’t very likely, there’s a small chance that in a year or two we could be living in a world where neither Apple nor Samsung smartphones are available in the U.S. It’ll be like 2006 all over again.

Paris power play feels like deja vu all over again
June 29, 2011 at 12:46 PM
 

Just over a month ago, the great and the good gathered in Paris to discuss the future of the Internet at a closed meeting of powerful international interests. This week, senior officials from around the world are gathering again, in the same city. And the subject up for debate? The future of the Internet.

Sounds familiar, doesn't it?

On the surface these two events — one organized alongside the G8 summit on behalf of French president Nicolas Sarkozy, the other put together by the Organization for Economic Cooperation and Development (OECD) — look very similar. But the OECD has been trying desperately to put daylight between them, not least because Sarkozy's event was roundly criticized (including by me) as a stitch-up between governments and corporate interests.

To that end, the OECD has been promoting the idea that its discussions are looser, broader and driven by less of a controlling agenda. Just the other day it got that message across in a New York Times story in which OECD official Sam Paltridge explained the point of the debate:

"We're trying to get the message across that if you hamper the flow of information, you are shooting yourself in the foot in terms of the economic benefits of the Internet. If someone comes along and threatens that openness, that's a real problem for economic growth."

It's a fair point — and one worth making. After all, the system is coming under pressure from all directions. Dictatorships want their own levels of control. Businesses have a profit-based agenda. Developing nations don't want to be at the mercy of their larger cousins. And many countries, notably Russia, are lobbying for greater centralized powers to control the network through "establishing international control over the Internet using the monitoring and supervisor capabilities of the International Telecommunication Union".

(There's a deep irony in the fact that Russia, which harbors a vast industry of online criminals and rarely helps international investigators track down fraudsters within its borders, is doing this, of course. But the point is simple: if control is centralized, such governments can use soft power and indirect pressure more effectively than with a fragmented system of governance spread across ICANN, service providers, national courts and companies like Google.)

The trouble with the OECD's attempt to paint its event as a friendly, broadly consensual approach to the future of the net is that it's still largely invitation-only. Even the civil society groups that were part of the talks — including established names such as the Internet Society and the Electronic Frontier Foundation — have poured cold water on it, announcing yesterday that they were withdrawing their support for the event's draft proposals.

CSISAC, the OECD's advisory group representing civil society organizations, says it cannot agree with the general tone of the communique put forward by the event's organizers:

"CSISAC believes that the Communiqué which was presented today at the OECD's High Level Meeting on the Internet Economy in Paris, could undermine online freedom of expression, freedom of information, the right to privacy, and innovation across the world," it said. But it "was not able to accept the final draft's over-emphasis on intellectual property enforcement at the expense of fundamental freedoms."

It's great that somebody is standing up here. After all, there are so few bodies out there that get a chance to represent the most important group that has a stake in the future of the network — the users.

As I've argued before, most of the decisions are left to unaccountable bodies who do not necessarily have your best interests at heart: governments, corporations, technology platforms, international bodies and powerful individuals. They are all subject to lobbying and vested interests, and they all have reasons for wanting to stop the Internet from growing unfettered.

The real problem is that, as the events in Paris over the past month show, all of this gets us nowhere. We're stuck in the same arguments, again and again.

There's so much going on in the online world, yet discussions about the future of the Internet come down to an ever-tightening, ever-more-shrill argument about copyright. One side says greater controls are needed to keep the creative economy afloat; the other side says it's a Trojan horse: let them monitor what you do in the name of protecting copyright, and soon they'll monitor your every move. Whether either of those positions is accurate is up for debate, but the truth is that real users are simply left to be kicked around between the opposing poles like a ball.

I don't know whether we'll ever get past this, but here's a suggestion: the next time the great and the good gather in Paris, I think they should simply put aside the question of copyright and focus on what they can do to enhance the way that people — real people — use and access the Internet. Maybe then we'll see a real consensus, rather than being stuck in our own version of Groundhog Day.

Apture HotSpots makes the whole web like Wikipedia
June 29, 2011 at 12:00 PM
 

Apture, the San Francisco-based startup, has made a very useful addition to its “contextual exploration engine” technology, which allows content publishers to add rich media features to their web pages.

The company announced Wednesday a new feature called “HotSpots” that automatically creates new visible hyperlinks within online content based on what readers are likely to want to know more about.

HotSpots is a natural progression of the Highlights feature Apture rolled out a year ago, which lets readers highlight any phrase on the page and learn more about the topic through a small pop-up window that includes information such as Wikipedia entries, related images and videos. HotSpots tracks the phrases that readers have highlighted most often and turns them into visible hyperlinks to let users know that there are particularly interesting topics they may want to learn more about.
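The mechanism described here boils down to frequency counting: log which phrases readers highlight, and promote the most popular ones to visible links. Here is a minimal sketch of that idea in Python; the function name, thresholds and sample data are my own illustrative assumptions, not Apture's code.

    # Illustrative sketch of the HotSpots idea: phrases highlighted often
    # enough get promoted to visible links. Not Apture's actual implementation.
    from collections import Counter

    def select_hotspots(highlight_events, min_count=5, max_hotspots=3):
        """Return the most-highlighted phrases on a page, most popular first."""
        counts = Counter(phrase.lower() for phrase in highlight_events)
        popular = [(p, c) for p, c in counts.most_common() if c >= min_count]
        return [phrase for phrase, _ in popular[:max_hotspots]]

    # Simulated highlight log for one page.
    events = ["Tour de France", "peloton", "Tour de France", "peloton",
              "Tour de France", "yellow jersey"] * 3
    print(select_hotspots(events))  # ['tour de france', 'peloton']

In production the counts would presumably be aggregated across all readers of a page and refreshed over time, but the core signal is just this kind of popularity tally.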

With HotSpots, Apture is essentially “rewiring the web based on where missing links should be,” CEO Tristan Harris (pictured here) told me in an interview this week. Publishers such as Scientific American, the Denver Post and the Nation like Apture because it keeps readers engaged with their content without needing to open a new tab and navigate away from the page to learn more about a certain topic, Harris said. To enable Apture on their websites, publishers just have to add one line of JavaScript code. HotSpots is also available as a browser extension — so if you love it as a reader, you can take it with you practically anywhere you go on the web.

“We want the whole web to be as rich and interconnected as the most engaging sites on the web like Wikipedia or Facebook,” Harris says in a blog post announcing the HotSpots feature. “On those sites, everything you want to know about is already cross-referenced and linked together.”

I’m a very curious person, so I go through the “highlight, copy, search in a new tab” rigmarole probably hundreds of times per day. I tested out the HotSpots feature, and I must say I was impressed. The HotSpot link is clear but unobtrusive, and the pop-up appears when you hover over it. Apture’s information is relevant and spam-free — pretty much the opposite of the ad-laden hyperlinks inserted by traditional contextual advertising companies. I can see HotSpots being especially useful for older readers who may not be as used to highlighting and searching for terms they’re interested in learning more about.

What’s especially impressive is that Apture built out this technology — and signed up big-name publishers — with just 12 full-time employees. The company has raised $4.1 million since its inception in July 2007 and already serves some 900 million page loads a month with its Highlights product. Apture provides its technology to publishers in a free ad-supported version and a paid version with no ads. Harris declined to provide current revenue figures, saying that so far Apture has been focused more on getting the product out there than on charging a lot for it: “We’re still really optimizing for distribution, and you don’t want to inhibit your growth early on.”

Here’s a screenshot of what HotSpots looks like:

Image of Tristan Harris courtesy of the company.

Battle on: MapR, Cloudera pimp their Hadoop products
June 29, 2011 at 11:00 AM
 

The fight for Hadoop dominance is officially on. The unveiling of Yahoo’s Hadoop spinoff Hortonworks will undoubtedly be the talk of today’s Hadoop Summit, but it’s not the only game in town. In fact, while Hortonworks is busy answering questions about its product strategy, Cloudera and MapR will demonstrate new versions of their distributions, overflowing with bells and whistles.

I wrote yesterday about the importance of new tools designed to improve the Hadoop experience at a level above the distribution layer, but the distribution — the underlying code base that defines Hadoop’s core architecture and capabilities — is still king. Apache Hadoop is a set of open source tools designed to enable the storage and processing of large amounts of unstructured data across a cluster of servers. Chief among those tools are Hadoop MapReduce and the Hadoop Distributed File System (HDFS), but there are numerous related ones, including Hive, Pig, HBase and ZooKeeper.
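
For readers who have never touched it, the canonical Hadoop "hello world" is a word count, and Hadoop Streaming lets you write the mapper and reducer as ordinary scripts. The sketch below is purely illustrative; the jar name and the input and output paths are placeholders:

    #!/usr/bin/env python
    # wordcount.py -- minimal Hadoop Streaming word count.
    # Run with something like (placeholder jar name and paths):
    #   hadoop jar hadoop-streaming.jar -file wordcount.py \
    #     -mapper "python wordcount.py map" -reducer "python wordcount.py reduce" \
    #     -input /logs -output /wordcounts
    import sys

    def mapper():
        # Emit "word<TAB>1" for every word on stdin
        for line in sys.stdin:
            for word in line.split():
                print("%s\t1" % word)

    def reducer():
        # Input arrives sorted by key, so we can sum runs of identical words
        current, total = None, 0
        for line in sys.stdin:
            word, count = line.rstrip("\n").split("\t")
            if word != current:
                if current is not None:
                    print("%s\t%d" % (current, total))
                current, total = word, 0
            total += int(count)
        if current is not None:
            print("%s\t%d" % (current, total))

    if __name__ == "__main__":
        mapper() if sys.argv[1] == "map" else reducer()

The distributions discussed below all ship this same MapReduce/HDFS core; what differs is how much of it they tweak, replace or wrap in management tooling.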

Most vendors try to distinguish their Hadoop distributions with MapReduce and HDFS. Some will try to tweak the core Apache features and architectures, while others will replace one component — generally HDFS — altogether.

EMC and IDC released their Digital Universe study this week, estimating that we’ll create 1.8 zettabytes of data this year and that data growth is outpacing Moore’s Law. Now that we’ve realized there’s value in all that information, we’re anxious to capture, analyze and use it, and that requires more and better big data technology. As this diagram from Karmasphere illustrates, Hadoop is a very large part of the big data stack, which means we’re just getting started.

So many distributions, so little time

Cloudera: Cloudera, whose CDH was the first commercial Hadoop distribution, takes the full complement of available open-source components and integrates them into an enterprise-grade product. Its value isn’t so much in “improving” Hadoop as it is in making everything from Hadoop MapReduce to its own Sqoop (SQL to Hadoop) tool work well together out of the box.

Cloudera actually released CDH version 3.5 recently, but today it released a bunch of new features for its Cloudera Enterprise product, a suite of management tools designed to make it easier to operate CDH clusters. The coolest has to be something called SCM Express, which makes getting started with Hadoop easier. Cloudera’s Charles Zedlewski explained that SCM Express is a free tool that lets users provision and launch up to a 50-node Hadoop cluster “in about six clicks.”

MapR: However, Cloudera has lots of company, including the brand-new MapR. That startup just released its first two products today — a free Hadoop distribution called M3 and a paid distribution called M5. MapR takes the Cloudera approach of integrating the entire spectrum of Hadoop tools into its distribution and including management functionality, but it also has made a number of significant changes to the MapReduce and HDFS components to improve performance.

MapR’s Jack Norris says the result is “probably the most comprehensive distribution,” which performs two to five times faster than the standard Apache Hadoop. A majority of MapR’s changes are to the storage layer, which it has reworked to be faster, easier, more reliable and more scalable than HDFS.

You can’t talk about MapR without talking about EMC, which announced last month that the Enterprise Edition of its Greenplum HD Hadoop distribution will be “powered by MapR.” Norris explained to me that the product, available later this year, will utilize MapR’s M5 version, which includes advanced storage capabilities around high availability and data protection. However, EMC’s line of Greenplum HD distributions, which also includes a free Community Edition, is actually centered around the specialized Hadoop code developed by and running within Facebook.

Of course, Hortonworks isn’t to be discounted, nor are DataStax with its Cassandra-based Brisk distribution or IBM, which has been promising its own Big-Blue-style Hadoop distribution for some time. But the most interesting thing about all this Hadoop activity might be the pace of it: as of mid-March, Cloudera stood alone as a commercial Hadoop provider. Now it has four competitors with more likely to come.

Feature image courtesy of Flickr user Joi.

   
   
Hipmunk knows you're addicted to the web
June 29, 2011 at 10:00 AM
 

Hipmunk, the hot flight-finding service whose user experience has launched a thousand blog posts, has added in-flight Wi-Fi as a new metric for travelers. So now, when I’m searching for one-stop, on-time flights between Austin and San Francisco, I know which ones offer Wi-Fi (none do). Sure, this is an indication of how deep the web has sunk its talons into us, but that ship has sailed (or perhaps that flight has taken off), and working while on flights has become pretty much the standard in many offices.

So viva the in-flight Wi-Fi indicators, and let’s get JetBlue on board with this trend. Also, for those who travel often and have a Boingo subscription, check out its deal with GoGo Internet. Now, on flights offered by American Airlines, Delta, Alaska Airlines, U.S. Airways and others, Boingo users can get online using their Boingo login, but they can’t pay Boingo prices. In-flight Wi-Fi is still a luxury item, it seems.

   
   
The iPhone Effect: how Apple's phone changed everything
June 29, 2011 at 9:32 AM
 

Apple’s iPhone debuted four years ago, and we sometimes take for granted how much has changed since then. The phone altered the smartphone landscape and ushered in the modern era of intelligent, connected devices. Apple has not cruised to the top and in fact continues to trail nemesis Google’s Android in smartphone sales. But it shook up the industry and forced changes and upheaval among many competitors.

Here’s a look at some stats on how things have changed over that period, both for Apple and for other companies operating in the same space.

Shifting stock fortunes:

  • Apple’s stock price at the close of June 29, 2007, the day of the iPhone launch: $122.04 a share. Tuesday: $335.26 with a market cap of $310 billion.
  • Research in Motion’s stock price on June 29, 2007: $66.66. Tuesday: $28.24 per share with $14.7 billion market cap.
  • Nokia’s stock price on June 29, 2007: $23.63. Tuesday: $6.11 with $22.7 billion market cap.
  • HTC’s stock price on June 27, 2007: 361.01 (Taiwan), Tuesday: 1040.

Worldwide smartphone market share shifts since 2007:

  • Q2 of 2007: Symbian 65.6 percent, Windows Mobile 11.5 percent, RIM 8.9 percent, according to Gartner.
  • Q1 of 2011: Android 36 percent, Symbian 27.4 percent, iOS 16.8 percent, RIM 12.9 percent.

Apple revenues shift toward the iPhone:

  • In Q1 this year, iPhone revenues hit $12.3 billion, 49.8 percent of Apple’s revenues, ahead of Macs at $4.9 billion and iPod at $1.6 billion.
  • In Q2 of 2007, Mac revenue was $2.3 billion, 43 percent of the company’s revenue, while iPods brought in $1.7 billion, or 32 percent of revenue.
  • As of March, Apple said it has sold 108 million iPhones, 60 million iPod touches and 19 million iPads.

Notable changes since the introduction of the iPhone:

  • Google introduces the Android operating system on Oct. 21, 2008.
  • HP announces acquisition of Palm on April 28, 2010.
  • Nokia announces partnership with Microsoft to run Windows Phone 7 on upcoming smartphones on Feb. 11, 2011.
  • Motorola spins off Motorola Mobility Holdings on Jan. 4, 2011.

Effect on carrier competition and data use:

  • In 2006, Verizon Wireless had 7.7 million new subscriber additions compared to AT&T’s 7.1 million. After the iPhone launched, AT&T outpaced Verizon in net subscriber adds for the next three years, according to FCC figures. By 2009, AT&T had 8.1 million new adds while Verizon had 5.9 million.
  • AT&T’s margin on earnings before interest, taxes, depreciation and amortization went from 34.4 percent in Q4 2006 to 40.7 percent by Q4 2009, outperforming all the other major carriers.
  • AT&T’s base of postpaid integrated 3G devices has grown steadily since the launch of the iPhone, going from 8.5 million in Q4 2008 to 29.7 million by Q2 2010.
  • Data revenue in 2006 for all carriers was just 7.5 percent of total revenue. By 2009, it was up to 26.8 percent.

The rise of smartphones:

  • Smartphone adoption in Q1 of 2008 was 10 percent according to Nielsen. Nielsen predicts that smartphones will outnumber feature phones by the end of this year.
  • Worldwide smartphone sales will hit 468 million this year and reach 1.1 billion by 2015, according to Gartner.

Smartphones open the door for tablets:

  • Apple has sold about 19.5 million iPads through the first quarter of this year.
  • Gartner estimates there will be 294 million tablets in 2015.

App ecosystems thrive:

  • Apple’s App Store now boasts 425,000 apps, with 14 billion app downloads and $2.5 billion paid to developers to date.
  • The Android Market has 200,000 apps and has had 4.5 billion downloads as of May.
  • IDC now expects 182.7 billion mobile app downloads by 2015 across all platforms.
  • Canalys estimates mobile app revenue will hit $14.1 billion next year and rise to $36.7 billion by 2015.

   
   
Acunu's quest to make big data run faster
June 29, 2011 at 9:00 AM
 

Tim Moreton, CEO of Acunu

Big data can sometimes mean big infrastructure to run everything on, which can mean slower performance as the hardware struggles to read from or write to a database at the same speed it can process the data. Some companies are tackling these issues with even more specialized databases, while others are trying to speed up access to data by adding more memory. Acunu offers a mix of both, which is why we chose it as one of our Structure 2011 Launchpad companies.

The startup wants to improve data stores by providing an underlying file system that makes databases run faster: a layer that sits between the database and an SSD, optimized to boost the performance of a particular data store. The London startup, founded in May 2009, has raised about 3.1 million pounds ($4.9 million) in a seed and Series A round from Eden Ventures, Oxford Technology Management and Pentech Ventures LLP. The alternative to Acunu is to add more hardware or perhaps better tune your applications — tasks that can be both expensive and time-consuming.

Tim Moreton, the CEO, explained to me that the company’s first file system optimization works for Cassandra with a second version aimed at better Hadoop integration. However, it won’t stop there because other big data stores are used for other applications, and Acunu doesn’t want to be a niche service to boost the performance of one data store. So expect support for new data stores in the coming months — although Moreton was tight-lipped about which ones — as Acunu strives to become a real platform.

   
   
Animoto raises $25M to invest in mobile video creation
June 29, 2011 at 7:30 AM
 

Cloud-based video creation startup Animoto is ready to invest heavily in its growing business with a new round of funding it’s announcing Wednesday. The company has raised $25 million, led by Spectrum Equity Investors with participation from existing investors Madrona Venture Group and Amazon.com.

Animoto runs a freemium service that lets users create short video slideshows from collections of pictures and videos that they have on their computers or mobile phones. If they want more features, such as custom templates or longer videos, users have the option to pay a small one-time fee for individual slideshow creation or a subscription for an unlimited number of slideshows.

So far, that model has been working pretty well: Animoto has more than 3 million users, who together have created about 15 million videos. But it sees an opportunity to grow faster and to attract more users, particularly through the development of mobile apps and by striking partnerships with third-party photo- and video-hosting sites. Thanks to the Animoto API, Kodak Gallery, American Greetings' Webshots and Aviary.com are now also offering the web-based video creation service to their customers.

But the key to Animoto’s future might be in mobile, according to CEO Brad Jefferson. Mobile users are increasingly using their smartphones to shoot pictures and videos, and Animoto wants to make it easy for them to create and share slideshows with friends. At the same time, Animoto recognizes that it needs to invest heavily not just to keep pace with the competition but to beat it.

“It’s very easy for companies around mobile and photos to fall behind,” Jefferson said. “We want to be the [mobile] leader and accelerate development through this period of innovation.”

Right now Animoto has just an iPhone app, but Jefferson said it’s evaluating other mobile app platforms. With that in mind, and with a view to also investing in its core product, Animoto is aggressively adding to its headcount. It has 60 employees now, compared to about 34 at the start of the year, with plans to expand to 90 employees by the end of 2011.

This round of financing is the first since 2009, when Animoto raised $4.4 million. In addition to the funding, Animoto has announced that Spectrum Equity Investors Managing Director Ben Sphero will join the company’s board of directors.

   
   
Why lithium-ion batteries die so young
June 29, 2011 at 3:00 AM
 

The death of a battery: we've all seen it happen. In phones, laptops, cameras, and now electric cars, the process is painful and — if you're lucky — slow. Over the course of years, the lithium-ion battery that once powered your machine for hours (days even!) will gradually lose its capacity to hold a charge. Eventually you'll give in, maybe curse Steve Jobs, and then buy a new battery, if not a whole new gadget.

But why does this happen? What's going on in the battery that makes it give up the ghost? The short answer is that damage from extended exposure to high temperatures and a lot of charging and discharging cycles eventually starts to break down the process of the lithium ions traveling back and forth between electrodes.

The longer answer, which will take us through a description of unwanted chemical reactions, corrosion, the threat of high temperatures, and other factors affecting performance, begins with an explanation of what happens in a rechargeable lithium-ion battery when everything's working well.

Lithium-ion Battery 101

In a typical lithium-ion battery, we’ll find a cathode, or positive electrode, made out of a lithium-metal oxide, such as lithium cobalt oxide. We'll also find an anode, or negative electrode, which today is generally graphite. A thin, porous separator keeps the two electrodes apart to prevent electrical shorting. And an electrolyte, made of organic solvents and lithium-based salts, allows for the transport of lithium ions within the cell.

During charging, electric current forces lithium ions to move from the cathode to the anode. During discharging (in other words, when you use the battery), ions move back to the cathode.

Daniel Abraham, a scientist at Argonne National Laboratory leading research into how lithium-ion cells degrade, compared this process to water in a hydropower system. Moving water uphill requires energy, but it flows downhill very easily. In fact, it delivers (kinetic) energy, said Abraham. Similarly, a lithium cobalt oxide cathode, "does not want to give up its lithium," he said. Like moving water uphill, it requires energy to take lithium atoms out of the oxide and load them into the anode.

During charging, ions are forced between sheets of graphene that make up the anode. But as Abraham put it, "they don't want to be there. When they get a chance, they'll move back," like water flowing downhill. That's discharging. A long-lasting battery will survive several thousand of these charge-discharge cycles, according to Abraham.

When Is a Dead Battery Really Dead?

When we talk about "dead" batteries, it's important to understand two performance metrics: energy and power. For some applications, the rate at which you can get energy out of the battery is very important. That's power. In electric vehicles, high power enables rapid acceleration and also regenerative braking, in which the battery needs to accept a charge within a couple seconds.

In cell phones, on the other hand, high power is less important than capacity, or how much energy the battery can hold. Higher capacity batteries last longer on a single charge.

Over time, the battery degrades in a number of ways that can affect both power and capacity, until eventually it simply can't perform its basic functions.

Think of it in terms of another water analogy: Charging a battery is like filling a bucket with water from a tap. The volume of the bucket represents the battery's energy, or capacity. The rate at which you fill it—turning the tap on full blast or just a trickle—is the power. But time, high temperatures, extensive cycling and other factors end up creating a hole in the bucket (dear Liza, dear Liza…).

In the bucket analogy, water leaks out. In a battery, lithium ions are taken away, or "tied down," said Abraham. Bottom line, they're prevented from going back and forth between the electrodes. So after a few months, the cell phone that initially required a charge only once every couple of days now needs a charge every day. Then it's twice a day. Eventually, after too many lithium ions have been tied down, the battery won't hold enough of a charge to be useful. The bucket will stop holding water.
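
To make the leaky-bucket analogy concrete, here is a deliberately crude toy model. It is not real electrochemistry, just the analogy in code with a made-up per-cycle loss rate; real fade also depends heavily on temperature, depth of discharge and chemistry:

    # Toy illustration of the leaky-bucket analogy only (the loss rate is invented)
    def remaining_capacity(cycles, loss_per_cycle=0.0002):
        """Fraction of original capacity left after a number of full charge/discharge cycles."""
        return (1.0 - loss_per_cycle) ** cycles

    for n in (0, 500, 1000, 2000, 3000):
        print("%4d cycles: %5.1f%% of original capacity" % (n, 100 * remaining_capacity(n)))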

Why does this happen? Well, in addition to the chemical reactions that we want to happen in the battery, there are also side reactions. Barriers arise that impede the motion of lithium ions. So the electric car that went, say, zero to 60 in five seconds off the lot, will take eight seconds after a few years, and maybe 12 seconds after five years. "All the energy is still there, but it can't be delivered fast enough," said Abraham. The ions run into roadblocks.

What Breaks Down and Why

The active portion of the cathode (the battery's source of lithium ions) is designed with a particular atomic structure, for stability and performance. When ions are removed, sent over to the anode, and then inserted back into the cathode, we ideally want them to return to the same spot, in order to preserve that nice stable crystal structure.

Problem is, the crystal structure can change with each charge and discharge. An ion from Apartment A doesn't necessarily come home but could instead insert itself into Apartment B next door. So the ion from Apartment B finds her place occupied by this drifter and, not being one for confrontation, decides to take up residence down the hall. And so on.

Gradually, these "phase changes" in the material transform the cathode to a new crystal structure, with different electrochemical properties. The particular arrangement of atoms, which enabled the desired performance in the first place, has been altered.

In hybrid vehicle batteries, which only need to provide power during acceleration or braking, noted Abraham, these structural changes occur much more slowly than in electric vehicles, because only a small fraction of lithium ions in the system move back and forth in any given cycle. As a result, he said, it's easier for them to return to their original locations.

Problem of Corrosion

Degradation can occur in other parts of the battery as well. Each electrode is paired with a current collector, which is basically a piece of metal (typically copper for the anode, aluminum for the cathode) that gathers electrons and moves them to an external circuit. So you have a slurry made from an "active" material like lithium cobalt oxide (which is ceramic and not a very good conductor), plus a glue-like binder, painted over this piece of metal.

If the binder fails, the coating can peel off the current collector. If the metal corrodes, it can't move electrons as efficiently.

Corrosion within the battery cell can result from an interaction between the electrolyte and electrodes. The graphite anode is highly "reducing," which means it gives up electrons easily to the electrolyte. This can produce an unwanted coating on the graphite surface. The cathode, meanwhile, is highly "oxidizing," which means it easily accepts electrons from the electrolyte, which in some cases can corrode the aluminum current collector or form a coating on the cathode particles, Abraham said.

Too Much of a Good Thing

Graphite — the material commonly used to make an anode — is thermodynamically unstable in an organic electrolyte. What that means is the very first time our battery is charged, the graphite reacts with the electrolyte. This forms a porous layer (called a solid electrolyte interphase, or SEI) that actually protects the anode from further attacks. This reaction also consumes a little lithium, however. So in an ideal world, we would have that reaction occur once to create the protective layer, and then be done with it.

In reality, however, the SEI is a sadly unstable defender. It does a good job protecting the graphite at room temperature, said Abraham, but at high temperatures or when the battery runs all the way down to zero charge ("deep cycling") the SEI can partially dissolve into the electrolyte. (At high temperatures, electrolytes also tend to decompose and side reactions accelerate.)

When friendlier conditions return, another protective layer will form, but this will eat up more lithium, giving us the same problem we had with the leaky bucket. We'll have to recharge our cell phone more often.

Now, as much as we need that SEI to protect the graphite anode, there can be too much of a good thing. If the layer thickens too much, it actually becomes a barrier to the lithium ions, which we want to flow freely back and forth. That affects power performance, which is, as Abraham emphasized, "extremely important" for electric vehicles.

Building Better Batteries

So, what can be done to make our batteries last longer? In the lab, researchers are looking for electrolyte additives to function like vitamins in our diet, enabling the battery to perform better and live longer by reducing harmful reactions between the electrodes and electrolyte, said Abraham. They're also seeking new, more stable crystal structures for the electrodes, as well as more stable binders and electrolytes.

Engineers at battery and electric car companies, meanwhile, are working on the battery pack and thermal management systems to try and keep lithium-ion cells within a constant, healthy temperature range. The rest of us, as consumers, can avoid extreme temperatures and deep cycling, and for now keep grumbling about those batteries that always seem to die too soon.

Image courtesy of Argonne National Labs, felixtsao, warrenski, MitchClanky2008, bizmac.

   
   
Square gets $100 Million, Mary Meeker joins the board
June 29, 2011 at 1:05 AM
 

One thing is clear: investors are in love with Square, the payments company co-founded by Jack Dorsey, the co-creator of Twitter. The company announced today that it has received yet another big slug of funding: the veteran venture capital firm Kleiner Perkins Caufield & Byers is leading a $100 million investment that is said to value Square at $1 billion.

Square CEO & Co-Founder Jack Dorsey

In comparison, Square was valued at $240 million in January 2011 when it did its last round of financing and raised $27.5 million. Square so far has attracted about $30 million in funding. Visa, the credit card giant, made an undisclosed strategic investment in the company recently.

Former star Wall Street analyst Mary Meeker is going to join the board of Square, which also includes Vinod Khosla of Khosla Ventures and Larry Summers, economic advisor to President Obama. The high-powered board, a charismatic founder, a seasoned operator (Keith Rabois) and a fast-growing product offering have made Square one of the hottest new companies in Silicon Valley. It doesn’t surprise me — Square always felt like a game changer to me.

According to the Wall Street Journal, Square is processing nearly $4 million in payments each day. It will process a billion dollars in payments within a year, Rabois told the newspaper. Rabois, formerly of PayPal and Slide, believes that Square is going to be worth more than PayPal.
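
Those two figures are consistent, as a quick back-of-the-envelope check shows (assuming, purely for illustration, that the roughly $4 million-a-day rate simply holds for a year):

    # Annualizing the roughly $4M/day payment volume cited by the WSJ
    daily_volume = 4000000          # dollars processed per day
    print(daily_volume * 365)       # 1460000000 -- comfortably past $1 billion in a year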

Square, based in San Francisco, is the latest among the fast-growing Internet startups to attract massive slugs of money from investors. Foursquare, Twitter, Dropbox and Airbnb are some of the startups that have taken big investments in order to build momentum for their businesses.

 

   
   
Rockmelt gets $30M to double down on social browser
June 29, 2011 at 12:00 AM
 

RockMelt said Tuesday it has raised $30 million in a series B funding round led by Accel Partners, Khosla Ventures and existing investor Andreessen Horowitz.

The Mountain View, Calif.-based startup plans to put the money toward building out its flagship product, a web browser with built-in social features, CEO and co-founder Eric Vishria told me in an interview this week. As part of the new funding, Accel’s Jim Breyer and Khosla Ventures’ Vinod Khosla are joining Rockmelt’s board of directors. This round brings the company’s total venture capital investment to approximately $40 million.

Today, Rockmelt has about 40 full-time employees, and the company plans to double its staff in the next year, Vishria said. Just nine months after its private beta launch, Rockmelt’s browser is available on the desktop and iOS mobile devices, and currently has several hundred thousand active users and over a million installs.

Rockmelt has not started generating revenue, however — and the company does not have any immediate plans to focus on doing so. “We are entirely focused on building a great product and on distribution right now,” said Vishria. But the company does have an idea of how it will eventually bring in cash: namely, search. “Over time, we’ll make money through search, just like other browsers. And as we build in commerce and gaming features, that will bring in revenue as well,” the CEO said.

While Rockmelt has already seen interest from larger potential acquirers, Vishria said the company has decided to “double down” on the opportunity to grow independently. “We’ve been clear from the beginning we’re here to build a big company,” he said. “You can look at any company at any stage and you won’t see a board like this. I don’t feel it as pressure; I feel it as opportunity.”

The money and heavy-hitting board members will likely come in handy. It bears mentioning that such startups as Flock have tried — and failed — to shake up the web browser space with social offerings before. As a newcomer in a field currently dominated by big players such as Microsoft (Internet Explorer), Google (Chrome), Mozilla (Firefox) and Apple (Safari), Rockmelt will certainly need more than just a can-do attitude to make it big.

   
   
Video gamers: The secret stars of live streaming
June 28, 2011 at 8:00 PM
 

Last weekend, hundreds of thousands of people tuned in online as some of the most competitive athletes of our times faced off against each other. And no, we’re not talking about Wimbledon or the Women's World Cup, but the Dreamhack festival in Sweden, which is one of the largest video gaming events in the world.

A Dreamhack contest around the popular game League of Legends alone attracted more than 200,000 simultaneous viewers, according to live streaming video game provider own3D.tv. The Austria-based company is only one of a number of services specializing in broadcasting video game contests — sometimes also called e-sports events — online. All of them are seeing explosive growth, as live webcasts take competitive video gaming to the next level, turning it from a sport for insiders into one that is watched by millions.

own3D.tv began streaming video games online in 2010, and the company now sees more than four million unique viewers per month for its live streams. own3D.tv CEO Zvetan Dragulev told me during a phone conversation today that it saw some 250Gbps of traffic throughput during the Dreamhack event alone, which he attributed in part to the fact that 95 percent of the audience was watching its games in HD, with most streams running at a rate of 1.8Mbps.
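
Those numbers hang together on a back-of-the-envelope basis. The quick calculation below is only a rough check, treating 250Gbps as the peak and ignoring non-HD viewers, but it lands in the same ballpark as the concurrent-viewer counts reported above:

    # Rough sanity check of the own3D.tv Dreamhack figures cited above
    peak_throughput_bps = 250e9   # ~250 Gbps during the event
    hd_stream_bps = 1.8e6         # ~1.8 Mbps per HD viewer
    print(int(peak_throughput_bps / hd_stream_bps))  # ~138,888 concurrent HD streams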

U.S.-based Major League Gaming (MLG) has also seen record audiences in recent months, attracting 117,000 peak concurrent stream views at the Columbus Pro Circuit gaming competition earlier this month. During the weekend of that competition, MLG clocked more than 22.6 million streams.

And live streaming provider Justin.tv has seen e-sports streaming grow so fast that it decided to dedicate an entire site to it. The company launched TwitchTV earlier this month after gaming accounted for around 3.2 million monthly uniques on its main site. Justin.tv Marketing & Communications VP Matthew DiPietro attributed much of this growth to the fact that competitive gaming for the first time ever has the chance to reach its full potential audience thanks to live streaming. "We now have a distribution platform that turns e-sports into a truly live sporting event," he said.

Sites like TwitchTV and own3D.tv see most of their traffic around events, but also feature dozens of live video feeds of people playing popular games at any given time. Some of these are just amateurs that like to show off their skills to fellow gamers, but others are actually professional teams preparing for the next big tournament. "It's like watching a training session of FC Barcelona," said Dragulev.

Gamers are increasingly monetizing their matches, with live video streaming offering another way to make a living. Live streaming sites have revenue sharing deals with their professional gaming partners, and DiPietro said that advertisers love the fact that they can cater to such a well-defined audience. Dragulev explained that his company gets 90 percent of its video views from professional teams and events, which attract much higher CPM rates than traditional user-generated live streams. "We want to compete with real TV budgets," he said.

Part of the secret sauce of video game live streaming has to do with very particular technical issues. DiPietro said that gamers require capabilities to stream from various consoles, and Dragulev told me that his company broadcasts feeds from competitions with a few seconds delay to prevent competing teams from gathering intelligence about potential targets on the live stream.

But a bigger piece of the puzzle has been for own3D.tv to distribute the vastly growing traffic of e-sports live streaming across a number of CDNs, simply because this really is a global phenomenon: Only between 15 and 18 percent of all viewers that tune into streams from own3D.tv come from the U.S., and almost as many viewers come from Russia. Dragulev believes that this international focus will be key to taking live streaming of competitive gaming to the next step. "The gaming market is just in its infancy," he told me.

Image courtesy (CC-BY-SA) of Flickr user ECL X-Series Liverpool.

   
   
mc10: Stretchy electronics for better devices
June 28, 2011 at 7:16 PM
 

Electronics that can break out of their rigid boxes, and be embedded into stretchy, even wearable, materials — that’s the goal of startup mc10, which packages up semiconductors, like silicon, so they can bend, twist and wrap around other structures. The company has just raised $12.5 million led by longtime energy investors Braemar Energy Ventures.

One of the novelties of mc10’s technology is that it has so many applications: medical devices (flexible sensors and surgery tools), clothing with embedded electronics for soldiers or fashion, automotive lighting, and energy and solar technology, like flexible solar panels and movement-powered materials. Yep, picture joggers of the future charging their iPods via their hoodies, and mc10 has an R&D partnership with Reebok for athletic wear.

I first heard about mc10 when the startup, along with researchers from the University of Illinois, won backing from the Department of Energy’s Advanced Research Projects Agency-Energy (ARPA-E), a DOE program modeled after the Department of Defense’s DARPA. The idea of ARPA-E is to award small grants to early-stage, high-risk projects. mc10 and its research group received $1.71 million to make flexible nano-structured electronics that can convert waste heat into electricity.

At the time of its ARPA-E grant, mc10 said the project involved technical risks, making it hard to raise VC funds at that stage. It looks like either those risks have abated, or Braemar and mc10’s other investors, which include North Bridge Venture Partners, Osage University Partners and Terawatt Ventures, are now willing to swallow them. The market size is large enough — scaling up and getting costs low will be the hurdle.

As devices get smaller, and sensors get embedded on everything from pill cap bottles to industrial machinery, new form factors for electronics will be needed. Batteries are going through the same stage of innovation, and ultra-thin and uniquely shaped batteries are being developed.

Image courtesy of pt.

   
   
It's Obvious: Ev Williams and Biz Stone, together again
June 28, 2011 at 6:13 PM
 

Twitter founder Evan Williams

When Evan Williams announced several months ago that he was stepping back from day-to-day involvement at Twitter — the company he co-founded, financed and was the CEO of until last year — it wasn’t clear what he was going to do. That still isn’t clear, but now at least we know who he’s going to be doing it with: his Twitter co-founder Biz Stone announced Tuesday that he’s also stepping back from his involvement with the company, and joining Williams at a relaunched Obvious Corp., along with former Twitter lead developer Jason Goldman. One thing is for sure: whatever kind of third act this group comes up with, expectations are going to be pretty high.

Eagle-eyed readers might have noticed a tiny clue about the new venture in Evan Williams’ blog post in March, which was entitled “An Obvious Next Step.” But for many, the obvious part seemed to be the fact that the former Twitter CEO was leaving the company — by that point, his involvement had declined so far that insiders said he was barely spending any time at all at Twitter headquarters, even though he was supposed to be directing product strategy. Williams had (apparently) voluntarily stepped into that position when Dick Costolo took over as CEO in October of last year.

After he left Twitter (although he remains on the board, and said in March that he would still be advising the company), Williams was replaced as director of product development by another former co-founder: Jack Dorsey, the man who came up with the original idea for Twitter and sold Williams on the concept while at Odeo, a podcasting startup. Odeo was later shut down and Williams took over Twitter, something Dorsey — who also happens to be the CEO of mobile-payment startup Square — said was like a “punch in the stomach” in an interview earlier this year with Vanity Fair.

As for Obvious Corp., that name has a long history, not all of which is positive: after Odeo failed to get much traction, Obvious was the vehicle Evan Williams and his partner Biz Stone used to buy back shares of the company from the venture capital investors who originally financed it. And it was also the vehicle that Williams used to effectively take control of Twitter from Dorsey in 2008, after it became clear that this idea of a short-form information network had some potential (there’s a Quora thread with some more detail on these early years from an Odeo staffer).

As for Jason Goldman, the former vice-president of product for Twitter, he and Evan Williams go back even farther: Goldman was the product lead at Google for Blogger — the early blogging platform that Evan Williams and partner Meg Hourihan created and later sold to the search giant in 2003 (Biz Stone later worked on Blogger at Google as well). Much like Twitter’s early years (which have been the subject of some controversy over the ousting of Dorsey and some other events at the time) the rise of Blogger also led to some criticism of Williams and his abilities as co-founder and CEO.

For his part, Williams has admitted in interviews that he is not really cut out to be a CEO, and that he has learned a number of important lessons from the creation and failure of Odeo. Presumably he plans to apply those to his new venture — and his partners likely know him as well or better than anyone at this point. As for what Obvious Corp. plans to do now, that much is still a mystery. All the Obvious website says is that the company “makes systems that help people work together to improve the world” and that it is devoted to “developing products that matter.”

For all their flaws and tortuous growth histories, there is no question that Blogger and Twitter have been part of a communications and information revolution unlike anything seen since the web was first invented. Will the Obvious team be able to match that kind of track record with their new project? One thing seems pretty obvious: they shouldn’t have any difficulty raising venture financing, if they want it.

Post and thumbnail photos courtesy of Wikimedia Commons.

   
   
With SkypeKit, Skype wants to be everywhere
June 28, 2011 at 5:23 PM
 

Jonathan Christensen, Skype's VP of Emerging Opportunities

Skype is opening up its development platform to all comers on Tuesday with the launch of its SkypeKit program. Skype wants to be everywhere so it can grow its usage and its user base. SkypeKit is its means to ubiquity, but will it work? Like many companies trying to build an app ecosystem, Skype has to balance making money with keeping developers and users happy.

Currently about 2,000 developers are part of the SkypeKit program, but now that it’s open to all, the more than 10,000 who signed up initially can join. The kit is optimized for all platforms and devices, and will allow folks to build Skype connectivity into their gizmos and services. With so many devices getting Internet access, folks are now only an app away from getting Skype video conferencing on their connected watch, for example.

Jonathan Christensen, Skype's VP of Emerging Opportunities, notes that so far the dominant use case for the SDK has been set-top boxes and bringing video calls to televisions. However, developers participating in the program will have to do so because their users will value and pay them for the functionality — there is no revenue share from Skype. “We don’t have a program for that right now,” said Christensen. As for encouraging developer efforts in areas of Skype that actually generate money for the company, such as Skype Out minutes or added video conferencing functionality, Christensen says a revenue share to act as an incentive might be a possibility.

Now that Microsoft said it will pony up $8.5 billion for Skype, a focus on revenue may not be as pressing an issue as it was when the company was planning its IPO. But its focus on growing users and getting developers on board to innovate using the core Skype functionality without thinking about how the SDK program will help revenue seems a bit short-sighted.

“We are increasingly diversifying our revenue streams, and the attraction of Skype to our users is beautiful Skype-to-Skype voice calls and video, which are free,” Christensen said. “And to the extent that more users are coming in we have more and more opportunity for revenue.”

And Skype clearly believes that developers (and the awesome new services they build which will require Skype) will be key to bringing in new users. The pressure to grow may also be behind Skype’s decision to add XMPP support to the most recent beta version of its Windows client, letting it interoperate with other IM networks. Skype needs to grow especially as it faces more competition from Facebook, Google, Apple and any number of other companies building out video conferencing and chatting capabilities into their services and devices.

Skype used to have the lock on that market thanks to its base of voice users that upgraded to video, but its users have also signed up with other networks that are now providing similar services, leaving Skype defending its turf. For example, I still use Skype, but I now also use Google Talk, Facetime or startups like Tango to video chat with friends and family (I don’t do this often however).

But in order to diversify those revenue streams Skype needs to keep users coming back. Christensen says, “One of the key pillars of the communications network is that it follows Metcalfe’s Law, and so to date with things like Facetime we’re talking about services with limited support for devices, while our strategy is to be on Android and everyplace else we can.”
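
Metcalfe's Law, which Christensen invokes here, holds that a network's value grows roughly with the square of its user count, because that is how the number of possible connections grows. A tiny illustration:

    # Metcalfe's Law: value ~ n^2, since n users can form n*(n-1)/2 pairwise connections
    def possible_connections(users):
        return users * (users - 1) // 2

    print(possible_connections(10**7))                                 # ~5e13 possible pairs
    print(possible_connections(10**8) / possible_connections(10**7))   # ~100x the pairs for 10x the users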

Looks like before Skype thinks deeply about making money, it wants to ensure it makes itself ubiquitous.

   
   
Yahoo Connected TV: Old and busted, or the new hotness?
June 28, 2011 at 5:00 PM
 

In many ways, Yahoo Connected TV seems a contradiction: On the one hand, it’s one of the oldest connected TV platforms around and so already has a decent install base. On the other, some of its existing consumer electronics partners seem to be interested in exploring new TV app development platforms. And then there’s this new broadcast interactivity feature, which promises new capabilities for consumers and revenue opportunities for broadcasters. But will it be enough to convince OEMs to stick with it? And are consumers actually interested in what Yahoo has to offer?

Yahoo Connected TV was one of the first connected TV platforms to gain wide adoption among consumer electronics manufacturers, getting its system embedded on TVs and other devices from companies like Vizio, Samsung and Toshiba. But it’s not stopping there: Yahoo TV is available on more than 8 million units that have been sold so far, and it expects that to grow to 16 million by the end of the year. And later this year D-Link will introduce a set-top box powered by Yahoo Connected TV to compete with streaming devices like Apple TV, Roku and the Logitech Revue.

But some of its existing CE partners have either introduced their own app platforms or begun experimenting with competing operating systems, like Google TV. Samsung, for instance, has invested heavily in growing its own TV app marketplace, while first Sony and then Vizio have agreed to put the Google TV OS on some of their broadband-capable TVs.

Despite all this, Russ Schafer, senior director of worldwide product marketing for Yahoo Connected TV, remained confident that his company’s platform would outlive a lot of the newer connected TV platforms. “I think those platforms will die over time,” he said. The reason? It all comes down to money.

While Google is betting on its ability to offer better universal search and discovery of content, as well as an Android-based app store, Yahoo’s big value prop to OEMs seems to be that it has already built out a full-fledged platform for advertising. And through its broadcast interactivity technology, Yahoo Connected TV aims to surface ads and additional information connected to content people are watching online.

Broadcast interactivity, which Yahoo showed off at CES, gives broadcasters and advertisers the ability to offer up detailed info and product offers in a sidebar that can be launched from the remote control. It can also tie in with other second screen devices. Theoretically that could let broadcasters give more context to their shows, with additional info about actors on the show they’re watching, loop in games and trivia and maybe even offer up some additional multimedia content.

And for advertisers, broadcast interactivity gives them the ability to target offers at viewers or allow consumers to request more information about products that appear on-screen. It’s basically about creating a whole new performance-based inventory that Yahoo believes could drive significant revenue to CE manufacturers that allow the ads to appear on their TVs.

But will that system click with consumers?

For some use cases, it makes sense. When watching the Home Shopping Network, for instance, being able to click a button and make a purchase from the TV beats calling in or looking for a featured product online. And clicking through ads to get a coupon sent to your phone could eventually resonate as a way to redeem on-air promotions.

But Yahoo may also have to improve the overall user experience to make this work. While the sidebar that comes up (thankfully) doesn’t obscure the action, like some connected TV apps do, the widget bar that runs along the bottom of the screen feels antiquated, and the search and navigation aren’t as good as one might like. There’s also a question of performance: NewTeeVee alum Chris Albrecht bought a Vizio TV with Yahoo’s platform built in and found it slow, difficult to navigate and prone to crashes.

Finally, there’s the question of whether Yahoo Connected TV as a development platform will be able to stick around long enough to fulfill its promise. Despite its target of 16 million connected devices by the end of the year, the industry seems to be shifting away from the platform. With many of its partners pursuing other options, and more flexible TV app platforms like Google TV popping up, Yahoo Connected TV might find it difficult to keep growing the number of TVs it’s available on, and the number of consumers advertisers can actually reach through the platform.

   
   
Wall Street's LinkedIn forecast: Sunny. Very sunny.
June 28, 2011 at 4:00 PM
 

Analysts at Wall Street’s major financial firms initiated wholesale coverage of LinkedIn on Tuesday, putting forth their opinions of the company’s future growth prospects and estimates for how the stock should perform. Their predictions for LinkedIn? In a word: Bullish.

At its May 9 initial public offering, LinkedIn’s stock priced at $45 per share, but the shares quickly surpassed that within seconds of its market debut. For the past couple of weeks, LinkedIn’s stock price has generally hovered between $65 to $75, giving the company an average valuation of around $6.5 billion — not bad at all for a firm that made $2 million in net income on $250 million in gross revenue last year.

But some of Wall Street’s most powerful analysts think the stock should be trading at significantly higher valuations. In reports issued Tuesday, Bank of America Merrill Lynch analysts priced LinkedIn’s 12-month price target at $92 per share; UBS analysts’ 12-month price target is $90 per share; Morgan Stanley’s 12-month target is $88; and JP Morgan’s 12-month price target is $85. It bears mention that Bank of America Merrill Lynch, Morgan Stanley, and JP Morgan all acted as underwriters for LinkedIn’s IPO; however, the SEC mandates that banks maintain total separation between their equity research and investment banking operations.

The Street heard the analysts’ predictions loud and clear. LinkedIn’s stock closed Monday at $76.38 per share, and on Tuesday the price popped: The stock opened at $81.40 and jumped to $86.03 within the first 40 minutes of trading. At 2 p.m. EDT Tuesday, the stock was holding steady at $84.06.

Why the bullish outlook? The analysts pointed to several key factors:

  • Big opportunities for user growth.
    With 100 million members, LinkedIn is far from reaching its saturation point, according to analysts. JP Morgan’s report reads, “LinkedIn’s 100 million member base implies just a 16% penetration rate of the worldwide professional market… We are projecting LinkedIn’s member base to reach more than 250 million by the end of 2015, which would suggest a 42% penetration rate off the current professional market.” (See the back-of-the-envelope check of this arithmetic after the list.)
  • Diversified revenue streams.
    Wall Street hates it when a company has all its eggs in one basket. All the analysts expressed satisfaction in the fact that LinkedIn gets its money from three distinct verticals: Hiring solutions (commissions on jobs sourced through LinkedIn), Marketing solutions (online ads), and premium subscriptions.
  • Online advertising is poised for a boom.
    And analysts see LinkedIn as well-positioned to benefit from it. The UBS report reads: “We are bullish on the online advertising opportunity for LNKD, and note that user time spent online is disproportionate to the overall online ad spend. Roughly 28% of time is currently spent online, but the online ad spend accounts for only ~13% of the total spend, representing a roughly $50B global opportunity.”
  • LinkedIn is generally destined for greatness.
    At least according to Morgan Stanley’s report, which waxed rhapsodic about the company’s potential: “Every once in a while, a company comes around that transforms an industry in such a way that investors have difficulty grasping just how big it may one day become. Amazon.com, the $85B book retailer; Google, the $150B blue link company; eBay, the $40B beanie baby company, and Netflix, the $13B DVD-by-mail company … we believe LinkedIn can be one of these companies.”
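
JP Morgan's penetration figures in the first bullet are easy to sanity-check. The rough math below uses only the numbers quoted above; the small gap to the cited 42 percent presumably comes from rounding in the 16 percent figure:

    # Back-of-the-envelope on the JP Morgan figures quoted above
    members_now = 100e6
    penetration_now = 0.16
    professional_market = members_now / penetration_now      # ~625 million professionals worldwide
    members_2015 = 250e6
    print(round(100 * members_2015 / professional_market))   # ~40 percent, near the cited 42%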

   
   
Skype adds XMPP support, interoperability coming next?
June 28, 2011 at 3:30 PM
 

Skype has quietly added XMPP support to the most recent beta version of its Windows client, according to a report from Skype Journal. Skype for Windows 5.5, which was released last week, added Facebook integration to the VOIP service. A look under the covers reveals that this is done through XMPP, the open-standards protocol that's at the core of Google Talk and supported by a number of other IM platforms.

Skype's most recent Windows beta allows its users to have IM conversations with Facebook users, as well as offering integration of Facebook contacts and news feeds. The folks at Skype Journal took a closer look at the application's network traffic, which revealed that it uses XMPP to communicate with Facebook's servers.

This type of integration could in the future be used to achieve interoperability with other IM platforms. Microsoft's own Windows Live Messenger currently doesn't support XMPP, but companies like Yahoo and AOL have been offering some support for the protocol. Facebook added XMPP support to its platform in 2009 to make its chat service more widely available as well.
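
For a sense of what speaking XMPP to Facebook's servers involves for an ordinary third-party client (as opposed to Skype's built-in integration), here is a minimal sketch using the SleekXMPP Python library. The account names are placeholders, and it assumes Facebook Chat's public XMPP endpoint at chat.facebook.com on port 5222:

    import sleekxmpp

    class FacebookPing(sleekxmpp.ClientXMPP):
        """Logs in over XMPP, sends one chat message, then disconnects."""
        def __init__(self, jid, password, recipient, body):
            sleekxmpp.ClientXMPP.__init__(self, jid, password)
            self.recipient, self.body = recipient, body
            self.add_event_handler("session_start", self.session_start)

        def session_start(self, event):
            self.send_presence()
            self.get_roster()
            self.send_message(mto=self.recipient, mbody=self.body, mtype="chat")
            self.disconnect(wait=True)

    if __name__ == "__main__":
        # Placeholder JIDs: Facebook exposed chat accounts as <username>@chat.facebook.com
        client = FacebookPing("alice@chat.facebook.com", "secret",
                              "bob@chat.facebook.com", "Hello over XMPP")
        if client.connect(("chat.facebook.com", 5222)):
            client.process(block=True)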

By far the biggest supporter of XMPP is Google. Not only is the company's IM service based on the protocol, but it's also one of the main financial backers of the XMPP Standards Foundation. Just last week, the company announced that it was transitioning elements of its VOIP technology to Jingle, a signaling protocol based on XMPP.

Speaking of VOIP: The fact that Skype has started to support an open-standards protocol doesn't mean that you'll see voice or video chat interoperability any time soon. Skype has been using its own proprietary encryption and communications protocols for its VOIP functionality, and it's unlikely that Microsoft would abandon these technologies in favor of Google-backed open standards.

   
   
Big Switch and the coming networking bonanza
June 28, 2011 at 3:00 PM
 

Guido Appenzeller (left) and Kyle Forster of Big Switch

Last week at Structure 2011, networking was a hot topic, not just because it’s a big business, but because, as virtualization runs rampant and the number of devices and servers out there increases, the underlying premises of networking have shifted. As Jayshree Ullal of Arista Networks pointed out in a chat with me, network traffic is now moving between servers, as opposed to between clients and servers, which changes the way networks are designed.

In addition, the proliferation of virtual machines and the need for agility are driving the search for technologies to help split networking hardware such as switches and routers from the software-derived actions such as load balancing and even allocating physical resources. One such method of doing this is via OpenFlow, a protocol pioneered at Stanford. But OpenFlow is just one aspect of this shift to network virtualization, and may not even become the enabling protocol behind it.

However, Big Switch Networks, a networking company created by Guido Appenzeller and Kyle Forster that launched at Structure 2011, hopes OpenFlow does become the basis for this shift. The company, which has raised $14 million from Index Ventures and Khosla Ventures since its founding in May 2010, wants to use OpenFlow to offer network virtualization. It’s hoping to offer controllers that allow for software-defined networking based on the OpenFlow protocol, and it’s taking a less enterprise-focused approach than rival firm Nicira (which is in stealth mode). Appenzeller worked to develop the OpenFlow protocol, so he is familiar with both the technology and the problem set.
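
To ground what an OpenFlow controller actually does: the switch forwards nothing on its own and simply obeys flow rules pushed down to it by software. The sketch below, written against the open-source POX research controller rather than anything Big Switch ships, turns every switch that connects into a dumb hub by installing a single flood-everything rule:

    # Minimal POX controller module: on each switch connection, install a rule that
    # floods every packet out of all ports. Run with: ./pox.py <this_module_name>
    from pox.core import core
    import pox.openflow.libopenflow_01 as of

    def _handle_ConnectionUp(event):
        msg = of.ofp_flow_mod()                                        # a new flow-table entry
        msg.actions.append(of.ofp_action_output(port=of.OFPP_FLOOD))   # action: flood out all ports
        event.connection.send(msg)                                     # push the rule to the switch

    def launch():
        core.openflow.addListenerByName("ConnectionUp", _handle_ConnectionUp)

Commercial controllers of the sort Big Switch and Nicira are building obviously do far more, from per-tenant virtual networks to failover and policy, but they talk to switches through this same narrow protocol.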

As more and more of these firms come out of stealth mode, we’ll see exactly what the new split in networking has to offer. But in the meantime, check out this panel from Structure with Big Switch’s Appenzeller, Nicira CTO Martin Casado and Dante Malagrinò, the CEO of Embrane, discussing the opportunity.

Watch live streaming video from gigaomstructure at livestream.com

   
   
Facebook's Open Compute project both friended and poked
June 28, 2011 at 1:45 PM
 

After launching an open server and data center design in April, Facebook is prepping for version 2.0 of its hardware, and huge server buyers are playing along. Within three months, a foundation supporting the program will be created, according to Frank Frankovsky, Facebook's director of hardware and supply chain. From Rackspace to major financial services companies, big hardware buyers are getting into Open Compute — testing the hardware and suggesting changes to help fuel its adoption.

In April, the social network teamed with big names from the hardware ecosystem and showed off two years of work developing a greener, leaner data center and server design for its Prineville, Ore., facility. But a week and a half ago, it invited engineers and folks from large IT shops to its campus to figure out how to improve that design and suggest modifications, which include doubling the compute in a server and the introduction of an open storage box.

The changes are outlined in a Facebook posting from Wednesday, but I also discussed the meeting with Frankovsky at our Structure 2011 event, as well as with Bret Piatt, director of corporate development from Rackspace, which presented on some of its findings on the original Open Compute server. While the testing process is still incomplete for Rackspace, the company says it’s encouraged that the next version of the server will have broader applicability, which was a complaint soon after the original launched.

“The Facebook team presented the 2.0 design for a broader set of use cases,” Piatt said. “They designed for a cloud use case, but we have public and private cloud, and server platforms to support.” For Rackspace, features such as connecting to storage and having an ability to run dedicated apps on the gear, as opposed to running servers with operating systems or even virtual machines on them, are important.

Part of my conversation with Piatt was about understanding how to think about the give and take between companies when discussing open hardware. Unlike with open source code, Open Compute will result in thousands of boxes delivered to whoever uses it, which means the adoption cycle is likely longer and the testing more rigorous. “When you order 10,000 servers, you end up with 10,000 servers,” noted Piatt. “It’s not like something you can just uninstall.”

Lew Moorman of Rackspace (far left) and Frank Frankovsky (second from right) discussing open everything at Structure 2011.

The end goal also isn’t really a free — or cheaper — end product that someone is able to innovate on if they so desire and don’t mind forking the code. Instead, the end goal is about smoothing the acquisition and operations of a lot of hardware. Much like some restaurants rely on Sysco prepared food to serve their customers and cut prep time, large IT buyers are hoping Open Compute lets them focus on the customer experience and perhaps a few signature dishes. Piatt explains that if Rackspace suddenly needs a few thousand servers today, it can blindside a vendor who may not have the production capacity set up.

However, with Open Compute, vendors can have lines dedicated to that specification and should theoretically be able to meet demand with less of a problem. It also helps large data center operators by keeping certain features and sizes standard across platforms, thus reducing the parts inventory that companies have to maintain. That also cuts down on employee training because systems administrators don’t have to be trained on a variety of new parts. It’s a lot easier to maintain one make and model of a car than to learn how to support and maintain 20 models.

Those advantages are attracting interest from beyond the cloud computing world. Frankovsky said several large banks are looking at contributing to the Open Compute specifications and said they may even come up with their own version of it to serve the financial services community. Many features would stay the same in order for the financial services firms to accrue the same benefits to their IT operations, but they may ask for a few modifications for their industry. Piatt compared such an effort to compiling software for different instruction sets — a software vendor keeps most of the program the same but has to make changes depending on the platform it is running on.

Of course, this brave new world of making servers a true commodity cuts the margins to the bone for large vendors, especially if some of their more lucrative clients in the financial services and cloud arenas adopt it. Now that Facebook is working on the second iteration of the Open Compute gear, keep an eye on Dell, HP and IBM for their input and reaction.

Related content from GigaOM Pro (subscription req'd):


   
   
Why Google Plus won't hurt Facebook, but Skype will hate it
June 28, 2011 at 1:00 PM
 

Google launched its much-anticipated social networking platform today. Dubbed Google Plus, the service may take its cue from social networking giant Facebook, but in reality it is about Google saving and enhancing its core franchise — Google Search. It is search (and, by extension, advertising) that made Google a company so big and influential that it has run afoul of the Federal Trade Commission.

At the time of Google’s founding, search was broadly defined as sifting through a directory of websites. As the web grew, search became all about pages. Google, with its PageRank, came to dominate that evolution of search.

Today, search is not just about pages, but also about people and the relevance of information to them.

Google’s senior executives — long dismissive of the importance of social signals to search — were contrite during their briefing earlier this week. “It is about time we have come to the realization,” said Bradley Horowitz, vice president of product at Google. “If you don’t know people, then you can’t organize the information for people.”

Google’s realization — however late — that it needs to use social, location and other signals to enhance its core search platform is welcome. “Google needs to understand these relationships and basically use those to make search better,” said Vic Gundotra, Google’s senior vice president for Social, in an hour-long briefing earlier this week.

Why? Because the internet (and the information on it) is expanding with such rapidity that there is no room for assumptions, and our systems need to adapt to this world of no (or, alternatively, infinite) assumptions. Google needs to adapt, and getting social and location signals is important for the company. Search is now search relevant to you, in the context of your world — and that is where Google Plus comes in.

What is Google Plus?

Is Google Plus a destination like Facebook.com? Is it a social network? Is it an identity play? The answer to those questions is yes and no. Google’s Gundotra said this is the first step in the company’s long social journey, one that is going to evolve.

Today, you can get to Google Plus by visiting a website – Google.com/plus. But it also travels with you across different Google web properties, thanks to a Google Toolbar. The toolbar is personal to you and allows you to share and send photos, videos, links or just simple messages. A notification icon informs you if others have shared stuff with you.

Google, Gundotra says, has leveraged its infrastructure to offer an array of services, and at the same time the company is attacking a noticeable Facebook shortcoming by offering granular privacy that average folks can understand. More importantly, it is trying hard not to be compared with Facebook.

Some of the Google Plus features:

In order to use Google Plus, you need to have a Google account, though that doesn’t necessarily mean you need a Google Mail account. Once you set up your Google account, you can use your address book to invite people to your network and use that as a starting point.

Circles: Google has come up with the concept of circles — you can create circles of contacts such as family, friends, work friends, former co-workers and so on. With these groups or circles you can define who gets to see what kind of updates. Facebook currently doesn’t offer an easy way to control who sees which parts of the life we share online. (A rough sketch of how this kind of circle-scoped sharing could work appears after this list of features.)

Hangout: This just might be the killer feature of the Google Plus effort. It is essentially group video chat done right. You click on the Hangout button and invite members of a certain group by sending them a notification. If no one is around, you can simply hang about, without much drain on the system, waiting for someone to show up. So theoretically I could invite all the members of my team GigaOM circle and have a quick video chat. In the demo at least, Hangout felt intuitive and easy to use (Google uses its own video codec and not Adobe Flash for this feature).

Huddle: This is a mobile group-chat service that is very much like Beluga, the fast-growing service that was snapped up by Facebook weeks after it launched and is now said to be part of a major new communications push by Facebook. I think this is a great little feature and frankly, if Google is smart, it will roll this out to all of its Google Apps enterprise customers.

Instant Uploads: Google has also come up with a new approach to mobile photos and videos, which it calls Instant Uploads. Take a photo or video and it uploads to your Picasa or YouTube account, and then you can share it via Google Plus with specific circles.

Sparks: This is a new feature that allows you to create topics of interest, use them as sources of information and then share them with different groups. For instance, I could share Top Gear updates with my “petrol head” friends. These “interest” or “topic” packs offer a lot of content and, not surprisingly, plenty of YouTube videos. Circles, Hangout and Huddle are about personal sharing and personal communication; Sparks, on the other hand, is devoid of that connection and sticks out like a sore thumb.
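
As promised above, here is a small, purely hypothetical sketch of how circle-scoped sharing might be modeled in code. It is not Google's implementation; the names (User, add_to_circle, share, visible_to) are invented for illustration, and the only point is that a post carries an audience derived from the circles it was shared with.

# A toy model of circle-scoped sharing, invented for illustration only.
class User:
    def __init__(self, name):
        self.name = name
        self.circles = {}          # circle name -> set of contact names

    def add_to_circle(self, circle, contact):
        self.circles.setdefault(circle, set()).add(contact)

    def share(self, text, circles):
        # A post is visible only to members of the circles it was shared with.
        audience = set().union(*(self.circles.get(c, set()) for c in circles))
        return {"author": self.name, "text": text, "audience": audience}

def visible_to(post, viewer):
    return viewer == post["author"] or viewer in post["audience"]

om = User("Om")
om.add_to_circle("family", "Mom")
om.add_to_circle("work friends", "Mathew")

post = om.share("Vacation photos", circles=["family"])
print(visible_to(post, "Mom"))     # True: she is in the 'family' circle
print(visible_to(post, "Mathew"))  # False: he only sees 'work friends' posts

The real service obviously has to do this across hundreds of millions of users, but that per-post audience is essentially what the circle model buys you over a single, flat friends list.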

Google Plus + Chrome + Android

A few months ago, I wrote about how Google could beat Facebook, pointing out that the battle would be won not on the web but on mobile.

I've always maintained Google has to play to its strengths – that is, tap into its DNA of being an engineering-driven culture that can leverage its immense infrastructure. It also needs to leverage its existing assets even more, instead of chasing rainbows. In other words, it needs to look at Android and see if it can build a layer of services that get to the very essence of social experience: communication.

However, instead of getting bogged down by the old-fashioned notion of communication – phone calls, emails, instant messages and text messages – it needs to think about interactions. In other words, Google needs to think of a world beyond Google Talk, Google Chat and Google Voice.

To me, interactions are synchronous, are highly personal, are location-aware and allow the sharing of experiences, whether it's photographs, video streams or simply smiley faces. Interactions are supposed to mimic the feeling of actually being there. Interactions are about enmeshing the virtual with the physical.

The ability to interact on an ongoing basis, anywhere and at any time, and to share everything from moments to emotions is what social is all about. From my vantage point, this is what Google should focus on.

I am glad to see Google is thinking along these lines and is building products with a mobile-first point of view, a concept that former CEO Eric Schmidt has often talked about.

While I was given a demo by the Google executives on a notebook computer, the heavy use of HTML5 makes Google Plus an experience that could easily work on Android tablets and Android phones. Instant Uploads, Circles, Huddle and Hangout can work on these mobile devices without much textual input, making them easy to use on touch-centric mobile platforms. Google is at the same time making Google Plus available as an app, for Android and for the iPhone platform, to make sure it gets the experience right.

Facebook Has Nothing To Worry About

I don’t think Facebook has anything to worry about. However, there is a whole slew of other companies that should be on notice. Just as Apple put several app developers on notice with the announcement of its new iOS 5 and Mac OS X Lion, Google Plus should give folks at companies such as Blekko, Skype and a gaggle of group-messaging startups pause. I personally think Skype Video can easily be brought to its knees by Google Plus’ Hangout. And even if Google Plus fails, Google could easily make Hangout part of the Google office offering.

One reason I think Facebook is safe is that it cannot be beaten with a unified strategy like this one. Theoretically speaking, the only way to beat Facebook is through a thousand cuts. Photo-sharing services such as Instagram can move attention away from Facebook, much like other tiny companies that bootstrap themselves on the Facebook social graph and then build alternative graphs to siphon away attention. Google could, in theory, go one step further: team up with alternative social graphs such as Instagram, Twitter and Tumblr and use those graphs to create an uber-graph.

Build it, But Will They Come?

In the past, I have been pretty skeptical of Google’s social ambitions, mostly because of the company’s DNA. Based on a briefing and a demo, I am not yet ready to change my opinion.

Google needs this social effort to work — it needs to get a lot of people using the service to create an identity platform that can rival Facebook Connect, and it needs those people to improve its search offering. Of course, Google’s biggest challenge is to convince people to sign up for yet another social platform, especially since more and more people are hooked into Facebook (750 million) and Twitter. I don’t feel particularly compelled to switch from Facebook or Twitter to Google, just as I don’t feel compelled to switch from Google to Bing for search.

I can easily see services such as Hangout and Huddle gaining traction, but will that be enough to attract hundreds of millions of people? Doubtful, though I am happy to be proven wrong, for it would surely be nice to have a counterbalance to Facebook.

Related content from GigaOM Pro (subscription req'd):


   
   
Sponsor post: Business basics: Back up your company's critical data off-site
June 28, 2011 at 12:59 PM
 

What is on the computer in your office? Invoices and transaction records? Maybe photo archives of past work or your inventory database. Taking a few minutes to set up a proper cloud-based backup solution from IDrive could easily save you thousands of dollars and hundreds of hours in disaster recovery.

IDrive's award-winning online backup software securely encrypts and stores your files in its state-of-the-art cloud, on an automatic schedule or continuously as you work. Unlike many other backup services, you'll also enjoy full access to your files from any standard web browser or via the IDrive smartphone app for iOS and Android – you can even send a file to anyone with an e-mail address. Now that's something different!

Installation and first-time setup are a breeze, and even during your initial backup you'll hardly notice IDrive working in the background. If disaster should strike your office, the IDrive software will help you quickly restore your data, and if you need extra support, our team is available by phone, live chat or e-mail. Customers can also have a USB drive shipped to them pre-loaded with their files for a quick restore of large amounts of data.

More than 750,000 users and businesses protect themselves with IDrive and use it to make their files instantly available whenever and wherever they need them. Personal plans start at $4.95/month and business plans start at $9.95/month; a free-for-life 5GB plan is also available. Learn more and sign up at IDrive.com.


   
   
Microsoft takes on Google Apps, finally launches Office 365
June 28, 2011 at 12:45 PM
 

At a press event in New York on Tuesday, Microsoft CEO Steve Ballmer officially launched Office 365, the Redmond software giant’s suite of online collaboration and office tools. It includes Office Web Apps and hosted versions of SharePoint Online, Exchange Online and Lync Online. It also has a feature set that aims to take on Google Apps for Business. But with a product that costs more than Google’s offering and that’s coming much later to market, will Office 365 be a success?

Office 365 is not Microsoft’s first attempt at offering this kind of service; it has previously offered hosted Exchange and SharePoint services with BPOS (Business Productivity Online Services). But by including Office Web Apps in Office 365, the company now has a much more rounded product that enables users to do their work anywhere, on any device, and to easily collaborate with others.

Office 365 vs. Google Apps for Business

One of Office 365's main advantages over Google Apps is the huge installed base of Office users. Office is entrenched in the majority of businesses worldwide, and Office 365 offers an easy pathway for those users to migrate to cloud collaboration while using familiar tools. Office 365 also has a greater range of features than Google Apps, incorporating office productivity (Office and Office Web Apps), collaboration and intranet tools (SharePoint Online), email and calendars (Exchange Online) and instant messaging and web conferencing (Lync Online).

Unlike some previous Microsoft releases, Office 365 works cross-platform, so it can be accessed equally via Mac and PC and on mobile devices — although there are reports that mobile access from some devices is limited. Office Web Apps, in particular, are an impressive suite of products, and while they aren’t complete cloud-based replacements for the desktop Office apps — they don’t offer the full range of functionality that the desktop apps do — Microsoft has obviously invested a lot of effort in making the user experience very similar. The interface is familiar, and documents look identical in Office Web Apps and in the desktop applications. By enabling seamless round-trip working between Office Web Apps and the Office desktop applications, Office 365 can also work when users are offline, something that can’t be said of Google Apps.

Of course, Google believes that its product is superior. On Monday, in a post titled “365 reasons to consider Google Apps” on the official Google Enterprise blog, Google Apps Product Manager Shan Sinha aimed a few barbs at Office 365, saying that it is designed for usage by individuals, not by teams; that its pricing is complex; and that Office 365 doesn’t have proven cloud reliability, while Google Apps has a record of 99.9 percent uptime. Some of Sinha’s points are debatable: Office 365 does enable co-editing and collaboration, for example, and Microsoft has plenty of experience in offering cloud-based services, even if Office 365 itself is new.

Easy migration to cloud productivity for existing Office users

With its higher price point, Office 365 might not tempt existing corporate users of Google Apps for Business away, particularly as migrating between the two services is unlikely to be straightforward. However, that’s probably not the market that Microsoft is aiming at. Rather, it wants to keep hold of the huge numbers of business customers with existing investments in the Office product line. For them, Office 365 is a well-designed product that offers an easy migration route to cloud-based office productivity at a reasonable price point with products that will feel very familiar to their users. I think that will make Office 365 a compelling proposition for many business customers, in particular smaller businesses that would like to offer their employees the ability to work and collaborate remotely using familiar Microsoft tools but don’t want to have to make an upfront investment in, and then maintain, their own SharePoint and Exchange servers.

Office 365 is available on a number of different plans, starting at around $6 per user per month for small businesses with fewer than 25 users; enterprise customers have access to plans that include dedicated support. For comparison, Google Apps for Business costs around $4 per user per month.
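
For a rough sense of what that gap means in practice, here is a back-of-the-envelope comparison using only the approximate per-user prices quoted above; the 25-person team is a hypothetical example, and real plan tiers and annual contracts will vary.

# Back-of-the-envelope comparison using the rough per-user prices quoted above.
# The 25-person team is hypothetical; actual plan tiers and contracts vary.
office365_per_user_month = 6.00    # small-business plan, approximate
google_apps_per_user_month = 4.00  # Google Apps for Business, approximate
team_size = 25

for name, price in [("Office 365", office365_per_user_month),
                    ("Google Apps", google_apps_per_user_month)]:
    annual = price * team_size * 12
    print(f"{name}: ${price:.2f}/user/month -> ${annual:,.0f}/year for {team_size} users")

# Office 365: $6.00/user/month -> $1,800/year for 25 users
# Google Apps: $4.00/user/month -> $1,200/year for 25 users

A few hundred dollars a year is unlikely to sway a shop that is already invested in Office, which is the point made above: Microsoft is pricing to keep its existing base rather than to poach Google's.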

Related content from GigaOM Pro (subscription req'd):


   
   
Is Tumblr the new Facebook or the new MySpace?
June 28, 2011 at 12:15 PM
 

Tumblr, the combination blogging platform and social network, has been growing rapidly of late — so rapidly that it is now racking up about 8.4 billion pageviews a month, according to a blog post from president John Maloney. One of the big drivers for this growth appears to be teenagers, who are using the site as a kind of combination of Facebook and Twitter, to share photos and other Internet “memes.” But will Tumblr ever figure out how to make money from this massive user base? Other social networks have grown just as large and still failed.

Hitting the 8-billion-pageview mark would be a milestone for any Internet company, since that puts Tumblr in the top 25 websites in the world, according to Quantcast. By comparison, the popular classifieds site Craigslist gets more than 20 billion pageviews a month. And Tumblr’s growth continues to accelerate: according to founder and CEO David Karp, the network posted more than 400 million pageviews in a single day last week, which amounts to roughly 5,000 every second. If that kind of pace continues, it would put the company at around 12 billion pageviews a month.
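
Those per-second and per-month figures follow directly from Karp's 400-million-a-day number; a quick sanity check, using only the figures cited in this post:

# Quick arithmetic check on the pageview figures cited above; the inputs are
# the numbers in the post, not independently verified.
daily_pageviews = 400_000_000               # "more than 400 million in a single day"
per_second = daily_pageviews / (24 * 60 * 60)
monthly_at_that_pace = daily_pageviews * 30

print(f"{per_second:,.0f} pageviews per second")           # ~4,630, i.e. roughly 5,000
print(f"{monthly_at_that_pace / 1e9:.0f} billion a month")  # 12 billion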

Of course, pageviews alone are not a great benchmark for websites, since they can be inflated in any number of ways. When it comes to actual visits by unique individuals (as measured by Quantcast’s tracking), Tumblr is seeing about 355 million a month. By comparison, Facebook gets almost that many every day.

In other words, Tumblr isn’t going to overtake Facebook any time soon. But the pace of its expansion is still incredible: the site has grown by more than 50 percent in the last couple of months alone, and its traffic is now double what it was just six months ago — and that’s despite a massive outage six months ago that had some wondering whether the network would be able to recover. The outage doesn’t seem to have caused even a blip in usage.

Tumblr President Maloney says that one of the big drivers behind the network’s rise is teenagers, and that can be seen in the Quantcast data as well. About 16 percent of the site’s users in the U.S. are between 13 and 17 years old, and more than half are between 13 and 34 years of age. My own anecdotal experience confirms this: for my two daughters, both of whom are in that key 13-to-17 age group, Tumblr has almost overtaken Facebook in terms of the role it plays in their online lives. My 17-year-old spends hours sharing photos and Internet memes with her friends, and “Tumbling” has become a verb in our house.

As I’ve written before, Tumblr seems to have found a sweet spot between traditional blogging platforms like WordPress (please see the disclosure below) and social networks or “micro-blogging” platforms like Facebook and Twitter. While it’s relatively easy to set up a WordPress blog, creating and using a Tumblr blog makes that process seem complex by comparison. But even more important than that is the Twitter-style “following” that the site allows, and the fact that content can be “reblogged” on your own site with a single click — both of which can drive content on the network to “viral” levels in the blink of an eye.
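
To see why that one-click reblog mechanic matters, here is a toy model, invented purely for illustration and not Tumblr's actual system, of how a reblog re-exposes a post to an entirely new set of followers at each hop.

# A hypothetical follow graph; every reblog shows the post to new followers.
followers = {
    "alice": ["bob", "carol"],
    "bob":   ["dave", "erin"],
    "carol": ["frank"],
}

def reach(author, max_hops=2):
    # Everyone who could see a post if each person who sees it reblogs it,
    # breadth-first, for a fixed number of hops.
    seen, frontier = set(), [author]
    for _ in range(max_hops):
        next_frontier = []
        for user in frontier:
            for f in followers.get(user, []):
                if f not in seen:
                    seen.add(f)
                    next_frontier.append(f)   # assume they reblog too
        frontier = next_frontier
    return seen

print(sorted(reach("alice")))   # ['bob', 'carol', 'dave', 'erin', 'frank']

Each extra hop multiplies the audience, which is the "viral levels in the blink of an eye" dynamic described above.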

This is why so many media outlets have begun experimenting with Tumblr blogs, including the New York Times (and GigaOM, which has started a blog we use to share interesting items we come across during the day). Newspapers like the National Post have seen incredible traffic from a single post that got reblogged and commented on thousands of times, and media advisor Steve Rubel recently nuked his WordPress blogs and moved everything to Tumblr to take advantage of the real-time nature of the platform.

All of this growth is wonderful for Tumblr, which was started by David Karp four years ago, when he was just 20 years old — but the big unanswered question is whether the network can actually bring in revenue to match that growth. Everyone wants to be the next Facebook, which many early observers doubted would ever find a way to make money and now brings in revenues estimated at $2 billion. But MySpace also grew to massive levels, with more than 76 million users at its peak, and was bought by News Corp. for $580 million, only to rapidly decline after it failed to figure out how to make money.

So far, Tumblr has experimented with selling custom themes and other features, but it hasn’t shown any signs of being able to turn on the revenue tap the way Facebook has — in fact, not that long ago Karp was saying he had no interest in putting ads on the network. With $40 million in funding, he may have the luxury of not having to worry about money for a while, but Tumblr will have to answer that question at some point or risk becoming the next red-hot growth story that fizzles.

Disclosure: Automattic, the maker of WordPress.com, is backed by True Ventures, a venture capital firm that is an investor in the parent company of this blog, Giga Omni Media. Om Malik, founder of Giga Omni Media, is also a venture partner at True.

Post and thumbnail photos courtesy of Flickr user Gabrie Coletti

Related content from GigaOM Pro (subscription req'd):


   
   
Silicon Valley should wake up to clawback culture
June 28, 2011 at 10:45 AM
 

There's been a storm around Skype ever since former employee Yee Lee let rip and accused the company's investors, Silver Lake, of screwing employees over in its $8.5 billion sale to Microsoft. Specifically, Lee was upset by the discovery that his stock options had suddenly become subject to "clawback" deals that made his stake in the company (which he had left before the sale) worthless.

With Silver Lake on the ropes, everyone has been jumping in to land their punches. Michael Arrington almost popped at the prospect of such sneaky clauses; Reuters super-blogger Felix Salmon got (rightly) angry about the way the fine print worked. All in all, this bare-knuckle brawl sends a simple message: clawbacks are bad, and they should be fought wherever possible.

A Silicon Valley state of mind.

But is it possible to see this any other way? Is it possible that these arguments about clawbacks are just a remnant of Silicon Valley's cash-rich, founder-happy culture? Evidence from elsewhere suggests that, at the very least, entrepreneurs here just don't know how good they've had it. Let me explain.

In the startup business, clawbacks are usually seen as a feature pushed by private equity, used by investors to exert pressure on staff and maximize returns. In fact, writing on the Financial Times techhub blog, Richard Waters characterizes them precisely this way — as a conflict between the culture of Silicon Valley venture capital and the clubby world of private equity:

If workers are tempted to cash in after a year or two to move on to the next opportunity – something more likely to happen when prices in the private market are rising quickly, as they are now – then options  contribute to an "exit strategy" culture, not one geared to building long-term value. There are always people who are tempted to "double dip", says Christos Cotsakos, the founder of ETrade and now chief executive of EndPlay.

Venture capitalists frown on this behaviour but haven't tried to prevent it. But private equity firms like Silver Lake, which led the Skype buy-out, clearly feel differently – hence the unusual and controversial clawback that caught out Yee Lee.

The truth is, however, that clawbacks are much more common than the reporting may suggest.

Scenes from a European startup.

For starters, they're seen as fairly typical procedure in Europe (where venture capital has traditionally been thin on the ground), and a large number of European startups have clawbacks written into their employment contracts. Of course, that doesn't mean it's right — in fact, the terms can often be onerous.

What is certainly very common across European businesses is the so-called "good leaver/bad leaver" provision, which ultimately weighs an employee's rights against the reasons they left the company. In general terms, a "good leaver" is somebody who ceases employment because of death, disability or unfair dismissal; a "bad leaver" is somebody who quits of their own accord, breaches their contract or is legally terminated.

Doug Monro, a serial entrepreneur in Britain who's currently working on job site Adzuna, has experience with companies based on both sides of the Atlantic. He told me that these provisions, which often amount to clawbacks, were a strong European phenomenon:

"In my experience more EU startups have clawback or ‘discretionary bad leaver’ than in US," he said, adding that the details of such clauses can be a "big issue for hires to look at."

They didn’t start the clawbacks.

But it's not just in European businesses that clawbacks are common: according to this BusinessWeek report from last year, fully 70 percent of America's largest companies say they have clawback provisions of some form. There are also plenty of examples of other restrictions on employees who leave the business — such as being forced to sell options within a few months of leaving. British computer researcher Lyndsay Williams, who spent 11 years working at Microsoft before being made redundant in 2007, says she was surprised when she realized she had to sell her options quickly.

"I had vested stock options that I was required to sell by Microsoft within three months of being made redundant," she told me. It turned out well, she says, since the stock value was only getting worse and it gave her money to invest in her own company, Girton Labs. "Given the very disappointing performance of Microsoft share price [being forced to sell] was not really a hardship."

And remember, too, that this isn't just about investors versus staff — it can be founder against founder. As British law firm Taylor Wessing argues in a briefing document on private equity deals, not having clawbacks or bad-leaver provisions can leave those who stay behind stuffed: "From the continuing founders' point of view, why should the departing founder get market value if he has left the company?"

And let's not imagine this is brand new, either. Two years ago Fred Destin, a partner at Atlas Venture who was based in Europe at the time (he's now in Boston), highlighted such clauses as a potential minefield for startups and founders. Although he said he "would not do a deal without some form of reverse vesting", he suggested the details are something that founders should watch carefully, and be prepared to negotiate hard over.

"Make sure there is a good leaver/bad leaver clause," he wrote. "You get fired for cause, you lose some. You decide to leave, you lose some. The company decides it does not want you around any more, you keep it."

That’s just the way things are.

So there you have it. Clawbacks are not new, not rare and not necessarily about screwing over founders. The real difference is that they are relatively new to the Valley, and so some people are being exposed to them for the first time.

The truth is that whether it's restrictions on bonuses paid to executives at failed banks, or a stick that encourages employees to stick with a business, clawbacks are less unusual than you may have been led to believe. In fact, it's worth considering that Silicon Valley is the exception rather than the norm — that its tendency towards founder-friendly terms means that it has been largely insulated from what's commonplace in the rest of the world. Are clawbacks right or not? That's for entrepreneurs and investors to argue between them, but perhaps they should start by realizing it's time to wake up to what's really going on out there.

Related content from GigaOM Pro (subscription req'd):


   
     
 
 
     
