Pity the Poor VC

Yes, pity the poor East Coast VC, far from the summit of Mount Web 2.0, forced to go homeless in the cruelest city of all.

The Observer's Max Abelson reports that venture capitalist/blogger Fred Wilson has gone to contract on selling his 55-foot-wide 1847 townhouse at 11 West 10th Street. No word on the price, but it was listed through Sotheby's for a whopping $37.5 million. Yowza!

Consider this my open invitation to any and all VC firms in search of a new partner. I'm OK with selling out.

Web 2.0 Summit: Hacked?

Dan “Technology Chronicles” Fost has the story of one plucky company (and their go-getting VC) end-running the gatekeepers of this week's Web 2.0 Summit in San Francisco:

So a lot of people who don't necessarily want to attend the sessions, or who couldn't get a ticket, can still show up and roam the halls and meet just about everyone they'd need to meet.

One company in particular is taking that concept to an art form. Mashery, led by CEO Oren Michels and investor Josh Kopelman, cleverly booked the Palace's Sonoma conference room as soon as the conference dates were announced.

“This is a guerilla launch,” Michels said. “We're not part of the conference.”

Instead, he had a room right at the center of the action, for “a fraction of the cost of a sponsorship.”

The room became a de facto party spot, with Mashery pouring 15 gallons of free margaritas for anyone who wandered in.

I dunno…is it really hacking or social engineering when 15 gallons of booze (Mash-aritas?) are applied to the situation? That's just shooting fish in a barrel.

Here's Mashery, in their own words (presumably written while sober). Cheers!

Web 2.0 Summit: Riya Launches Like.com

Facial recognition? Feh! Try fashion recognition. I suppose the idea is that the latter tack will aid in some financial recognition for Riya.

Visual search engine Riya has used the Web 2.0 Summit to launch their Like.com service, a visually-driven shopping search engine. The premise is pretty simple: when you see something you like (say, a handbag), Like.com does its best to find similar items. There are a number of ways into the search: you can text-search for something (“bag,” or “D&G”), you can browse their categories (for example, watches) and find an example of an item you like, or you can browse through their celebrity pictures (say I wanted to take my style cues from my virtual identical twin, Brad Pitt).

Like.com currently offers a few ways to refine those searches once you get closer to what you're looking for (such as focusing on the details that really matter, or the color of the item). It's nifty, as it stands, but not a home run. The way Riya's talking about their future plans, however, gives me some hope for their prospects. Dan “Between the Lines” Farber has more:

Like.com offers several search capabilities, including the ability to search by image instead of text; find items that have specific features, such as a watch bezel; find color variants of the item via a color picker; find clothing, shoes and accessories similar to those worn by your celebrities (Like.com includes 100,000 celebrity images); and in the near future the ability to upload photos. Like.com will also have a browser extension to initiate likeness searches from any site as well as pages to save searches and a recommendation engine. After launch Like.com will also have a cross-matching feature. “If you have a hat and want a shirt to go with it, you drag a slider and search on a new category,” Shah said.

The keys for me: matched cross-selling (i.e., “Show me shoes to go with that handbag”) and the ability to initiate likeness searches from any page on any site. Far more than image upload, that seems critical.

The use case for this kind of shopping has been around for a long time. I remember when Time Warner was conducting their interactive TV trials in Orlando back in the early '90s; one of the tired examples of interactive multimedia commerce that constantly got trotted out was the ability to freeze the show you were watching, highlight any item in, say, Jerry Seinfeld's apartment, and go shopping for it. The ability to take advantage of serendipity, impulse, and context will be important to Like.com's success.

I'm starting to speculate now (and I have no basis for this other than it seems pretty obvious to me), but another logical avenue for Riya to pursue, in addition to the browser plug-in, would be affiliate relationships with media outlets like People, Gawker, E! Online, etc. Those sites could benefit financially from driving buyers to the merchants in Like.com's stable, and Like.com would gain both relevant content and wide distribution for their search engine. The merchants, of course, would get the traffic. The celebrity news sites could provide Like.com with a properly structured and tagged image feed, allowing Like.com to keep their index relevant and fresh.
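Since I'm speculating anyway, here's a minimal sketch (in TypeScript) of what one entry in such a feed might look like. Every field name below is hypothetical; this reflects no actual Like.com or publisher format.

```typescript
// Hypothetical schema for one entry in a tagged celebrity-image feed.
// All names here are invented for illustration.
interface ItemRegion {
  category: "handbag" | "watch" | "shoes" | "clothing";
  // Bounding box (in pixels), so the indexer can crop the tagged item
  // out of the photo before computing its visual signature.
  box: { x: number; y: number; width: number; height: number };
  brandHint?: string; // optional editorial tag, e.g. "D&G"
}

interface FeedImageEntry {
  imageUrl: string;      // high-resolution source image
  publishedAt: string;   // ISO 8601 timestamp, so the index stays fresh
  celebrity: string;     // e.g. "Brad Pitt"
  regions: ItemRegion[]; // the items editors tagged in the photo
}

const entry: FeedImageEntry = {
  imageUrl: "https://example.com/photos/red-carpet-123.jpg",
  publishedAt: "2006-11-08T12:00:00Z",
  celebrity: "Brad Pitt",
  regions: [
    { category: "watch", box: { x: 410, y: 220, width: 60, height: 60 } },
  ],
};
```

The bounding boxes are the part that would matter most: they'd let Like.com compute a signature for the tagged item itself rather than for the whole photo.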

This is all possible because Riya has taken the hard road of automating visual search, as opposed to relying on human-supplied metadata (and, as my friend John Henson was fond of saying around the Opencola offices, “Real data is better than metadata.”). Mike “TechCrunch” Arrington notes how Riya's approach, difficult though it is, is unique:

There are lots of other image search engines on the web today. But all of them only take queries as text, and compare those text queries to the meta data attached to an image file. This data is notoriously thin, and companies like Google are resorting to using human labor to attempt to add descriptive keywords to images stored on their servers. Even specialty image search engines like Pixsy have fairly thin meta data for images. And all of the existing search engines allow only text for search queries.

The Like.com engine takes both text and images as queries, something no one else does. To return results based on an image query, Like.com compares a “visual signature” for the query image to possible results. The visual signature is simply a mathematical representation of the image using 10,000 variables. If enough variables are identical, Like.com decides the images are similar.
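Taken literally, that description suggests something like the following toy comparison (TypeScript again). The vector length, tolerance, and match threshold are all invented for the sketch; Riya hasn't published the actual algorithm.

```typescript
// Each image reduces to a fixed-length numeric vector; two images count
// as similar when enough components (nearly) agree. Toy values throughout.
type Signature = Float64Array; // e.g. 10,000 features per image

// Fraction of components that agree to within a small tolerance.
function similarity(a: Signature, b: Signature, tolerance = 0.01): number {
  let matches = 0;
  for (let i = 0; i < a.length; i++) {
    if (Math.abs(a[i] - b[i]) <= tolerance) matches++;
  }
  return matches / a.length;
}

// "If enough variables are identical... the images are similar."
function isSimilar(a: Signature, b: Signature, threshold = 0.8): boolean {
  return similarity(a, b) >= threshold;
}
```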

How much heavy lifting is involved? I throw to Farber for the facts:

The core technology is even more complex than face recognition technology, Shah said. Like.com crawls target merchant sites and retrieves the highest quality images. It takes about 20 seconds per image to preprocess, creating a visual signature and indexing the image.

Search results are returned in under a second–the server farm consists of 250 quad-core servers, each loaded with 16 to 32 gigabytes of memory. Like.com converts every picture into a visual signature, a 10-kilobyte vector image consisting of about 5,000 numbers. The “likeness” algorithm determines the order of results based on shape, color and pattern.

“We are extracting and computing the visual signatures and pulling out pieces for comparison,” Shah said. “The results will never be worse than a text search. We index all the metadata and even normalized some of it.” Currently, Like.com only indexes the merchant sites.

The soft goods vector images are more detailed than faces, which are encoded as 3-kilobyte vectors, and include about 40 elements, including shape densities, color histograms broken into quadrants and other properties, such as glossiness and sheen (analyzing color changes in the middle of objects).
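Just for fun, here's a rough guess (still TypeScript) at how a soft-goods signature might be assembled from the elements Farber lists: shape densities, per-quadrant color histograms, and surface properties like glossiness. The bin counts and layout are mine, not Riya's.

```typescript
const HIST_BINS = 16; // bins per color histogram (invented for the sketch)

interface SignatureParts {
  shapeDensities: number[];       // coarse shape/edge statistics
  quadrantHistograms: number[][]; // 4 quadrants, HIST_BINS color bins each
  glossiness: number;             // color change across the object's middle
}

// Concatenate the parts into the flat vector the likeness comparison uses.
function flattenSignature(p: SignatureParts): Float64Array {
  return Float64Array.from([
    ...p.shapeDensities,
    ...p.quadrantHistograms.flat(),
    p.glossiness,
  ]);
}

const example: SignatureParts = {
  shapeDensities: new Array(8).fill(0),
  quadrantHistograms: [0, 1, 2, 3].map(() => new Array(HIST_BINS).fill(0)),
  glossiness: 0.5,
};
console.log(flattenSignature(example).length); // 8 + 4*16 + 1 = 73
```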

So Like.com carries roughly 3x the data per item (10 KB vs. 3 KB) as Riya does for their facial recognition search.

Robert “Scobleizer” Scoble shares a few interesting facts about Riya's new Like.com service:

1) The URL cost $100,000. In the interview [Riya CEO Munjal Shah] explains how they bought it. It involved finding the guy who owned it, jumping a fence, and leaving a bottle of wine with a note on it (he wouldn’t answer his email).
2) Riya was pretty close to being sold to Google. If it had been, they never would have worked on this search engine. So, by getting turned down by Google, Riya came back with a much better business.
3) Just the jewelry set takes 20GB of RAM.
4) Munjal still believes in blogs, but for this launch Riya talked with fashion bloggers, and journalists outside the tech world like at People magazine. Why? Well, this site — in its current incarnation — will be most interesting to women and non-geeks. If you’ve looked at who participates here, it’s heavily male.
5) Why not keep working on face detection? Because they learned through user testing that they’d never be able to make it good enough. They found that by focusing on visual image searches they can get a much more satisfied user base.

Web 2.0 Summit: Google, Day One Roundup

I'm not there, but everyone else seems to be.

Search Engine Watch has a nice rundown of some of the coverage of the big interviews from the Web 2.0 Summit:

What have we got? YouTube's growth made it a necessary purchase. No, money's not set aside to cover YouTube legal claims. Yes, you can have your data if you want it, users. No, Google's not trying to take out Microsoft Office.

One very interesting tidbit from Google CEO Eric Schmidt's interview with John “Searchblog” Battelle was noted by Dan “Between the Lines” Farber:

“We build a very good targeting engine and a lot of business success has come from that. We run the company around the users–so as long as we are respecting the rights of end users and make sure we don’t do anything against their interest, we are fine,” Schmidt said. He noted that history has shown that the downfall of companies can be doing things for their own self interest. “We would never trap user data,” he said.

Schmidt was asked if users could get all of their search history and export it to Yahoo. “We would like to do that, as long as it is authenticated…. If users can switch, it keeps us honest.”

It's perhaps the best articulation yet of what “Don't be Evil” tangibly means to users and customers, and it speaks directly to the idea that Tim O'Reilly's been trying to articulate around “open (source) data.” It will only grow more important as the Google-envisioned era of searchable, shareable data in the cloud (presumably at least partly in the data centers they operate) becomes a pervasive reality.

Unless, of course, it's just lip service, and trapping your creation, collaboration, and search history becomes the new vendor lock-in. But that would be pretty evil. So far, it looks like we can take Google at their word. They've been good at providing programmatic and/or otherwise standards-based access to the data in their services for individual users (think POP access to Gmail, or OPML export from Google Reader). Letting you take your personalized search history with you would be a bold move on their part, and one that might not make sense if it were undertaken unilaterally (shouldn't Yahoo! and Microsoft reciprocate?), but it's heartening to hear Schmidt articulate the sentiment as clearly as he did.
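To see how low the switching barrier gets when data comes out in a standard format, consider the OPML case: a Google Reader subscription export can be slurped into any competing reader with a few lines of code. A browser-side TypeScript sketch, using the standard DOMParser (the function name is mine):

```typescript
// Pull the feed URLs out of an OPML subscription export. OPML stores
// each subscription as an <outline> element with an xmlUrl attribute.
function feedUrlsFromOpml(opmlText: string): string[] {
  const doc = new DOMParser().parseFromString(opmlText, "application/xml");
  return Array.from(doc.querySelectorAll("outline[xmlUrl]")).map(
    (node) => node.getAttribute("xmlUrl")!
  );
}
```

That's the whole lock-out: when the format is open, there isn't one.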

Also in Danny “SearchEngineWatch” Sullivan's roundup: paidContent's @ Web 2.0: Day One Highlights: Ad 2.0; Google CEO; Skype Content, and Valleywag's Web 2.0 Con: Liveblogging the “Conversation with Eric Schmidt”

CBS Digital Has a Self-Esteem Problem

Combine the agility of a Big Content giant with the depth of a Wall Street operator, and you get this:

The important thing is that we get confidence internally on a set interactive strategy that will immediately change and be organic as we grow. From there we can think about how we can execute on this as a team.

That's the new prexy of CBS Interactive, Quincy Smith, demonstrating many things:

  • What passes for insight on Wall Street
  • The decline of the English language
  • The clear fact that most business journos won't dig more than skin-deep when confronted with this kind of substance-free blather
  • Big Content's palpable desperation at figuring out how to compete in the digital world

Matthew Ingram savages this quote in the manner it thoroughly deserves.

MSN Music, Customers Zuned by Microsoft

With Microsoft's impending Zune launch, it's time to bid MSN Music adieu. According to the BBC, the MSN Music store will take the digital dirt nap the same day Zune is launched:

Microsoft has said it will stop selling music from MSN music from 14 November, when Zune goes on sale in the US.

But in a move that could alienate some customers, MSN-bought tracks will not be compatible with the new gadget.

If you've bought music from MSN Music, it'll still play in Windows Media Player 11, and on any device it has always played on, of course. If you're lucky, that device is a PlaysForSure-shaped object, because once November 14th rolls around, you'll have to get music for your Windows Media device elsewhere.

In order to continue to buy your music from Microsoft (hey, no laughing—I'm sure someone did), you'll have to buy yourself a Zune player, since the new stuff Microsoft's selling isn't PlaysForSure compatible. Perhaps that might be a good time to consider buying an iPod, too.

Everybody's been saying that Zune might not ding Apple's iPod franchise, but that it would hurt the ecosystem Microsoft tried to establish around Windows Media Player 11 and PlaysForSure. It appears everyone was correct, and that the first online music service (and, thus, their customers) to get Zuned is Microsoft's own.

Adobe Donates Tamarin Code to Mozilla Foundation

Frank Hecker, the Executive Director of the Mozilla Foundation, has a comprehensive post on this announcement, which has been burning up the blogosphere (kind of).

By now the press release has gone out announcing Adobe's contribution to the Mozilla project of open source code for their ActionScript Virtual Machine (AVM2), and Brendan has blogged about it.

Adobe's Flash player executes applications written in ActionScript, a programming language that (in its current version, ActionScript 3.0) is based on the ECMAScript language specification and is therefore a sibling to JavaScript. As part of Flash Player 9 Adobe introduced a new virtual machine (AVM2) for executing ActionScript applications; among other things, AVM2 features a Just In Time (JIT) compiler that can convert ActionScript bytecode (the form into which ActionScript is initially compiled) into native machine instructions for much faster execution of ActionScript 3.0 applications.

Adobe has now taken the code for that AVM2 virtual machine implementation and released it as open source through the Mozilla project as Tamarin. Adobe will continue to develop the Tamarin code, working with other developers from the Mozilla community, and will be using it as the basis of the ActionScript virtual machine in future versions of their own products. The Mozilla project will use Tamarin as part of future versions of SpiderMonkey, the C-based JavaScript engine used in Firefox and other applications, and will include it in future versions of Firefox (beyond Firefox 3) that are built using Mozilla 2 technology.

The upshot is that future versions of the JavaScript engine built into Firefox and other projects built around Mozilla Foundation code will have access to the Tamarin/AVM2 virtual machine, including the JIT compiler, meaning JavaScript-based applications will run much faster. The current SpiderMonkey virtual machine also converts JavaScript into bytecode, but doesn't then translate that bytecode into platform-specific machine code at runtime, so this will be a significant speed enhancement that users of JavaScript-intensive applications (such as Google Maps) will definitely feel.
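If the interpreter-versus-JIT distinction feels abstract, here's a toy contrast in TypeScript. This is a cartoon of the idea, not of SpiderMonkey or Tamarin internals: the interpreter re-decodes the bytecode on every run, while the "JIT" translates it once into a directly executable function (a real JIT emits native machine code; a generated JavaScript function stands in for that here).

```typescript
// A two-instruction stack machine: push a constant, or add the top two.
type Op = { code: "push"; value: number } | { code: "add" };

// Interpreter: walk the bytecode every single time the program runs.
function interpret(ops: Op[]): number {
  const stack: number[] = [];
  for (const op of ops) {
    if (op.code === "push") stack.push(op.value);
    else stack.push(stack.pop()! + stack.pop()!);
  }
  return stack.pop()!;
}

// "JIT": translate the bytecode once into a host-language function,
// then call the translated result directly on every subsequent run.
function jitCompile(ops: Op[]): () => number {
  const body = ops
    .map((op) =>
      op.code === "push"
        ? `stack.push(${op.value});`
        : `stack.push(stack.pop() + stack.pop());`
    )
    .join("\n");
  return new Function(`const stack = []; ${body} return stack.pop();`) as () => number;
}

const program: Op[] = [
  { code: "push", value: 2 },
  { code: "push", value: 3 },
  { code: "add" },
];
console.log(interpret(program));    // 5, decoded op-by-op each call
console.log(jitCompile(program)()); // 5, after a one-time translation
```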

Many people are taking great pains to point out, of course, that this does not represent the open-sourcing of the Flash player. As one Flex engineer puts it: [link courtesy John Dowdell]

This is a major contribution from Adobe to the open-source community, but let me try to clarify what it is and what it isn't. The code being open-sourced is for the core AS3 language, not for anything specific to Flash. The contributed engine is able to execute a program that uses core classes of the language like Array, Date, RegExp, and XML. It is not able to execute a program that uses Flash-specific classes like Sprite, TextField, SharedObject, or URLLoader. In particular it supports no Flash graphics.

Mozilla will use this engine by adding browser-DOM classes such as Window, Document, Form, Anchor, etc., which are the "domain objects" that a browser manipulates, in the same way that Flash uses this engine by adding classes for its domain objects such as Sprites. Once this is done, webapp developers will be able to use AS3/ES4 as a fast, type-checked, object-oriented "JavaScript" if they want to.
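To make "fast, type-checked, object-oriented JavaScript" a little less abstract: the flavor is roughly what TypeScript offers today. The snippet below is TypeScript standing in for AS3/ES4 syntax (the two are similar, not identical), and it deliberately sticks to the core-language classes the contributed engine covers: Array, Date, and RegExp, with no Sprite, TextField, Window, or Document in sight.

```typescript
// Type annotations let the VM (or compiler) check and optimize ahead of
// time, instead of discovering types dynamically at every call.
class Post {
  constructor(public title: string, public published: Date) {}

  matches(pattern: RegExp): boolean {
    return pattern.test(this.title);
  }
}

const posts: Post[] = [new Post("Tamarin lands in Mozilla", new Date())];
const hits = posts.filter((p) => p.matches(/tamarin/i));
console.log(hits.length); // 1
```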

So, beyond faster JavaScript performance in the future (which, admittedly, is a gross simplification of this whole thing, but is, after all, what users will actually feel as a tangible outcome), what's the significance of this? What, for example, does Adobe gain? Hard to say for sure, but I imagine their strategy of breaking down the barrier between local and network resources will be served by this move. Another way of looking at Adobe and the Mozilla Foundation working together would be as a defensive maneuver against Microsoft's strategies for rich internet applications (RIA). Anything Adobe can do to shore up open standards for RIA will help when Microsoft tries to woo developers into building their applications with proprietary client-side technologies tightly bound to Windows Presentation Foundation/Everywhere.
