If you’ve ever used a Wii controller or played an iPhone game, you’re familiar with the kind of technology that Hillcrest Labs makes, and they make some of the best. In this interview, we talk about their decision to open-source their library and the benefits this brings back from their developer community.
- Freespace products and the air pointer market
- Bringing order to the expanding world of digital content in the living room
- Building new devices for human-computer interaction
- Distributing open source libraries to spur innovation
- The math and algorithms behind motion tracking and control
- The Loop: a next-generation pointing device
Scott Swigart: To get us started, can you please take a moment to introduce yourselves?
Chad Lucien: Sure. I’m the vice president of Freespace products here at Hillcrest Labs, and my responsibility is the general business management of the Freespace division. Hillcrest Labs has two different product groups. Ours is centered around air pointing and motion control technology, and the other business unit is centered around our application software platform for consumer electronics devices.
I’ve been in this role for about 18 months, but I’ve been at Hillcrest for just over five years, in a mix of strategy and business roles.
With respect to the Freespace division, as the general manager, I’m responsible for setting the direction of the business unit, setting the product strategy, and engaging with our customers and partners at various levels.
Parag Sheth: I am the vice president of corporate marketing at Hillcrest Labs. I have been at Hillcrest for about three years now. I’m primarily responsible for marketing, but I also run business development in Europe.
Scott: Give us a little bit of background about what the Freespace products are.
Chad: Freespace is a six degrees of freedom motion-control solution. The primary application that we have been marketing toward is air pointing remote control. We have customers such as Universal Electronics, which has designed and is in production with a couple of different remote controls that use the technology. They’re marketing their remote controls to cable, satellite, and IPTV operators as well as consumer electronics companies.
We also license that technology to Logitech, and so Freespace is being used in the MX Air mouse. Freespace itself is effectively motion-sensing software that interacts with off-the-shelf MEMS inertial sensors and creates one of the best air pointing solutions available on the market.
It’s incredibly accurate, and it’s very stable from a user perspective. As a user, you can use a device enabled with Freespace in any orientation. So if you’re lying down on the couch using a Freespace remote control to watch television, you don’t necessarily have to point it at the screen or hold it in any particular position.
Whenever you move your hand up towards the ceiling, the cursor will move up on the screen, and every time you move left, it’ll move left on the screen. You can be holding it upside down, and it will still work the same way.
Freespace is also licensed by other companies that use the technology in their products, including Kodak, which uses Freespace in its Kodak HD Theater Player, and ZillionTV. Freespace is also used in our own Loop pointer, an in-air mouse for the television, which we should make sure to discuss at more length.
Scott: Talk about some use cases. I think Wii remotes really made this type of thing mainstream. It seems like before that, this technology was a little more special-purpose.
You mentioned cable operators rolling it out to their customers, so I could see it being used to navigate menus and things like that, but talk a little bit about some of the things that somebody like me might be using it for versus maybe some of the more exotic uses.
Chad: Absolutely. The real impetus for developing the technology was that we saw that there was going to be a need, which has since played out, to control vast libraries of content on your television set sitting in your living room.
As the team here thought through what the consumer interaction with those large content libraries was going to be, we realized that the traditional up/down/left/right control mechanism for remote controls, coupled to text lists and hierarchical, grid-based menuing systems, was not going to give consumers a good experience.
When you start to present tens of thousands or even hundreds of thousands of different pieces of content, you need to optimize the efficiency of the interaction between the user and the content source.
We chose pointing coupled to a visual interface as the optimal method. That’s because with visual imagery, you can actually put much more information on a screen at one time than with text.
Imagine if you put 50 DVD covers on a screen. The amount of information you’re conveying, relative to the eight or 10 text-based titles a typical VOD interface shows at most, is quite significant.
But if you put 50 different selectable items on a screen and try to wade through them arrow by arrow, click by click, you may need a dozen different button presses to get to one particular piece of content.
With a pointer abstracting that interaction, one simple movement and a single click allow a user to access any piece of content on the screen. We really see that as the optimal way to bring Internet-delivered and video-on-demand content into the living room.
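The efficiency argument can be made concrete with a quick back-of-the-envelope calculation (an editor's illustration, not Hillcrest's figures): on a 5 x 10 grid of 50 items, arrow-key navigation from the top-left corner averages several presses per selection, while pointing always takes one movement and one click.

```python
# Average number of remote presses to reach a random item on a grid with
# arrow keys: row moves + column moves + one press to select. Pointing,
# by contrast, is always one movement and one click.

def avg_arrow_presses(rows: int, cols: int) -> float:
    total = sum(r + c + 1 for r in range(rows) for c in range(cols))
    return total / (rows * cols)

print(avg_arrow_presses(5, 10))  # 50 on-screen items, e.g. 50 DVD covers -> 7.5
```

So even with a favorable starting position, the arrow-key scheme averages seven to eight presses per selection on a 50-item screen, versus one click for pointing.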
Scott: I can relate to that. Spelling out the name of a show on a TiVo, you have to go up, down, left, right, letter by letter, and it’s very cumbersome.
Like you said, the home entertainment experience is moving in the direction that, year after year, there’s more and more content at your fingertips available through your TV. There’s really an explosion in the amount of stuff that you have to navigate.
Parag: Right, and the other part of that is the advent of Internet content coming to the television. Because Internet content is largely nonlinear, having an up-down button on the remote would not be sufficient. The presence of many different kinds of Internet content plays a big role in making pointing a more appropriate solution for the TV screen.
Scott: Are there other, more exotic uses? I am thinking about the computing platforms in the movie “Minority Report,” where people are directly interacting with a lot of information. Are there any specific case studies that you can talk about?
Chad: There’s a company called Kopin that has announced a product called Golden-i, which is a head-worn computing device running a full PC inside. They’re using Freespace as a module embedded into the headset as a head tracker. The user looks into a monocular display, and they can actually use their head movements to pan around the display, as one application example.
There have been several other companies that are interested in integrating our modules into other wearable computing systems. One area where we’ve seen a lot of interest is the military, which wants to build wearable computers for Special Ops, where people may not have their hands free to touch a computing device.
They’re using multiple motion trackers on the lens so the wearer can interact with the computer and translate hand signals, for example, back to the rest of their group. We’ve seen plenty of interest in other areas, as well, including the medical community and the fitness community.
Once you have a full six-axis inertial tracking system, there are a lot of things that you can do with that information. Our product strategy is to make integrating the technology into any target device as easy as possible.
One of the ways that we sell Freespace, as I mentioned, is as a module, which is actually about the size of a US quarter. You can purchase that module and integrate it into your own device and software.
Scott: It looks like you’ve developed an open source library to make it easier to develop against your hardware technology. Talk a little bit about that piece of it.
Chad: Our business model is to license the Freespace technology, so the more remote controls, game controllers, wearable computers, medical devices, etc. that are produced using our technology, the better off we are. Initially, we launched Freespace as a pointing solution, and it was, effectively, a mouse replacement. It’s relatively easy from a development standpoint to integrate a USB HID mouse into any range of platforms.
But, over the last several years, we’ve added a lot of new functionality to that. For example, we’ve added linear acceleration, angular velocity, and angular position measurements. We’ve also enabled the ability to remotely upgrade firmware in the field.
These additional functions are not specified within the standard USB HID profiles, so we’ve created this open source library to include all of the unique Freespace-oriented functions, as an enabling measure.
Any developer who’s used to working with libraries for Linux or drivers for Windows platforms should be able to take that information, incorporate it into their driver environment, and quickly integrate it into their systems.
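As a rough illustration of the kind of extended motion report such a library exposes beyond mouse XY data, the sketch below unpacks a hypothetical packed report in Python (the field layout, names, and units here are the editor's assumptions for illustration, not the actual Freespace report format or API):

```python
import struct

# Hypothetical packed motion report: three 16-bit linear accelerations and
# three 16-bit angular velocities, little-endian. This layout is an
# illustration only, NOT the actual Freespace HID report format.
MOTION_FMT = "<6h"

def parse_motion_report(raw: bytes) -> dict:
    ax, ay, az, wx, wy, wz = struct.unpack(MOTION_FMT, raw)
    return {"accel": (ax, ay, az), "angular_vel": (wx, wy, wz)}

# A synthetic report, as a test harness might produce one:
raw = struct.pack(MOTION_FMT, 10, -20, 981, 1, 2, 3)
report = parse_motion_report(raw)
```

The point is simply that the open-source library saves developers from reverse-engineering and hand-parsing vendor-specific reports like this one.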
Scott: Why did you take an open source route versus releasing a proprietary SDK for developers to use, which is probably a more conventional approach, in many ways?
Chad: We started to see an increase in the interest from companies doing lots of different and unique things. I think Kopin’s a perfect example; we had never really envisioned a device like their Golden-i product.
We felt that releasing this into the open source community would enable us to harness and capture the benefit of developers who are out there creating new things that we had not yet conceived of.
As with most open source projects, our hope is that this will give our customer base access to a lot of the creative thinking in the broader development community. We hope that if somebody brings up a new application for Freespace, they’ll go ahead and contribute that code back, and the next customer will be able to pick up on it and use it.
We are also seeing that the device maker and the application software producer are generally not the same company, so helping application developers integrate with various devices was interesting to us.
Scott: It sounds like you’re saying two things. First, you can’t envision every use case, and letting your customers get in there and tinker with the code lets them either enhance it or even do something entirely new with it. Open source gives them the ability to get in there and tinker, and hopefully, if what they’re doing is a general purpose use case, they contribute it back. It makes it into the library, and everybody benefits.
The other piece is that makers of general-purpose hardware aren’t necessarily going to be building the really innovative applications. A lot of that work is going to be done by other people using that hardware. In that scenario, open source gives you a way for those developers to work freely against these devices, regardless of who the ODM is in the middle.
Chad: That’s absolutely right. The thing I’d add is that we don’t want developers to have to rely on Hillcrest to incorporate new features into an SDK. If there’s a company or an independent developer who has an application idea for the PC, and they think it would be a great application to use with their Loop pointer that they just bought on Amazon or the Hillcrest website, then we don’t want to stop them from creating that application.
If only 100 people are using that application, that’s still good for us. But if our SDK is expected to support every nuanced application that somebody may conceive of, then we don’t really have a way of knowing which are going to be the winners and which aren’t, because this is really a new marketplace.
We’d rather the world have development freedom, to avoid inadvertently dismissing the one application that turns out to be the biggest of them all.
Scott: This is off on a tangent, but what’s the resolution of these devices? What’s the smallest movement that they can detect?
Chad: From a user standpoint, it’s within two pixels, which is obviously a very small degree of movement. The interesting thing about that question is that we have a software algorithm that actually monitors your hand tremor and removes that tremor from the signal.
We have a unique way of doing that so it doesn’t have a negative impact on latency. It’s not just a standard filtering approach, but actually removing the tremor from the signal. So if you hold your hand straight out in the air, you’re going to see the cursor remain dead still, even though your hand actually might be moving a centimeter or so in each direction.
But when you actually move your hand a centimeter in each direction on purpose, you’ll find that the cursor will move with it. We’re able to distinguish between intended and unintended motion.
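Hillcrest's tremor-cancellation algorithm is proprietary and, as Chad notes, more sophisticated than plain filtering. Purely to get a feel for separating intended from unintended motion, here is a naive dead-band filter (an editor's sketch, not the Freespace algorithm):

```python
def dead_band(samples, threshold):
    """Suppress motion samples smaller than `threshold`; pass larger ones.

    A crude stand-in for tremor removal: tiny involuntary oscillations
    produce no cursor motion, while deliberate movements pass through.
    (Unlike the approach Chad describes, a plain dead band distorts
    small intentional movements near the threshold.)
    """
    return [0 if abs(s) < threshold else s for s in samples]

tremor = [1, -1, 2, -2, 1]    # small involuntary jitter, arbitrary units
gesture = [15, 20, 18, 12]    # deliberate hand movement
```

Running `dead_band(tremor, 5)` zeroes every sample, while `dead_band(gesture, 5)` passes the gesture through unchanged.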
Scott: That must be a thorny mathematical problem to solve.
Chad: Especially as the majority of screens today are HD resolution, you’ll see with other pointing devices in the marketplace that the cursor jitters a lot when you’re holding your hand out. With ours, the cursor’s dead still on the screen, and it really tracks precisely to your movements. It’s that sort of combination of technologies that creates the ultimate user experience.
Scott: Does that same algorithm work for head tracking versus hand tracking?
Chad: Yes, it does. We take the raw motion data from standard, off-the-shelf MEMS inertial sensors (gyroscopes, angular rate sensors, and accelerometers), and the Freespace software runs it through a significant level of processing to remove anomalies and inconsistencies in the data and calibrate it for various circumstances. It then produces a motion report that’s based on HID. That processed motion data can be used to produce whatever application you want.
We supply XY coordinate data for folks who just want to create a mouse replacement, but as I mentioned before, we also provide linear acceleration, angular velocity, and angular position data, which can be used for motion-based gaming, head tracking, and other body tracking applications.
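As a toy example of how angular-velocity output becomes a mouse replacement, the sketch below integrates yaw and pitch rates into cursor deltas (the gain, sample period, and axis conventions are illustrative assumptions, not Freespace's actual transfer function):

```python
def angular_rates_to_cursor(samples, dt=0.01, gain=100.0):
    """Integrate (yaw_rate, pitch_rate) samples in rad/s into a cursor
    (x, y) pixel delta. Gain, sample period, and axis conventions are
    illustrative only, not the actual Freespace transfer function."""
    x = y = 0.0
    for yaw_rate, pitch_rate in samples:
        x += yaw_rate * gain * dt     # turning right moves the cursor right
        y -= pitch_rate * gain * dt   # pitching up moves the cursor up
    return round(x), round(y)

# Ten samples of a steady rightward turn at 1 rad/s:
delta = angular_rates_to_cursor([(1.0, 0.0)] * 10)  # -> (10, 0)
```

A mouse replacement only needs this XY mapping; gaming and head-tracking applications consume the underlying acceleration and angular-rate data directly.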
Scott: You’ve had an SDK and an API for a while, but what’s new in your recent open source announcement?
Chad: We have bundled all of our new features into a single library that’s available to the public and works across various versions of Windows, Mac, and Linux. We also have a tool kit that we’re extending across all of those operating systems that will enable you to do many different things.
Scott: What’s been the initial feedback for the open source release?
Chad: So far, it’s been very positive. We’ve had a couple of companies working with the library for two months or so, and it’s been helpful for folks who want to go beyond a mouse replacement.
For example, there’s a set-top box company that’s partnered up with a gaming software company to produce a set of set-top box games targeted toward the IPTV world. They rely on some of this extended data and use Freespace to access it much more quickly than they would if they had to write the libraries from scratch.
We’re seeing a majority of interest from companies that are looking to do more than just pointing. Fortunately, there’s a lot of interest in motion-based gaming, and once you have a six-axis or five-axis motion sensing solution available for a set top box or a TV, it’s relatively easy to add gaming applications as well.
Scott: Do you have any closing thoughts about interesting things that we haven’t talked about?
Chad: We haven’t talked about the Loop, which is effectively an in-air motion remote control for Internet video and personal media content in your living room. As more people start to connect PCs to their televisions, both permanently and on a temporary basis, we’ve found that they have difficulty navigating those systems from the couch.
People try to cobble together a solution using wireless keyboards and wireless mice on the arms of their couches, and it gets tedious. We’re seeing many more content-oriented sites launch applications and web presences that are geared toward the ten-foot living room experience.
Hulu is a great example; they have an application that works fantastically on the TV, and when you sit back on your couch with your Loop, you can just point and click and navigate through it very easily.
YouTube has a sister site called XL, which is a ten-foot version of the YouTube site. So again, we’re starting to see more content being presented in a way that’s suitable for the living room, and the Loop is effectively a controller that makes accessing it easy.
Scott: What’s your impression about the rate at which this is moving from early adopter technophile to more of a mainstream scenario?
Chad: For the last few years, I’d say it’s gone slower than expected. Where we’ve seen the real ramp up is this year, in 2009. It’s a combination of a lot of different things, but largely it has to do with the availability of content.
Sites like Hulu and YouTube are making it easy to browse video in your living room, Netflix is offering streaming movies, and there are a ton of Flash games on the Internet that we find kids, in particular, using.
The predominant user is still the early adopter techie, but we have a lot of examples and a lot of experience with families connecting up the laptop to download a movie from Netflix, or kids connecting their parents’ laptop to the TV to go to Club Penguin and play some games and chat with their friends.
We’re seeing it start to hit the mainstream, and I think there’s still a little way to go before it’s a fully mainstream product. I’d say the last six or nine months have been really interesting.
Scott: It sounds as if we caught you guys at a good time, then. Well, thanks for taking the time to talk today.
Chad: Thank you.