
Monday, July 2, 2012

Google Glass Team: ‘Wearable Computing Will Be the Norm’


By Steven Levy

Even though I followed Google’s I/O Conference from across the country, the event made it obvious that a company created with a strict focus on search has become an omnivorous factory of tech products both hard and soft. Google now regards its developers conference as a launch pad for a shotgun spread of announcements, almost like a CES springing from a single company. (Whatever happened to “more wood behind fewer arrows”?)

But the Google product that threatened to steal the entire show probably won’t be sold to the public until 2014. This is the prosthetic eye-based display computer called Project Glass, which is coming out of the company’s experimental unit, Google[x]. Announced last April, it was dropped into the conference in dramatic fashion: An extravagant demo hosted by Google co-founder Sergey Brin involved skydivers, stunt cyclists, and a death-defying Google+ hangout. It quickly attained legendary status.

Even before people got to sample Glass, it was popping their eyes out.

Google wouldn’t provide a date or product details for Glass’ eventual appearance as a consumer product — and in fact made it clear that the team was still figuring out the key details of what that product would be. But Google made waves by announcing that it would take orders for a $1,500 “explorer’s version,” sold only to I/O attendees and shipped sometime early next year. Hungry to get their hands on what seemed to be groundbreaking new technology, developers lined up to put their money down.

Meanwhile, I just as hungrily bit at the opportunity to do a phone interview with two of the leaders of Glass. Google originally hired project head Babak Parviz from the University of Washington, where he was the McMorrow Innovation Associate Professor, specializing in the interface between biology and technology. (One relevant piece of work: a paper called “Augmented Reality in a Contact Lens.”)
The other Glass honcho, product manager Steve Lee, is a longtime Google product manager, specializing in location and mapping areas. Here is the edited conversation.

Wired: Where are you now with Glass as compared to what Google will eventually release?

Babak Parviz: Project Glass is something that Steve and I have worked on together for a bit more than two years now. It has gone through lots of prototypes and fortunately we’ve arrived at something that sort of works right now. It still is a prototype, but we can do more experimentation with it. We’re excited about this. This could be a radically new technology that really enables people to do things that otherwise they couldn’t do. There are two broad areas that we’re looking at. One is to enable people to communicate with images in new ways, and in a better way. The second is very rapid access to information.

Wired: Let’s talk about some of the product basics. For instance, I’m still not clear whether Glass is something that works with the phone in your pocket, or a stand-alone product.

Parviz: Right now it doesn’t have a cell radio; it has Wi-Fi and Bluetooth. If you’re outdoors or on the go, at least for the immediate future, if you would like to have a data connection, you would need a phone.

Steve Lee: Eventually it’ll be a stand-alone product in its own right.

Wired: What are the other current basics?

Parviz: We have a pretty powerful processor and a lot of memory in the device. There’s quite a bit of storage on board, so you can store images and video on board, or you can just live-stream it out. We have a see-through display, so it shows images and video if you like, and it’s all self-contained. It has a camera that can collect photographs or video. It has a touchpad so you can interact with the system, and it has gyroscopes, accelerometers, and compasses for making the system aware in terms of location and direction. It has microphones for collecting sound, it has a small speaker for getting sound back to the person who’s wearing it, and it has Wi-Fi and Bluetooth. And GPS.

This is the configuration that most likely will ship to the developers, but it’s not 100 percent sure that this is the configuration that we will ship to the broader consumer market.
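
Glass’s software stack wasn’t public at the time of this interview, but the sensor package Parviz lists (gyroscopes, accelerometers, compass) is the same one stock Android already exposes, so a hypothetical companion app gives a rough sense of what reading it looks like. A minimal sketch, assuming plain Android APIs rather than any real Glass SDK:

import android.app.Activity;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.Bundle;

// Hypothetical sketch: reading the motion sensors Parviz describes via
// Android's standard sensor API. No public Glass SDK existed in 2012.
public class HeadTrackingActivity extends Activity implements SensorEventListener {
    private SensorManager sensors;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        sensors = (SensorManager) getSystemService(SENSOR_SERVICE);
    }

    @Override
    protected void onResume() {
        super.onResume();
        // Register for the sensor types the device reportedly carries.
        register(Sensor.TYPE_GYROSCOPE);
        register(Sensor.TYPE_ACCELEROMETER);
        register(Sensor.TYPE_MAGNETIC_FIELD); // the "compass"
    }

    private void register(int type) {
        Sensor s = sensors.getDefaultSensor(type);
        if (s != null) {
            sensors.registerListener(this, s, SensorManager.SENSOR_DELAY_UI);
        }
    }

    @Override
    protected void onPause() {
        super.onPause();
        sensors.unregisterListener(this); // save power while not in use
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // event.values holds rad/s (gyro), m/s^2 (accel) or microtesla
        // (compass): enough to keep a display aware of head direction.
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
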
Wired: How much does it weigh?

Lee: It’s comparable to a pair of sunglasses. You can stack three of these up and balance a scale with a smartphone.

Wired: What was your thinking when you embarked on the project, and how did that thinking evolve?

Parviz: We did look at many, many different possibilities early on. One of the things that we looked at was very immersive AR [Augmented Reality] environments — how much that would allow people to do, how much it could come between you and the physical world, and how distracting that could be. Over time we really found that particular picture less and less compelling. As we used the device ourselves, what became more compelling was a type of technology that doesn’t come between you and the physical world. So you do what you normally do, but when you want to access the technology, it’s immediately relevant — it can help you do something, help you connect to other people with images or video, or help you get a snippet of information very quickly. So we decided that having the technology out of the way is much, much more compelling than immersive AR, at least at this time.

READ MORE HERE

Friday, June 29, 2012

Google Now

By Sarah Perez

Google Now, the smart personal search assistant announced yesterday at Google I/O, has now come online. Well, the landing page for the service has come online, that is. The new site introduces the key aspects of Google Now, which arrives in Google’s next mobile operating system, Android 4.1 (aka Jelly Bean): its ability to track flights, keep an eye on traffic and your calendar, check sports scores and weather, suggest places nearby, and more.

The feature, accessed by swiping up from the bottom of the homescreen, has already been referred to as a “Siri killer” by some Android fans because of its ability not just to assist you, but to proactively alert you to new information based on your needs. One example Google showed off in yesterday’s demo was a flight search, which would later surface a card with flight alerts and delays as they occurred in real time. In another example, Google learned which sports teams you liked based on your search history and could then alert you to upcoming games and scores. In another, you could see suggested places to eat or shop as you walked down the street.
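
Google has not published an API for these cards, so the sketch below is purely illustrative (every type and rule in it is invented), but it captures the inversion described here: recent activity goes in, unrequested cards come out.

import java.util.ArrayList;
import java.util.List;

// Hypothetical model of the proactive-card flow; none of these types
// correspond to a real Google API.
public class CardEngine {

    // A signal inferred from user activity, e.g. a flight-number search.
    static class Signal {
        final String kind;
        final String detail;
        Signal(String kind, String detail) { this.kind = kind; this.detail = detail; }
    }

    // A card pushed to the user without an explicit query.
    static class Card {
        final String title;
        final String body;
        Card(String title, String body) { this.title = title; this.body = body; }
    }

    // The key inversion: the engine scans recent activity and decides
    // what to surface, rather than waiting for a query.
    public List<Card> cardsFor(List<Signal> recentSignals) {
        List<Card> cards = new ArrayList<Card>();
        for (Signal s : recentSignals) {
            if ("flight_search".equals(s.kind)) {
                cards.add(new Card("Flight " + s.detail,
                        "Delay and gate updates will appear here."));
            } else if ("team_search".equals(s.kind)) {
                cards.add(new Card(s.detail + " scores",
                        "Upcoming games and live scores, pushed automatically."));
            }
        }
        return cards;
    }
}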

However, the biggest piece of Google Now is that the information comes and finds you, not the other way around. This is a key difference between how Siri operates today and what Google is promising. Of course, you as the user are in control of the experience and can choose which cards and alerts you see. It’s opt-in, which goes a long way toward dispelling the potential “creepy” factor here. It’s not as if Skynet has just come online. (I think.)

The idea for this type of search-without-the-search technology, if you will, has been in development for some time. In 2010, then-CEO (now chairman) Eric Schmidt spoke of a “serendipity engine” as the future of Google search. “We want to give you your time back,” Schmidt said at the time. Google Instant was the first step toward that goal, but Google Now takes a giant leap. At the IFA conference in Berlin, Schmidt described the experience that is today’s Google Now, talking about how phones could spout off random facts as you walked around town, or how they could inform you of the weather while understanding the natural language of human speech. He called this idea a new age of “augmented humanity,” where computers work for us.

Unfortunately, for the time being, that new age will only be available to a precious few – those who buy or can upgrade their Android-based devices to Jelly Bean. But much of what Google Now offers could be bundled into an Android or even iOS (!) app using the platforms’ push notifications feature. Hopefully that is in the works, too.
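
As a rough illustration of that delivery path: once a pushed update reached such a hypothetical app, surfacing it would take only a few lines of Android’s standard notification API (the card text here is invented):

import android.app.Notification;
import android.app.NotificationManager;
import android.content.Context;

// Sketch: surfacing a server-pushed update as a standard Android
// notification, the mechanism the paragraph above suggests.
public class CardNotifier {

    public static void show(Context context, String title, String body) {
        Notification n = new Notification.Builder(context)
                .setSmallIcon(android.R.drawable.ic_dialog_info)
                .setContentTitle(title)   // e.g. "Flight UA 1234"
                .setContentText(body)     // e.g. "Delayed 25 minutes"
                .build();                 // build() needs Jelly Bean (API 16);
                                          // Android 8.0+ would also need a
                                          // NotificationChannel.

        NotificationManager nm = (NotificationManager)
                context.getSystemService(Context.NOTIFICATION_SERVICE);
        nm.notify(1, n); // fixed id, so newer updates replace the old card
    }
}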

Friday, June 1, 2012

Could we INHALE the movies of the future? Scientists encode moving pictures into a gas - and inspire a YouTube song

By Rob Waugh

One young man was inspired by the lingo of the University of Maryland’s paper, especially the storage of images in the atomic memory, and wrote a song that he performs in a YouTube video clip.

As yet, there are not many practical uses for the technique, which stores information in tiny vials of rubidium by beaming light into a 20cm-long tube.

To play the images back, the magnetic field is flipped, the control beam is turned back on, and the atoms evolve in the opposite direction, re-emitting the stored light.
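
The article doesn’t name the storage protocol, but the flipped-field playback matches the standard “gradient echo” picture of this kind of memory (our reading, not the paper’s wording). In that picture, the field gradient makes each slice of gas at position $z$ precess at a different detuning $\eta z$, so the stored coherence $S$ dephases:

$$ S(z,t) \propto S(z,0)\, e^{-i \eta z t}, \qquad t < T, $$

and the slices rapidly get out of step, so no light comes out. Flipping the gradient ($\eta \to -\eta$) at time $T$ unwinds the phase:

$$ S(z,t) \propto S(z,0)\, e^{\, i \eta z (t - 2T)}, $$

so at $t = 2T$ every slice is back in phase and the gas re-emits the stored pulse.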

The point? There is one, beyond simply creating a new storage medium, and presumably inspiring George Lucas to re-release the Star Wars films in gaseous form.

The gas can store ‘quantum’ information, and once the technique is refined, it could be a crucial building block for the computers of the future.


‘The big thing here,’ said Lett, ‘is that this allows us to do images and do pulses (instead of individual photons), and it can be matched (hopefully) to our squeezed light source, so that we can soon try to store “quantum images” and make essentially a random-access memory for continuous-variable quantum information.

‘The thing that really attracted us to this method, aside from its being pretty well-matched to our source of squeezed light, is that the ANU group was able to get 87% recovery efficiency from it, which is, I think, the best anyone has seen in any optical system, so it holds great promise for a quantum memory.’

WATCH VIDEO HERE