Archive for the ‘Mixed Reality’ Category
The Laster SeeThru is lightweight (just under 2 oz.) wireless augmented reality (AR) eyewear. When you wear the SeeThru, information about your surroundings pops up without disrupting your normal field of vision. The information changes depending on where you look. For instance, if you’re looking at a mountain chain, information about each peak can pop up alongside the landscape as you take it in. This kind of contextual information gives you a better awareness of your surroundings. There’s no looking up or down at tiny screens in the corner of your glasses with the SeeThru. Look the world straight in the eye, and the SeeThru will support you, seamlessly.
The Laster SeeThru is the first genuinely wireless augmented reality eyewear. Your smartphone acts as the SeeThru’s processor, and the two devices connect wirelessly via Bluetooth.
The SeeThru offers unrivaled AR applications with fully patented optical see-through technology. Augmented reality contextual information is overlaid directly onto the object you’re looking at without any image distortion, thanks to the SeeThru’s transparent lens. Compare this with other AR devices, where contextual information is usually displayed on a separate intermediary screen over a separate video capture.
With up to 8 hours of battery life, the SeeThru is the best way to experience AR all day. LASTER kept energy use low by using only Bluetooth to communicate with the smartphone that produces and transmits AR content. This architecture reduces not only the SeeThru’s energy use, but also its overall cost.
And to protect privacy, LASTER decided not to include a camera or recording capabilities in the SeeThru (no spy glasses here!). Instead, the SeeThru’s AR capabilities and tracking are supported by 10 built-in location and GPS sensors.
To provide all of that AR contextual information, LASTER has embedded the most accurate sensors on the market (3 gyroscopes, 3 accelerometers, 3 compasses) and uses your smartphone’s processor to determine your location and what you are seeing.
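To get a feel for what those sensors buy you, here is a minimal sketch (not LASTER’s actual code) of the classic tilt-compensated compass: the accelerometer supplies roll and pitch, and the magnetometer reading is rotated back into the horizontal plane to yield a heading. A real headset would fuse the gyroscopes in as well for smooth, low-latency tracking.

```python
import math

def tilt_compensated_heading(ax, ay, az, mx, my, mz):
    """Estimate compass heading (degrees, 0 = magnetic north) from one
    accelerometer reading (m/s^2) and one magnetometer reading (uT).
    Assumes x forward, y right, z down (NED-style body axes)."""
    # Roll and pitch from gravity, as sensed by the accelerometer.
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    # Rotate the magnetometer vector back into the horizontal plane.
    mxh = mx * math.cos(pitch) + mz * math.sin(pitch)
    myh = (mx * math.sin(roll) * math.sin(pitch)
           + my * math.cos(roll)
           - mz * math.sin(roll) * math.cos(pitch))
    return math.degrees(math.atan2(-myh, mxh)) % 360.0

# Level device facing magnetic north (field points forward and down):
print(tilt_compensated_heading(0.0, 0.0, 9.81, 22.0, 0.0, -40.0))
```

With the device level and facing north the heading comes out at 0°; rotate it to face east (the field now lies along the device’s -y axis) and the same formula returns 90°.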
Get It On Kickstarter! http://kck.st/19tDbMV
Innovega’s wearable transparent heads-up display, enabled by iOptik contact lens technology, delivers mega-pixel content with a panoramic field-of-view. This high-performance and stylish eyewear is perfectly suited for the enjoyment of immersive personal media. The first part of the video is a CGI compilation provided by CONNECT, San Diego and the second part is actual footage through our system.
iOptiks are contact lenses that enhance your normal vision by allowing you to view virtual and augmented reality images without the use of any bulky apparatus. Instead of oversized VR helmets, digital images are projected onto tiny displays in full color that sit very near to your eye.
iOptik lenses enhance your normal vision within the confines of your actual eye via the contact lens; the resulting effect allows for very real, immersive 3-D large-screen images.
Of course it isn’t just 3-D images that iOptiks can project. Innovega says that the applications for iOptiks go beyond simple movie viewing. While the micro-display can be occluded to allow for highly immersive 3-D images similar to what you would experience at the movies, it can also be used for 3-D gaming. You will even be able to utilize a “transparent display for augmented reality applications”.
The iOptik lens by Innovega was demonstrated in 2012. These contact lenses use nanotechnology that, when combined with a special set of glasses, lets the wearer focus up close on a heads-up display projected on the glasses while still seeing the distance clearly. They can also be used to deliver full-field 3D or a 360-degree gaming experience.
In this video, Randall Sprague, CTO of Innovega, explains how this device works and potential applications.
2014 CES: Innovega Staff Wear Mega-pixel Panoramic Eyeglasses
Designers break media-bottleneck by using modern contact lenses
SEATTLE, WA., January 6, 2014 — Innovega Inc., developer of full field of view HUD eyeglasses, announced today that its staff will be wearing prototypes of its mega-pixel eyewear at its booth at 2014 CES. Steve Willey, Innovega CEO, explains, “at last year’s CES event we demonstrated new eyewear optics that offered to the wearer a clear and simultaneous view of both their personal digital media and of their immediate surroundings (http://youtu.be/-_sdoaemQ-k). The big news for 2014 is that our team has succeeded in advancing the platform from feasibility demonstration to wearable, contact lens-enabled, full-function, mega-pixel eyewear. Though 2013 represented an exciting launch of ‘wearable technology’ and ‘the Internet of things’, neither will gain traction without development of powerful user interfaces. Innovega staff will demonstrate our ability to fill this need by wearing the industry’s first rich-media eyeglasses at Booth # 70103 in the Venetian Hotel.”
The Innovega iOptik™ platform provides wearers a ‘virtual canvas’ on which any media can be viewed or application run. The prototypes will feature up to six times the number of pixels and forty-six times the screen size of mobile products that rely on designs limited by conventional optics. Our optics deliver games that are truly “immersive”, movies that mimic IMAX performance, a multi-tasking dashboard that incorporates five or more typical screens – all while simultaneously providing the wearer a safe and clear view of their environment.
Innovega provides second-generation components, core technology and reference designs that enable its OEM customers to develop new generations of high-performance, digital eyewear. Its novel iOptik™ architecture improves comfort and styling by removing all of the bulky and heavy focusing optics from the eyewear. Its application of a modern soft contact lens yields an immediate panoramic field of view that enables immersive entertainment or benefits from multiple, active windows, simultaneous with a continuous view of the wearer’s real world. Innovega’s use of conventional, transparent and stylish eyeglasses eliminates the social barrier that traditional wearable displays have created. Innovega maintains offices in Seattle, WA, and San Diego, CA.
Source: Innovega Inc. Contact: Steve Willey (425) 516-8175
Ever wished your computer could respond to your thoughts? Good news — it can. Get ready to leap into a new world with Tobii EyeX. Adding eye tracking to the action makes things fast, fun and totally intuitive. You control games like you’re in them. You zoom where you look. Text scrolls as you read. You are always in the right place.
Experience computer interaction with eye tracking by Tobii. This video shows some of the core interactions, as well as some experiences that are yet to be developed.
Tobii and SteelSeries team up to bring gamers the world’s first eye tracking gaming gear. Be first in creating the future of gaming with eye tracking.
Eye tracking increases the bandwidth between the gamer and the game, allowing gamers to do more at the same time, which also creates a richer gaming experience. Add an extra aiming mechanism, remove the interruption of the game play by creating easier access to menus and commands, or make games with complex controls easier to learn.
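As a rough illustration of what an “extra aiming mechanism” could look like, here is a toy aim-assist function (an assumption on my part, not Tobii’s or SteelSeries’ actual API): when the player’s gaze point lands near the crosshair, the crosshair is nudged toward where the player is looking.

```python
import math

def assist_aim(aim, gaze, snap_radius=80.0, strength=0.5):
    """Nudge the crosshair toward the gaze point.

    aim, gaze -- (x, y) screen coordinates in pixels.
    snap_radius -- only assist when the gaze is this close (pixels).
    strength -- fraction of the remaining distance to close per call.
    Returns the new crosshair position.
    """
    dx, dy = gaze[0] - aim[0], gaze[1] - aim[1]
    if math.hypot(dx, dy) > snap_radius:
        return aim  # gaze too far away: leave stick aim untouched
    return (aim[0] + strength * dx, aim[1] + strength * dy)

# Gaze 40 px to the right of the crosshair: move halfway toward it.
print(assist_aim((100.0, 100.0), (140.0, 100.0)))
```

Keeping the stick as the primary input and letting gaze merely bias it is what avoids the “everything snaps to where I glance” problem.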
Get the Tobii EyeX dev kit now. http://www.tobii.com/en/eye-experienc…
Stay tuned. Follow us on Twitter, Facebook and Google+
A movie Written and Directed by K-MICHEL PARANDI
Producers: James Lawler & K-Michel Parandi
Executive Producer: Virgil Price
Co-Executive Producer: Lauren Beck
Production Design & Concept Artist: Ben Mauro – Design art work: K-Michel Parandi & Ben Mauro – Costume Production Design: Julien Richard
Dialogue: K-Michel Parandi & Jack Coulton
Music and Sound Design: Pascal Bonifay (AOC/BOC)
First Assistant Director: Etan Harwayne-Gidansky
Second Assistant Director: Ramde Serolf
Assistant Producer: Paul Jarret
Unit Production Manager: Adam Benlifer
Production Coordinator: Dann Ramirez
Director’s Assistant: Louis Papaloizou
Director Of Photography: Ray Flynn
Cast: NYPC Max Kaminsky: Justin Campbell – NYPC Parker: Chris Beetem – Bodyjacker: Nathan Owen – Young NYPC cop: Tommy Walker – NYPC hungry cop: Mike Falcon – Waitress: Kim Allen – Rami: Roberto Lopez
Thief: Louis Paploizou – MPF Narc: Toby Wilson
Camera Operators: K-Michel Parandi & Ray Flynn
First Assistant Camera: Violetta D’Agata
Second Assistant Camera: Christopher Bye
DIT: Drew Ravani & Stephen Dirkes
Aerial Photography: Marcin Nadolni & Toby Wilson
Steadicam: Amar Ioudarene
Editing Assistants: Max Smith & Tom Klane
Sound Design: Pascal Bonifay & Fabrice Smadja
Audio: AOC/BOC (M. Letaconnoux – S. Weinberg – L. Jokiel – B Mora – M. Singer)
Voice Talents: Kate Clark – Billy-Bob Thompson- Roberto Serrini – Kim Bonifay – Mia Bonifay
Storyboard: Andrew Wendel
Art Dept: Nick Tong – Brian Rzepka – Nola Denett – Nicole Eure
Wardrobe: Marina Lelchuk
VFX by Hectic Electric Amsterdam
VFX Producers: Mark Kubbinga & Patty Veestra
VFX Supervisor: Robbert Lubken
Post Production Services by Moon Dog Edit – New York, in Association with Violet Creative
Colorist: Blasé Theodore
Gaffer: Raina Oberlin
Best Boy Electric: Matt Kessler
Second Electric: Noah Chamis
Third Electric: Brendon Swift
Forth Electric: Albert Phaneuf
Driver / Swing: Rebekka Bjornosdottir
G&E Intern: Deanna Covello – Jack Buckley
Key Grip: Stratton Bailey
Best Boy Grip: Will Gottlieb
Third Grip: Adam Barbay
Forth Grip: Matt Garland
Rig Gaffer: David Duktus
Sound Department: Brian Flood – Oliver Rush
Stunt Coordinator: Roberto Lopez
Stunts: Luciano Acuna – Kenny Wong
Associate Producer: Ray Flynn
Production Assistants: Anthony Salvatori – Curtis Yarlborough – Victor Trejo – Christopher Duchene – Pierre Tissot – Grady Daub – Chelsea Moore – Angel Martinez – Ben Budde – Aldo Rodriguez – Chris Gautsh – Benjamin Budd – Angel Paredes
Drivers & Production Assistants: Patrick Chen – Mikhail Chernikov – Stewart Resmer – Alexander Bragg – Stephen Mitchell – Aido Rodriguez – Ryan Hawk
Still Photographer: Simon Briand
Special Thanks To: Channing Tatum – Reid Carolin – Sandy Morhouse – Rory Haines – Sohrab Noshirvani – Micah Sherman – Hoke Hokansen – Jill McDermid – Rafael Childress – Jon Darman – Brian Zingale – Remi Liebert
Director’s cut. April 2013.
STREET ART & AUGMENTED REALITY BY GEC-ART & HUB09
Italian artists GEC-ART and HUB09 have created a new project combining Street Art and Augmented Reality. HUB09’s augmented reality app lets you frame the street art with your smartphone to see it come to life in unexpected ways… Interesting indeed!
New technology from Center of Nanotechnology and Molecular Materials holds promise in thermoelectrics
When Wake Forest graduate student Corey Hewitt (Ph.D. ’13) touches a two-inch square of black fabric, a meter goes berserk. Simply by touching a small piece of Power Felt – a promising new thermoelectric device developed by a team of researchers in the Center for Nanotechnology and Molecular Materials – he has converted his body heat into an electrical current.
Comprised of tiny carbon nanotubes locked up in flexible plastic fibers and made to feel like fabric, Power Felt uses temperature differences – room temperature versus body temperature, for instance – to create a charge.
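The physics behind that charge is the Seebeck effect: a thermoelectric material develops a voltage proportional to the temperature difference across it. The numbers below (per-layer Seebeck coefficient, layer count, internal resistance) are hypothetical placeholders rather than measured Power Felt values; the sketch just shows how the output scales.

```python
def felt_output(n_layers, seebeck_v_per_k, delta_t_k, internal_ohms):
    """Open-circuit voltage and maximum deliverable power of a stacked
    thermoelectric fabric, treated as n_layers couples in series.

    seebeck_v_per_k -- per-layer Seebeck coefficient (V/K), assumed value
    delta_t_k       -- temperature difference across the felt (K)
    internal_ohms   -- internal resistance of the stack (ohms)
    """
    v_open = n_layers * seebeck_v_per_k * delta_t_k
    # Power into a matched load peaks at V^2 / (4R).
    p_max = v_open * v_open / (4.0 * internal_ohms)
    return v_open, p_max

# 72 layers, 25 uV/K each, 10 K body-to-room difference, 1 ohm stack:
v, p = felt_output(72, 25e-6, 10.0, 1.0)
print(v, p)  # 0.018 V, 81 microwatts
```

Microwatt-level output is why the team is layering more, thinner nanotube films: voltage grows linearly with layer count, and power with its square (at fixed resistance).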
“We waste a lot of energy in the form of heat. For example, recapturing a car’s energy waste could help improve fuel mileage and power the radio, air conditioning or navigation system,” Hewitt says. “Generally thermoelectrics are an underdeveloped technology for harvesting energy, yet there is so much opportunity.”
The research appears in the current issue of Nano Letters, a leading journal in nanotechnology. Potential uses for Power Felt include lining automobile seats to boost battery power and service electrical needs, insulating pipes or collecting heat under roof tiles to lower gas or electric bills, lining clothing or sports equipment to monitor performance, or wrapping IV or wound sites to better track patients’ medical needs.
“Imagine it in an emergency kit, wrapped around a flashlight, powering a weather radio, charging a prepaid cell phone,” says David Carroll, director of the Center for Nanotechnology and Molecular Materials and head of the team leading this research. “Literally, just by sitting on your phone, Power Felt could provide relief during power outages or accidents.”
Cost has prevented thermoelectrics from being used more widely in consumer products. Standard thermoelectric devices use a much more efficient compound called bismuth telluride to turn heat into power in products including mobile refrigerators and CPU coolers, but it can cost $1,000 per kilogram. As with silicon, researchers expect the price to fall as production volume grows, and think Power Felt could someday cost only $1 to add to a cell phone cover.
Currently Hewitt is evaluating several ways to add more nanotube layers and make them even thinner to boost the power output. Although there’s more work to do before Power Felt is ready for market, he says, “I imagine being able to make a jacket with a completely thermoelectric inside liner that gathers warmth from body heat, while the exterior remains cold from the outside temperature. If the Power Felt is efficient enough, you could potentially power an iPod, which would be great for distance runners. It’s pretty cool to think about, and it’s definitely within reach.” Currently Wake Forest is in talks with investors to produce Power Felt commercially.
If virtual reality creates a rich experience within a world that may not exist, then augmented reality (AR) creates a rich experience within the world that actually does. AR overlays relevant digital content on physical environments in real time so you can interact with them in ways that are more interesting and more powerful. Register now to hear how AR is creating cool new applications and exciting new business cases in areas ranging from consumer retail to travel to entertainment and more.
Read More: Digital Media SIG Event: Augmented Reality Gets Real | mitforumcambridge.org.
Google is working on a set of HUD (heads-up display) glasses. They are now in the prototype phase and will enable users to tap into Google’s cloud services through augmented reality. Here, 9to5Google explains…
We detailed the first information about the Google [x] Glasses project in December.
They are in the late prototype stages of wearable glasses that look like the thick-rimmed glasses that “normal people” wear, but with a heads-up computer interface built into the display. There are a few buttons on the arms of the glasses, but otherwise they could be mistaken for normal glasses. We are not sure of the display technology being employed, but it is likely a transparent LCD or AMOLED display such as the one demonstrated below. In addition, we have heard that this device is not an “Android peripheral,” as the NYT stated; according to our source, it communicates directly with the cloud over IP, though the “Google Goggles” could also use a phone’s Internet connection via Wi-Fi or low-power Bluetooth 4.0. The use case is augmented reality tied into Google’s location services: a user can walk around with information popping up into the display, Terminator-style, based on preferences, location and Google’s information. These things therefore likely connect to the Internet, have GPS, and run a version of Android.
Since then, we have learned much more regarding Google’s glasses…
Our tipster has now seen a prototype and said it looks something like Oakley Thumps (below). These glasses, we heard, have a front-facing camera used to gather information and could aid in augmented reality apps. It will also take pictures. The spied prototype has a flash —perhaps for help at night, or maybe it is just a way to take better photos. The camera is extremely small and likely only a few megapixels.
The heads up display (HUD) is only for one eye and on the side. It is not transparent nor does it have dual 3D configurations, as previously speculated.
One really cool bit: the navigation system currently used is head tilting to scroll and click. We are told it is very quick to learn, and once the user is adept at navigation it becomes second nature and almost unnoticeable to outside observers.
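A tilt-to-scroll scheme like that usually needs a dead zone, so small involuntary head movements don’t scroll anything, plus a speed cap so a big tilt doesn’t fling the page. Google’s actual implementation is unknown; this is just a guessed-at sketch of such a mapping.

```python
import math

def tilt_to_scroll(angle_deg, dead_zone=5.0, gain=20.0, max_speed=400.0):
    """Map a head-tilt angle (degrees) to a scroll speed (pixels/sec).

    Tilts inside the dead zone are ignored; beyond it, speed grows
    linearly with the tilt and is clamped to max_speed. The sign of
    the angle picks the scroll direction.
    """
    if abs(angle_deg) <= dead_zone:
        return 0.0
    speed = (abs(angle_deg) - dead_zone) * gain
    return math.copysign(min(speed, max_speed), angle_deg)

print(tilt_to_scroll(10.0))   # gentle tilt -> moderate scroll
print(tilt_to_scroll(-60.0))  # hard tilt back -> clamped reverse scroll
```

Tuning the dead zone and gain is exactly the “quick to learn” part: too small a dead zone and the page drifts while you walk, too large and the gesture feels sluggish.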
(As an aside, I built a head mouse as a Masters Thesis project a few years back that used head tilts to navigate and control menus. I am ready to collect royalties!)
I/O on the glasses will also include voice input and output, and we are told the CPU/RAM/storage hardware is near the equivalent of a generation-old Android smartphone: as a guess, something like a 1GHz ARM A8, 256MB of RAM and 8GB of storage. In any case, it will also function as a smartphone.
Perhaps most interesting is that Google is currently deciding on how it wants to release these glasses, even though the product is still a very long way from being finished. It is currently a secret with only a few geeky types knowing about it, and Google is apparently unsure if it will have mass-market appeal. Therefore, the company is considering making this a pilot program, somewhat like the Cr-48 Chromebooks last year.
Yes, Google might actually release this product as beta-pilot program to people outside of Google—and soon.
FYI Motorola’s got something cool in this area brewing as well.
Another quick hack using the Kinect beta SDK and my new Windows Phone (which is great!). What you see is a simple game engine utilizing the pseudo-holographic effect from my other videos. A Kinect “sees” the position of the viewer and the 3D engine adjusts the image accordingly to give the illusion of a real 3D object. The 3D engine supports anaglyph 3D (red/cyan glasses) for a better effect in real life. A simple WP7 app controls the application and the helicopter using the accelerometers of the phone. (Source — If you like it, check out my other videos. Thanks for watching! )
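The core of that pseudo-holographic trick is re-projecting the scene from the viewer’s head position. A minimal version, assuming the screen sits on the plane z = 0 with the Kinect-tracked head at positive z and virtual objects behind the screen at negative z, just intersects the head-to-point ray with the screen plane:

```python
def project_to_screen(point, head):
    """Where on the screen plane (z = 0) a virtual 3D point appears,
    as seen from the viewer's head position.

    point -- (x, y, z) of the virtual object, z < 0 (behind the screen)
    head  -- (x, y, z) of the viewer's head, z > 0 (in front of it)
    Returns the (x, y) screen intersection of the head-to-point ray.
    """
    hx, hy, hz = head
    px, py, pz = point
    t = hz / (hz - pz)  # parameter where the ray crosses z = 0
    return (hx + t * (px - hx), hy + t * (py - hy))

# Head dead-centre: a point straight behind the screen stays centred.
print(project_to_screen((0.0, 0.0, -1.0), (0.0, 0.0, 1.0)))
# Head moves right: the same point slides left on screen (parallax).
print(project_to_screen((0.0, 0.0, -1.0), (0.5, 0.0, 1.0)))
```

As the Kinect reports the head moving sideways, each virtual point’s projected position slides the opposite way, which is exactly the motion-parallax cue that sells the illusion of a solid object behind the glass.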
Pico projectors are an easy way to increase the screen real estate of your mobile phone, but what if you'd rather not carry one around in your pocket or bulk up your phone's slim profile with a slip-on solution? Well, a team of intrepid researchers may have come up with an elegant solution that works with any smartphone and external display: virtual projection. The system uses a central server that constantly takes screenshots of the external display and compares them with the images from the phone's camera to track its location. It then replicates what's on the handset's screen, while allowing you to add multiple image windows and position and rotate them as you see fit. Additionally, multiple users can collaborate and virtually project pictures or videos onscreen at the same time. Intrigued? See it in action for yourself in the video.
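The screenshot-vs-camera-frame matching can be sketched as a brute-force template search: treat the phone’s view as a small patch and find where it best fits inside the server’s screenshot. Real systems use fast feature descriptors rather than this exhaustive scan; this toy version on small 2D intensity grids is just to show the idea.

```python
def locate(sub, big):
    """Find the (row, col) offset where 2D grid `sub` best matches
    inside 2D grid `big`, by minimising the sum of squared differences.
    A toy stand-in for matching a camera frame against a screenshot."""
    H, W = len(big), len(big[0])
    h, w = len(sub), len(sub[0])
    best, best_off = None, (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            ssd = sum((big[r + i][c + j] - sub[i][j]) ** 2
                      for i in range(h) for j in range(w))
            if best is None or ssd < best:
                best, best_off = ssd, (r, c)
    return best_off

# Fake "screenshot" with distinct pixel values, and a crop taken from it:
screenshot = [[r * 10 + c for c in range(8)] for r in range(6)]
camera_patch = [row[3:5] for row in screenshot[2:4]]
print(locate(camera_patch, screenshot))  # recovers the crop offset (2, 3)
```

Once the patch’s offset (and, in the real system, rotation) is known, the server knows where the phone is “pointing” and can draw the projected window there.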
Researchers turn your smartphone into a virtual projector originally appeared on Engadget on Sun, 22 Jan 2012 12:37:00 EDT.
Got backseat boredom? DVD players and Game Boys are so five years ago, but a new concept in rear seat entertainment technology that uses the windows themselves could replace squirminess and snoozing with interactive scribbling, sweeping and pinching.
General Motors Research and Development put that challenge before researchers and students from the FUTURE LAB at Bezalel Academy of Art and Design in Israel. The task: Conceptualize new ways to help rear seat passengers, particularly children, have a richer experience on the road.
The Windows of Opportunity (WOO) Project was inspired by psychological studies indicating car passengers often feel disconnected from their environment. GM asked the Bezalel students to turn car windows into interactive displays capable of stimulating awareness, nurturing curiosity and encouraging a stronger connection with the world outside the vehicle.
“Traditionally, the use of interactive displays in cars has been limited to the driver and front passenger, but we see an opportunity to provide a technology interface designed specifically for rear seat passengers,” said Tom Seder, GM R&D lab group manager for human-machine interface. “Advanced windows that are capable of responding to vehicle speed and location could augment real world views with interactive enhancements to provide entertainment and educational value.”
Since GM has no immediate plans to put interactive display windows into production vehicles, the R&D team gave the Bezalel students free rein to create applications without concern for whether they could be mass-produced. Bezalel is Israel’s oldest institute of higher education and one of the more prestigious schools of its kind in the world. (Source)
Immersiva is one of the most iconic simulators in any metaverse. Bryn Oh, Immersiva’s creator, has an artistic soul that flows, breathes, and evolves on a checkerboard grid that immerses you in lucid dreams and stories. Bryn’s free expression has had a profound effect on the residents of Second Life, and Immersiva is a grid favorite; it’s a place that always surprises.
So, it was such a surprise to hear (from Bryn herself) that Immersiva was to shut down… with no good reason given. That news hurt, shocked and saddened me. I felt as if one of the last great SL creators was about to disappear from the metaverse, as so many have before her. Too many fantastic virtual artists have moved out of SL, either to pursue real-life artistic endeavors or to emigrate to other open spaces. My heart broke at the news of Immersiva’s shutdown, and Bryn herself lamented that the experience was like the image of Marty and his siblings fading from the photograph.
It got me reminiscing on all the times I’ve paid a visit to Immersiva and other Bryn Oh builds around the Second Life grid. I have always been fascinated by her work, and I feel connected to the spirit she evokes. Both of my avatars have modeled with Immersiva in the background, and since Immersiva is a living and breathing simulator there is always something new to experience.
Bryn Oh’s Second Life builds seem like a faded memory now. I will cherish these snapshots and archive them; I will not let Bryn Oh’s work fade in vain! So heartbreaking. =[
Huh? What was that Bryn? You found some funding? For another year and six months? Really?? …
YAY! \o/ … Yes! That’s excellent news! Immersiva is not going away yet, and in fact Bryn is asking for support to extend Immersiva’s breath and life. This is a great gift, Bryn! I’m so excited to see what you build and the experiences you create. Check out Bryn’s crowdfunding site!
Bryn Oh is a true artist; she tells stories and brings them to life, she connects to your emotions and makes you think. Immersiva must stay open for as long as Bryn has something to say. Check out The Rabbicorn Story and Anna’s Many Murders.
Original Post Feb. 15, 2010
Immersiva, created by Bryn Oh and donated to Second Life residents by Dusan Writer, is one of the most artistic and existential simulators in-world. Bryn Oh’s work strikes deep chords in one’s soul; it opens the cracks into one’s dreams and distant memories while telling the stories of our collective childhood.
Traveling through the simulation and zooming into the detailed builds one gets the sense of slipping in between time and space, stepping through portals of blinding light and falling into voids. Immersiva is the place in our minds that time and space forgot, a place that is both run down and working, a place that allows the visitor to dream up their own reality and sense of what it all means.
The builds and landscape are beautifully crafted, and Bryn Oh’s use of particle noise is perfectly executed. I’ve taken several trips to the sim and I will make many more; Ms. Oh likes to keep busy, so she’s always adding to and taking away from the experience, which keeps the simulation exciting and new. Below you will find a slide show of my experience with Immersiva and machinima videos created by Ms. Oh herself.
Check out my Immersiva Flickr set.
Reiterating its commitment to fuel best-in-class user experiences, Texas Instruments Incorporated (TI) today underscored strategic relationships with metaio and Total Immersion, aimed at bringing augmented reality (AR) capabilities to life on TI’s market-leading OMAP processors.
Both companies are providing AR software development kits (SDK), optimized to work on TI’s OMAP processors, which now makes it easier than ever to implement next-generation, immersive AR applications. This time, the collaborative efforts place the smart-multicore OMAP processors at the heart of award-winning AR advancements, and bring breakthrough AR design capabilities to a broader range of OEMs and developers. Exciting apps built using these SDKs are on display this week at the Consumer Electronics Show (CES) in TI’s meeting space (N116, North Hall).
“Our strategic partnerships with metaio and Total Immersion enable their AR SDKs to leverage the on-chip dedicated camera sub-system and hardware-accelerated computer vision libraries unique to the OMAP platform’s smart multicore architecture,” said Fred Cohen, director, OMAP user experience team, TI. “These innovators are at the forefront of their industry. In addition to differentiated technical capabilities, our work with metaio and Total Immersion introduces an unprecedented set of tools and access as well as professional support from each company, empowering developers to bring a new era of AR-based eCommerce applications to life.”
The OMAP-processor-optimized metaio Mobile SDK includes patented gravity-awareness visual tracking technology for 2D and 3D objects, which ensures more natural, intuitive and realistic AR experiences. Total Immersion’s D’Fusion AR platform leverages the OMAP platform’s processing speed for lightning-fast image recognition, rendering capability and extraordinary tracking abilities.
“We are thrilled to work with TI to make it easier and faster for developers to enable the most sought-after, futuristic AR capabilities imaginable. Our new gravity awareness feature and award-winning visual tracking technology for 2D and 3D objects pair with the OMAP processor to deliver natural, intuitive AR features that consumers demand.” – Dr. Thomas Alt, CEO, metaio.
“Collaborating with TI on our AR SDK ensures that D’Fusion® offers a best-in-class AR solution for mobile AR development. The OMAP processors have what our developers demand in terms of performance and optimization with TI’s smart multicore OMAP architecture. It makes existing AR applications better and faster, and will also enable new and exciting apps in markets thirsty for what AR capabilities have to offer.” – Bruno Uzzan, CEO and Co-Founder, Total Immersion
These SDKs are available today to developers and customers wanting to bring a higher quality, higher performing, lower power AR experience to mobile devices.
Visit metaio’s site to download the free SDK here: http://www.metaio.com/software/mobile-sdk/.
Visit Total Immersion’s site to download the free D’Fusion® SDK here: https://community.t-immersion.com and join Total Immersion’s developer community.
By Bill Weir, C. Michael Kim & David Miller — This Could Be Big
Amidst the massive buffet of electronic goodies that stuffs the annual techno fest that is the Consumer Electronics Show, we found – or rather, were found by – a company with an innovative new application that could truly change the way we utilize our phones and tablets, and even the way we look at the world. The company is Aurasma and their augmented reality application will forever change the way you look at a twenty dollar bill.
Aurasma is a free open source Android, iPhone and iPad app that allows you to interact with the world in a whole new way. You simply hold your phone up to a still image that’s been augmented using Aurasma, and it transforms into anything from a video or animation to, in the case of a Harry Potter poster, a movie trailer. But it doesn’t simply connect to a link like a QR code; instead, it’s as if the movie poster or the $20 bill comes alive on the screen, like in the movie Jumanji.
Expect to hear more from Aurasma in the months to come. They’ve been one of the most talked-about innovations at this year’s show, and are a finalist in CNET’s Best of CES awards.
Available on smartphones, the app was created out of technology that is capable of recognizing images, symbols and objects in the real world and understanding them. It can then deliver relevant content in real time, including videos, animations, audio or webpages.
Using the smartphone’s camera, GPS, compass, accelerometer and internet connection, the technology combines image recognition and a conceptual understanding of the 3D world to recognize objects and images and seamlessly merge augmented reality actions into the scene. Without the need for barcodes or tags, the app is able to see its surrounding environment and make it fully interactive.
If you’ve seen a sci-fi movie recently, one piece of technology that shows up quite often is the transparent display screen. (Source: Samsung Global)
Transparent LCD Technology – Endless Possibilities
Samsung leads the global LCD market with the world’s first mass produced transparent display product. Transparent displays have a wide range of use in all industry areas as an efficient tool for delivering information and communication. These panels can be applied to show windows, outdoor billboards, and in showcase events. Corporations and schools can also adopt the panel as an interactive communication device, which enables information to be displayed more effectively.
Benefits of Transparency
Samsung’s transparent LCD panel boasts the world’s best transmittance rate of over 20% for the black-and-white type and over 15% for the color type.
The transparent LCD panels have a high transmittance rate, which enables a person to look right through the panel like glass, and they consume 90% less electricity than a conventional LCD panel using a backlight unit.
How does it work?
A transparent LCD panel utilizes ambient light such as sunlight, which consequently reduces the power required since there is no backlight.
Samsung’s transparent LCD panel maximizes convenience for not only manufacturers but also consumers by incorporating the High Definition Multimedia Interface (HDMI) and the Universal Serial Bus (USB) interface. These panels come in two types, black-and-white and color, and have a contrast ratio of 500:1 at WSXGA+ (1680×1050) resolution.
Qualcomm Inc. and Sesame Workshop, the producer behind Sesame Street, have joined forces to explore augmented reality experiences for children that encourage learning and imagination. A longtime advocate of embracing cutting-edge technologies to enrich children’s early learning experiences, Sesame Workshop teamed with Qualcomm to create a prototype playset that brings physical toys to life.
Vuforia brings a new dimension to mobile experiences through the use of augmented reality. Simply point your device at real world objects, and entertaining and useful information will suddenly appear.
For marketers, Vuforia can drive brand engagement in entirely new ways. Advertising can literally jump off the printed page. Product packaging can come alive on retail shelves. And once purchased, products themselves can provide enhanced interactivity to provide instructions and drive future sales. See how leading global brands are using Vuforia today.
For developers, Vuforia provides industry-leading technology and performance on a wide range of mobile devices. Vuforia’s computer vision functionality will recognize a variety of 2D and 3D visual targets. With support for iOS, Android, and Unity 3D, Vuforia will allow you to write a single native app that can reach over 400 models of smartphones and tablets. Download the Vuforia SDK and get started today!
Vu a variety of exciting Vuforia augmented reality apps: gaming, play, advertising, educational.
via Vuforia (Augmented Reality) Sizzle Reel.
Qualcomm News and Events – Press Releases: Qualcomm and Sesame Workshop Collaborate to Explore Educational Applications of Augmented Reality, January 12, 2012
Qualcomm Incorporated (NASDAQ: QCOM) and Sesame Workshop, the producer behind Sesame Street, have joined forces to explore augmented reality experiences for children that encourage learning and imagination. A longtime advocate of embracing cutting-edge technologies to enrich children’s early learning experiences, Sesame Workshop teamed with Qualcomm to create a prototype playset that brings physical toys to life. The application will be demonstrated at International CES, Jan. 10-13, in Qualcomm’s exhibit (South Hall 3, Booth #30313).
Microsoft CEO Steve Ballmer announced on Monday the company’s gesture-recognition technology Kinect will come to Windows on February 1. It was the one intriguing nugget in a presentation more notable for its a cappella tweet choir than its news…
Scientists Hack Kinect to Study Glaciers and Asteroids
By Adam Mann
SAN FRANCISCO — Last summer, Ken Mankoff shimmied through zero-degree water and mud into a small cavern underneath Rieperbreen Glacier in Svalbard, Norway, holding a Microsoft Kinect wrapped inside a waterproof bag.
Using the little toy, originally meant as a motion-sensing device for the Xbox 360 videogame console, Mankoff scanned the cave floor in 3-D. During the summer, water from lakes on the glacier’s surface had gushed through the channel he was sitting in. The Kinect was going to provide a better understanding of its size and roughness, which could help researchers predict how the ice above would flow toward the sea…. Read More On Wired.com.
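Turning Kinect depth frames into a 3-D scan of the cave floor boils down to back-projecting each depth pixel through a pinhole camera model. A minimal sketch; the intrinsics below are ballpark Kinect v1 values, not calibrated ones:

```python
def depth_to_point(u, v, depth_m, fx=575.8, fy=575.8, cx=319.5, cy=239.5):
    """Back-project a depth pixel to a 3D point in camera coordinates.

    u, v     -- pixel column and row in the 640x480 depth image
    depth_m  -- depth at that pixel, in metres
    fx, fy   -- focal lengths in pixels (assumed, uncalibrated values)
    cx, cy   -- principal point (assumed image centre)
    """
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# The centre pixel at 2 m maps straight down the optical axis:
print(depth_to_point(319.5, 239.5, 2.0))  # (0.0, 0.0, 2.0)
```

Run over every pixel of every frame, and registered frame-to-frame, this is what gives researchers a dense point cloud of the channel’s size and roughness.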