Synaps Labs deployed a billboard in Moscow that changed the ad on its display based on the brand of car passing by. The billboard used high-speed cameras placed 180 meters in front of it to take pictures of cars, and machine-learning software determined each car's make and model. The purpose was to show ads for Jaguar's expensive new SUV to drivers who already owned expensive cars. AI, and machine learning in particular, has become quite advanced at extracting information from photographs: whether a person in a photo is young or old, male or female, and many other facts useful for ad targeting. The company has since developed its technology further and installed billboards across Russia and the United States.
Others, such as the MIT Media Lab, have also pursued the concept of multi-view displays using different techniques in the past. More recently, a startup called MirraViz drummed up attention at the CES technology trade show this year with a system that uses multiple projectors to display different content to multiple people on the same screen. Engadget called it “one of the wildest” things at the show, while noting that it was limited by the number of projectors that could fit around the screen.
Now a breakthrough display technology called “parallel reality” has been developed that allows many different people to see completely different content on the same screen, simultaneously. When combined with location technology and sensors, this content can be targeted in real time from public displays to specific locations, people and objects, essentially following them in three-dimensional space as they move through the world.
The display technology is based on a “multi-view” pixel. Unlike a traditional pixel, which emits one color of light in all directions, Misapplied Sciences says its pixel can send different colors of light in tens of thousands, or even millions, of directions. They call it a “magic pixel.”
This allows people to see only the information relevant to them: their flight information in airports, stats for their favorite players in a stadium, traffic signals meant for their vehicle on the road. These are examples of the long-term potential for “parallel reality” display technology to personalize the world, as envisioned by Misapplied Sciences Inc., a Redmond, Wash.-based startup founded by a small team of Microsoft and Walt Disney Imagineering veterans.
It works with the naked eye, no headset or high-tech goggles required.
“Multiple people can be looking at the same pixel at the same time, and yet perceive a completely different color,” said Albert Ng, the company’s CEO and co-founder. “That’s each individual pixel. Then, we can create displays by having arrays of these multi-view pixels, and we can control the colors of light that each pixel sends. After coordinating all those light rays together, we can form images at different locations.”
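The idea Ng describes can be illustrated with a small sketch. Everything here is an assumption for illustration, not Misapplied Sciences' actual architecture or API: each multi-view pixel is modeled as a lookup from a quantized viewing angle to a color, so the same pixel array can present a different image to viewers at different positions.

```python
import math

# Hypothetical model (all names and the angle-bin scheme are assumptions):
# each multi-view pixel stores one color per quantized viewing direction,
# so viewers at different angles perceive different colors from the same pixel.

ANGLE_BINS = 360  # directions per pixel; the real hardware claims far more


def angle_bin(pixel_x, viewer_x, viewer_dist):
    """Quantize the direction from a pixel to a viewer into an angle bin."""
    angle = math.degrees(math.atan2(viewer_dist, viewer_x - pixel_x))
    return int(angle) % ANGLE_BINS


class MultiViewDisplay:
    def __init__(self, width):
        # one dict per pixel: angle bin -> color
        self.pixels = [dict() for _ in range(width)]

    def render_for_viewer(self, viewer_x, viewer_dist, image):
        """Coordinate light rays so this viewer sees `image` (one color per pixel)."""
        for x, color in enumerate(image):
            self.pixels[x][angle_bin(x, viewer_x, viewer_dist)] = color

    def perceived_by(self, viewer_x, viewer_dist):
        """The per-pixel colors visible from the viewer's position."""
        return [p.get(angle_bin(x, viewer_x, viewer_dist))
                for x, p in enumerate(self.pixels)]


display = MultiViewDisplay(width=4)
display.render_for_viewer(viewer_x=-50, viewer_dist=10, image=["red"] * 4)
display.render_for_viewer(viewer_x=50, viewer_dist=10, image=["blue"] * 4)

print(display.perceived_by(-50, 10))  # viewer A sees the all-red image
print(display.perceived_by(50, 10))   # viewer B sees the all-blue image
```

In this toy version, "forming images at different locations" is just writing a different color into each pixel's entry for the angle pointing at each viewer; the hard part in real hardware is emitting light that precisely in each physical direction.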
There’s no such limitation with the Misapplied Sciences technology, given the way its multi-view pixel works. Misapplied Sciences has applied for 18 patents related to its technology, three of which have been granted, and the founders say they have more in the pipeline. Its approved patents cover a multi-view architectural lighting system, a computational pipeline and architecture for multi-view displays, and multi-view traffic signage that displays customized content to different vehicles.
“I couldn’t stop thinking about different ways it could be implemented and made useful — if it could be scalable enough and made affordable for different venues in different populations,” said Tremblay, who is also a professor in the University of Washington’s Department of Speech & Hearing Sciences.

