Magic Leap: Founder of Secretive Start-Up Unveils Mixed-Reality Goggles

Magic Leap has introduced a mixed-reality headset that it claims will change how people interact with computers and the real world.

Unlike virtual reality headsets that fully block out your surroundings, Magic Leap’s device – called Lightwear – looks more like a pair of see-through goggles. Users can wear them like glasses and still see the world around them. These goggles connect to a compact, high-powered computer known as the Lightpack. Together, they can place realistic, interactive 3D characters – like people, robots, or spaceships – right into the real environment you’re in.

Founded in 2011, Magic Leap has stayed mysterious. It continues to puzzle tech writers and analysts by attracting huge investments from top companies and drawing in talented professionals. So far, it has raised $1.9 billion. Although the startup has shared some sleek concept videos showing what its augmented reality platform could look like, it hasn’t yet shown any working tech to the public. The long wait has even led some media outlets to question whether it’s all just hype. Still, the company’s valuation keeps climbing and was last reported at $6 billion.

Magic Leap is heavily tied to its founder, Rony Abovitz, a bold-thinking bioengineer. Before launching this company, he played a key role in creating robotic arms used in surgery for Mako Surgical Corp. When Mako was sold for $1.65 billion, the profits helped fund the early years of Magic Leap.

The last time Magic Leap spoke publicly in any meaningful way was about a year ago, when it allowed Wired magazine to visit its South Florida office and experience the technology – though the magazine wasn’t allowed to describe the hardware itself. Earlier this month, Glixel received a similar invitation. Founder Rony Abovitz asked me to come to the company’s Fort Lauderdale headquarters to explore the science behind the technology and, for the first time, share details about how the upcoming consumer headset works and feels.

This moment marked the first real look at what the secretive and highly funded company has been building behind closed doors. It also signaled the start of the company’s plans to release its first consumer device in 2018. The unveiling helped explain why tech giants like Google and Alibaba have poured hundreds of millions into the project and why some experts view it as a major leap in technology. According to David Nelson, creative director at USC’s MxR Lab, “Technology like this is moving us toward a new medium of human-computing interaction… It’s the death of reality.”

The Leap

My introduction to Magic Leap’s technology happened inside a soundstage located in a separate building from the main complex. This space is used to test out large-scale experiences that could eventually be adapted for places like theme parks. Like most of the demos I tried during my hour-long visit, I can share what the experience felt like and what it aimed to do, but I agreed not to reveal specific characters or intellectual property involved. Many of these demos are likely one-offs, created mainly for guests who visit under strict non-disclosure agreements to get a feel for what Magic Leap can do.

This first large-scale demo placed me in a sci-fi setting, where the environment was enhanced with effects like strong fans, booming speakers, and computer-controlled lighting. The goal was to show how a theme park attraction could work without walls, lines, or traditional infrastructure. What stood out was how these virtual elements blended into the real-world props and environment around me. While it didn’t fully mimic reality, it came surprisingly close. What made it striking was that the digital imagery wasn’t just floating on top of the real world – it felt like it was part of it.

We moved back into the main building and entered a spacious room set up like a cozy living area – complete with couches, tables, rugs, and assorted decorations. This setup was designed as a demo space, and I had the chance to test out several different experiences. The first one featured Gimble, a floating robot that hovered steadily in the space between where I stood and the far wall. I approached it, circled it, and viewed it from different perspectives. It stayed perfectly in place, and although everything else in the room was visible, the robot itself looked solid – not see-through or flat. The closer I came, the clearer the details became, with no pixelation, just finer textures that weren’t noticeable from farther away. However, if I got too close, it would vanish, or I’d find myself inside it – a reminder that the demo was still in development. I also noticed that the sound of the robot’s movements shifted depending on my position, keeping the audio source in sync with where it should be.

After that, Sam Miller, senior director of systems engineering, and Shanna de Iuliis, senior technical marketing manager, guided me through launching three virtual screens into my space. These screens looked like ultra-thin TVs and stayed locked in place, letting me position them however I wanted. I arranged them like a traditional multi-monitor setup, spaced just far enough apart that I had to turn my head to see each one. Meanwhile, the floating robot continued to hover nearby.

The next short demo featured a cube-like floating display with four live video feeds – one on each side. I could walk around it and view different channels on each panel. Each channel kept streaming regardless of whether I was facing it or not.

During another moment in the demo, a section of the wall lit up, forming the outline of a door. A beam of bright white light shone through, and a woman appeared, stepping into the room.

She approached and stopped just a few feet from where I stood. The realism was impressive. While she didn’t pass for an actual human – her glow and stylized appearance gave that away – her presence was striking. She didn’t speak or respond during our interaction, but she had the ability to do so. Miller controlled her manually, cycling through a range of facial expressions – smiling, angry, annoyed. What stood out most was that her gaze followed me. As I moved or shifted my view, her eyes tracked mine. The Lightwear’s built-in cameras supplied the data that allowed her to hold eye contact. It felt realistic enough that I eventually looked away, unsure whether it was polite to keep staring.

Eventually, avatars like this may evolve into visual counterparts of assistants like Siri, Alexa, or Google Assistant. But instead of just hearing them, you’ll see them beside you – walking, watching, and helping through AI-powered presence.

The set of demos also included a digital comic projected in large format. You could step up to it and view scenes as if peering through a window, adding another unique touch to the mixed reality experience.

Earlier this year, I had a chance to speak with the team at Madefire, a company focused on making comic book reading more interactive. While in Florida, I got to try out one of their demos, and it left a strong impression. A comic page floated above a coffee table, with some panels appearing to hover at different depths. Walking around it, I could look at the artwork from multiple sides, similar to viewing a piece hanging on a wall. The scene showed a storm, and surrounding effects like rainfall and thunder sounds made the moment feel more immersive. It was a small but effective addition that enhanced the experience.

They also gave me a short preview of something called volumetric capture. For this, they worked with a capture studio to record actors performing using specialized equipment. These performances were then placed into a mixed reality setting, allowing users to view them in whatever space they were in. While some minor visual issues were noticeable – like how the area between the nose and lip looked a bit too blended – the overall result was convincing. I watched a fast-paced movement from different angles without any lag. Whether standing far or moving in close, the playback stayed smooth. I also learned that the recorded performance could be resized – either to life-size or small enough to fit in the palm of a hand.

Light Fields

“I call this the cockroach of the industry because it just never dies, and it needs to just stop,” Abovitz says as we look at a screen in his office showing an old stereoscope from the 1830s. This early device created a 3D effect by using two slightly different images placed a small distance apart. A viewer would hold the stereoscope up to their face, and a wooden divider would block each eye from seeing both images at once. The brain would merge the views to create a 3D illusion.

Abovitz has strong views on this. He explains that this method forces the eyes to work in an unnatural way, creating what he calls an “accommodation conflict”: the eyes converge as if the object sat at some depth in the scene, while the lenses must stay focused on flat images a fixed distance away. In short, the eyes can’t focus as they normally would, because they’re processing two separate images to simulate depth.

Most 3D technologies today still use this old method. “It’s sort of distressing to me,” Abovitz adds. “It’s from the 1800s, but it keeps reappearing.” He points out how the concept has come back again and again – in the form of red-and-blue 3D glasses in cinemas in the 1960s and even into the 2000s. “When VR started making a return, it felt like the same old idea,” he says, clearly frustrated that we’re still relying on a technique invented nearly two centuries ago.

While the stereoscope of the 1830s used two flat pictures and modern virtual reality relies on a pair of screens, Abovitz believed there had to be a better approach. He wasn’t interested in simply improving VR. Instead, he focused on creating visual experiences that blend with the real world more naturally. That led him to mixed reality. Unlike VR, which replaces everything you see, or AR, which adds digital images into your environment, mixed reality combines those images with an awareness of your surroundings. So, for example, a digital horse in mixed reality would know to avoid walking through your furniture or walls.

As Abovitz explored why the field seemed stuck and how to move forward, he identified two key areas of interest in perception and MR technology.

The first was the concept of the analog light field signal. A light field includes all the light reflecting off objects in a scene. A photo only captures a narrow piece of that light. Human vision, however, takes in much more of it, which helps us perceive depth and subtle motion. The second focus was on how that light field data gets processed by the brain – specifically, how it reaches the visual cortex through the eyes.

“The world you perceive is actually built in your visual cortex,” Abovitz explains. He describes the brain as a powerful rendering system, building what we see using around 100 trillion neural connections.

Scientists believe visual processing uses about 40% of brain activity in normal situations. That number can rise to 70–80% during high-focus activities like sports. “You’re basically creating the visual world,” Abovitz says. “You’re really co-creating it with this massive visual signal which we call the dynamic analog light field signal.” He describes this signal as the total mix of light in the universe – a vast and ever-present stream of information.

Building an artificial light field is extremely complex, especially when motion is involved. Back in 2011, Abovitz started digging into this challenge with a friend from Caltech who specialized in theoretical physics. Together, they explored the idea that the human eye doesn't exactly “see” light in the traditional sense – it works more like a filter. Instead of absorbing everything in the full light field, the eye pulls in a thin stream of light and sends that data to the visual cortex. “At this point, we were sort of on our own,” Abovitz recalls. “We were way off the grid.”

Eventually, they came to believe that the brain’s visual cortex behaves much like a computer’s graphics processor. It receives minimal visual data from the eyes and uses that to create the world we perceive. According to Abovitz, the brain might actually rely on a built-in visual model passed down through generations, which only needs small updates to stay accurate. “Maybe we all have genetically passed on versions of the world, and all we do is intake sparse change data to update that model, but we have a persistent model,” he explains.

He says this theory fits with how humans evolved – our survival and daily life depended on quickly processing space and motion, from what's close by to what’s far away. That explains why a distant tiger may appear flat and cardboard-like, while one that’s close up looks highly detailed and three-dimensional.

This idea led to a major shift in how Abovitz and his team approached the light field challenge. If the brain really only pulls in details when it needs them, then capturing the full light field wasn’t necessary. Instead, the technology could just focus on picking up the key parts of the light signal and sending them to the brain through the eye. Abovitz describes this as seeing the brain from a systems engineering point of view. “Our thought was, if we could figure out this signal and/or approximate it, maybe it would be really cool to encode that into a wafer,” he explains. “That we could make a small wafer that could emit the digital light field signal back through the front again. That was the key idea.”

This insight marked a turning point. The problem was no longer one of understanding perception; it had become one of engineering. The goal became creating a chip that could send just the right bits of a light field to the brain and make it believe it was seeing something real. The concept relied on using the human eye and brain as part of the system, rather than building an external display. “There were two core zen ideas: The no-display-is-the-best-display and what’s-outside-is-actually-inside. And they turned out to be, at least from what we’ve seen so far, completely true. Everything you think is outside of you is completely rendered internally by you, co-created by you, plus the analog light field signal.”

Abovitz adds, “Everyone is inherently creative because everyone is constantly making their own Avatar world. In the world you are living in, you are creating constantly; you are co-creating constantly, which is super exciting.”

The next move was to turn these ideas into something real and testable.

Hello World

Magic Leap’s defining breakthrough may not seem impressive to outsiders, but for the team, it marked the end of years of intense effort and the beginning of something groundbreaking.

That moment came down to one pixel.

“The first real moment, which no one will care about, is when we had a pixel, and we were using a joystick, and we are just moving a pixel around the room,” said Abovitz. “It was like Pong in 1970 or something. Well, less sophisticated than Pong. It was just a little dot that we were moving around the room, and it was like, ‘Whoa, did we just do that?’” He refers to those early years before their big 2014 milestone as “wandering in the desert.” In 2013, the team began building their first real prototype. Abovitz shares an image of it – something they called “the Bench.” I mention that it looks like a heavy, steampunk-style machine, but he offers a different comparison. To him, it’s closer to something from A Clockwork Orange. The contraption involved placing your head beneath a large suspended frame of electronics, locking it in place while a signal generator attempted to convince your brain it was seeing something that didn’t actually exist.

Progress was slow and often discouraging. Abovitz says he found motivation by visiting places like Kitty Hawk and NASA’s Saturn V building. The team itself was a blend of minds from across disciplines – NASA engineers, physicists, coders, and even comic book artists. They kept refining their concept again and again until, at last, they had their pixel.

That single pixel validated their theory. After that, progress moved faster. Characters from comic book ideas were soon being placed into mixed reality environments, and new concepts were emerging – like a game called Monster Battle, where kids would head to real playgrounds and watch giant digital creatures clash in the air above them.

The timing turned out to be just right. Abovitz knew the initial funding set aside for Magic Leap wouldn’t last forever, and they’d eventually need outside investors. Fortunately, the working pixel and the two digital characters were enough to convince Google and other potential backers that Magic Leap was on the right path. By the end of 2014, the company had secured $540 million in venture capital.

With that new funding, Magic Leap upgraded from their cramped single-room workspace and started building their first wearable prototype. Internally, the team nicknamed this early device the “Cheesehead.” “That was like, let’s take the light field signal generator stuff and put computer vision stuff on it and rig it up and start walking around,” Abovitz explained. “And it weighed like tens of pounds. And that was this moment where we were like we need to combine motion and high-end computer vision.”

The bulky Cheesehead helped prove they could scale the light field signal down to a nano-structured wafer capable of emitting the digital light field signal. This was a key step toward building a working mixed reality system. The device also gave the expanding software team a physical platform to test their code and explore how it functioned in a real environment.

Over the following two years, Magic Leap’s team pushed forward on all fronts – software, hardware, design, and scientific research. To speed things up, they relocated to a large facility outside Fort Lauderdale. Beneath this building, they built a wafer fabrication plant, starting in October 2014 and continuing the effort until December 2017. “We went on this really crazy sprint,” Abovitz said.

The Magic

Deep below the company’s main campus, clean rooms house both robotic arms and engineers in protective suits. Together, they assemble photonic chips – the technology behind Magic Leap’s vision for a new kind of reality. As Abovitz walks through the long underground halls, he occasionally stops to point out progress through viewing windows. Paul Greco, the company’s SVP of hardware and engineering, shares that the entire floor had to be stripped down and rebuilt to support the fabrication needs. Details about the wafers are scarce, possibly because they are central to Magic Leap’s innovation. Abovitz refers to the transparent rectangles as photonic wafers – though, to the untrained eye, they might look like lenses.

“Up until this point, we’ve been kind of in the woodshed, first developing the notion of the signal and then trying to invent the transistor of that signal,” Abovitz explains. “We’re not moving electrons around with transistors; we are moving photons, a photonic signal with a three-dimensional array of nanostructures. We don’t really have a name for them yet, so that is what I’ve been calling Sea Monkeys, but that is not a name we could use. I don’t want the Sea Monkey people to get mad at us. So we’re going to come up with a cool name for our structures.”

These wafers direct photons through a 3D nano-structure in a way that creates a specific digital light field signal. Eventually, they’re placed inside a larger lens and fitted into the company’s final product – a wearable headset. Upstairs, in a showroom-like space, Abovitz introduces the hardware. Gary Natsume, Magic Leap’s SVP of design, explains that all the components share a design style with soft grey colors – nicknamed “moon dust” – and smooth, circular forms. This latest version, the ninth-generation model, includes three main parts: the headset, a small pod-shaped computer connected with a long cable, and a hand-held controller called Control. The headset resembles high-tech goggles held in place by a padded strap. The design is sleek and significantly more refined than most virtual reality headsets. “The lens is a very iconic form,” says Natsume. “The aspiration is that eventually, this will become like glasses, and people will wear them every day.”

The headband used to secure the Magic Leap goggles follows what Natsume calls a “crown temple” style. “It came from our study on how to distribute weight evenly around your head,” he says. To wear the device, users grip the sides of the plastic headband and pull gently. The crown separates into three parts – left, right, and rear – making it easy to slide over the head. Two short cables extend from the back and join into one, which runs about four or five feet down to the Lightpack.

The Lightpack consists of two rounded sections joined together with a curved design that creates a small gap in the middle. It’s built to clip onto a pocket or attach to a shoulder strap, which Abovitz compares to a guitar strap.

Magic Leap will offer the goggles in two sizes. The forehead cushion, nose supports, and temple padding can all be adjusted for a better fit. The company also plans to offer prescription lenses for those who wear glasses.

The system’s controller fits neatly into one hand. It features a touchpad, a set of buttons, haptics, and motion tracking with six degrees of freedom.

Both the Lightwear headset and Lightpack unit have a playful, minimalist look – not because they feel like toys, but because they’re lightweight and sleek. Abovitz highlights the technology built into them. “This is a self-contained computer,” he says. “Think about something close to like a MacBook Pro or an Alienware PC. It’s got a powerful CPU and GPU. It’s got a drive, WiFi, all kinds of electronics, so it’s like a computer folded up onto itself.”

Abovitz then gestures toward the Lightwear goggles. “There is another powerful computer in here,” he says, explaining that it handles real-time sensing, computer vision, and machine learning. This allows the device to constantly monitor and respond to the environment. “You’re wearing something extremely light, but it acts like a high-tech satellite on your head.”

The headset includes four microphones that can pick up ambient sound. Alongside this, a real-time processor and six external cameras track both the user and their surroundings as they move. Built-in speakers near the temples produce spatial audio that adjusts based on both your movement and the position of the virtual objects you’re viewing. “This isn’t just smart glasses with a camera,” he adds. “This is spatial computing. It’s fully aware of the space you’re in.”

When asked about specific technical specs like the GPU, CPU, or battery life, Abovitz keeps things vague. He says some details are being saved for future announcements and that the team is still refining power efficiency.

As the tour wraps up, a large table covered in a white cloth catches attention. What's underneath? “That’s where the next prototypes are,” Abovitz says with a smile.

Sigur Rós Music and Weta Robots

As the demo wrapped up, Miller asked for my impressions. I told him the goggles were surprisingly light – comfortable enough that I forgot I had them on. The small computer module fit easily into my pocket, and the cable connecting it to the headset never felt intrusive. The handheld controller was quick to pick up and use. The sound quality was sharp, directional, and immersive. But I did have one concern: the field of view.

Like Microsoft’s HoloLens, which relies on a different type of mixed reality tech, the Magic Leap Lightwear doesn’t give you a full, eye-matching field of view. Instead, digital content appears within a horizontal rectangular area. Since it floats in mid-air, I couldn’t get a precise size. So, I tried estimating with objects I had. A credit card was way too small. Eventually, I settled on this: it’s about the size of a VHS tape held out in front of you at arm’s length. Bigger than the HoloLens display but still clearly limited.
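To translate that comparison into rough angular terms, here is a small calculation. The VHS dimensions and viewing distance below are assumptions on my part, not Magic Leap figures, and the result is only as reliable as the comparison itself – and very sensitive to how long you take “arm’s length” to be – so treat it as an impression rather than a specification.

```python
import math

def angular_extent_deg(size_m, distance_m):
    """Angle (in degrees) subtended by an object of the given size at the given distance."""
    return math.degrees(2 * math.atan((size_m / 2) / distance_m))

# Assumed figures, not official specs: a standard VHS cassette
# (roughly 0.19 m wide by 0.10 m tall) held about 0.6 m from the eyes.
horizontal_deg = angular_extent_deg(0.19, 0.6)   # on the order of 18 degrees
vertical_deg = angular_extent_deg(0.10, 0.6)     # on the order of 10 degrees
```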

“Our future-gen hardware tech significantly expands the field of view,” Miller said. “What you see now is the field of view for this version. The next product will be noticeably wider. We’ve already got that working in the labs.”

De Iuliis mentioned that developers can apply a fade effect at the display edges. That way, the boundaries feel softer, and your brain fills in the missing space more naturally.
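As a rough illustration of what such an edge fade involves (a generic sketch, not anything from Magic Leap’s actual SDK), content can simply be attenuated as it approaches the boundary of the visible area:

```python
def edge_fade_alpha(x, y, half_width, half_height, fade_band=0.1):
    """
    Opacity multiplier (0..1) for a point at view coordinates (x, y),
    where (0, 0) is the center of the display area and +/-half_width,
    +/-half_height mark its edges.

    Content well inside the view stays fully opaque; within `fade_band`
    (as a fraction of the half-extent) of an edge it ramps down to zero,
    so virtual objects dissolve at the boundary instead of being clipped
    by a hard rectangle.
    """
    # Normalized distance from the nearest horizontal / vertical edge.
    dist_x = max(0.0, 1.0 - abs(x) / half_width)
    dist_y = max(0.0, 1.0 - abs(y) / half_height)
    # Ramp from 0 at the edge up to 1 once `fade_band` away from it.
    fade_x = min(1.0, dist_x / fade_band)
    fade_y = min(1.0, dist_y / fade_band)
    return fade_x * fade_y
```

In practice a falloff like this would run per pixel in a shader; the point is simply that a soft ramp near the boundary makes the edge of the field of view far less noticeable than a hard cutoff.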

Before ending the session, Miller demonstrated one last feature. He walked to the far end of the room and asked me to summon Gimble, the floating robot. It appeared right next to him. Then, he stepped into the same space the robot occupied and disappeared – well, most of him did. I could still see his legs below the robot.

At first, I just accepted what I was seeing. But then I realized something wild was happening: my eyes had prioritized a digital character over a real person standing in the same spot. According to Abovitz, this was an intentional effect of how the technology guides your brain to focus on the virtual object.

To wrap things up, I was led into a different room for a demo I’m actually allowed to fully describe. The Icelandic experimental band Sigur Rós has teamed up with Magic Leap on a project they refer to as a “soundscape.” Before starting, I put in a pair of earbuds connected to the headset. “What you’re about to see is called Tonandi,” said Mike Tucker, the project’s technical lead. “This isn’t a pre-recorded song. It’s an interactive soundscape – that’s how the band likes to define it.”

Tonandi begins by surrounding you with a circle of delicate, glowing trees and then waits for your response. Floating wisps of light drift through the air around you. When you wave your hands near them, they respond with musical tones, either disappearing or changing form. As time goes on, new elements appear. I tapped, waved, and reached toward these interactive shapes, each one adding a different musical layer to the ambient score. Pods slowly rose from the floor, plants grew from the rug and table, and shimmering stingray-like creatures made of light glided through the room. My actions didn’t just shift the scene – they helped build the music, blending my gestures with Sigur Rós’ compositions in real-time.

The experience was intuitive and dreamlike – something anyone could step into and enjoy, even without knowing how it worked. But as Tucker explained, a lot of complex technology was running behind the scenes. “We’re taking advantage of several features unique to Magic Leap,” he said. “We’re using room meshing, eye tracking, and our gesture-based input system throughout the experience.”

During lunch in a conference room, Abovitz shared that the team had once tested a horror-themed demo. “It was terrifying,” he admitted. “People didn’t want to enter the room again. It was very, very scary – almost dangerously so – so we decided to put that idea on hold for now.”

There were other demos I wanted to try, but time didn’t allow. One of the experiences I was especially curious about was being developed by Weta Workshop, the special effects company known for its work on The Lord of the Rings, Blade Runner 2049, and Thor: Ragnarok. Unfortunately, it wasn’t ready for testing during my visit.

That project is the first game created by Weta’s new division, Weta Gameshop. It’s built around the world of Dr. Grordbort, a quirky sci-fi setting imagined by Weta designer Greg Broadmore and owned by Weta co-founders Richard Taylor and Tania Rodger. According to Taylor, around 55 people are now working on the game, with Broadmore leading the division. “We’ve been developing it for about five years,” Taylor said. “The game’s design has evolved alongside Magic Leap’s hardware and software.”

This upcoming title, expected to launch with Magic Leap’s headset, is a first-person shooter set in Dr. Grordbort’s eccentric universe. In the game’s story, a robotic planet discovers how to open portals to Earth. Players use the system’s controller – visually turned into a ray gun in the game – to stop the invasion. “When the game starts, portals open on your living room or bedroom walls,” Taylor explained. “You can see into the robot world. Things begin quietly with Gimble [the robot from earlier demos], but once Dr. Grordbort appears, chaos follows. It’s the most intense experience we’ve created in a real-world setting.”

The Persistence of Reality

Magic Leap’s billion-dollar tech often feels so seamless that it’s easy to forget just how advanced it really is. That sense of effortlessness is actually one of the biggest achievements of the system – it’s designed to feel almost invisible.

One major challenge Abovitz and his team set out to address was the discomfort that comes with using traditional virtual reality headsets or spending too much time staring at screens. “Our goal is to ultimately build spatial computing into something that a lot of people in the world can use all day, every day, all the time, everywhere,” Abovitz says. “That’s the ambitious goal; it’ll take time to get there. But part of the day is that you need something that is light and comfortable. It has to fit you like socks, and shoes have to fit you. It has to be really well-tuned for your face, well tuned for your body. And I think a fundamental part of all day is the signal has to be very compatible with you.”

Recreating a light field, he explains, allows Magic Leap to offer visuals that feel as natural as real-world vision. That sense of ease is what they’re aiming for at the core. “You don’t ever want to think about it again,” he says. “You just want to know that we took care of it, and we think that’s an important first step.”

Even though the core tech is in place, Abovitz admits that the visual delivery isn’t quite perfect yet – especially when it comes to the field of view. “Field of view we think is, we’d call it workable and good for ML1,” he says of the first consumer headset. “It is one of the things we will continue to iterate on in Magic Leap 2 and 3 and beyond. And there is sort of a point where you hit a form factor and a field of view … where you are sort of done there.”

Abovitz didn’t directly answer my questions about one of the tech’s more debated aspects: supporting multiple focal points. In theory, a true light field display should allow users to shift focus naturally – seeing an object up close blur while the background sharpens, just like in real life. None of the demos I saw offered a clear chance to test this. When I asked if Magic Leap’s headset supported multiple focal planes, Abovitz followed up via email: “Magic Leap’s Lightwear utilizes our proprietary Digital Lightfield technology, which is our digital version of an analog lightfield signal. Developers may create applications and experiences with characters and objects that appear correctly in space and allow a user to focus naturally on an object of interest, as they would in the real world.”

When I pushed for clarification, he declined to elaborate further, citing proprietary details.

This emphasis on a first iteration that’s functional but not final may explain why the device is called Magic Leap One: Creator Edition. Their version of “creator” includes developers, early adopters, brands, and agencies. “The consumers who bought the first Mac, or the first PCs,” Abovitz says. “Everyone who would have bought the first iPod. It’s that kind of group. But it’s definitely not just a development kit. If you’re a consumer-creator, you are going to be happy.”

Abovitz wouldn’t confirm pricing or a specific release date either, though he was clear it would arrive sometime in 2018. As for the price: “So we have an internal price, but we are not talking about that yet,” he said. “Pre-order and pricing will come together. I would say we are more of a premium computing system. We are more of a premium artisanal computer.”

Even without all the answers, spending time in Magic Leap’s sprawling headquarters helped me understand their broader vision. It’s not only about a headset or even the light field tech that powers it. Magic Leap is blending a range of technologies to reimagine how we interact with digital experiences. Their system combines advanced photonics, which aligns digital content with real-world light, with continuous world awareness.

The headset doesn’t just display objects in a room – it actively maps the environment. It knows where your walls, tables, and chairs are and ensures that virtual objects behave accordingly. For instance, digital fish won’t float through a couch, and if you place monitors in midair above your desk, they’ll still be there the next day.
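As a loose sketch of how that kind of persistence can work in principle (a generic illustration only – Magic Leap has not described its actual implementation, and all names below are hypothetical), a placed object can be stored as a pose relative to a recognized anchor in the room and restored once that anchor is found again:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Pose:
    position: tuple   # (x, y, z) offset in meters from the anchor
    rotation: tuple   # orientation as a quaternion (x, y, z, w)

@dataclass
class PlacedObject:
    name: str         # e.g. "virtual_monitor_1" (hypothetical)
    anchor_id: str    # id of a recognized feature of the room (desk, wall, ...)
    pose: Pose        # where the object sits relative to that anchor

def save_layout(objects, path):
    """Write the placed objects to disk so they survive between sessions."""
    with open(path, "w") as f:
        json.dump([asdict(o) for o in objects], f, indent=2)

def restore_layout(path, recognized_anchors):
    """Reload saved entries whose anchors the headset has found again in the room."""
    with open(path) as f:
        saved = json.load(f)
    return [entry for entry in saved if entry["anchor_id"] in recognized_anchors]

# Hypothetical usage: a monitor floated 40 cm above the desk yesterday
# reappears today once the desk is recognized again.
layout = [PlacedObject("virtual_monitor_1", "anchor_desk",
                       Pose((0.0, 0.4, 0.0), (0.0, 0.0, 0.0, 1.0)))]
save_layout(layout, "layout.json")
restored = restore_layout("layout.json", recognized_anchors={"anchor_desk"})
```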

While we don’t have full specs for the Lightpack – the small, powerful computer that powers it – we do know it’s built to handle immersive gaming and more. The system also tracks hands, fingers, voice, head movement, and eye position. The audio, like the visuals, is spatially aware, meaning sounds stay tied to the objects they’re coming from, regardless of where you stand. Volume and direction respond dynamically as you move.
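To give a sense of what “spatially aware” audio means in practice, here is a minimal sketch (my own illustration under simplified assumptions, not the system’s actual audio engine) of how a sound’s volume and apparent direction can be derived from the listener’s position:

```python
import math

def spatialize(source_pos, listener_pos, listener_forward,
               reference_distance=1.0, max_distance=20.0):
    """
    Very small model of distance- and direction-aware sound: returns a
    gain (volume multiplier) and a left/right pan (-1..1) for one source,
    using 2D positions in meters on the horizontal plane.
    """
    dx = source_pos[0] - listener_pos[0]
    dz = source_pos[1] - listener_pos[1]
    distance = math.hypot(dx, dz)

    # Volume falls off with distance (clamped inverse-distance model).
    if distance > max_distance:
        gain = 0.0
    else:
        gain = reference_distance / max(distance, reference_distance)

    # Angle between the listener's facing direction and the source.
    facing = math.atan2(listener_forward[0], listener_forward[1])
    to_source = math.atan2(dx, dz)
    pan = math.sin(to_source - facing)   # -1 = hard left, +1 = hard right
    return gain, pan

# Example: a hovering robot 3 m ahead and 1 m to the right of the listener.
gain, pan = spatialize(source_pos=(1.0, 3.0), listener_pos=(0.0, 0.0),
                       listener_forward=(0.0, 1.0))
```

Recomputing values like these every frame, as the headset tracks your position and orientation, is what keeps a sound pinned to the object it belongs to while you walk around the room.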

This tech foundation – the Lightwear headset, the Lightpack computer, and the Control device – is just the beginning. In early 2018, Magic Leap plans to launch a creator portal and release its software development kit (SDK). From then on, development won’t just come from partners like Weta, Sigur Rós, ILMxLAB, and Twilio – it’ll be open to anyone with the tools to build.

Before I left, I stopped by Abovitz’s office to say goodbye. He insisted on walking me out himself. On the way, we paused near a staircase where a Salvador Dalí print hangs. Titled Spectacles with Holograms and Computers for Seeing Imagined Objects, the piece is, for Abovitz, symbolic of Magic Leap’s mission. He pointed out details in the art: scribbled equations, a pair of glasses, and what appears to be a lit-up brain.

To me, it looked like an image of visual information being absorbed by the brain. But to Abovitz, it represented the reverse – technology injecting visuals into the brain and then projecting them outward into the world.

In his eyes, it was Magic Leap by way of Dalí. A surrealist take on mixed reality – made real.
