I'm not talking about the fake demo ads they put everywhere; I think they actually had an impressive in-person demo using some different technology they couldn't miniaturize.
They had a few-hundred-pound cart-bound prototype called The Beast that was supposedly mind blowing to use, and that's what convinced a lot of engineers to drop everything and move to Florida to work on it. I agree pushing that technical narrative would have sounded much better.
I'm not a marketing person, but constantly alluding to something amazing without revealing details is a hack that stirs up a lot of curiosity and gets people discussing what it could be.
If you reveal the thing, that dies down (or worse, knowledgeable people realize that what you're doing isn't possible), but if you keep it secret while giving content-free little hints about it, you can keep it going longer (and maybe raise more money by letting people in on the secret?).
I have a strong dislike for this kind of thing, but that doesn't mean it's not effective.
As funny as this sounds, I must agree. If they had something cool & innovative, they could have shown the world and been upfront about the challenges of making it smaller. That could in turn have brought new talent to help them. Instead we now see a drowning company.
What's really crazy about that video is that, while it's technically possible to make something mostly like it with current hardware[0], if you have any experience with AR at all, you know that most of those UIs would be terrible to use.
[0] The FOV is accurate, given that we're looking through a narrow camera lens, but because the footage fills the video frame it gives the wrong impression that it fills the user's whole view. The graphics wouldn't be "solid", they'd be transparent, though a pre-scanned room can definitely do occlusion effects with foreground furniture. The physical gun controllers could be done, though nobody would fork out the money for them. And all the hand gestures and UI-pinning stuff could be done, though the software support on Magic Leap does not help you in the least.
Basically a meta-layer for the real world that you can interact with outside of a screen. This would let you do things like toggle a light switch from across the room by looking at it, get metadata about objects by looking at them, anchor big displays to blank walls, etc.
I think there's huge potential for this kind of interface, but I suspect the hardware isn't possible yet.
The hardware can do this; it's just that you can't get any funding for anything interesting. You're basically stuck with hobby apps and marketing demos developed via consultoware. Hobbyists can't afford the tech or overcome its lack of reach, and the consultoware shops have exactly zero imagination (I know, I worked at one).
I personally define VR vs AR as "who provides the context in which we are working: the app (VR) or the user (AR)?" A lot of extant "AR" apps don't do anything particularly interesting with your surrounding environment.
If your AR app needs me to clear out a space in my living room to give it room to drop some 3D models that maybe bounce off my walls, you've not actually made an AR app; you've just made a crappy VR app instead. Facebook could release an update to the Quest any day now that auto-scans your room to set the boundary, and then you'd have exactly the same experience in an occluded headset, but with twice the FOV and better input.
This is the thing that gets me. There are plenty of people who would be willing to design around existing constraints, just because they think the tech is cool and they want to see what they can do with it.
But the cost of entry is mid-four-figures, between the hardware itself and the required development equipment. The people making the hardware and platform obviously aren't a wide enough cross-section of the public to create the things that businesses and consumers don't yet know they want. They're shooting themselves in the foot by trying to maintain control over the platform.
My guess at the killer app for AR is airplane maintenance.
Imagine a physical checklist where areas get highlighted, arrows to direct you to the next step, and a little red icon that goes green when you're done.
I think this could shave real time (maybe a third?) off airframe downtime while preserving the very high accuracy requirement. That would save actual money.
That was the concept video, which was different from the demo, but you are correct: they did a horrible job conveying that it was a concept and not the actual product.
Did they? They raised billions of dollars to basically try to drag reality toward this demo. It feels like a bigger version of the usual venture story:
"Gimme a ton of money to run this experiment. If I'm right you'll be rich"
That's not even a demo of the actual physical product, though; that was just a video posted online that purported to show what the experience would look like (but actually did not). It's not like you would have seen that had you actually been looking through the glasses.
I think that technology was CGI:
https://hothardware.com/news/magic-leap-admits-outrageous-au...