I think Google hasn't been paying attention. The user experience for Google Glass has already been defined: automated 3D mapping combined with motion interaction is the largest missing piece. This has been observable in the VJ community for years, with many apps being developed in Unity 3D, meaning they're ready to go on Android and iOS. Near-eye transparent displays and projection work very similarly, which makes me think Google Glass and Project Tango should be part of the same project. Lastly, Google doesn't seem to understand that you need to be able to remove the 3D sensor from the display itself so that mapping and motion-sensing functions can run concurrently. More functionality is unlocked when you can reposition the display and sensor relative to each other, e.g. simulating surface touch with a pico projector.