We're glad to see EE Times delving into the future of augmented reality HUDs, especially from a processing perspective, which appears to be where this article's sources are based.
I want to respectfully suggest, however, that not all solutions leading to a very wide FOV and an appealing HUD lie with the chipmakers, although they play a central role in smoothing the quickly rendered display of imagery used for navigation and ADAS HMI.
The company leading small-form-factor A/R HUD is MVS-California, in San Jose, California. Customers using its 3D, volumetric A/R HUD are beginning to reveal their super-wide-FOV HMI plans to the world through conferences and technical meetups. I notice that none of those sources was quoted here, which is a shame, because some of them are happy to let their researchers speak to the press.
Please go back to Strategy Analytics for additional information. They have recently updated a study on automotive displays in which the state of augmented reality HUD is thoroughly analyzed and its leaders are identified.
It simply isn't possible to build a small in-dash A/R HUD by focusing exclusively, or even primarily, on better chips or SoCs, although those play a significant supporting role. The optomechanical design leads, first and foremost. Projection and optomechanics are exquisitely arcane fields in which chipmakers do not specialize.
NVIDIA and TI have every reason to be bullish on the future of large-scale, crisply defined head-up displays. But the designs that will actually fit into a car, and the components that will make such designs feasible and affordable for average drivers, will come from specialist HUD design teams, not from chipmakers whose business is spread across many competing display types. These entities have to work together, with the chipmakers following the lead of the HUD design teams.