Computer and consumer electronics users now routinely expect 3D-rendered elements in the user interfaces of LCD-equipped devices. In the years since the first 3D interfaces became available, consumers have become accustomed to seeing objects with depth that can be rotated, and menus that slide across the screen to reveal more choices. The best-known examples today include the iPad, iPod Touch and Android devices. Thousands of LCD-driven consumer devices are being designed with three-dimensional capability as the core user-interface technology.
The advantages of 3D over 2D are straightforward. By definition, a 2D image has no depth, only width and height, like a photograph. A 2D image of a car can be rotated, scaled and moved around the screen (translated) in two dimensions ('x' and 'y'). A 3D image, by contrast, can be rotated, scaled and translated in three dimensions ('x', 'y' and 'z'). A 3D object has depth and can be viewed from any angle. This builds on the human understanding of space and objects, making the experience far more intuitive and interactive. Effective 3D images dazzle consumers, help define a product's style and value, and can convey a great deal of information.
Generating 3D images requires a sophisticated graphics display controller (GDC) which, in turn, needs a geometry unit and a texture-processing unit. Integrating these elements into a graphics engine provides optimal performance, as shown in figure 1.
Fig 1: Graphics SoC block.
A leader in this technology, Fujitsu has been active in the embedded graphics market for more than 10 years, and in the graphics space for nearly 20, designing, developing and helping customers integrate leading-edge 2D and 3D GDCs. With that background in mind, let's review the basics of these powerful and innovative devices.
Many of the best current-generation graphics controllers can render both 2D and 3D images. In many cases, however, system designers are not taking advantage of the built-in 3D capabilities, despite the numerous advantages for the end-user. In automotive applications, for example, a driver wants to know when a tire is going flat or a light on the car has stopped working. With 2D technology, this would require a tremendous number of pre-rendered images to cover every possible angle and condition. Adding a 'door or trunk ajar' condition would require hundreds of megabytes of pre-rendered 2D images (figure 2).
Fig 2: 2D Images showing rotation (hundreds more would be required for full rotation).
With 3D, all this and more can be done in several hundred kilobytes of image and geometry data (figure 3).
Fig 3: 3D image – single object can be rotated to any angle, scaled to any size and highlight any object (tires, lights, doors, etc.).
An extensive white paper continues this article with sections covering:
- How 3D objects work
- Adding additional sophisticated effects
- How 3D significantly improves 2D user interfaces
- Smaller memory footprint requirements
- Hardware-accelerated rotation, scaling and translation
- Adding or changing graphics assets 'on the fly'
- A simpler migration path to 3D
- How 3D improves the end-user experience
Download the white paper here.