The AMBA 4 specification for the connection and management of functional blocks in a system-on-chip (SoC) now features Advanced eXtensible Interface (AXI) coherency extensions (ACE) in support of multi-core computing. The ACE specification enables system-level cache coherency across clusters of multi-core processors. When planning the functional verification of such a system, these coherency extensions bring their own complex challenges, chief among them system-level cache coherency validation and cache state transition validation. Coherency validation must confirm that, at any given time, the ACE interconnect maintains cache coherency across the different ACE masters in the system. Cache state transition validation must confirm that the ACE interconnect correctly handles every cache line state transition in the ACE masters in the system. Both require a high degree of configurability and responsiveness in the stimulus generation infrastructure, as well as a robust checking mechanism for validating system-level cache coherency.
This article describes how the Universal Verification Methodology (UVM) configuration mechanism can be leveraged to optimize the configurability of the sequences. This mechanism also enables the reactive sequences to create the right stimulus for the respective CIP (Master/Slave/Interconnect) components. Because coherency has to be maintained across multiple masters, this must be enabled through the system- and sub-system-level components. By using the UVM resource mechanism and running the ACE interconnect in different modes (active/passive), cache coherency can be checked through a combination of front-door and backdoor accesses. The UVM hierarchical phasing scheme and configurable sequences can also be leveraged to model the various transitions in the system and so ensure complete verification closure. Finally, to handle such a complex system, an appropriate debug environment is described, allowing the verification engineer to debug the environment at different levels of abstraction using the base UVM infrastructure.
The ACE protocol
Cache coherency refers to the consistency of data stored in the local caches of a shared resource. When clients in a system maintain caches of a common memory resource, problems may arise from inconsistent data among the caches or main memory. This is particularly true for CPUs in a multiprocessing system. Cache coherence mechanisms are intended to manage such conflicts and maintain consistency between the caches and memory; see Figure 1.
Figure 1: Cache coherent components
The ACE protocol extends the AXI protocol and provides support for hardware-coherent caches. The ACE protocol is implemented by using a five-state cache model to define the state of any cache line in the coherent system. A cache line is defined as a cached copy of a number of sequentially byte-addressed memory locations, with the first address being aligned to the total size of the cache line. The cache line state determines what actions are required during access to that cache line. Additional signaling on the existing AXI channels enables new transactions and information to be conveyed to locations that require hardware coherency support. Additional channels enable communication with a cached master when another master is accessing an address location that may be shared.

Coherency model
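The five-state cache model can be summarized in a small sketch. The state names below follow the ACE convention (a valid line is either Unique or Shared, and either Clean or Dirty); the helper functions are purely illustrative and not part of any library:

```python
from enum import Enum

class CacheLineState(Enum):
    """The five ACE cache line states: Invalid, plus the four
    combinations of Unique/Shared and Clean/Dirty."""
    INVALID = "I"
    UNIQUE_CLEAN = "UC"   # sole copy; no write-back responsibility
    UNIQUE_DIRTY = "UD"   # sole copy; this master must write the line back
    SHARED_CLEAN = "SC"   # other copies may exist; no write-back responsibility
    SHARED_DIRTY = "SD"   # other copies may exist; this master must write the line back

def is_unique(state: CacheLineState) -> bool:
    return state in (CacheLineState.UNIQUE_CLEAN, CacheLineState.UNIQUE_DIRTY)

def is_dirty(state: CacheLineState) -> bool:
    return state in (CacheLineState.UNIQUE_DIRTY, CacheLineState.SHARED_DIRTY)

def may_write_silently(state: CacheLineState) -> bool:
    # A master may modify a line without notifying other masters
    # only when it holds the sole copy.
    return is_unique(state)

def must_write_back_on_evict(state: CacheLineState) -> bool:
    # A dirty line holds data newer than main memory, so memory must
    # be updated before the last shareable copy disappears.
    return is_dirty(state)
```

A stimulus generator for cache state transition validation effectively has to drive each line through every legal edge between these five states.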
The ACE protocol ensures that all master components observe the correct data value at any given address location, by enforcing that only one copy of the cache line exists whenever a store occurs to that location. Figure 2 shows an example of an ACE-based coherent system.
Figure 2: Cache coherent system
The masters initiate requests and often contain a cache. An interconnect connects one or more masters to one or more slaves. When a transaction requires coherency support, it is passed on to the coherency support logic within the interconnect. After each store to a location, other masters can obtain a new copy of the data for their own local cache, allowing multiple copies to exist. The interconnect can initiate “snoop” transactions to access cache lines in the master cache. There is no requirement to keep main memory up to date at all times. It is only necessary to update main memory before a copy of the memory location is no longer held in any shareable cache.
The ACE protocol enables master components to determine whether a cache line is the only copy of a particular memory location, or if there may be other copies of the same location. If a cache line is the only copy, a master component can change the value of the cache line without notifying any other master components in the system. On the other hand, if a cache line is also present in another cache, a master component must notify the other caches by using an appropriate transaction.
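This flow can be illustrated with a minimal, self-contained Python sketch of a snooping interconnect. All class and method names here are invented for the example (they are not ACE transaction or channel names), whole-line writes are assumed, and states are abbreviated I/UC/UD/SC/SD. A store snoop-invalidates all other copies so that the writer ends up holding the only (dirty) copy, and main memory is updated only when a dirty line is evicted:

```python
class Master:
    """A caching master; the cache maps addr -> (state, data)."""
    def __init__(self, name):
        self.name = name
        self.cache = {}

    def snoop(self, addr):
        """Invalidate the local copy; return the data if it was dirty."""
        state, data = self.cache.pop(addr, ("I", None))
        return data if state in ("UD", "SD") else None

class Interconnect:
    """Toy coherent interconnect connecting masters to a main memory."""
    def __init__(self, masters, memory):
        self.masters = masters
        self.memory = memory          # addr -> data

    def write(self, initiator, addr, data):
        # A store: snoop-invalidate every other copy, then the writer
        # holds the line UniqueDirty. Main memory is NOT updated yet;
        # that is deferred until the dirty line is evicted.
        for m in self.masters:
            if m is not initiator:
                m.snoop(addr)
        initiator.cache[addr] = ("UD", data)

    def read(self, initiator, addr):
        # A load: if another cache holds the line, data is supplied from
        # that cache and both holders end up in Shared states; the dirty
        # owner keeps its write-back responsibility (SD).
        for m in self.masters:
            if m is not initiator and addr in m.cache:
                state, data = m.cache[addr]
                m.cache[addr] = ("SD" if state in ("UD", "SD") else "SC", data)
                initiator.cache[addr] = ("SC", data)
                return data
        data = self.memory.get(addr)
        initiator.cache[addr] = ("UC", data)
        return data

    def evict(self, initiator, addr):
        # Remove the line from the cache; dirty data is written back.
        state, data = initiator.cache.pop(addr, ("I", None))
        if state in ("UD", "SD"):
            self.memory[addr] = data
```

Note that after a store, main memory can remain stale until the eviction, which is precisely the behavior a system-level coherency checker must distinguish from a genuine coherency violation.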
Beyond these basics, the specification also covers the granularity of coherency, access rights, cache line state updates, protocol transactions, protocol channels, and transaction flows.