EDA users place demanding requirements on the underlying compute and data storage infrastructure. In particular, many stages of the EDA process, such as regression testing and correction processing engines like Synopsys' Proteus, generate heavy peak loads that bottleneck storage systems.
For designers looking to make full use of these tools, optimize performance, and speed time to market, all parts of the infrastructure must be in balance. While processor clustering has boosted performance at the CPU level, storage I/O operations have remained constrained by a reliance on mechanical disk, leaving the I/O link in need of attention. Now, new acceleration solutions based on centralized storage caching restore I/O performance and boost overall system productivity.
EDA environments, with their demanding, data-intensive applications, often experience the pain of I/O bottlenecks. The most common I/O-constrained scenario is a large number of clients, using distributed processing, simultaneously accessing a single data set. This concurrent client access drives a heavy workload that takes its toll on storage systems due to the inherent latency of mechanical disks.
Proteus, an optical proximity correction solution from Synopsys, fits this model. Its data sheet states:
Proteus supports distributed network processing for fast turn-around time. By distributing the task among several platforms, cycle times can be shortened to meet your requirements. Excellent scalability has been achieved on more than 100 processors.
Using 100 processors in a distributed application requires a storage system capable of interacting with more than 100 processors simultaneously. To date, those storage systems have been few and far between.
In addition to the IT challenges presented by distributed processing architectures, the applications themselves can generate excessive I/O loads and development delays. EDA applications in particular are susceptible to I/O bottlenecks on three fronts: scale, random data access, and service requirements.
Scale
Applications requiring storage scalability often hit I/O bottlenecks in the following scenarios:
- Many I/O requests for shared data leading to queuing
- High number of concurrent clients and transactions
- Support for millions of files
- Very large data sets
The required scalability frequently means that a storage system relying primarily on mechanical disks will have trouble keeping up: drive head seeks and platter rotation simply take too long when I/O demands are this high.
Random data access
Random data access adds an extra degree of strain for devices, like disk drives, that operate mechanically. Because seek and rotation time outweigh transfer time, queues can build up, and solutions that focus on throughput rather than access time may not be sufficient to alleviate bottlenecks.
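Some back-of-the-envelope arithmetic shows why. The figures below are illustrative assumptions for a typical 7,200 RPM drive, not measurements from any particular product:

```python
# Back-of-the-envelope: why random I/O starves on mechanical disk.
# All figures are illustrative assumptions for a typical 7,200 RPM drive.

avg_seek_ms = 8.0                        # assumed average head seek time
rotation_ms = 0.5 * 60_000 / 7_200       # half a rotation at 7,200 RPM ~= 4.17 ms
transfer_ms = 4 / (100 * 1024) * 1000    # 4 KB block at ~100 MB/s ~= 0.04 ms

service_ms = avg_seek_ms + rotation_ms + transfer_ms
print(f"Per-request service time: {service_ms:.2f} ms")       # ~12.21 ms
print(f"Random IOPS per spindle:  {1000 / service_ms:.0f}")   # ~82 IOPS

# Mechanical positioning dominates: transfer is ~0.3% of the service time.
# A read served from DRAM takes on the order of microseconds, so a single
# memory-based cache can absorb the random-read load of many spindles.
```

Under these assumptions, a spindle delivers less than a hundred random reads per second, and once concurrent demand exceeds that rate, requests queue.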
Service requirements
For users of EDA applications, time to market and the ability to maintain schedules drive customer satisfaction and the bottom line. Severe peak loads can lead to missed service levels, causing unpredictable and consequential delays for project teams.
Introducing centralized storage caching
Centralized storage caching applies well-known caching concepts to modern data center architectures by keeping frequently accessed data in a very large central memory pool instead of on traditional mechanical disk drives. This enables high-performance data access by avoiding time-consuming disk operations and accelerates applications through dramatically decreased response times and increased data throughput.
Scalable caching appliances deliver high-capacity, high-performance cache as a shared network service to accelerate data center performance. They connect to the network via standard Gigabit Ethernet, complementing existing storage to provide very high throughput and real-time data access.
By serving the most frequently requested I/O operations from high-speed memory rather than slower mechanical disks, the cache frees applications from I/O restrictions. This has a tangible and measurable result for engineers using I/O-intensive EDA tools.
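Conceptually, the appliance behaves like a read-through cache sitting between clients and the file server. The sketch below is a minimal illustration of that idea, not the appliance's actual implementation; the `read_from_disk` helper and the capacity figure are assumptions for the example.

```python
from collections import OrderedDict

def read_from_disk(path: str) -> bytes:
    """Stand-in for a slow NFS/disk read (hypothetical helper)."""
    with open(path, "rb") as f:
        return f.read()

class ReadThroughCache:
    """Minimal LRU read-through cache: serve hot data from memory,
    falling back to disk only on a miss."""

    def __init__(self, capacity: int = 1024):
        self.capacity = capacity
        self._store: OrderedDict[str, bytes] = OrderedDict()

    def read(self, path: str) -> bytes:
        if path in self._store:                  # cache hit: memory speed
            self._store.move_to_end(path)
            return self._store[path]
        data = read_from_disk(path)              # cache miss: disk speed
        self._store[path] = data
        if len(self._store) > self.capacity:     # evict least recently used
            self._store.popitem(last=False)
        return data
```

When many clients repeatedly hit the same data set, nearly every request after the first is a cache hit, which is exactly the concurrent-access pattern described above.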
Centralized storage caching provides immediate benefits to EDA environments:
- Accelerate storage and applications
Caching frequently accessed data, or even the entire data set, in high-speed memory delivers a dramatic performance boost in throughput and, more importantly, in access time. With I/O latency reduced from milliseconds to microseconds, applications can operate more efficiently than ever before. This speeds time to market for critical components of product launches.
Further, when a large number of clients access a single data set, a centralized cache provides the foundation to deliver that I/O while keeping systems from slowing to a crawl.
- Reduce over provisioning
Rebalancing the data center with high-speed memory to complement persistent storage systems reduces over-provisioning. Before centralized caching, the conventional way to increase storage performance was to deploy more mechanical spindles; now administrators can increase I/O operations without having to support excess capacity (see the rough spindle arithmetic after this list). Fewer mechanical devices mean lower power, space, and cooling needs, reducing capital and operating costs that would otherwise compound. Simpler configurations promote baseline efficiency and let administrators focus on end-user application delivery.
- Improve Quality of Service
EDA environments encompass multiple projects that typically share the same storage resources. When multiple regression tests, software builds, or correction analyses run at once, most systems fall below acceptable service levels. Centralized caching enables consistent, robust service guarantees, leading to improved quality of service, protection from peak-load disruption, and the ability to predict and meet tight project schedules more accurately.
Additionally, the ability to instantly provision a high throughput, low latency storage volume enables rapid application deployment with minimal or no reconfiguration. EDA administrators can more easily provide new application options, and EDA users can ensure they have the latest tools.
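To make the over-provisioning point concrete, here is a rough sketch of the spindle arithmetic. The workload and per-drive figures are illustrative assumptions, not vendor specifications; the per-spindle IOPS estimate comes from the seek-and-rotation arithmetic above.

```python
# Rough spindle-count arithmetic behind over-provisioning (all figures assumed).

target_iops = 40_000       # assumed peak random-read demand from the EDA farm
iops_per_spindle = 80      # one mechanical drive under random access (see above)
drive_capacity_gb = 300
working_set_gb = 500       # hot data actually being hit at peak

spindles_for_iops = target_iops // iops_per_spindle
print(f"Spindles needed for IOPS alone: {spindles_for_iops}")             # 500
print(f"Capacity that implies: {spindles_for_iops * drive_capacity_gb:,} GB")

# Sizing for performance forces ~150 TB of disk to serve a 500 GB hot set.
# A central cache large enough to hold the working set removes that pressure,
# letting disk be sized for capacity rather than for IOPS.
```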
Centralized storage caching can be implemented seamlessly using scalable caching appliances, without disrupting current applications or infrastructure. A basic NAS/NFS implementation would follow these steps, with a quick verification sketch after the list:
- Connect a scalable caching appliance to the Gigabit Ethernet network
- Identify the data that needs to be accelerated and make that data accessible to the scalable caching appliance
- Identify which clients need the additional throughput and reduced access time
- Direct clients to the accelerated view of the data through the caching appliance
- See immediate results from a boost in throughput and reduction in access time
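As a quick way to check that last step, the sketch below times small reads at scattered offsets through two mount points. The paths are hypothetical placeholders for a direct mount of the filer and the accelerated view exported by the caching appliance.

```python
import os
import time

# Hypothetical mount points: adjust for your environment.
DIRECT_PATH = "/mnt/filer/project/layout.gds"   # straight to the NAS filer
CACHED_PATH = "/mnt/cache/project/layout.gds"   # accelerated view via appliance

def time_random_reads(path: str, reads: int = 1000, block: int = 4096) -> float:
    """Average latency of small reads at scattered offsets, in milliseconds."""
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        start = time.perf_counter()
        for i in range(reads):
            f.seek((i * 7919 * block) % max(size - block, 1))  # pseudo-random stride
            f.read(block)
        elapsed = time.perf_counter() - start
    return elapsed / reads * 1000

print(f"Direct mount: {time_random_reads(DIRECT_PATH):.3f} ms per read")
print(f"Cached view:  {time_random_reads(CACHED_PATH):.3f} ms per read")
```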
This solution maximizes the use of existing infrastructure and does not require customers to replace their storage systems. The result is reduced deployment time and risk on the way to the high throughput and low access times that demanding EDA applications require.
The rapid expansion of EDA markets has led to dramatic growth in the storage infrastructure for EDA applications and, more importantly, to demand for I/O performance on par with powerful distributed application architectures. Before centralized caching, CPU resources could not access data quickly and efficiently enough, leading to overall compute bottlenecks. Now, complementing storage systems with high-capacity, high-throughput, instant-access cache restores infrastructure balance and boosts overall application productivity.
About the Author:
Gary Orenstein is Vice President of Marketing for Gear6, Inc. He is the author of IP Storage Networking: Straight to the Core. You can email Gary at firstname.lastname@example.org