As media-rich applications continue to explode, network equipment manufacturers, service providers, and enterprises must fine-tune their network infrastructures and services to deliver the high performance and quality of experience (QoE) these applications demand. The drastic increase in bandwidth and performance associated with LTE is forever changing the way mobile services are used. In addition to enabling new broadband services, increased capacity will trigger the development of new handset features, including higher screen resolutions and better battery technologies.
LTE is bringing the mobile browsing experience to a whole new level – indistinguishable from the wired network experience – allowing greater interaction with social networking sites such as Facebook. Service providers' networks will support a variety of applications only recently seen on mobile phones, including web surfing, streaming video, peer-to-peer networking, and machine-to-machine communication, all of which consume large amounts of bandwidth for long durations.
To sustain large numbers of concurrent sessions, application delivery infrastructures must be equipped to handle connections, transactions, and tunnels while maintaining the elasticity to meet peak usage. As network deployments are challenged to run smoothly under new levels of legitimate traffic, they must also deliver the expected performance and QoE while withstanding attacks and stresses from malicious traffic. This combination of technology and service complexity, sheer network load, and growing malicious traffic has created a perfect storm for test engineers.
Test engineers need to be able to deliver their applications with confidence. Since they often have limited time and CAPEX resources to test the complex scenarios of today’s networks, they need a robust and easy-to-use application simulation and testing system to verify new service offerings. In light of today’s “hyper-active” environment, here are six critical components to look for when evaluating application test systems:
The breadth to test new technologies. A test system should not only work with your current IP services, but also keep pace with the new technologies you'll be supporting in the future. This is a good indication that the test system vendor is investing in its technology and that the system is on the leading edge. Testing is imperative before launching a new service, so your test system needs to be ahead of the market. If you're looking to roll out VoLTE or over-the-top (OTT) video services next year, your test system had better support them now.
Advanced subscriber-modeling capabilities to create realistic scenarios. It is critical that your test system emulate the real-world application traffic that will traverse your network once a service is up and running. Rather than building a lab full of expensive equipment to replicate a network, most NEMs and network operators use a test system to simulate a fully functional network, including applications and services. A thorough test system will give you the ability to accurately model specific user profiles – such as a business user, shopping user, or iPhone user – and to emulate specific activities such as Facebook logins or wall posts, user think times, video-quality bitrate throttling, cookie jars, mixed browser emulation, and device emulation with command or transaction modeling.
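As a rough sketch of what such subscriber modeling involves, the snippet below picks a simulated user's next action from a weighted activity mix and draws a random think time before the following action. The profile names, weights, and think-time ranges are hypothetical illustrations, not taken from any real test system.

```python
import random

# Hypothetical subscriber profiles: each maps user actions to a
# traffic-mix weight and defines a think-time range (seconds)
# between actions. All numbers here are illustrative assumptions.
PROFILES = {
    "business_user": {"actions": {"email": 5, "web": 3, "voip": 2},
                      "think_time": (5, 30)},
    "shopping_user": {"actions": {"web": 6, "checkout": 1, "video": 3},
                      "think_time": (2, 15)},
}

def next_action(profile_name):
    """Pick the subscriber's next action, weighted by the profile's mix."""
    profile = PROFILES[profile_name]
    actions, weights = zip(*profile["actions"].items())
    action = random.choices(actions, weights=weights)[0]
    think = random.uniform(*profile["think_time"])
    return action, think

action, think = next_action("business_user")
print(action, round(think, 1))
```

In a real test run, a scheduler would loop over thousands of such simulated subscribers, sleeping each one for its think time between actions to reproduce realistic pacing.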
Extreme scalability. Your test system must not only generate a realistic traffic mix, but also the maximum traffic your network is likely to experience. For most of today's networks, that is an unprecedented amount of bandwidth-hungry traffic, so look for strong numbers in concurrency (IPs, sessions, connections, transactions, tunnels, users), peak rates (connections, transactions, tunnels, calls per second), and throughput (clear text and encrypted). It's also important to look for high throughput figures for IPv4 and IPv6 HTTP and SSL; millions of user sessions and high connection rates; 10G line-rate video, voice, and IPsec throughput; and high tunnel rates and capacity.
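One quick way to estimate the concurrency a test system must sustain is Little's law: concurrent sessions ≈ session arrival rate × average session duration. The figures below are illustrative assumptions, not measured network numbers.

```python
# Back-of-the-envelope sizing via Little's law:
# concurrency = arrival rate x average session duration.
def required_concurrency(sessions_per_second, avg_session_seconds):
    """Estimate how many sessions are in flight at steady state."""
    return sessions_per_second * avg_session_seconds

# Hypothetical peak: 50,000 new sessions/s, 120 s average session length.
print(required_concurrency(50_000, 120))  # 6,000,000 concurrent sessions
```

A test system whose concurrency ceiling sits below this kind of estimate cannot reproduce the peak your network will actually face.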
Simple user interface (UI). Test engineers need to focus on testing applications rather than learning the complexities of the test system's user interface. That said, a test system UI should not look like it came from the 1990s. Microsoft has led the way in system UI ease of use, and your test system should be making great strides in capturing that look and feel. A familiar ribbon-based menu should launch the most frequently used capabilities with a single click, while advanced capabilities are exposed progressively, allowing an easy ramp-up for new users.
Tools to increase test engineer productivity. Look for a rich suite of pre-built quick tests based on industry standards and user requirements. A powerful, customizable test environment will empower your test engineers to produce more thorough tests faster. You should be able to construct the simulated application's configuration settings from a few high-level user-defined parameters, or take advantage of fully customized user settings that represent the real-world network to be simulated. The system should let you easily customize the packaged tests, as well as add custom tests to the test library for reuse. Test results should be collected and presented with a flexible set of functions – including real-time graphs showing test execution progress, formatted reports for detailed post-test analysis, and detailed test execution logs.
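To make the "few high-level parameters" idea concrete, here is a sketch of expanding a handful of user-defined inputs into detailed load settings. Every field name and formula is a hypothetical illustration, not any real product's configuration schema.

```python
def build_test_config(subscribers, sessions_per_subscriber, test_minutes):
    """Expand a few high-level inputs into detailed load settings."""
    total_sessions = subscribers * sessions_per_subscriber
    duration_s = test_minutes * 60
    return {
        "total_sessions": total_sessions,
        "session_rate_per_s": total_sessions / duration_s,
        "ramp_up_s": duration_s // 10,  # spend ~10% of the run ramping up
    }

# Hypothetical quick test: 100,000 subscribers over a 30-minute run.
cfg = build_test_config(subscribers=100_000, sessions_per_subscriber=3,
                        test_minutes=30)
print(cfg)
```

The value of this pattern is reuse: the same derivation logic turns any new set of high-level inputs into a consistent, repeatable test definition.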
Security without compromise. Your test system must ensure the performance of legitimate traffic without trade-offs in security effectiveness and accuracy. Security infrastructure can be a choke point in consolidated data centers, and inaccurate countermeasures cause service disruption, so be sure your security testing measures detection accuracy (limiting false positives) and verifies up-to-date attack/threat immunity by testing against the latest exploits and vulnerabilities.
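Detection accuracy and false-positive rate reduce to simple ratios over a test run's traffic counts. This sketch, using made-up counts, shows the two numbers worth reporting side by side: how many attack flows were caught, and how many legitimate flows were wrongly blocked.

```python
def score_run(attacks_sent, attacks_blocked, legit_sent, legit_blocked):
    """Return (detection rate, false-positive rate) for a security test run."""
    return attacks_blocked / attacks_sent, legit_blocked / legit_sent

# Hypothetical run: 1,000 attack flows and 50,000 legitimate flows.
det, fpr = score_run(attacks_sent=1_000, attacks_blocked=987,
                     legit_sent=50_000, legit_blocked=25)
print(f"detection {det:.1%}, false positives {fpr:.3%}")
```

A device that blocks every attack but also blocks legitimate traffic fails the test just as surely as one that lets attacks through, which is why both ratios belong in the report.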
As these rich applications become more dominant across a wide user base, current networks and infrastructures will be pushed to their limits. By using the tips listed above, test engineers will be able to keep pace with increasing subscriber growth as well as generate insightful analytics that applications can use in real time. Testing networks before they go live will help network equipment manufacturers, service providers, and enterprises maintain customer satisfaction without sacrificing revenue.
About the Author
Eddie Arrage has held senior positions in marketing, engineering, and business development in the test and measurement industry for ten years, covering security, web infrastructure, multicast, IPv6, routing, and switching. At Ixia, Eddie defines and drives marketing plans for the L2/3 business segment, which covers broadband access, mobile backhaul, core network, and data center.
I agree with kinnar. There are built-in security measures developed with each platform. The point here is that each infrastructure that the application rides on is different, and security settings on firewalls, IADs, and UTMs can vary greatly between enterprises. In order to fully test the resiliency of an application, you must test it against both good and bad traffic hitting the network to find out the ultimate quality of experience for an end user.