Cloud Testing

Cloud Testing: The Next Generation

It seems only fair that as the Internet caused the problem, it should solve it

One of the negatives of deploying an Internet-scale infrastructure and application is that until it’s put to the test, you can’t have 100 percent confidence that it will scale as expected. If you do, you probably shouldn’t. Applications and infrastructure that perform well – and correctly – at nominal scale may begin to act wonky as load increases. Dan Bartow, VP at SOASTA, says it is still often load balancing configuration errors, cropping up during testing, that impede scalability and performance under load. The choice of load balancing algorithm has a direct impact on the way in which sites and applications scale – or fail to scale – and only under stress do infrastructure and applications begin to experience problems. The last time I ran a scalability and performance test on industry load balancers, that’s exactly what happened: what appeared to be a well-behaved load balancer under normal load turned into a temper-tantrum-throwing device under heavier load. The problem? A defect deep in the code that only appeared when the device’s session table was full. Considering the capability of such devices even then, that meant millions of concurrent connections had to accumulate before the problem reared its ugly head.
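The impact of algorithm choice is easy to see in miniature. The sketch below is plain Python, not any vendor’s implementation: it contrasts round robin, which ignores server state entirely, with least connections, which steers traffic away from a server whose connections never drain – exactly the kind of behavioral difference that only shows up under sustained load.

```python
# Illustrative sketch (not a real load balancer): two common algorithms
# whose choice changes how load spreads as connections pile up.
import itertools

class RoundRobin:
    """Cycle through servers regardless of their current load."""
    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def pick(self, active):
        return next(self._cycle)

class LeastConnections:
    """Prefer the server with the fewest active connections."""
    def __init__(self, servers):
        self.servers = servers

    def pick(self, active):
        return min(self.servers, key=lambda s: active[s])

servers = ["app1", "app2", "app3"]
active = {s: 0 for s in servers}

rr = RoundRobin(servers)
print([rr.pick(active) for _ in range(4)])  # cycles app1, app2, app3, app1

# Simulate uneven session lengths: app1's connections never close.
lb = LeastConnections(servers)
for _ in range(9):
    s = lb.pick(active)
    active[s] += 1
    if s != "app1":
        active[s] -= 1  # short-lived request completes immediately

print(active)  # least-connections quickly stops routing to the stuck app1
```

Under light load the two behave almost identically; the divergence – and any defect hiding behind it – only appears once sessions accumulate.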

Today load balancers are capable of handling not millions but tens of millions of connections – scale that is difficult, if not impossible, for organizations to duplicate. Cloud computing and virtualization bring new challenges to testing the scalability of an application deployment. An application deployed in a cloud environment may be designed to auto-scale “infinitely”, which implies that testing the application and its infrastructure requires the same capability from a testing solution.

That’s no small trick. Traditionally organizations would leverage a load testing solution capable of generating enough clients and traffic to push an application and its infrastructure to the limits. But given increases in raw compute power and parallel improvements in the capacity and performance of infrastructure solutions, the cost of a solution capable of generating that kind of Internet-scale load is prohibitive.

One of our internal performance management engineers applied some math and came up with a jaw-dropping investment:

In other words, enough hardware to test a top-of-the-line ADC [application delivery controller] would set you back a staggering $3 million. It should be clear that even buying equipment to test a fairly low-end ADC would be a big-ticket item, likely costing quite a bit more than the device under test.

It seems fairly obvious that testing Internet-scale architectures is going to require Internet-scale load generation solutions – but without the Internet-scale cost. It seems only fair that if the scalability of the Internet is the cause of the problem, it should also provide the solution.


It isn’t a huge leap of logic to assume that the same operational model that provides automated Internet-scalability for applications could do the same – at a fraction of the cost – for Internet-scale testing solutions. So all you need is a cloud-deployable load generation client, a couple of cloud computing environments, and a way to control those distributed clients in such a way as to generate the scale necessary to push your application and infrastructure to its limits.
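The shape of that operational model – a controller fanning work out to distributed load-generating clients and collecting their results – can be sketched in a few lines. This is a toy, not CloudTest’s architecture: the worker function, region names, and request plan below are all hypothetical, and the workers simulate traffic rather than fire real requests, but the coordination pattern is the same one a cloud-based grid must implement.

```python
# Minimal sketch of a controller coordinating distributed load clients.
# In practice each worker would run in a different cloud region and send
# real HTTP traffic; here workers just simulate work and report results.
from concurrent.futures import ThreadPoolExecutor

def load_worker(region, n_requests):
    """Hypothetical client: pretend to issue n_requests from one region."""
    latencies = [5 + (i % 3) for i in range(n_requests)]  # fake ms values
    return region, n_requests, sum(latencies) / len(latencies)

regions = ["us-east", "eu-west", "ap-south"]       # assumed region names
plan = {r: 1000 for r in regions}                  # requests per region

# The controller's job: launch all clients, then aggregate what they report.
with ThreadPoolExecutor(max_workers=len(regions)) as pool:
    results = list(pool.map(lambda r: load_worker(r, plan[r]), regions))

total = sum(n for _, n, _ in results)
print(f"{total} requests generated across {len(results)} regions")
```

The hard part in production is everything this sketch elides: ramp schedules, clock-synchronized start times, and collecting results from clients scattered across providers – which is exactly why management, not deployment, is the bottleneck.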

Past experience distributing software load generation clients across even as few as ten servers in the same physical location says this is easier said than done. It’s not the deployment that’s the problem, it’s the management of the distributed test clients, and distribution across multiple cloud computing providers would prove a nearly insurmountable challenge for most solutions, let alone the organizations employing them. And yes, you must distribute across cloud computing providers, for several reasons, including:

  1. Location of clients matters to applications and infrastructure. Whether it’s codified location-based application logic that requires testing, or the reality that most applications are not stateless and thus require client–application instance affinity, location matters. In the case of the former it’s obvious; in the latter, not so much. But when you use a single cloud computing provider or region, all client IP addresses fall within a narrow range on the same network. That’s not representative of real traffic – it’s the inverse of the Mega-Proxy Problem suffered by load balancers in the early days of the scalable Internet. Combine a narrow range of IP addresses with client–server affinity and you’ll quickly find an apparent scalability challenge with your application – one that is unlikely to be realistic. You need diversity of location to properly test an Internet-facing application.
  2. Bandwidth. Depending on the cloud provider from which you choose to launch such a test, you may find its network to be the bottleneck. Whether the constraint is internal or external (to the backbone), launching the kind of scale necessary to stress your site or application from a single cloud provider could prove little more than that the provider has limited bandwidth (yet again debunking the belief in the infinite scalability of cloud computing) or a less-than-optimal internal network.
  3. Security alerts. A barrage of requests coming from a narrow range of IP addresses is likely to set off any security mechanisms you have in place to detect such attacks. While this is a great test of your security infrastructure (and you should consider running it if you want confidence that the protections you have in place actually work), it’s not so great for testing your application and infrastructure, because in a well-architected datacenter such activity will be detected – and stopped. While generating load from multiple sites may mitigate this problem, you may need to turn down the volume on your security infrastructure regardless.
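The affinity problem in the first point is worth seeing concretely. The sketch below is a deliberate simplification, not any real load balancer’s persistence scheme: it keys source-IP affinity on the client’s /24 network and shows how a test launched from one provider’s narrow address range lands every virtual client on the same server, while Internet-like address diversity spreads the load.

```python
# Sketch of source-IP affinity skew: a simplified persistence function
# that keys on the client's /24 network (not a real vendor algorithm).
import ipaddress
from collections import Counter

SERVERS = 5

def server_for(ip):
    """Map a client IP to a server index via its /24 network address."""
    net = ipaddress.ip_network(f"{ip}/24", strict=False)
    return int(net.network_address) % SERVERS

# 100 "cloud" clients drawn from one provider's narrow range:
narrow = [f"10.0.0.{i}" for i in range(1, 101)]
# 100 clients spread across many networks, like real Internet traffic:
diverse = [f"10.{i}.{i % 200}.7" for i in range(1, 101)]

print(Counter(server_for(ip) for ip in narrow))   # every client hits one server
print(Counter(server_for(ip) for ip in diverse))  # load spreads across servers
```

A load test built on the narrow range would hammer one application instance while the rest sit idle – a scalability “problem” that says nothing about how the site behaves against genuinely distributed clients.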



What it comes down to is this: to properly vet the performance and scalability of Internet-facing applications and infrastructure, especially those deployed using elastic scalability, i.e. cloud, you need the same scalability, i.e. cloud.

You may recall that we explored such solutions in “To Boldly Go Where No Production Application Has Gone Before”. SOASTA, captain of the galaxy-class cloudship “CloudTest”, returns in this post as admiral of the Internet-class cloudship known as “CloudTest Grid”.

From a web browser, a performance engineer can deploy hundreds or even thousands of servers using an intuitive wizard interface. The wizard walks through a series of steps to enter locations, number of servers and other relevant parameters. The result: within minutes, servers are deployed and our customers are ready to test.

-- Unlocking the Gridlock of Testing from the Cloud

What SOASTA is doing is opening up the “cloud-based grid” the organization (and some of its trusted customers) have been using for the past year or so to load test Internet-scale applications. That means any organization will have at its fingertips – literally – the ability to launch a distributed load test against its application and supporting infrastructure, wherever it may be deployed. Point and click. Drag and drop. The ability to perform such testing easily is paramount to ensuring that auto-scaling solutions in public (and private) clouds are configured correctly under load and behave as expected – without incurring an expense that sounds more like you’re buying a cloud provider than simply testing an application deployed in one.

SOASTA’s decision to offer these capabilities to organizations is exciting, but not entirely without risk – mostly on their part. Chatting with Bartow about the announcement he was quick to note that the solution was only available to those of “lawful alignment”. In other words, those who would abide by the terms of service offered and not use the incredible power of distributed load generation for evil.

Load testing has always been restricted to large organizations that could afford the hardware, software, and staff to design and execute large-scale load tests on their applications and infrastructure. The combination of cloud computing and SOASTA’s solution means organizations across a much broader size and budget spectrum can take advantage of load testing without incurring what would likely be budget-busting costs.

In other words, there’s no excuse for not testing an application and its infrastructure to ensure correctness of architecture, of implementation, and of configuration to meet demand when it arrives.

Test early, test often, test hard.


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.