LoadRunner and Performance Center

Reaching the new frontiers of load testing


[Image: Space Load Runner.jpg]



The passion that drives load testing reminds me of the human desire to reach the outer limits of space and to keep exploring.

The “New Frontiers” program is a series of space exploration missions conducted by NASA to research several Solar System bodies (including the dwarf planet Pluto).

With this program, NASA’s goals are to:

  • Examine the "big picture" of solar system exploration today
  • Perform a broad survey of the current state of knowledge about our solar system
  • Obtain an inventory of the top-level scientific questions that should provide the focus for solar system exploration in the next decade
  • Generate a prioritized list of the most promising avenues for flight investigations and supporting ground-based activities

The modern reality of digital transformation demands that companies deliver applications faster—without sacrificing quality and performance. Hewlett Packard Enterprise is a recognized leader in the performance engineering and testing “space”, and with every new software release, we continue to solidify our position as a forerunner and innovator. In fact, I think it’s very simple to map NASA’s goals to ours:

  • Examine the “big picture” of platforms and technologies today
  • Perform (an apt word here!) a broad survey of the current state of knowledge about our performance testing space
  • Obtain an inventory of the top-level problems that should provide the focus for performance testing in the next years
  • Generate a prioritized list of the most promising technological trends for further investigation and activities.

In this blog, I shed light on the next frontiers of load testing. I will focus on three topics: Internet of Things (IoT), Chaos Engineering and Big Data.

Internet of Things (IoT)

The Internet of Things (IoT) is the network of physical devices, vehicles, buildings and other items that are embedded with electronics, software, sensors, actuators, and network connectivity, enabling these objects to collect and exchange data (source: Wikipedia). The world is becoming increasingly connected; from toothbrushes to thermostats, sensors are embedded everywhere. These sensors allow everything from security systems to appliances and other emerging connected devices to communicate. The app ecosystem is moving beyond computers, tablets and smartphones to every connected device. According to Gartner, there were around 6.4 billion IoT devices in use as of 2016, a number expected to grow to 20.8 billion by 2020. (That is roughly 3X in only four years!)




How to approach the challenges

Greater complexity. IoT devices are constantly connected to a server, and with hundreds of thousands, or even millions, of these smart devices sending traffic back, it is critical that the backend is properly load tested to ensure it can handle the load. Developers should simulate load on the servers to see how well they hold up under pressure. MQTT (MQ Telemetry Transport) is one of the most popular protocols in the IoT space, and it works well for high-latency, low-bandwidth constrained devices like smart meters. To create an IoT load scenario, you can use TruAPI, a script type supported by HPE StormRunner Load website load testing software, based on the highly scalable NodeJS runtime. A dedicated MQTT NPM module can also be installed separately and used to make MQTT calls.
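To give a feel for what such a scenario simulates, here is a minimal NodeJS sketch that generates telemetry payloads for a fleet of simulated smart meters. The device names, fields, and broker URL are illustrative assumptions, not part of any HPE product API; in a real TruAPI script, each payload would be published with the separately installed `mqtt` NPM module (shown only in comments so the sketch stays self-contained).

```javascript
// Hypothetical sketch: payload generation for a simulated smart-meter fleet.
// In a TruAPI/NodeJS script, each payload would be published via the `mqtt`
// NPM module, e.g.:
//   const client = mqtt.connect('mqtt://broker.example.com');  // assumed URL
//   client.publish('meters/telemetry', JSON.stringify(payload));

function buildTelemetry(deviceId) {
  // One reading from one simulated smart meter.
  return {
    deviceId: deviceId,
    timestamp: Date.now(),
    kwh: +(Math.random() * 5).toFixed(3), // simulated consumption reading
  };
}

// Simulate a burst of readings from many devices (payloads only; no broker needed).
function simulateBurst(deviceCount) {
  const payloads = [];
  for (let i = 0; i < deviceCount; i++) {
    payloads.push(buildTelemetry('meter-' + i));
  }
  return payloads;
}

const burst = simulateBurst(1000);
console.log('generated', burst.length, 'payloads; sample:', JSON.stringify(burst[0]));
```

Scaling `deviceCount` up and pacing the bursts is what stresses the backend in the way the paragraph above describes.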

Connectivity. To a performance tester, the IoT might seem daunting. But it is similar to testing mobile applications: devices often move around, so the network connection is not consistently reliable. We need to simulate overburdened WiFi channels, unreliable network hardware, and slow Internet connections. HPE Network Virtualization empowers you to accurately test and optimize applications for all network performance conditions. With this software you can discover and capture live network performance conditions, such as latency, packet loss, bandwidth limitation and jitter, and re-create those conditions for network performance testing.
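As a conceptual illustration (this is not HPE Network Virtualization's API), the sketch below wraps an async operation with an emulated network profile of base latency, jitter, and a packet-loss probability. All names and numbers are illustrative assumptions.

```javascript
// Conceptual sketch, NOT HPE Network Virtualization's API: degrade any async
// operation with an assumed network profile (latency, jitter, loss rate).
function withNetworkProfile(fn, profile) {
  return async function (...args) {
    if (Math.random() < profile.lossRate) {
      throw new Error('simulated packet loss');
    }
    // Base latency plus random jitter, in milliseconds.
    const delay = profile.latencyMs + Math.random() * profile.jitterMs;
    await new Promise((resolve) => setTimeout(resolve, delay));
    return fn(...args);
  };
}

// Example: a fast in-memory "request" degraded to rough 3G-like conditions.
const request = async (url) => ({ url, status: 200 });
const slowRequest = withNetworkProfile(request, {
  latencyMs: 300, jitterMs: 100, lossRate: 0.05, // illustrative values
});

slowRequest('http://example.com/api').then(
  (res) => console.log('status', res.status),
  (err) => console.log('dropped:', err.message),
);
```

Running a test script through such a wrapper lets you observe timeouts and retries under poor conditions before they show up in production.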

Performance Testing APIs. A smart device should work with a wide range of devices and web browsers to reach the largest possible audience. For example, a smart thermostat that only works with iOS won’t sell to people who use Android. Application Programming Interfaces (APIs) are designed to help a specific device communicate with other systems. Beginning with LoadRunner 12.53 load testing software, we added a simple, usable feature that enables the user to add REST API-based syntax within WEB protocol scripts.

Try HPE LoadRunner load testing software for yourself here.

In this blog you can find more information on a real IoT load testing use case.

Chaos Engineering

Since application complexity keeps increasing and failure is unavoidable, why not deliberately introduce failure to ensure your systems and processes can deal with errors? Chaos Monkey is a software tool developed by Netflix engineers to test the resiliency and recoverability of their Amazon Web Services (AWS) infrastructure. Setting Chaos Monkey loose on your infrastructure, and dealing with the aftermath, helps strengthen your app. The idea is that by scheduling Chaos Monkey to "work" during the normal business day, the team can react to issues that would otherwise happen in the middle of the night.
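The core idea can be reduced to a toy sketch (this is not Netflix's implementation): during business hours, randomly terminate one instance from a fleet and watch how the system copes. The fleet shape and instance ids below are illustrative assumptions.

```javascript
// Toy sketch of the Chaos Monkey idea (not Netflix's code): randomly
// terminate one healthy instance from a simulated fleet.
function makeFleet(size) {
  return Array.from({ length: size }, (_, i) => ({ id: 'i-' + i, alive: true }));
}

// Kill one random healthy instance; return its id (or null if none are left).
function unleashChaos(fleet) {
  const healthy = fleet.filter((inst) => inst.alive);
  if (healthy.length === 0) return null;
  const victim = healthy[Math.floor(Math.random() * healthy.length)];
  victim.alive = false;
  return victim.id;
}

const fleet = makeFleet(5);
console.log('terminated:', unleashChaos(fleet));
console.log('survivors:', fleet.filter((i) => i.alive).length); // 4 remain
```

In a real deployment the "kill" would be an actual instance termination on a schedule, and the interesting part is whether users notice anything at all.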

[Image: Chaos Monkey.png]



How to approach the challenges

The advantage of chaos engineering is that you can quickly surface issues that other testing layers cannot easily capture. This can save you a lot of downtime in the future and help you design and build fault-tolerant systems. If you are running large distributed systems in the cloud, with a variety of services and processes designed to scale up and out, injecting some chaos can be very valuable. One commonly overlooked strength of chaos engineering is its ability to find issues caused by cascading failure. You may be confident that your application still works when the database goes down, but what about an issue related to a third-party service? HPE Service Virtualization easily creates simulations of application behavior. You can model the functional, network, and performance behavior of your virtual services by using step-by-step wizards. With this capability, you don’t need to reprogram to accommodate changes in test conditions and performance needs. To facilitate performance testing of business processes that depend on unavailable services, HPE LoadRunner and HPE Performance Center integrate with HPE Service Virtualization.
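As a conceptual sketch of what service virtualization provides (this is not the output of HPE Service Virtualization's wizards), the example below builds a stand-in for an unavailable third-party service whose canned response, latency, and failure rate can be dialed in per test run. All configuration names are illustrative assumptions.

```javascript
// Conceptual sketch, NOT HPE Service Virtualization's API: a configurable
// stand-in for a third-party dependency that is unavailable during testing.
function makeVirtualService(config) {
  return async function call(input) {
    // Emulate the dependency's response time.
    await new Promise((resolve) => setTimeout(resolve, config.latencyMs));
    if (Math.random() < config.failureRate) {
      throw new Error('virtual service: simulated outage');
    }
    return { input, result: config.cannedResponse };
  };
}

// A virtual payment gateway used while the real one is offline (assumed names).
const payments = makeVirtualService({
  latencyMs: 50, failureRate: 0, cannedResponse: 'APPROVED',
});

payments({ amount: 9.99 }).then((r) => console.log('gateway said:', r.result));
```

Raising `failureRate` or `latencyMs` between runs is how such a stub doubles as a chaos tool for the cascading-failure scenario described above.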

Big Data

Big data refers to amounts of data so large that new technologies and architectures are required to capture, analyze, and extract value from them. Testing these datasets involves various tools, techniques, and frameworks. Big data relates to data creation, storage, retrieval, and analysis that is remarkable in terms of volume, variety, and velocity.

[Image: Big Data.jpg]


How to approach the challenges

Big data invariably means that enterprises must handle larger amounts of data on existing network infrastructures. This presents a huge performance and capacity challenge, particularly for the use of Apache Hadoop as a building block for big data. The Hadoop Distributed File System (HDFS) is the first building block of a Hadoop cluster, and an efficient, resilient network is a crucial part of a good cluster. The network also carries data writes, data reads, and signaling for HDFS and the MapReduce infrastructure, so the failure of a networking device affects multiple Hadoop data nodes. As a result, networks must be designed to provide redundancy, with multiple paths between compute nodes, and must be able to scale; in addition, the network must handle bursts effectively without dropping packets.

Access latencies create bottlenecks in systems in general, but especially with big data. For example, moving petabytes of data across a network in a one-to-one or one-to-many fashion requires an extremely high-bandwidth, low-latency network infrastructure for efficient communication between compute nodes. As mentioned above, with HPE Network Virtualization you can accurately test and optimize applications for all network performance conditions, recreating conditions such as latency, packet loss, bandwidth limitation and jitter for network performance testing.

As Buzz Lightyear said, we can plan to go “to infinity and beyond”…with HPE performance engineering tools!



About the Author


Gaspare Marino is the WW Product Marketing Manager for HPE LoadRunner and HPE Performance Center. He currently works with customers to facilitate the creation and management of a Performance Engineering Center of Excellence (PCoE).

