Modernize your performance testing: 6 tips for better apps
The world of application development keeps evolving at breakneck speed with respect to processes, delivery, and methodologies. But it’s not just developers who are struggling to keep up with constantly changing software: This evolution is forcing test engineers to modernize their performance testing practices—and to let go of old methodologies that can’t keep up.
Here are tips that will help your team implement modern performance testing practices—and drop outdated processes that drag down your results.
Typically, when organizations set out to embrace performance testing, they create load automation, execute load scenarios, and test a system's performance by slamming it with load.
This practice has caused performance testing and load testing to be wrongly treated as interchangeable terms. Even performance testing professionals often conflate the two, which perpetuates the bad old tradition of testing performance by running only load automations and load tests.
Today, load testing and load automation are just some of the actions in a performance testing practice. They should be among the last steps you execute, and in some situations you shouldn't run them at all.
Performance testing encompasses myriad practices and actions that must be taken as a whole. Load tests have their place, but first you need to perform other tasks, described below.
The traditional approach to performance testing doesn't address performance assurance, which encompasses all the tasks you may need to perform to ensure the best possible performance.
The best processes to assure good performance require tasks to be executed even before writing the first line of code. Some of those tasks create mechanisms in the environments, including pipelines, monitoring, and instrumentation.
Old strategies postpone automation and front-end load tests until the very last steps of the software development lifecycle, which limits the time available to complete even the usual load testing. This practice weakens performance assurance, leaving little time for corrections and resulting in massive costs when problems are detected. If rework is needed or the team must release faulty software into production, there will be a significant impact.
Think early about performance, including not only infrastructure, but also all performance implications from the requirements gathering stage to building epics, features, and tasks. Everything you implement around performance should define metrics that must pass before you mark anything as done.
Teams must define measurements such as response time on a single thread, concurrent response, the number of database connections/reads, maximum bandwidth consumed, and so on. With this performance focus, your teams—including your developers—will have performance etiquette in mind before, during, and after creating software. 
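The idea of metrics that must pass before anything is marked as done can be made machine-checkable. Below is a minimal sketch, assuming a hypothetical performance budget; the metric names and limits are illustrative, not from any specific team or tool:

```python
# A minimal sketch of machine-checkable performance criteria attached to a
# feature's definition of done. Metric names and limits are hypothetical.
PERFORMANCE_BUDGET = {
    "single_thread_response_ms": 200,   # max response time, single thread
    "concurrent_response_ms": 500,      # max response time under concurrency
    "db_connections": 10,               # max simultaneous database connections
    "bandwidth_kb": 512,                # max bandwidth consumed per request
}

def meets_budget(measured: dict) -> list:
    """Return the metrics that exceed their budget (empty list = done)."""
    return [name for name, limit in PERFORMANCE_BUDGET.items()
            if measured.get(name, float("inf")) > limit]

# Example: one metric over budget blocks the "done" state.
violations = meets_budget({
    "single_thread_response_ms": 150,
    "concurrent_response_ms": 620,
    "db_connections": 4,
    "bandwidth_kb": 300,
})
print(violations)  # ['concurrent_response_ms']
```

Because the budget is data rather than prose, the same check can run in a code review bot, a CI job, or a local pre-commit hook.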
Contrary to the old ways of thinking about the software lifecycle and QA practices, where developers were disconnected from QA efforts related to the code they created, your developers must be wholly involved in QA and performance assurance from the beginning.
The old mindset made it difficult to identify defects generated in the code and allowed those defects to reach and at times pass QA, acceptance, and performance tests and go into production. And the cost of fixing defects that make it into production is much higher than if you catch them earlier.
Modern practices suggest implementing rules for what developers deliver. One possibility is implementing telemetry, instrumentation, unit tests, and timers inside the application code and storing the performance measurements. Those actions help trigger, detect, and measure performance issues, even at the development stage, and make it easier to identify and report any problem, even before you check in any code.
It helps to have application performance measurements at every moment in the software development process. As soon as developers write code, the team should have performance measurements, which should continue until production.
Having these measurements is a drastic change from old practices, where often there was no way to measure the performance of an application and its components. Usually, no mechanisms were in place until the software reached a test environment or even the production stage. In some cases, there weren’t even any metrics in production.
Even so, performance metrics in the code are not enough. Teams must complement these with application performance management (APM) systems. An evolution of the old application performance monitoring systems, these systems provide lighter agents and a myriad of new functions to monitor and manage performance thresholds.
Teams must implement APM agents and instrumentation in every environment that the application passes through in the software lifecycle. As code passes from development environments to staging, testing, branches, and so on, your team will be able to observe and measure performance metrics and any outstanding deviation in a continuous manner.
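The continuous observation of deviations across environments can be sketched as a simple baseline comparison. This is an illustrative model only; the environment names, baseline, and tolerance are assumptions, not output from any particular APM product:

```python
# A hedged sketch of deviation detection across environments: compare each
# environment's latest response-time sample against a baseline and flag any
# that deviate beyond a tolerance. All numbers are illustrative.
BASELINE_MS = 120.0   # agreed-upon baseline response time
TOLERANCE = 0.25      # flag deviations beyond 25% of baseline

samples = {
    "dev": 118.0,
    "staging": 131.0,
    "test": 190.0,    # a regression surfaced before this stage
}

deviations = {env: ms for env, ms in samples.items()
              if abs(ms - BASELINE_MS) / BASELINE_MS > TOLERANCE}
print(deviations)  # {'test': 190.0}
```

An APM system performs the same comparison continuously and at far finer granularity, but the principle is identical: a shared baseline plus per-environment samples makes regressions visible at the stage where they appear.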
Another outdated practice is trying to automate processes for testing at the end, just before releasing code into production. This issue affects both performance automation and test automation in general. Traditionally, performance testers and QA teams often had to reverse engineer code, functions, and front ends to automate the testing of such code, which had considerable impacts on every task.
On many occasions, testers could not automate the process at all because the software bits were sealed, compiled, or inaccessible. In those cases the software went completely untested, or testers had to fall back on manual testing.
To avoid this, developers creating the code must consider the nature of the test automations used and ensure that the code can be easily triggered from those automations. They can implement calling methods and create test backdoors, test-oriented APIs, and any mechanism that allows for automated testing.
These mechanisms have multiple benefits. On the one hand, creating needed test automation for general QA and performance measurements, including load, will be easier. On the other hand, this will help the team integrate these tests and validations into continuous and automated processes that will receive those results as flags for letting the code move into production.
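A test-oriented seam can be as simple as exposing the same logic the UI reaches through a directly callable entry point, plus a small inspection hook for automation. The class and method names below are hypothetical, purely to illustrate the pattern:

```python
# A minimal sketch of a test-oriented API: automation calls the business
# logic directly instead of reverse engineering the front end. All names
# here are hypothetical.
class OrderService:
    def __init__(self):
        self._orders = {}

    def place_order(self, order_id, amount):
        """Business logic normally reached through the UI."""
        self._orders[order_id] = amount
        return amount > 0

    # Test backdoor: lets automation inspect state without a UI or
    # database round trip. Keep such hooks out of public interfaces.
    def _test_order_count(self):
        return len(self._orders)

svc = OrderService()
assert svc.place_order("A-1", 25.0)
print(svc._test_order_count())  # 1
```

Because the seam is callable code, the same entry point serves functional checks, performance timers, and load generators alike.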
In traditional practice, testing professionals used to think of performance testing just as a single load test to be executed once before launching, or at most every year if there were any changes. But these days, your solution is expected to change frequently. Performance test results become obsolete the moment you include new code or after sprint releases, making a just-once performance test a useless practice.
If you follow the best practices above, your team will efficiently and continuously measure performance at every step of the software development lifecycle and will be better able to integrate every performance automation and threshold into any platform.
Your tests will be light and highly automatable so that your team can schedule them or configure them to be triggered by code check-ins, scheduled jobs, or external events. As the automations are triggered, your teams will receive performance measurements continuously, allowing you to implement thresholds that will automatically stop new code or let it reach production.
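A threshold that automatically stops new code or lets it reach production boils down to a gate the pipeline evaluates on every run. The sketch below assumes a hypothetical p95 response-time budget; in a real pipeline the measured value would come from the triggered automation and the return value would become the job's exit code:

```python
# A hedged sketch of a pipeline gate: the measured result of a triggered
# performance automation is compared against a threshold that decides
# whether the build may proceed. The budget is illustrative.
THRESHOLD_P95_MS = 300.0

def gate(p95_response_ms: float) -> int:
    """Return a process exit code: 0 lets the pipeline continue, 1 stops it."""
    if p95_response_ms > THRESHOLD_P95_MS:
        print(f"FAIL: p95 {p95_response_ms}ms exceeds {THRESHOLD_P95_MS}ms")
        return 1
    print(f"PASS: p95 {p95_response_ms}ms within budget")
    return 0

# In a real pipeline: sys.exit(gate(measured_p95))
print(gate(250.0))  # 0 -> pipeline continues
```

Wiring the gate to a check-in trigger or scheduled job is what turns a one-off load test into the continuous measurement described above.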
Finally, your automation will be repeatable even in production, allowing the tests to run in any tier and environment of the application, together with thresholds for alerting. When your team implements all of these thresholds, they will allow for notifications and corrective triggers. In this way you will avoid having to watch everything at all times and being overloaded with uneventful measurements.
Following the same old practices for performance testing and assurance can be unproductive or even harmful to your application, so move your focus away from just automated load tests. Think early about your performance needs and risks. Involve the developers in performance-enabling tasks.
Measure performance everywhere in your code and in every environment. Make your solution easy to automate. And allow your automations to be triggered constantly and whenever changes happen.
If you do these things, you will be several steps ahead in modernizing your performance assurance efforts.
Want to know more? Drop into my talk, “Performance—Really What Is It These Days? Why Does It Matter?” on October 7, 2021, during STARWEST. Both in-person and virtual registration options are available. The conference runs October 3-8, 2021. You can also catch me on the PerfBytes Podcasting channel, where I host PerfBytes Español edition, and on my YouTube channel, Señor Performo in English.
Keep up with QA’s evolution with the World Quality Report 2020-21.
Put performance engineering into practice with these top 10 performance engineering techniques that work.
Find the tools you need with TechBeacon’s Buyer’s Guide for Selecting Software Test Automation Tools.
Discover best practices for reducing software defects with TechBeacon’s Guide.