Performance engineering is taking center stage as more consumers prefer shopping from the comfort of their couch, even for basics like pantry supplies and produce. With so much competition, it is imperative for brands to enhance the end-user experience and application performance without compromising release frequency. Release frequency has been addressed through non-conventional approaches, including extreme agile, continuous delivery, and DevOps philosophies, but those gains have been limited to functionality and the functional experience. The performance experience, the real-user experience, and the speed of application response have not seen the overhaul in design thinking they deserve, considering the demands of today's impatient, attention-starved users. In most enterprises even today, education on performance engineering is lacking, and where it exists, the measures adopted are meager. Performance engineering is mostly limited to a few load tests and almost always becomes an afterthought, making it a bottleneck in the release sequence and, when ignored, a cause of as much as 80% of page abandonment.
One thing is sure: performance testing, or load testing as some refer to it, cannot come at the end of the engineering lifecycle, right before deployment. In essence, it can no longer be an afterthought. Two things need to change fundamentally. One, a mere load test is insufficient; the goal must be to find potential areas of optimization and congestion right after the build check-in. Two, it cannot appear so late in the lifecycle. If an enterprise talks about adopting continuous delivery and being deployment ready, then performance engineering should be at the top of the agenda.
Three steps that enterprises are taking to adopt performance testing in a CI/CD pipeline:
- Early Performance Testing combined with tuning
- Load testing and Pre-release engineering
- Experience Testing post release
Since #2 is a widely adopted strategy, I am going to limit the scope of this discussion to steps 1 and 3.
Early Performance Engineering
As part of early performance testing, it is important for application development teams to leverage the expertise of performance architects or engineers to create small 10-50 user tests that can be incorporated into continuous integration. The results from such tests help identify basic performance issues that may be 10X more complicated to fix downstream; examples include optimizing stored procedures or specific SQL queries. Another element that can be measured and fixed at this juncture is the impact individual page elements have on overall application performance. These can be identified through a Node.js setup or similar tooling, which plays a critical role in understanding application performance while eliminating the dependency on underlying network speeds. The issues detected are predominantly UI related, and fixes can boost performance and page load times tremendously. The tools employed can be JMeter, Locust, Jenkins, Node.js, etc., and this approach can be implemented with near-zero overhead on project timelines.
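To make this concrete, here is a minimal sketch of what a CI performance gate might look like. It is tool-agnostic and hypothetical, not any specific vendor's feature: any load runner that can emit per-request latencies from a small 10-50 user smoke test can feed it, and the 800 ms p95 budget is an illustrative assumption you would tune per application.

```python
# Hypothetical CI gate: check the 95th-percentile latency from a small
# smoke load test against a budget and signal the pipeline to fail fast.
import math


def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples (ms)."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]


def ci_gate(latencies_ms, p95_budget_ms=800):
    """Return True when the build may proceed, False to fail the stage."""
    p95 = percentile(latencies_ms, 95)
    print(f"p95 = {p95} ms (budget {p95_budget_ms} ms)")
    return p95 <= p95_budget_ms
```

Wiring the return value to the CI step's exit code means a latency regression stops the pipeline at check-in, where it is cheapest to fix, instead of surfacing right before release.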
So, besides having an idea of what your potential pitfalls may be, you are giving all stakeholders in the value chain a heads-up. The developers get fair warning of what needs attention, the database team has a little extra time to optimize, and the Ops folks now have data they can extrapolate to determine what production needs in preparation for a release.
So, now you have overcome all your hurdles and are live in production. There were no anomalies; everything looks rosy until you get that call in the middle of peak usage: the database is failing or response times are bad. You gather your teams, throw more cores at whatever you can, and hope the problems go away. Sometimes that works; other times it doesn't. Why can't we take a better approach?
A lot of enterprises have the tools and skills to plug performance tests into their CI/CD strategy. But ask yourself this question: are your performance tests talking to you? Having performance tests that merely run and report response times is as useless as having no performance tests at all. If your goal is to be deployment ready, your tests have to be intelligent enough to talk to you and tell you a story beyond response times. Did the change to the payment gateway impact your store API? You cannot deduce this from a pure response-time or throughput standpoint.
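One simple way to start making tests "talk" is to compare this run's per-endpoint latencies against the previous run, so a change behind one API that degrades a seemingly unrelated endpoint gets called out by name. The sketch below is an illustrative assumption about how such a comparison could work, not a description of any particular framework; endpoint names and the 20% tolerance are hypothetical.

```python
# Illustrative cross-run comparison: flag endpoints whose median latency
# grew more than `tolerance` relative to the previous run's baseline.
from statistics import median


def regressions(previous, current, tolerance=0.20):
    """previous/current: dict of endpoint -> list of latency samples (ms).

    Returns {endpoint: relative_growth} for endpoints that degraded.
    """
    flagged = {}
    for endpoint, samples in current.items():
        if endpoint not in previous:
            continue  # new endpoint: nothing to compare against yet
        before, after = median(previous[endpoint]), median(samples)
        growth = (after - before) / before
        if growth > tolerance:
            flagged[endpoint] = round(growth, 2)
    return flagged
```

A report like `{"/store": 0.48}` after a payment-gateway change answers the question above directly: the store API got 48% slower, even if aggregate throughput looks healthy.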
Making your performance tests talk is no easy task, and there is no silver bullet here. It is essentially a machine learning exercise in which your tests keep learning about your IT landscape and get smarter after each run. Getting a head start on this goes beyond scripting and running tests. At Qentelli, our core Performance Research team developed a framework that breathes life into lifeless load tests and helps them get smarter. The framework is tool agnostic and abstracted at a very high level. Once implemented for a given industry or organization, its flexibility in adapting to a particular workflow, architecture, and development and build practices gives you the right head start toward becoming performant and deployment ready. As with any infrastructure, performance problems are inevitable, but a test framework that learns what went wrong last week and keeps you from getting there again is exactly what you want.
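To give a flavor of the "learning" idea — this is an illustrative sketch only, not Qentelli's framework — a harness can maintain an exponentially weighted moving average and variance per metric across runs, and flag a run that lands far outside what past runs taught it to expect:

```python
# Illustrative learning baseline: EWMA mean/variance per metric,
# flagging observations outside a k-sigma band of the learned history.
import math


class LearningBaseline:
    def __init__(self, alpha=0.3, k=3.0):
        self.alpha, self.k = alpha, k  # smoothing factor, sigma band width
        self.mean = None
        self.var = 0.0

    def observe(self, value):
        """Return True when `value` is anomalous, then learn from it."""
        if self.mean is None:
            self.mean = float(value)  # the first run seeds the model
            return False
        dev = value - self.mean
        sigma = math.sqrt(self.var)
        anomalous = sigma > 0 and abs(dev) > self.k * sigma
        # standard EWMA updates for variance and mean
        self.var = (1 - self.alpha) * (self.var + self.alpha * dev * dev)
        self.mean += self.alpha * dev
        return anomalous
```

After a handful of runs hovering around 500 ms, a 5-second result stands out against the learned band — the test now knows its own history instead of reporting a bare number each time.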
Post release, you need to enable monitoring that gives you control and lets you respond proactively to issues that arise in production. What worked in your data center under ideal conditions may not necessarily work as designed under less-than-ideal conditions.
Here are 3 things you can do:
- Proactive monitoring at the server level with well-defined thresholds
- Real-user experience simulations - An algorithm developed by Qentelli to understand key transactions in ideal and throttled conditions
- Correlate RUX and server logs to understand, analyze, and predict application behavior
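The third item can be sketched very simply: bucket real-user measurements and server-side log events into the same time windows, so a spike in user-perceived latency lines up with what the servers were doing at that moment. The sketch below is a hypothetical illustration under assumed data shapes; the field names, one-minute window, and 2-second slowness threshold are all assumptions.

```python
# Hypothetical RUX/server-log correlation: count slow real-user samples
# and server ERROR events per one-minute window, and surface windows
# where both signals fire at once.
from collections import defaultdict

WINDOW_S = 60  # one-minute correlation window (assumed)


def correlate(rux_samples, server_events, slow_ms=2000):
    """rux_samples: (unix_ts, latency_ms); server_events: (unix_ts, level)."""
    windows = defaultdict(lambda: {"slow_rux": 0, "server_errors": 0})
    for ts, latency_ms in rux_samples:
        if latency_ms > slow_ms:
            windows[ts // WINDOW_S]["slow_rux"] += 1
    for ts, level in server_events:
        if level == "ERROR":
            windows[ts // WINDOW_S]["server_errors"] += 1
    # windows where both signals overlap are the ones worth investigating
    return {w: c for w, c in windows.items()
            if c["slow_rux"] and c["server_errors"]}
```

Windows where slow user experiences and server errors coincide are where investigation (and, over time, prediction) starts; windows where only one signal fires point to network or client-side causes instead.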
One thing is clear: performance testing can no longer be about load, soak, or stress tests, or about the tools. It has to stretch beyond conventional thinking and approaches; it must be ingrained in the Continuous Delivery methodology and DevOps philosophies of enterprises.
The goal must be performance engineering with a mindset of eliminating bottlenecks proactively, upstream, along with a deliberate effort to understand and predict production application behavior.
Headquartered in Dallas, TX with global delivery teams in India, Qentelli is an Industry Thought Leader in Quality Engineering, Automated Testing, and Continuous Delivery. With high performing engineering teams working in the dedicated Innovation Group, Qentelli brings design thinking to address complex business problems and enables Continuous Delivery across Enterprise IT through automation for its global customers.