Posts Tagged: QualityEngineering

Continuous Delivery (CD) and Its Benefits

In the world of software, Continuous Delivery (CD) is a development practice that benefits every stakeholder: development, operations, testing, and business teams. CD is relevant to every organization driven by software, and today it is hard to think of a business that does not rely on software.

CD benefits people, and better adoption and implementation require changes around people. It starts with collaboration, a change in skill sets, clean hand-offs, a top-down drive for CD, adoption of the right technology, elastic environments, stronger pre-production, and automation.

Some companies are still contemplating whether to invest in CD, and some that think they are doing it still run into deployment issues and code failures with every release. If you fall into the latter camp, it is time to rethink your approach and look at the benefits every team should be getting. Read on to see how CD benefits everyone –

Developers – More efficiency and reduction in tech debt

Without the right tools and processes in place, developers spend more time fixing code issues than writing new code. They are pressed into digging through code written months back and waste time fixing it. With deadlines approaching fast, they write lower-quality, less clean code, compromising overall product quality.

CD enables a proper feedback loop, which speeds up resolving issues while they are fresh in developers’ minds instead of guessing and waiting until the end. Version control tools and processes make developers’ lives easier by tracking every change in the project so nothing gets lost. Similarly, tools for automating environment provisioning save time and effort and bake compliance and security practices in from the beginning of the development lifecycle. This benefits developers in the ways mentioned below –

  • Better end-to-end visibility to trace changes and errors
  • More time writing new, high-quality code, improving the product
  • Faster feedback loops
  • Integrated compliance and security best practices
  • Less dependency on operations

Operations – Less Firefighting, More Innovation

Traditional IT Ops is tasked with providing reliable, stable, optimized, and highly available infrastructure. At the same time, they work closely with development teams to keep environments available and performing at their peak, and to make sure rolling out new environments doesn’t hamper the stability of old ones. This forces operations into redundant work just to hit the desired SLAs. In a non-DevOps environment, applications or systems going down is often blamed on operations.

With CI/CD and DevOps, the lines between development and operations blur and everyone shares responsibility for overall application performance. In DevOps, developers can provision environments and operations can understand code, creating a team with hybrid skill sets. This gives operations time to look beyond operational issues and contribute to innovation: delivering environments quickly, testing with real users, and shifting from a cost centre to an innovation centre. Some of the benefits CD brings for operations are –

  • Leading the innovation front for digital transformation
  • Stable and highly available environments
  • More efficient operations through the removal of unnecessary waste, waiting times, and processes
  • Fewer bottlenecks and inter-team dependencies

QA – Never shipping broken code into production

The main job of the quality assurance team is to keep the software ‘ready to deploy’ every time new code is written and merged into the source repository. The aim is to accelerate deployment, but in a develop-first-test-later environment, QA becomes the bottleneck holding it back.

DevOps is fuelling faster adoption of automation across the development lifecycle. CD builds functional, performance, and security testing into the pipeline. This increases confidence in deployments and keeps the application deploy-ready at all times. With DevOps, errors are far more likely to be caught and fixed by QA before deployment, and the well-integrated, automated tests in the development cycle provide a safety net for shipping code. DevOps benefits QA teams by –

  • Integrating QA in the development process
  • Keeping applications ‘ready to deploy’
  • Detecting defects earlier and resolving them faster
  • Enabling quick rollbacks to a stable state
  • Freeing time to write new tests instead of executing redundant ones

Business – Features reach market faster

Business teams are looking for increased revenue, better customer satisfaction, reduced costs, end-to-end visibility into new launches, and data and insights to support decisions. DevOps fosters improved collaboration between development and business teams: marketing knows when the next product is coming out, sales knows what and when to up-sell and cross-sell, and customer service knows when the next feature is rolling out and how to help customers use and understand it. DevOps presents businesses with a set of benefits such as –

  • Business decides when to go live, not operational issues
  • Faster time-to-market
  • Improved customer experience
  • More time spent on innovation than on firefighting
  • Improved communication and collaboration

Integrating CD into your teams

CD and DevOps are more a cultural change than a matter of tools and technologies, and as humans, we all tend to resist change. It is crucial to make your teams aware of the benefits they bring, and of how teams that adopt DevOps run into fewer deployment issues, making everyone happier.

Whether it is a culture, process, or technology transformation, Qentelli has worked with customers to mature their CI/CD implementations and to help teams adopt them better. Our expertise in this space has helped many enterprises and smaller organizations realize and quantify the benefits they are looking for from DevOps adoption.

To learn and explore more in detail, please write to us at info@qentelli.com. Our experts will be delighted to engage with you.

About Qentelli

Headquartered in Dallas, TX with global delivery teams in India, Qentelli is an Industry Thought Leader in Quality Engineering, Automated Testing and Continuous Delivery. With high performing engineering teams working in the dedicated Innovation Group, Qentelli brings design thinking to address complex business problems and enables Continuous Delivery across Enterprise IT through automation for its global customers.

www.qentelli.com

Moving from Continuous Integration to Continuous Delivery – The “Qentelli Way” to successful DevOps

Enterprises want to innovate at a faster pace to match end-user expectations and beat the competition. Continuous Delivery and DevOps are key to achieving the pace of innovation organizations are looking for. Many organizations that have moved from traditional development models to Agile have achieved some level of Continuous Integration to support their Agile initiatives, but are still struggling with the move to the next level: true DevOps. The time from idea to released product determines how fast an organization can innovate. Read on to learn how you can take that next step and become successful at DevOps.

Failing fast at CI

If you have practiced Agile for a while, you have likely implemented some level of continuous integration: checking code into a version control system, polling for changes periodically, and running builds through a CI system.

Taking the next step from this practice towards DevOps means achieving faster CI cycles that tell you whether a build succeeded and whether its deployment to the Dev environment succeeded. While traditional CI cycles stop at running builds, and tests in some cases, the new CI cycle should run builds, execute unit and integration tests, perform code analysis, and deploy to the Dev environment. After a successful deployment to Dev, quality gates should mark the deployment as a success or a failure. All these steps should happen within minutes, so the development team gets faster feedback and can fix issues sooner.
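As a concrete illustration, the fail-fast cycle described above can be sketched as a small orchestration script. This is a minimal sketch, not a real CI configuration; the stage names and placeholder actions are assumptions to be swapped for your actual build, test, analysis, and deployment commands.

```python
def run_stage(name, action):
    """Run one pipeline stage; report and return its pass/fail status."""
    ok = action()
    print(f"{name}: {'PASS' if ok else 'FAIL'}")
    return ok

def ci_cycle(stages):
    """Execute stages in order, failing fast at the first broken quality gate."""
    for name, action in stages:
        if not run_stage(name, action):
            return False  # stop here: notify the team, skip later stages
    return True

# Placeholder actions — each lambda stands in for a real command
# (compiler, test runner, static analyzer, deployment script).
stages = [
    ("build", lambda: True),
    ("unit and integration tests", lambda: True),
    ("static code analysis", lambda: True),
    ("deploy to Dev", lambda: True),
    ("post-deploy quality gate", lambda: True),
]
```

Because every stage is a gate, a failure anywhere short-circuits the cycle and the team gets feedback within minutes instead of at the end of the pipeline.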

Branching your way to CD

As part of Continuous Integration, branching strategies were not given much importance; they were mainly used to manage code by release. To achieve Continuous Delivery, it is very important to have a proper branching strategy and to promote code between environments based on feedback from the quality gates at each stage of the pipeline. The CI system should be configured to run different jobs per branch and provide a way to auto-merge and promote code to higher environments. The key to achieving Continuous Delivery successfully is to set up the right branching strategy for your end-to-end CI/CD pipeline.
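One way to express such a branch-aware setup is a small mapping from branch patterns to the jobs the CI system should run and the environment a successful build is promoted to. This is a hedged sketch: the branch names, job names, and environments below are illustrative and not tied to any particular CI system.

```python
import fnmatch

# First matching pattern wins; order from most to least specific as needed.
BRANCH_RULES = [
    ("feature/*", {"jobs": ["build", "unit-tests"],           "promote_to": None}),
    ("develop",   {"jobs": ["build", "unit-tests", "deploy"], "promote_to": "qa"}),
    ("release/*", {"jobs": ["build", "full-tests", "deploy"], "promote_to": "staging"}),
    ("main",      {"jobs": ["build", "full-tests", "deploy"], "promote_to": "production"}),
]

def rules_for(branch):
    """Return the pipeline rules for the first branch pattern that matches."""
    for pattern, rules in BRANCH_RULES:
        if fnmatch.fnmatch(branch, pattern):
            return rules
    return {"jobs": ["build"], "promote_to": None}  # safe default: build only
```

On a green quality gate, the pipeline would auto-merge and promote the build to the environment named by `promote_to`; feature branches never promote.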

Adding Continuous Testing into the mix

People often think automated testing is the same as continuous testing. Although Agile gave test automation the utmost importance, automation has typically run a sprint behind. For Continuous Delivery, in-sprint test automation is very important, because you want to test the features you intend to release at the end of the pipeline. Continuous testing does not mean running all of the thousands of Selenium tests you may have created. It is important to build the right set of tests for the features being rolled out and to make your test automation act as a quality gate that provides feedback on your release. Leveraging parallel execution to complete test runs faster is also a key implementation strategy for continuous testing. Test automation frameworks should be designed with these considerations in mind.
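The two ideas above, selecting only the tests impacted by a change and running them in parallel, can be sketched as follows. The file-to-test mapping and the runner callable are hypothetical; a real framework would derive the mapping from coverage or dependency data rather than a hand-written table.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical mapping from source module to the tests that cover it.
TEST_MAP = {
    "checkout.py": ["test_checkout_flow", "test_address_validation"],
    "cart.py":     ["test_add_to_cart", "test_cart_totals"],
    "search.py":   ["test_search_ranking"],
}

def select_tests(changed_files):
    """Pick the de-duplicated, ordered set of tests impacted by the change set."""
    selected = set()
    for f in changed_files:
        selected.update(TEST_MAP.get(f, []))
    return sorted(selected)

def run_parallel(tests, runner, workers=4):
    """Execute tests concurrently; return {test: passed} for the quality gate."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(zip(tests, pool.map(runner, tests)))
```

The returned results dictionary is exactly the kind of signal a release quality gate can consume: any `False` blocks promotion of the build.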

Magic of metrics

Metrics have always been important in the engineering lifecycle, and for successful DevOps they matter more than ever. But it is no longer practical to comb multiple sources of data for answers: DevOps needs actionable insights, often pulling details from different data sources into a single metric. While metrics from tools such as Jira were used to make Go/No-Go decisions in Agile releases, in Continuous Delivery releases happen in an automated fashion, so quality gates determine most Go/No-Go decisions. Predictive analytics on the metrics being tracked, with appropriate notifications generated on errors, helps make Continuous Delivery successful. For your Continuous Delivery initiatives, a quality intelligence platform should be implemented instead of metrics pulled piecemeal from different tools.
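An automated Go/No-Go quality gate of the kind described can be sketched as a threshold check over metrics gathered from several tools into one place. The metric names, sources, and thresholds below are purely illustrative assumptions, not a specific platform's schema.

```python
# Each metric: (threshold kind, limit). Sources noted in comments are examples.
THRESHOLDS = {
    "test_pass_rate": ("min", 0.98),  # e.g. from the CI test report
    "code_coverage":  ("min", 0.80),  # e.g. from the coverage tool
    "open_blockers":  ("max", 0),     # e.g. from the issue tracker
    "p95_latency_ms": ("max", 500),   # e.g. from the performance test run
}

def release_gate(metrics):
    """Return (go, failures): go is True only if every threshold is met."""
    failures = []
    for name, (kind, limit) in THRESHOLDS.items():
        value = metrics.get(name)
        if value is None:
            failures.append(f"{name}: missing")  # missing data blocks the release
        elif kind == "min" and value < limit:
            failures.append(f"{name}: {value} < {limit}")
        elif kind == "max" and value > limit:
            failures.append(f"{name}: {value} > {limit}")
    return (not failures), failures
```

The failure list doubles as the notification payload: instead of a human combing dashboards, the gate reports exactly which metric blocked the release.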

Automation as the key to success

Automating key tasks in the CI/CD pipeline is important for frequent, successful releases. The automation can cover the various quality gates, merging code between branches to promote it, provisioning infrastructure, generating metrics, and even rollbacks. The key to successful DevOps is automating the most important tasks of the CI/CD pipeline.

The Qentelli way towards achieving CD for better quality and faster velocity

At Qentelli, we understand that Continuous Delivery is an essential part of the IT leadership’s arsenal, and our approach and framework put Quality Engineering and DevOps at their heart. Qentelli’s AiR tool ensures the highest quality is delivered through the CI/CD pipeline by auto-healing test scripts based on changes to the code base and by predicting code issues during performance tests based on result patterns.

Such AI-driven tools improve the overall quality of the application being deployed into production and help provide faster feedback during the various stages of the CI/CD pipeline. The ability to predict issues and to fail fast results in better overall delivery speed for organizations. AI-driven tools and predictive analytics in your CI/CD pipeline are the next big step your organization can take today to be future-ready and deliver faster with better product quality.

Our clients, and most importantly their teams, found it very effective to have a complete deployment history visible across the team. The way to the ultimate nirvana of DevOps, i.e., Continuous Deployment, starts with automating build, testing, integration, and deployment, and practicing them on repeat.

To learn and explore more about Qentelli’s AI-driven DevOps implementations, please write to us at info@qentelli.com. Our experts will be delighted to engage with you.


Poor Product Quality? See how AI can be your saviour for better quality and speed.
The DevOps Journey

Organizations implementing DevOps and Continuous Delivery practices understand that testing often and getting quicker feedback are key to a successful DevOps implementation. But as organizations seek to improve product quality and delivery speed, the ability to predict quality and performance issues before they occur would let them concentrate their efforts on improving other areas of the CI/CD pipeline. As organizations implement DevOps practices, it is important that they track the key metrics and KPIs that can be leveraged in such predictions.

Saving the day with Predictive Analytics

Qentelli has helped many organizations implement end-to-end CI/CD pipelines with automated, early performance, security, and functional testing. In our experience, production defects and performance issues caused unanticipated application downtime and rollbacks. As we started collecting metrics, logs, and data from the environments and analysing them, we were able to identify patterns and build algorithms that could predict the areas of the environment and code most likely to cause common production issues. In many instances, hours or days of production downtime were avoided because incidents were predicted before they occurred and corrective actions could be taken, all but eliminating downtime and rollbacks.

As organizations implement predictive analytics with alert and notification systems, they can improve their processes to avoid common mistakes. But achieving better quality and speed means more than predicting production issues, and AI can help in other areas of the pipeline as well.

Improving Quality and Speed with AI

Improving quality in a CI/CD pipeline means identifying failures and failing fast, and auto-healing test scripts based on application changes. AI-driven tools such as Qentelli’s AiR provide the ability to auto-heal test scripts based on changes to the code base and to predict code issues during performance tests based on result patterns. Such tools improve the overall quality of the application being deployed into production and provide faster feedback during the various stages of the CI/CD pipeline. The ability to predict issues and to fail fast results in better overall delivery speed. AI-driven tools and predictive analytics in your CI/CD pipeline are the next big step your organization can take today to be future-ready and deliver faster with better product quality.

To learn and explore more about Qentelli’s AI-driven DevOps implementations, please write to us at info@qentelli.com. Our experts will be delighted to engage with you.


Plant Quality Intelligence & Assurance Tree: Learn how Qentelli Does Regression Testing!
Regression Testing 101:

Selective retesting to detect faults introduced during modification of a system or software component, to verify that modifications have not caused unintended adverse effects, or to verify that a modified system or software component still meets its specified requirements.
“…a testing process which is applied after a program is modified.”
How often do you perform regression testing on your code base? What is the recommended frequency of regression testing? What is the correct technique of regression testing?
This article will answer all these questions!

How do you decide on the right test cases for the Regression Suite:

The objective of regression testing is to uncover functionality broken by modifications to the existing code base. Hence, it is recommended to identify core-functionality test cases from the existing Test Bank based on the factors below:

  1. The test case covers critical functionality
  2. The test case is high priority compared to other test cases in that component
  3. The test case is part of functionality directly impacted by the change performed

If the answer to all of the above is “Yes”, then the test case is a candidate for the Regression Test Suite.
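The three criteria above can be expressed as a simple filter over the test bank. This is a minimal sketch; the test-case fields and sample entries are assumptions for illustration, not a real test-management schema.

```python
def is_regression_candidate(tc, changed_components):
    """A test case joins the regression suite only if all three criteria hold."""
    return (
        tc["covers_critical_functionality"]          # criterion 1
        and tc["priority"] == "high"                 # criterion 2
        and tc["component"] in changed_components    # criterion 3
    )

def build_regression_suite(test_bank, changed_components):
    """Select candidate test case IDs from the full test bank."""
    return [tc["id"] for tc in test_bank
            if is_regression_candidate(tc, changed_components)]

# Illustrative test bank entries:
test_bank = [
    {"id": "TC-101", "covers_critical_functionality": True,
     "priority": "high", "component": "checkout"},
    {"id": "TC-102", "covers_critical_functionality": True,
     "priority": "low", "component": "checkout"},
    {"id": "TC-103", "covers_critical_functionality": False,
     "priority": "high", "component": "search"},
]
```

Given a change touching only the checkout component, only TC-101 satisfies all three criteria and enters the regression suite.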

Should I automate regression test cases:

Test often, test more! This is possible only if you have automated your regression suite.
Having said that, it is all too common for regression testing to focus solely on the success rate of tests, whether automated or manual, without fully evaluating whether this indicates the software is functioning as expected. In some instances, changes to previously fixed bugs can uncover functional problems that the earlier regression suite did not address, making exploratory testing that much more beneficial.

How much of exploratory testing should I include:

It is important to weave an exploratory testing phase into the regression testing process; this step can greatly improve the quality of the software. In many cases, a human will catch an unknown issue even where a test case indicates no such issue exists.
When developers make too many changes to the software too frequently, it often leads to instability, higher test failures, and many known and unknown defects.
For this reason, it is beneficial to perform exploratory testing before a major release. This approach usually reveals failures that structured tests may not discover, and even points to improvements in the test suite that would significantly alter the end result of the software.

Qentelli brings you an example:

Consider a simple yet common application: an online shopping portal. During the purchase process, the application validates the address provided by the user. On the surface, this validation is independent of the payment type used (credit, debit, bank transfer, etc.). However, there is a bug in the code for payment through a gift card: address validation is not performed!
Because gift cards are not commonly available in test environments, and because it was assumed that payment methods have no impact on address validation, the bug was deemed low priority. Only later, when the bug surfaced in production, was it found that the address validation code was an older version, causing another defect.
This type of “masked defect”, where one defect hides another, can cause cascading issues due to dependencies in the software design, and such issues can be problematic for the application development process. Where a defect alters behavior to such a degree that it is effectively hidden from developers, it is common for test cases to miss the issue. Hence, it is recommended to perform a combination of automated regression and exploratory testing in every major release.

What should be the frequency of Regression testing:
Regression testing is usually performed after any change to the code base. The frequency may vary from project to project based on the complexity of the software and/or the change made to the system. It is recommended to run core regression tests nightly or on alternate days, and the complete regression suite once a week. Generally speaking, the more often regression testing occurs, the more issues are discovered and resolved, and the more stable and production-ready the application becomes.

Qentelli’s Best Practices in Regression testing:
  • Automate as many test cases as possible to eliminate the effort of repetitive validation. Complement this with validation from a business expert to ensure the automation did not miss any operational scenarios
  • Prioritize and identify the test cases that are absolutely required in the final validation of the software
  • Manage your test data, and ensure you have a way of generating and cleaning test data for the suite
  • Perform exploratory testing that gives individual testers the responsibility and freedom to examine and explore the software in ways that are not documented

To learn and explore more in detail, please write to us at info@qentelli.com. Our experts will be delighted to engage with you.


Simplifying Continuous Delivery: Five “Must-Dos” to accelerate your DevOps Journey

Sanjay Jupudi, Founder and President at Qentelli, shares the practical best practices his teams adopt and implement when working on their DevOps journeys.

Here’s an infographic that briefly shows how to approach Continuous Delivery, along with an elaborated blog post to kick-start your Continuous Delivery journey.

Process First

Organizations moving towards DevOps should adopt processes that enable extreme agility, built on collaboration and automation as the cornerstones. Automation has to span beyond the traditional thinking of testing (functional and regression): an automation-first mindset must apply to every stage and every process, none more important than operations and user feedback. For instance, adopting processes such as Test-Driven Development (TDD) and Behavior-Driven Development (BDD) helps achieve in-sprint automation, and further helps in setting up quality gates whose toll-gate criteria at various stages of the pipeline include true build parameters spanning dev, test, and ops metrics. One thing we preach and practice is to rethink your definitions of Ready and Done to improve process efficiency and DevOps adoption at large.

Automate

After setting up the processes, it is important to automate each stage of the pipeline to achieve true Continuous Delivery. In our experience, automated test execution is only the start. Teams must explore areas of automation that include configuration management, environment creation and management, build promotion, and deployments, in addition to other manual activities. This automated flow must be backed by automated check gates that provide real-time insights into quality at each stage, with learnings presented to teams to accelerate subsequent iterations. Simple process-automation tweaks have helped many organizations increase their product delivery speed by over 25x.

Orchestration

To unlock the true power of automation, seamless orchestration of the various tools in the DevOps tool chain, along with specific quality check gates, helps achieve greater product quality and less production defect leakage. For some of our enterprise customers, we brought production defect leakage down from 25% to a mere 5% and reduced downtime from a couple of days to a few hours by automating and orchestrating the delivery process as part of the Continuous Delivery pipeline. Start thinking about MTTR and the other key measures that influence outcomes positively. It is imperative for teams to orchestrate analysis and predictability through automated correlation, leveraging AI, ML, and deep learning to understand, remediate, and predict patterns.

Real-time Collaboration

From large enterprises to mid-sized companies to tech start-ups, the need to break silos is paramount. It goes without saying that leaders who advocate a one-team, one-dream mindset are more successful not just at breaking the silos between teams but at increasing collaboration dramatically, and thus accelerating the efficiency and productivity gains to be had from adopting DevOps. Encouraging effective use of collaboration tools such as Slack or Microsoft Teams, including integration with the various build and deployment tools, eases DevOps adoption and satisfies the need for speed of communication in digital journeys. Disrupt with collaboration, and move beyond email communication.

Measure, Track, Learn, Improve

Breaking silos, automating key processes, and bringing everyone together are only the first steps to a successful DevOps implementation. While some might argue that Continuous Delivery is an incremental journey of continuous improvement, in our experience the best way to accelerate (and to start or increase adoption) is to be disruptive: break the traditional thinking and forklift the process and automation pieces, while letting continuous improvement play its role in maturing the processes and rapidly introducing learnings and best practices. Being able to view the status of the various stages in a single source of truth, in real time, and to take corrective actions based on predictive analytics, has been key to achieving high levels of maturity in Continuous Delivery.

The “Qentelli Way” to Continuous Delivery

Qentelli is a technology company focused on digital acceleration through automation and Continuous Delivery. We have helped a leading financial services firm, fast-food retail chains (QSRs), and a low-cost carrier in their journeys towards DevOps and digital transformation by automating their end-to-end CI/CD processes with Qentelli’s CI/CD approach, popularly known among our customers as “The Qentelli Way”. Qentelli takes a holistic approach to DevOps and CI/CD implementation, which includes removing silos and increasing collaboration between stakeholders, training existing teams, defining roles and responsibilities, and providing the technical solution. We have deep experience implementing enterprise-level CI/CD solutions, including complex systems with integrations to legacy platforms and packaged applications. Our expertise in implementing functional and non-functional test automation solutions has also helped our customers release with more confidence!

Whether you have a mature CI/CD implementation or are just getting started on the journey, our expertise in this space has helped many enterprises and smaller companies through our customized solutions.

To learn and explore more in detail, please write to us at info@qentelli.com. Our experts will be delighted to engage with you.


Real User Experience through Speed Index – Do Not Miss out on this!


Real User Experience! This article is about Real User Experience computed through the measurement of Speed Index, and Qentelli’s take on it. My statistics say that about 80% of the people reading this article will only pay attention to the words and phrases in red. 10% will scroll through the page after this line, trying to get as much information from this article with as little reading as possible. Only about 5% will read the article completely, while 5% will search for images and, finding none, will leave. Interestingly, any highlighted information is read in full, and this paragraph almost succeeds in getting 90% completion, while about 5% fall off starting at the second line.

We are a generation that suffers from chronic ADD. We want faster build cycles. We need faster machines. A 5-second delay makes us question why this universe was created in the first place. This need for speed is taking over everything in our lives. It dictates how we do business and how we retain customers. A lot has been written about how xx% of users will abandon a site if it doesn’t respond in y seconds. I am not going there again; it has already been beaten to death!

Thanks to the performance awareness raised by a myriad of vendors, bloggers, and researchers, almost every organization is becoming increasingly conscious of its applications’ and websites’ performance. From measuring server response times in the ’90s, to including browser response times in the 2000s, to implementing network impairment nowadays, the craft of measuring performance has come a really long way. The interesting point to note is that, barring a few organizations that really care and have deployed RUM solutions, the vast majority are blind to their customers’ or end users’ perception and experience of their apps’ or websites’ performance. Tools that tell you the response times at the network layer do not necessarily tell you how long things took on a device or browser. Modern websites and apps have become so complex and distributed that an average web page today is served from 20 domains, and a fair share of those domains are social plugins, ad feeds, analytics beacons, payment gateways, or personalized content feeds from a partner. What this tells us is that, though you may choose to ignore it, though you may hate the fact, the performance of your application in production is most often dictated by the third-party APIs you have in play.

If you have gotten this far, kudos! Look at the previous paragraphs in this article, the visual highlights helped the author drive his point to 90% of footfall on this article.

Page Speed Index is a measure of that visual completeness. Real user experience is a composite score that considers web page speed index and usability to show what your end users perceive and experience when they access your application. If this article were written in a monospaced black font instead of the way it is, more than 80% of its readers would bounce off even though the word count is the same. Page Speed Index addresses this very issue on your web pages: how quickly can a user engage with your web application?

Even if your web page loads in 3 seconds, exactly like that of your competitor’s, you can still lose customers if your competitor’s web page engages within the first second. One of the most useful features of PSI is that it helps you fine-tune your page to start engaging your customers within the first half second even on a 3G wireless signal.

One of the biggest concerns when looking at performance metrics is: what does this mean to my user?

Want to see your customers’ perception of your web page’s performance?

Check out Qentelli’s customized implementation of Real User Experience and Page Speed Index here. Don’t worry, it is free.

If you’re one of those who is going back to the top of this article to read it again in full, that’s the power of visual engagement. Bring it to your websites today.

Real User Experience measurement by Qentelli can be one of the steps in your performance engineering journey. It helps you understand things on the production side, the side where a majority of those that judge reside. Granted, you cannot talk to each of your customers to understand what they think, but a tool such as our Real User Experience through Page Speed Index can simplify and enhance that critical feedback cycle.

Performance Engineering – The Key to Deployment Ready Builds


Performance engineering takes center stage as more consumers prefer shopping from the comfort of their couch, even for basics like pantry supplies and produce. With so much competition, it is imperative for brands to enhance end-user experience and application performance while not compromising on release frequency. Although release frequency has been addressed through non-conventional approaches, including extreme agile, continuous delivery and DevOps philosophies, those efforts have been limited to functionality and functional experience. Performance experience, real user experience and speed of application response have not seen the overhaul in design thinking they deserve, considering the demands of our ADD generation. In most enterprises even today, education on performance engineering is lacking, and where it exists the measures adopted are meager. Performance engineering is mostly limited to a few load tests and almost always becomes an afterthought, turning into a bottleneck in the release sequence and, when ignored, the cause of about 80% of page abandonment.

One thing is sure: performance testing, or load testing as some refer to it, cannot come at the end of the engineering lifecycle, right before deployment. In essence, it can no longer be an afterthought. Two things need to change fundamentally. One, a mere load test is insufficient; the goal must be to find potential areas of optimization and congestion right after the build check-in. Two, it cannot appear so late in the lifecycle. If an enterprise talks about adopting continuous delivery and being deployment ready, then performance engineering should be at the top of its agenda.

Three steps that enterprises are taking to adopt performance testing in a CI/CD:

  1. Early Performance Testing combined with tuning
  2. Load testing and Pre-release engineering
  3. Experience Testing post release

Since #2 is a widely adopted strategy, I am going to limit the scope of this discussion to 1 and 3.

Early Performance Engineering

As part of early performance testing, application development teams should leverage the expertise of performance architects or engineers to create small 10-50 user tests that can be incorporated into continuous integration. The results from such tests help identify basic performance issues that may be 10X more expensive to fix downstream; examples include optimizing stored procedures or specific SQL queries. Another element that can be measured and fixed at this juncture is the impact page elements have on overall application performance. These can be identified by setting up Node.js or a similar local harness, which plays a critical role in understanding application performance while eliminating the dependency on underlying network speeds. The issues detected are predominantly UI related, and the fixes can boost performance and page load times tremendously. The tools employed can be JMeter, Locust, Jenkins, Node etc., and this approach can be implemented with near-zero overhead on project time.
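As a sketch of how such an early check can gate a CI build: the latency samples and the budget below are hypothetical; in practice they would come from the 10-50 user JMeter or Locust run described above, and the gate would fail the pipeline stage when the percentile exceeds the agreed budget.

```java
import java.util.Arrays;

public class PerfGate {
    // A minimal CI performance gate, assuming latencies were collected by a
    // small 10-50 virtual-user smoke run (JMeter, Locust, or a plain HTTP loop).
    // Returns the 95th-percentile latency using the nearest-rank method.
    static long p95(long[] latenciesMs) {
        long[] sorted = latenciesMs.clone();
        Arrays.sort(sorted);
        int idx = (int) Math.ceil(0.95 * sorted.length) - 1;
        return sorted[Math.max(idx, 0)];
    }

    // Fails the build when p95 exceeds the budget agreed with the performance architect.
    static boolean passesGate(long[] latenciesMs, long p95BudgetMs) {
        return p95(latenciesMs) <= p95BudgetMs;
    }

    public static void main(String[] args) {
        // Hypothetical samples (ms) from a 20-user smoke run against a staging endpoint.
        long[] samples = {120, 135, 128, 150, 142, 160, 131, 138, 145, 900};
        long budget = 500; // ms
        System.out.println("p95 = " + p95(samples) + " ms, gate passed: "
                + passesGate(samples, budget));
    }
}
```

The value of running this right after check-in is that the single 900 ms outlier surfaces while the change that caused it is still fresh, not weeks later in a pre-release load test.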

So, besides having an idea of what your potential pitfalls may be, you are giving all stakeholders in the value chain a heads-up. The developers get a fair warning of what needs attention, the DB folks have a little extra time to optimize, and the Ops folks now have data they can extrapolate to determine what is needed in production in preparation for a release.

So, now you have overcome all your hurdles and are live in production. There were no anomalies, everything looks rosy, until you get that call in the middle of peak usage: the database is failing or response times are bad. You gather your teams, throw more cores at whatever you can and hope the problems go away. Sometimes it works, other times it doesn’t. Why can’t we have a better approach to this?

A lot of enterprises have the tools and skills to plug performance tests into their CI/CD strategy. But ask yourself this question: are your performance tests talking to you? Having performance tests run and report the response times is as useless as having no performance tests at all. If your goal is to be deployment ready, your tests have to be intelligent enough to tell you a story beyond response times. Did the change to the payment gateway impact your store API? You cannot deduce this from a pure response time or throughput standpoint.

Making your performance tests talk is no easy task and there is no silver bullet here. This is more of a machine learning exercise in which your tests keep learning about your IT landscape and get smarter after each run. Getting a head start on this goes beyond scripting and running tests. At Qentelli, our core Performance Research team developed a framework that can breathe life into lifeless load tests and help them get smarter. The framework is tool agnostic and abstracted at a very high level. Once implemented for a given industry or organization, its flexibility in being tailored to a specific workflow, architecture, and development and build practices gives you the right head start in becoming performant and deployment ready. As with any infrastructure, performance problems are inevitable. But a test framework that learns what went wrong last week and keeps you from getting there again is the stuff you want.

Post release, you need to enable monitoring that gives you control and lets you be proactive about issues that may arise in production. What worked in your data center under ideal conditions may not work as designed in less-than-ideal conditions.

Here are 3 things you can do:

  1. Proactive monitoring at the server level with well-defined thresholds
  2. Real-user experience simulations, an algorithm developed by Qentelli to understand key transactions under ideal and throttled conditions
  3. Correlating RUX and server logs to understand, analyze and predict application behavior
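Step 3 above can be sketched with a simple statistical correlation. Assuming RUX page-load samples and a server-side metric (CPU here) aligned over the same intervals, a strong positive Pearson coefficient suggests the server metric explains the user-facing slowdown. Both series below are hypothetical:

```java
public class RuxCorrelation {
    // Pearson correlation coefficient between two equal-length series.
    // Near +1 means the server metric moves with user-facing load times.
    static double pearson(double[] x, double[] y) {
        int n = x.length;
        double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += x[i]; sy += y[i];
            sxx += x[i] * x[i];
            syy += y[i] * y[i];
            sxy += x[i] * y[i];
        }
        double cov = sxy - sx * sy / n;
        double vx = sxx - sx * sx / n;
        double vy = syy - sy * sy / n;
        return cov / Math.sqrt(vx * vy);
    }

    public static void main(String[] args) {
        // Hypothetical hourly samples: RUX page-load times (s) vs. server CPU (%).
        double[] rux = {1.2, 1.3, 1.1, 2.8, 3.1, 1.2};
        double[] cpu = {35, 38, 33, 85, 92, 36};
        System.out.printf("RUX vs CPU correlation = %.2f%n", pearson(rux, cpu));
    }
}
```

In a real deployment the same idea runs across many server metrics at once, so the one that correlates most strongly with the user-facing degradation becomes the first suspect.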

One thing is clear: performance testing cannot be about load, soak or stress tests, or about the tools, anymore. It has to stretch beyond conventional thinking and approaches; it must be ingrained in the Continuous Delivery methodology and DevOps philosophies of enterprises.

The goal must be performance engineering with a mindset to proactively eliminate bottlenecks upstream, and an effort must be made to understand and predict production application behavior.

Business Driven Testing – The Way to Continuous Delivery

In a fast-paced environment, it is very important to have Continuous Delivery as the theme, so you can deliver releases as customers demand and with the highest levels of quality. To get to continuous delivery, continuous development and continuous testing backed by continuous integration are the first steps. In this note, we use Business Driven Testing, a piece of the larger methodology by Qentelli called Quality Driven Development, as the key first step toward continuous development and testing. Let us now take a look at what it entails.

Meet Business Driven Testing (BDT), the Way to Leverage ATDD Principles for Continuous Delivery

BDT is a component of Quality Driven Development (QDD™ by Qentelli). Behavior-driven tests focus on high-level design, not the technical implementation or technical terminology.

BDT ensures test cases/stories are written in the Given-When-Then style, a natural-language format that is easily understandable by anyone, technical or non-technical.

BDT is supported by open-source frameworks available in many languages and on different platforms: Cucumber, JBehave, NBehave, SpecFlow, Behat and Twist.

We have to write the stories in the following format:

  • Given <An initial context>
  • When <Event Occurs>
  • Then <An outcome should be observed>

Involvement in BDT

Stakeholders or users, business analysts, testers and developers are all involved and perform their respective roles.

BDT Notation:

  • Scenarios are designed to test behavior
  • Scenarios are grouped into features
  • Each scenario consists of Given, When and Then steps

BDT Process Flow:

The business analyst identifies the features as per the customer requirements.

  • Testers use Cucumber features as test scenarios.
  • Testers write test cases as behavior-driven Cucumber scenarios (user stories).
  • Developers write code that makes the test cases pass.
  • Testers automate the test cases.
  • Code is deployed and testers execute their automated tests.
  • Bugs get fixed, and the automated tests run as regression tests.
  • When done, the customer/user accepts the software (acceptance and quality criteria met).

Advantages and Challenges:

  • Easily understandable by stakeholders and users, because the tests are written in plain text.
  • Users and business analysts can participate in the test case review process and give feedback for enhancements.
  • Behaviour-driven tests are easier to modify and hence easier to maintain.
  • BDT tools are open source, which reduces the investment required.
  • BDT can be applied to mobile testing as well.

BDT with Cucumber Framework Implementation:

When we get a business requirement or user story from the user, the user story needs to define every piece of the requirement in terms of:

  • Who is the primary stakeholder for this requirement?
  • What effect does the user want from this requirement?
  • What business value would the user get if the requirement is achieved?

Consider the example below.

As a user of Gmail, I want to find a Gmail link on the Google Search page, so that I can see the Gmail login page.

We have to write the above behaviour in the feature file format (Given-When-Then).

  • Given describes the pre-requisites for the expectation
  • When describes the actual steps to carry on the effect of the expectation
  • Then describes the verification that the expectations were met.

Feature File: Test.feature

A feature file can contain multiple scenarios, and each scenario contains steps.
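As an illustration, the Gmail story above could be expressed in Test.feature roughly like this (the exact step wording is hypothetical):

```gherkin
Feature: Gmail link on the Google Search page

  Scenario: User finds the Gmail link and reaches the login page
    Given the user is on the Google Search page
    When the user clicks the Gmail link
    Then the Gmail login page should be displayed
```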

We have to follow these steps to run the feature with Cucumber:

  1. Create a project directory containing three folders: one for feature files (features), one for the Java libraries (jars) and one for the step definition files (step_definition).
  2. Copy the Test.feature file into the features folder.
  3. Get the Cucumber jar files and copy them into the jars folder.
  4. Compile the feature file from the command line to generate the step definition method snippets. Open a command prompt, go to the project directory and run: java -cp "jars/*" cucumber.api.cli.Main -p pretty features
  5. Press Enter; Cucumber prints the step definition method snippets.

Copy the three generated methods and paste them into the StepDefinition.java file.

Prior to that, create the StepDefinition.java file as follows:

    package step_definition; // the package name must match the folder name

    import cucumber.api.java.en.Given;
    import cucumber.api.java.en.Then;
    import cucumber.api.java.en.When;
    import cucumber.api.PendingException;

    public class StepDefinition {
        // Paste the methods generated from the command line here.
    }

Note: Remove the throw new PendingException(); line from each generated method, because we are not dealing with pending steps here.

6. Compile the StepDefinition.java file with the following command; the .class file will be generated in the step_definition folder:

           javac -cp "jars/*" step_definition/StepDefinition.java

7. Run the following command to execute the feature and see the result:

            java -cp "jars/*;." cucumber.api.cli.Main -p pretty -g step_definition features/Test.feature

If we want the result in JSON format, the command is:

            java -cp "jars/*;." cucumber.api.cli.Main --plugin json -g step_definition features/Test.feature

Note: Results can be reported in many other formats; see the built-in help:

            java -cp "jars/*;." cucumber.api.cli.Main --help

At Qentelli, we have applied these principles to complex N-tier applications to bring Continuous Delivery to our customers. The next steps in the sequence are ensuring Continuous Testing covers Continuous Security and Performance Engineering, followed by Continuous Build, Deployments, Monitoring and Feedback. To learn more about Quality Driven Development, Continuous Delivery the Qentelli Way, and DevOps, reach out to us.