Metrics are a delicate subject at most enterprises I have worked with and within. The teams that report the data often see them as a fact-finding, or worse, fault-finding exercise. But think about it: how do you measure something you do not track, how do you analyze something you do not measure, and most importantly, how do you improve if you never understand what went right or wrong?
Quality Intelligence and metrics in the application engineering lifecycle are one way to keep everyone in the value chain on their toes and to help steer course when needed. Metrics, especially the predictive kind, enable IT leaders and application teams to plan better and make critical decisions proactively. Just as an athlete or a sports team needs statistics on the quality of their effort to continually improve, an enterprise needs adequate means to track, measure, and analyze metrics for continuous improvement.
This is what Quality Intelligence is all about! In an era where quality has taken center stage in how customers view your brand, it is more important than ever that your teams track the data behind the five metrics discussed here. These metrics are not listed in any particular order, but when the information from all of them is correlated, it paints IT leadership a complete picture of where things stand.
1. Productivity Metrics
Productivity metrics are crucial in more than one way: first, to understand where you stand in your current design / execution cycle, and second, to accurately estimate the effort involved in future cycles. Not all applications or projects are born equal, so a single one-size-fits-all effort-estimation formula is rather ineffective; hence the need to accurately capture burn rates, design and execution productivity, and the environment parameters that influence the teams' ability to be productive. The appetite for variance differs by enterprise, so thresholds must be assigned accordingly.
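To make this concrete, here is a minimal sketch of capturing productivity as a rate and checking it against an enterprise-defined variance threshold. The function names, baseline, and tolerance values are illustrative assumptions, not a standard.

```python
def productivity(units_completed: int, person_days: float) -> float:
    """Units (e.g., test cases designed or executed) delivered per person-day."""
    if person_days <= 0:
        raise ValueError("person_days must be positive")
    return units_completed / person_days

def within_threshold(actual: float, baseline: float, tolerance: float) -> bool:
    """Check whether actual productivity stays within the enterprise's
    appetite for variance (tolerance is a fraction, e.g. 0.25 for 25%)."""
    return abs(actual - baseline) / baseline <= tolerance

# Illustrative numbers: 120 test cases designed over 10 person-days.
design_rate = productivity(units_completed=120, person_days=10)  # 12.0 per day
print(within_threshold(design_rate, baseline=10.0, tolerance=0.25))  # True
```

Tracking rates per cycle, rather than raw totals, is what lets you reuse the data when estimating future cycles.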
2. Efficiency Metrics
Enterprises often confuse efficiency with productivity. There is a fine line between the two, or a significant one, depending on how you look at it; the point is that they need to be measured differently. Efficiency grows when a task is performed repeatedly: the more often a test suite is executed, the faster the team can complete it, technically at least! Automation is one of the key levers for improving efficiency. Consider measuring design and execution velocity, automation effort vs. manual effort reduced, defect detection efficiency, release and deployment efficiency, environment parameters, downtimes, and planned vs. actual metrics for design and execution.
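Two of the measures above can be expressed as simple calculations. This is an illustrative sketch under assumed inputs, not a prescribed formula set:

```python
def defect_detection_efficiency(found_in_qa: int, found_in_production: int) -> float:
    """Share of all known defects caught before release, as a percentage."""
    total = found_in_qa + found_in_production
    return 100.0 * found_in_qa / total if total else 0.0

def manual_effort_saved(manual_hours_per_run: float,
                        automated_hours_per_run: float,
                        runs: int) -> float:
    """Hours of manual effort displaced by automation across repeated runs."""
    return (manual_hours_per_run - automated_hours_per_run) * runs

print(defect_detection_efficiency(90, 10))     # 90.0
print(manual_effort_saved(8.0, 0.5, runs=20))  # 150.0
```

Note how the automation savings compound with the number of runs, which is exactly why efficiency grows when a task is performed repeatedly.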
3. Effectiveness Metrics
Your testing team could be spending hours on test execution, producing near-perfect results in the QA environments, even though none of the modules they are testing were touched; or, on the other hand, they may have had too little time to test, resulting in a delayed release or, worse yet, production defects. The point is not to eliminate regression tests entirely, but to optimize the effectiveness of the test effort.
Effectiveness measurement is important for understanding functional vs. regression effort, defect yield, defect density, impact analysis, coverage analytics (requirement traceability, code coverage, functional coverage), performance metrics, security analytics, and environment metrics (it is critical to ensure that the tests would pass on any environment, literally).
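As a minimal, assumed illustration, two of the effectiveness measures named above can be computed like this (the requirement IDs and code sizes are hypothetical):

```python
def defect_density(defects: int, lines_of_code: int) -> float:
    """Defects per thousand lines of code (KLOC)."""
    return defects / (lines_of_code / 1000.0)

def requirement_coverage(requirements: set, covered: set) -> float:
    """Percentage of requirements traced to at least one test."""
    return 100.0 * len(requirements & covered) / len(requirements)

print(defect_density(30, 60_000))                      # 0.5 defects per KLOC
reqs = {"R1", "R2", "R3", "R4"}
print(requirement_coverage(reqs, {"R1", "R2", "R3"}))  # 75.0
```

Correlating density with traceability coverage helps answer the question raised above: whether effort is going into the modules that actually changed.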
4. Defect Metrics
Perhaps the most important of all! Capturing defects alone is not enough. Classifying defects by where in the lifecycle they were detected is of significantly more value than simply knowing that a defect showed up. Defect metrics must include defect analysis, severity, age, where the defect originally showed up, rejection rate (which reflects the quality of the testing team), root cause analysis (RCA), distribution by module or similar, status, defects reported by automated vs. manual test efforts, regression vs. functional, and so on. Additionally, it is imperative to understand and report the cause of each defect, such as environment, data, or requirements. This list could feel overwhelming, but it is the source of truth.
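Two of these, defect age and rejection rate, can be derived from plain defect records. The record fields below are assumptions for illustration, not any specific tool's schema:

```python
from datetime import date

# Hypothetical defect records exported from a tracking system.
defects = [
    {"id": "D-1", "opened": date(2024, 1, 2), "closed": date(2024, 1, 9),  "rejected": False},
    {"id": "D-2", "opened": date(2024, 1, 5), "closed": date(2024, 1, 6),  "rejected": True},
    {"id": "D-3", "opened": date(2024, 1, 7), "closed": date(2024, 1, 21), "rejected": False},
]

# Defect age: days each defect stayed open.
ages = [(d["closed"] - d["opened"]).days for d in defects]
avg_age = sum(ages) / len(ages)

# Rejection rate: share of reported defects that were rejected.
rejection_rate = 100.0 * sum(d["rejected"] for d in defects) / len(defects)

print(round(avg_age, 2))         # mean days a defect stayed open
print(round(rejection_rate, 2))  # percentage of rejected defects
```

A rising rejection rate is a leading indicator of testing-quality issues, which is why the text above ties it to the quality of the testing team.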
5. Production Metrics
Often the best way to understand your application quality is through your production metrics; they become the scorecard not just of your IT organization but of your brand itself! The production metrics to consider include performance capacity (planned vs. actual), run time, response times, real user experience, synthetic monitoring, uptime, production defects detected proactively, and defects reported by customers.
One of the significant factors is the ability to accurately report production defects. Case in point: at a large enterprise that we interviewed and subsequently implemented metrics for, the QA / testing vendor consistently reported zero production defects for two years running. However, when we went to the source of truth, their production and issue-reporting system, we noticed about 10 critical defects that had gone unreported. The vendor's argument was, "We knew those defects would surface." This thought process is fundamentally wrong! You must ensure that all production defects are captured and reported for analysis; accurately analyzed, this could be the first step toward continuous improvement.