Insights

Hard-Won Lessons in Digital Transformation

One of the key practices we follow regularly, both with our customers and internally, is conducting Retrospectives – critically examining the original problem, the target solution and the journey so far. These retrospectives help us make course corrections and sometimes even shelve the original plan!

In this article, we present a few key lessons distilled from our Retrospectives and experiences, in the hope that they will help with your Digital Transformation plans.

Culture is the key element of Digital Transformation and Leaders own it

Although this phrase is overused, it still holds true. You revamp the technology stack, adopt DevOps tools and technologies, improve your delivery practices – and yet the needle doesn’t seem to move. That’s because, underneath all the hype, your teams haven’t changed their fundamental habits and ways of working. You can’t hold a townhall, walk through a presentation and expect the organization to shift.

Changing culture is hard, and leaders need to show the way every day – by constantly talking to the teams, understanding their challenges, providing training and investment, changing the performance and incentive structures and, most importantly, showing why the change is important for the organization and how it benefits them too!

In one case, we helped the client appoint a Chief Digital Officer (CDO) to drive the enterprise-wide Digital Transformation.

In short, culture emerges as the most important part of the Digital Transformation journey. Digital winners approach it deliberately – creating sub-units or identifying change agents to drive the transformation – while laggards leave it at an email announcing ‘We are on a Digital Journey’.

“I came to see, in my time at IBM, that culture isn’t just one aspect of the game, it is the game. In the end, an organization is nothing more than the collective capacity of its people to create value.” — Louis V. Gerstner, Jr., Former CEO of IBM

IT Strategy = Business Strategy

Old-generation IT was treated as a cost centre, especially in industries whose core business is not IT. That notion was busted when new-generation IT gave rise to business models such as Netflix, Amazon and Uber. Digital-native companies are already well equipped with digital capabilities and technologies; to compete with them, non-digital organizations must integrate new-generation IT and technologies with their legacy systems.

Increasingly, even non-IT enterprises have adopted a vision of “IT as the Platform for Delivering our Capabilities”. To achieve this vision, organizations are investing heavily in IT in the following areas:

  • Cloud-Native Applications for rapid development, frequent releases and easy maintenance
  • Increasing adoption of DevOps, Automation and Agile
  • Modernizing legacy systems
  • Piloting use cases of AI, Blockchain and IoT

One of our star clients impressed us with a business model that digitalizes the orthodontics industry through 3D printing and Digital Manufacturing, powered by cutting-edge software. In our initial conversations, they briefed us on how their Digital Business Strategy drives their IT investments. Their ground-breaking tele-dentistry platform and vertically integrated, direct-to-consumer business model provide an affordable, convenient and premium dentistry experience. Their rapid expansion in geographic presence and revenue meant they had to significantly scale up their IT application delivery and operations processes.

We helped establish a DevOps-based Enterprise Delivery Pipeline that included Product Management, Engineering, Infrastructure and Operations teams. The engineering transformation accelerated the pace at which IT could bring new markets online, resulting in a significant upsurge in revenue.

“An IT Strategy that truly aligns with Business Strategy and Goals can act as a significant force multiplier for the Business!”

Measure and Adapt

At a third company – a premier global valuation and corporate finance advisor – we were pleasantly surprised by the DevOps tools and advanced CI/CD practices the client was already using. However, the leadership was not seeing direct benefits from the improvements. A quick assessment showed that they did not have the right metrics and data to demonstrate success and find issues in the pipeline. As a result, they continued with practices that were not helping them, such as running every build through a Security Analysis product that did not provide useful results and slowed down the entire pipeline.

A comprehensive Measurement Program was designed using business metrics as the starting point and deriving engineering and operational metrics from them. The metrics were designed to measure system performance, not individual performance. A Lifecycle Intelligence Dashboard was built to provide visibility into the metrics, captured on an hourly basis, and feedback loops were built into the process through this Dashboard, which the team could drill into to find and resolve bottlenecks.

“When a measure becomes a target, it ceases to be a good measure” – Goodhart’s Law

Automate, Automate, Automate

Another client of ours was struggling with low morale, lengthy turnaround times, broken manual processes and reliance on a few superheroes. The biggest technical debt among these was in test automation: their software had hundreds of combinations for each workflow, so manual testing was becoming unmanageable and expensive.

We introduced a tried-and-tested QA Automation Framework that helps clients quickly realize the incremental benefits of DevOps through advanced automation. We augmented the framework with our AI bot to quickly identify areas of change, select only the tests that were really required and generate the necessary test data. Test environments were containerized with different configurations and tests were run in parallel, sometimes across hundreds of test agents.

DevOps requires Continuous testing to augment Continuous Planning, Continuous Development and Continuous Integration.

Transformation is not a destination, it is a journey

An industry player in Loyalty Management Solutions turned to us for help with the Digital Transformation of their Loyalty Management Platform, to improve the user experience and simplify development and customization of the platform. The engagement is a good example of how Digital Transformation is an ongoing process.

When we asked this client, “What single word best describes the business landscape in your industry?”, we got a two-word response: “Evolving and Transforming.” We built a matrix ranking their solution against its counterparts and explained that, even as they evolve and transform their platform, competitors are doing the same.

We began with a discovery phase to outline the solution, starting from the business objectives of providing a superior user experience through a self-service portal and introducing Loyalty-as-a-Service (LaaS) as a key feature. To achieve these objectives, we recommended embracing CI/CD and API-first approaches to product deployment. But as we explored more solutions in the industry, we found competitors already offering Advanced Analytics, AI, Gamification and Big Data capabilities in their products.

We learned that organizations never quite reach the goal of Digital Transformation, because new features and solutions keep emerging to challenge the status quo of existing ones. Once an organization feels it has completed the first phase of Digital Transformation, it is time to look at internal processes, the competitive landscape and customer expectations, and refine those processes to make them more efficient.

Organizations must continuously evolve their offerings and services to compete in the digital world. A customer-experience-focused strategy, the right technology partner and continuous exploration of the latest technologies are the way to evolve current digital capabilities. The goalposts of Digital Transformation keep shifting with the rapidly evolving expectations of digital customers.

Conclusion

Digital Transformation needs a combination of leadership, cultural change, a motivated workforce and technology excellence to succeed. In this article, we have shared a few stories from our experiences.

If you have stories to share, or want to create new ones with us, we are here to listen. Contact us and we can co-create success for you.

Test Automation in the age of microservices – Strategies and Challenges

Microservices have been gaining traction across industries and are poised to see an even stronger adoption rate in the years to come. Across sectors, many companies aim to achieve better enterprise agility and build systems more efficient than the traditional monolithic architecture. With organizations such as Amazon, Apple, Google and Netflix scaling their microservices, acceptance and implementation have grown, compelling other players to emulate the model.

As companies look to decouple domain-level problems, API gateways see strong adoption across sectors. An API Gateway is a reverse proxy that exposes microservices as APIs. As the name implies, it acts as a “gatekeeper” between clients and microservices. Typical features of an API Gateway include the ability to authenticate requests, enforce security policies, load balance between backend services and throttle them if necessary.
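
To make the gatekeeper role concrete, here is a minimal sketch in Python (Flask plus requests). The backend URLs, API key and limits are illustrative assumptions rather than a production design; real deployments would use a dedicated gateway product such as Kong or AWS API Gateway.

```python
# Minimal API Gateway sketch: authentication, throttling and round-robin
# load balancing in front of a hypothetical "orders" microservice.
import itertools
import time

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

BACKENDS = itertools.cycle(["http://orders-1:8000", "http://orders-2:8000"])
API_KEYS = {"demo-key"}          # accepted client credentials (illustrative)
RATE_LIMIT, WINDOW = 100, 60.0   # at most 100 requests per minute per key
hits = {}                        # api key -> timestamps of recent requests

@app.route("/orders/<path:path>", methods=["GET", "POST"])
def gateway(path):
    key = request.headers.get("X-API-Key")
    if key not in API_KEYS:                       # authenticate the request
        return jsonify(error="unauthorized"), 401
    now = time.time()
    recent = [t for t in hits.get(key, []) if now - t < WINDOW]
    if len(recent) >= RATE_LIMIT:                 # throttle if necessary
        return jsonify(error="rate limit exceeded"), 429
    hits[key] = recent + [now]
    backend = next(BACKENDS)                      # load balance across backends
    resp = requests.request(request.method, f"{backend}/{path}",
                            data=request.get_data(), timeout=5)
    return resp.content, resp.status_code         # relay the service response
```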

While microservices may seem like the perfect answer for doing away with monolithic systems, most IT landscapes cannot simply discard what exists. Microservices will need to co-exist with traditional architectures and interact with existing processes, and they must also stay in sync with compliance imperatives for best results. Simply put, with numerous other architectural patterns already deployed, doing away with the traditional system completely may lead to a new set of challenges. To tame the complexity and manage the speed and flexibility of microservices, an API strategy makes for the best solution.

Test Automation in microservices and its challenges

With microservices becoming a critical part of enterprise architectures, applications implementing them need to be tested to ensure that the services are fully functional and orchestrated as per business requirements.

As per the “Automation Testing Market by Technology, Testing Type, Service, Endpoint Interface and Region – Global Forecast to 2023” report published in 2018, the global automation testing market size is expected to grow from USD 8.52 billion in 2018 to USD 19.27 billion by 2023, at CAGR of 17.7% during the forecast period.
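
As a quick back-of-the-envelope check of those figures:

```python
# 17.7% compounded annually over the 2018-2023 forecast period should take
# the market from USD 8.52 billion to roughly USD 19.27 billion.
start, cagr, years = 8.52, 0.177, 5
print(f"{start * (1 + cagr) ** years:.2f}")   # 19.25 -- matches the report
```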

The increasing adoption of mobile devices and technologies, the growing adoption of the DevOps methodology, and the reshaping of testing by digital transformation are some of the factors driving the automation testing market. Moreover, manual tests are time-consuming and not foolproof. With most teams favoring automated tests in a CI/CD pipeline, the test strategy must take an automation-first approach. However, microservices bring certain unique challenges to the testing team – we articulate some of them from our real-life experience:

Skilled resources: The primary challenge for automation testing is the lack of skilled resources. Most organizations struggle to set up QA teams with the right skills to write automation scripts. The test automation frameworks that enterprises employ require testers who can compose test scripts in various scripting languages and frameworks.

Tracing the problem: Automated tests are tools for identifying errors, but tracing back to where the application’s business logic failed is a mammoth task. Behavior-driven development (BDD) is a possible solution: it uses a business-readable language that tells the business which requirement an automated script is testing.
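
As a sketch of that idea, here is a BDD-style test using the pytest-bdd plugin. The feature, step wording and the one-point-per-dollar rule are illustrative assumptions; the Gherkin text shown in the comment would normally live in its own loyalty.feature file.

```python
# Feature file (loyalty.feature), readable by business stakeholders:
#
#   Feature: Loyalty points
#     Scenario: Points awarded on purchase
#       Given a customer with 0 points
#       When the customer makes a purchase of $50
#       Then the customer has 50 points

from pytest_bdd import given, parsers, scenario, then, when

@scenario("loyalty.feature", "Points awarded on purchase")
def test_points_awarded():
    pass

@given("a customer with 0 points", target_fixture="customer")
def customer():
    return {"points": 0}

@when(parsers.parse("the customer makes a purchase of ${amount:d}"))
def purchase(customer, amount):
    customer["points"] += amount   # assumed rule: one point per dollar

@then(parsers.parse("the customer has {expected:d} points"))
def check_points(customer, expected):
    assert customer["points"] == expected
```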

Scaling test environments: QA teams usually don’t take scalability into consideration when introducing test automation. If you are not testing in the cloud, the challenge arises when you need to rapidly provision the differing test environments that automated testing requires, scale them up, run the tests, tear them down, and do it all again just as fast. On premises, teams typically have a limited number of environments, which means fewer tests can run at any given time. As a result, testing takes much longer.

Too many UI tests: CSS and XPath locators in the UI change often. If QA teams target attributes like these in automated tests, the result is false positives and continual maintenance as changes weaken or break the tests. Hence a bottom-up strategy is vital – one that pushes testing down to the API and unit level, where it is far more consistent.
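
A hypothetical contrast makes the point; the endpoint and locator below are invented for illustration:

```python
import requests

# Brittle: coupled to page structure, breaks on any layout change.
FRAGILE_XPATH = "/html/body/div[3]/div[2]/table/tr[1]/td[4]"

# Stable: asserts on the service contract, not the rendered DOM.
def test_order_total_via_api():
    resp = requests.get("https://api.example.com/orders/42", timeout=5)
    assert resp.status_code == 200
    assert resp.json()["total"] == 99.95
```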

Lack of transparency: Test automation can lack visibility when different teams are using different, disconnected automation strategies. For teams that work remotely, using different test automation frameworks, getting insights on the total testing quality can become challenging.

Adapting to the culture shift: Adopting test automation requires a culture shift – an evolution of behavior and thinking. Too often, team members and stakeholders throw tools at a problem, which doesn’t fix the underlying mindset.

Strategies for successful test automation

These are just some of the challenges that stand in the way of successful test automation for microservices. To overcome them and ensure that results meet expectations, the most widely recommended model is Mike Cohn’s Testing Pyramid, which takes a bottom-up approach to quantifying how much automation effort is required at each stage of the testing lifecycle.

Let’s look at each layer of the pyramid, from the bottom up.

Unit Testing: These tests are internal to each service and the largest in number. They are also the fastest and cheapest to automate.
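
For instance, a unit test exercises one piece of logic in isolation – the pricing rule below is hypothetical; with no network or database involved, hundreds of such tests run in milliseconds:

```python
import pytest

def apply_discount(price: float, pct: int) -> float:
    """Return price reduced by pct percent (the unit under test)."""
    if not 0 <= pct <= 100:
        raise ValueError("pct must be between 0 and 100")
    return round(price * (100 - pct) / 100, 2)

def test_discount_applied():
    assert apply_discount(100.0, 10) == 90.0

def test_invalid_percentage_rejected():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```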

Component Testing: Contract testing should treat each service as a black box: every service is called independently and its responses are verified. Once the results are assured, their accuracy holds over time, which also makes way for seamless additions to the existing system.
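
One minimal way to express such a black-box check, assuming a hypothetical orders service and response shape (tools like Pact formalize this as consumer-driven contracts):

```python
import requests
from jsonschema import validate   # pip install jsonschema

# The contract: what consumers rely on, independent of any UI.
ORDER_SCHEMA = {
    "type": "object",
    "required": ["id", "status", "total"],
    "properties": {
        "id": {"type": "integer"},
        "status": {"type": "string", "enum": ["open", "shipped", "closed"]},
        "total": {"type": "number"},
    },
}

def test_order_service_contract():
    resp = requests.get("http://orders.internal/orders/42", timeout=5)
    assert resp.status_code == 200
    validate(instance=resp.json(), schema=ORDER_SCHEMA)  # raises on mismatch
```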

Integration Testing: All independently tested services must be verified together. Service calls are made with integration to external services, covering both error and success cases. Integration testing ensures that the system works seamlessly and that the dependencies between services behave as expected.

End-To-End Testing/API Testing: As the name suggests, end-to-end testing verifies that the entire process flows work correctly, including all service and database integration.

Additionally, several non-functional tests require equal attention. While functional testing helps ensure that all the major functions perform smoothly, non-functional testing helps assure the reliability and security of the application.

Performance Testing: This not only evaluates the performance of the software but also ensures that response times align with the desired targets. Performance testing is often carried out as part of integration testing as well.

Load Testing: Load testing checks whether the system can sustain the load of many users accessing the application at the same time. The production load is replicated in the test environment to get accurate results.

Stress Testing: Stress testing pushes the application beyond its capabilities to observe how it reacts. Unlike load testing, where the expected maximum load is generated, stress testing generates more load than the application can manage.
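
As a simple illustration of load generation, the sketch below fires concurrent requests at a hypothetical staging endpoint and reports throughput and tail latency. Real load tests would use a dedicated tool (JMeter, Locust, k6) and replicate production traffic patterns.

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://staging.example.com/api/health"   # illustrative target
USERS, TOTAL_REQUESTS = 50, 500                  # simulated concurrency

def hit(_):
    t0 = time.perf_counter()
    try:
        ok = requests.get(URL, timeout=10).status_code == 200
    except requests.RequestException:
        ok = False
    return ok, time.perf_counter() - t0

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=USERS) as pool:
    results = list(pool.map(hit, range(TOTAL_REQUESTS)))
elapsed = time.perf_counter() - start

latencies = sorted(t for _, t in results)
p95 = latencies[int(0.95 * len(latencies))]
errors = sum(not ok for ok, _ in results)
print(f"{TOTAL_REQUESTS / elapsed:.1f} req/s, p95 {p95 * 1000:.0f} ms, "
      f"{errors} errors")
```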

Improvements in software architecture have led to fundamental changes in the way applications are designed and tested. Teams working on testing applications need to constantly educate themselves and stay informed on the latest tools and strategies. Here are some popular test automation tools that teams across organizations use to get the best results.

Hoverfly: Simulate API latency and failures.

Vagrant: Build and maintain portable virtual software development environments.

Pact: A framework for consumer-driven contract testing.

Apiary: An API documentation tool.

API Blueprint: Design and prototype APIs.

Swagger: Design and prototype APIs.

While there is a gamut of tools to choose from, there is no one-stop solution for automated testing. Testers must evaluate the available options to make sure the tool under consideration meets all the criteria for testing. Microservices are the next big thing in the IT solutions market, but adopting the technology and implementing the changes is more challenging than it appears. A skilled team equipped with the right tools is critical to ensure that a complex system architecture built on microservices delivers the functionality, scalability and performance the business needs.

Make your smart contracts really smart – A look at building intelligent smart contracts

Smart Contracts are one of the most popular applications of Blockchain technology. As Wikipedia says, a smart contract is a computer protocol intended to facilitate, verify, or enforce the negotiation or performance of a contract digitally. Smart contracts allow the performance of credible transactions without third parties. These transactions are trackable and irreversible.

Smart Contracts are not really ‘smart’, though: they are just pieces of code incorporating business logic and a rule engine to execute it. In this article we explore the limitations slowing the adoption of Smart Contracts and how AI can bridge the gap to make adoption faster.

Before moving further, let us build a common understanding of how Smart Contracts work.

How Smart Contracts work

Let’s take the example of implementing Smart Contracts in the supply chain. Logistics and freight involve a lot of paper-based contracts, which can easily be stolen or lost. Smart contracts avoid this by providing a secure, transparent digital version to all the parties involved – sender, receiver, intermediaries, customers, logistics partners, etc. – storing the business legalities and terms & conditions as code. (A toy sketch of this conditional logic follows the steps below.)

  1. Transactions – Two parties (X, the seller, and Y, the buyer), separated geographically, want to trade goods with each other. They involve a third party as a logistics partner to deliver the goods under certain conditions: date of dispatch, date of receipt, and damage or theft, if any. All these legal conditions go into the Smart Contract as code. The code executes the terms in pre-defined ways and carries none of the nuances of human language. In simple terms: if Y receives the order on the agreed date, then initiate a transaction of amount XXX to account XXX.
  2. Block – Once Y receives the order under the agreed conditions, the transaction is triggered through the Smart Contract instead of purchase orders or invoices. The blockchain receives this code in encrypted form via a distributed network of ledgers.
  3. Verification – The computers on the network verify the transaction using an agreed consensus mechanism; once more than 50% consensus is achieved, the contract confirms the transaction.
  4. Hash – Each block is time-stamped with a cryptographic hash and a reference to the previous block’s hash, removing any chance of manipulation or tampering.
  5. Execution – The agreed amount moves from Y’s account to X’s account.
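
Here is the toy sketch promised above – plain Python rather than a contract language like Solidity, with invented names and amounts – showing how the delivery condition mechanically decides where the escrowed funds go:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DeliveryContract:
    seller: str
    buyer: str
    amount: int
    deliver_by: date
    escrow: int = 0

    def fund(self, amount: int) -> None:
        self.escrow += amount   # buyer locks funds up front

    def confirm_delivery(self, delivered_on: date, damaged: bool) -> str:
        # Terms execute in a pre-defined way, with no human discretion.
        self.escrow -= self.amount
        if delivered_on <= self.deliver_by and not damaged:
            return f"release {self.amount} to {self.seller}"
        return f"refund {self.amount} to {self.buyer}"

contract = DeliveryContract(seller="X", buyer="Y",
                            amount=1000, deliver_by=date(2019, 6, 30))
contract.fund(1000)
print(contract.confirm_delivery(date(2019, 6, 28), damaged=False))
# -> release 1000 to X
```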

While the process looks simple, Smart Contracts are tough to implement and still in their infancy. As they mature, they carry potential disadvantages: security breaches, execution flaws, and immature coding practices and languages. The lack of international regulation of smart contracts further leaves the parties exposed to possible legal disputes.

Smart Contracts Adoption

Are they ready for adoption in highly distributed and large-scale organizations?

While some theories suggest that they are ready for enterprise-wide adoption and will mature with more organized effort, others advise against adopting them. The very first challenge of adopting Smart Contracts at the enterprise level is the limited talent available to code and manage them. Enterprises need all-inclusive engineering capabilities to code, manage, secure and test Smart Contracts in order to develop secure solutions. This requires heavy investment in hiring engineers with the right skills, proficient in languages like Solidity (one of the most widely used Smart Contract languages).

The second area of concern is security, in the form of end-point vulnerabilities, public- and private-key security and vendor risks associated with Blockchain. Experts believe that Smart Contracts are the most vulnerable points for security breaches, cyberattacks and technology failures. Organizations have to implement adequate testing measures to prepare for these risks before deploying Smart Contracts.

The Evolution of ‘Testing’

Over the last few decades, the art and craft of software development have followed the Darwinian principles of evolution. From being an activity to ‘check if the software works’, to ‘find out when it doesn’t work’, to the current thinking of ‘anticipate failures and prevent them’, testing has grown in importance within the application development lifecycle. Testers are no longer people who simply click buttons on a screen; they write frameworks and test scaffolds, code automated tasks, set up and manage environments, and do everything traditionally associated with developers and technical staff.

Quality Engineering is the new mantra: QA teams work side-by-side with developers in the trenches, engineering quality into the product from the start and preventing many defects rather than merely detecting them. If you would like to move to this level of maturity, get in touch with us and we’ll set up time for you with our experts to see how we can help.

Give eyes to your SDLC

With the ever-increasing complexity of modern applications, it is now important to keep an eye on all aspects of the software and infrastructure – to achieve greater agility, availability and quality, and to take immediate action on exceptions. In other words, Continuous Monitoring using an integrated dashboard is becoming a critical aspect of DevOps. But what should you monitor, and how? Can automation help? Build or buy? If buying, how do you find the right SaaS vendor? Read on to find out.

“DevOps has been around for only about a decade, but monitoring the SDLC goes back much further. We don’t have to reinvent the wheel and learn how to monitor the lifecycle. We just need to find the right workflow that fits your business, so you don’t overlook monitoring the important aspects – an oversight that often leads to unforeseen and unpleasant surprises down the road.”

What To Monitor?

Today, nearly everything across the SDLC can be monitored and reported with a monitoring strategy, a little perspective and the many tools already available in the market. Let’s look at the major aspects that need monitoring to stay alert, plan well, perform better, react faster and fix issues proactively.

Planning: Continuous Planning is where it all starts – after all, planning isn’t a one-time task anymore when Continuous Delivery is the agenda. So, just like every other activity and asset of a project, planning must be monitored too. Today, planning inputs come from user opinions, complaints and requests, competitive analysis, product vision and even operational insights. The ultimate goal of DevOps – or any other IT methodology – is to deliver business value.

What to measure?

Planned Value (PV) – Estimated cost of project activities planned/scheduled as of the reporting date

Sprint Goal Success Rate – The average number of sprints that met their goal in a defined period

Agile Velocity – The number of user stories completed by the team, on average, in previous sprints

Sprint Burndown – The number of hours remaining to complete the stories planned for the current sprint (a small derivation sketch follows this list)
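
Here is the small derivation sketch mentioned above, computing these planning metrics from sprint records (the numbers are invented):

```python
sprints = [
    {"goal_met": True,  "stories_done": 21},
    {"goal_met": False, "stories_done": 17},
    {"goal_met": True,  "stories_done": 23},
]
velocity = sum(s["stories_done"] for s in sprints) / len(sprints)
goal_rate = sum(s["goal_met"] for s in sprints) / len(sprints)
hours_planned, hours_burned = 120, 80   # current sprint

print(f"Agile velocity: {velocity:.1f} stories/sprint")   # 20.3
print(f"Sprint goal success rate: {goal_rate:.0%}")       # 67%
print(f"Sprint burndown: {hours_planned - hours_burned} hours remaining")
```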

Available Tools

Development Milestones: Development is the phase that involves the actual building of the software – writing code, designing infrastructure, applying possible automation, defining the test process, implementing security and preparing for deployment. Evidence is the most important part of this phase, and adopting the right strategy and tools makes it achievable. With frequent code changes now the norm, an efficient code manager helps developers store code for reuse and manage versions, environments and modules. Detailed development and defect statuses can be tracked through an Application Lifecycle Management (ALM) tool, and various Continuous Integration (CI) tools are available to monitor build jobs and pipelines.

What to measure?

Cycle Time – The time taken for a task to go from ‘started’ / ‘in progress’ to ‘done’

Code Coverage – The percentage of code covered by unit tests

Cumulative Flow – The status of tasks in a sprint or release to visualize bottlenecks in the process

Time to Market – The time a project takes to start serving or providing value to the users

Release Frequency – The rate of official releases being deployed to production

Available Tools

Infrastructure: Every IT department thrives on a reliable, high-performance and secure infrastructure set-up for smooth operations. Considering the need for 99%+ system availability, businesses need to invest in real-time, proactive infrastructure monitoring solutions. As the organization grows and operations spread across virtual, on-premise and cloud infrastructure, the stress on system availability, maximizing uptime and reducing errors increases.

What to measure?

MTTR & MTTF – To estimate the uptime of systems (see the worked example after this list)

Infrastructure Stability – Percentage of reduction in the number of major incidents

Velocity – To evaluate Throughput and Bandwidth
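
The worked example referenced above: MTTF (mean time to failure) and MTTR (mean time to repair) combine into expected availability, with illustrative figures.

```python
# availability = MTTF / (MTTF + MTTR)
mttf_hours = 720.0   # on average ~30 days between failures
mttr_hours = 2.0     # on average 2 hours to restore service
availability = mttf_hours / (mttf_hours + mttr_hours)
print(f"{availability:.4%}")   # 99.7230% -- short of a 99.9% target
```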

Available Tools

Application Log Output: Having a system to monitor this fundamental part of the application helps administrators and security professionals collect, analyze and correlate log data and provide actionable insights to management teams. Application logs are informational events whose data can help identify abnormal user activity, troubleshoot abrupt application crashes and detect security risks.

What to measure?

Average Response Time – The amount of time the application server takes to provide the results requested by the user (a short derivation sketch follows this list).

Error Rates – How often the application is exposed to bugs and production issues.

Count of Application Instances – To analyse the in-demand and off-peak times.

Request Rate – Understanding the traffic of the application
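
And the derivation sketch promised above, computing these metrics from raw log events (the entries are synthetic; a real deployment would use an ELK- or Splunk-style stack):

```python
log = [
    ("2019-05-01T10:00:01", 200, 120),   # (timestamp, status, millis)
    ("2019-05-01T10:00:02", 200, 340),
    ("2019-05-01T10:00:02", 500, 45),
    ("2019-05-01T10:00:03", 404, 30),
]
avg_response = sum(ms for _, _, ms in log) / len(log)
error_rate = sum(status >= 500 for _, status, _ in log) / len(log)
request_rate = len(log) / len({ts for ts, _, _ in log})   # per second

print(f"avg response {avg_response:.0f} ms, error rate {error_rate:.0%}, "
      f"request rate {request_rate:.1f} req/s")
```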

Available Tools

Application Performance: All the greatest code, tools and frameworks in the world will not help if the user cannot use the application as expected. It’s important to have a system that can quickly identify when a problem arises, find the root cause of the glitch and fix it immediately. That is only possible when response times for various requests, CPU, network and memory usage, etc. are monitored regularly.

What to measure?

Availability – Operational and functional usability of the application to fulfill user requirements.

Requests per second – The throughput handled by a system

Response Time – The time taken by a system to react to a given input.

Available Tools

Quality Assurance: Customers value bug-free software with few or no post-deployment issues. Plus, with DevOps practices, there is really no need to sacrifice quality for speed anymore. Measuring QA efforts through result as well as predictive metrics is important, not only to improve software testing frameworks but also to gain a deeper, better understanding of end-product quality.

What to measure?

Performance (Response Time) – Measuring the time the application takes to respond to a given request

Automation Test failure rate – frequency of failures during automated testing

Application Quality Index – calculating and reporting the stability of an application under testing

Available Tools

Vulnerabilities: The list of dependencies in modern applications grows day by day, and so does the list of potential security threats. The vulnerabilities that come packaged with third-party systems need to be monitored, while those that stem from poor coding and development practices need to be identified and prevented as quickly as possible. Mapping the complete application dependency tree, constantly monitoring the source code and the test, pre-production and production environments, and running a proactive monitoring system with alerts enabled can all help.

What to measure?

CVSS Score – Capturing the principal characteristics of vulnerabilities and their severity.

Coverage – A qualitative view of the assets covered and the scope of scanning practices.

Dwell time – Time that a known vulnerability exists in the user environment

MTTD – Time taken to detect a vulnerability in the system.

Error rates – Tracking bugs and production issues, a fact of life for applications.

Available Tools

Now that you have all the major aspects under monitoring with the help of these tools, how do you make sense of all the monitoring data? Whenever multiple tools are at work, the only way to make them work together is an aggregator. But,

Dashboards – Build or Buy?

That’s always a tough decision to take, isn’t it? Yet it’s not impossible to build the ultimate DevOps Monitoring Dashboard. Whether you purchase a pre-built tool or build your own solution, here is what to look for:

  • The range of tools supported – Through Plug-ins and Data Collectors
  • Ability to customize data fields for data collection
  • Powerful UI for data representation and customization of fields & views
  • Range of metrics supported to represent the business KPIs
  • Ease of installation and configuration, a.k.a. usability
  • Ongoing Maintenance

MONITOR YOUR DEVOPS PIPELINE – THE QENTELLI WAY

Make a list of the tools and activities to monitor, and identify the metrics for each tool. Don’t stop at development: cover the tools from all areas of DevOps – code building, repositories, quality assurance, testing, deployment and feedback – to get the full picture. Now correlate these metrics with the desired business value to define KPIs. This way, the monitoring console reports on all aspects and shows the current status and gaps. Of course, a well-designed dashboard is nearly useless if no data flows into it, isn’t it? So once the metrics are defined, the next step is to identify the right API or command-line interface to fetch the data.
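
For example, here is a sketch of that fetch step against a CI server’s REST API (Jenkins’ JSON API is shown; the server URL and job name are hypothetical):

```python
import requests

base = "https://ci.example.com/job/checkout-service"
builds = requests.get(
    f"{base}/api/json?tree=builds[number,result,duration]", timeout=10
).json()["builds"]

failed = [b for b in builds if b["result"] == "FAILURE"]
avg_secs = sum(b["duration"] for b in builds) / len(builds) / 1000  # ms -> s
print(f"{len(failed)}/{len(builds)} recent builds failed, "
      f"average duration {avg_secs:.0f}s")
```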

Wait – we’ve already done all the messy work and have a solution that is ready to implement (well… right after defining metrics to suit your business needs). We call it TED – The Engineering Dashboard. TED is an AI-driven machine-learning engine built on a Big Data architecture that gives you the much-needed single source of truth. It aggregates data from all your tools through a custom-built API set that integrates with nearly every tool in the DevOps pipeline. Once the data is gathered and ready to use, the correlation algorithm starts its job, bringing the UI charts and dashboards to life for consumption by engineers, chief executives and anyone in between.

Any specific question about DevOps Monitoring, the tools or TED? Write to us: info@qentelli.com

Edge Computing – The link to converge Physical Operations and Digital Businesses

Customers are obsessed with connecting everything to their devices – be it an upcoming meeting, the next turn while driving or maintaining room temperature. As the Forbes article ‘A Simple Explanation Of The Internet of Things’ puts it, IoT is the concept of connecting any device with an on/off switch to the Internet (and/or to each other) – everything from phones, coffee makers, washing machines, headphones, lamps and wearable devices to machinery and almost anything else you can think of. This sets a new rule for digital businesses: connect everything that can be connected. Edge Computing is the safest bet when building an IoT app or optimizing current products. Read on to learn the advantages of Edge Computing and how it enables Business 4.0.

IoT and Edge Computing

IoT connects everything and provides highly immersive, personalized, real-time responses to consumers. The success of an IoT implementation depends on how quickly data is processed and analyzed, and how fast results appear, without any lag.

Edge Computing architecture is an ideal choice for three reasons – Speed, Optimization and Outage Reduction.

Applications built on Edge Computing push processing away from the centralized network to locations closer to the user. The edge layer removes the latency of a round trip to the cloud for data processing, optimizing where data is collected and processed so that results are delivered at speed. Edge Computing makes it feasible to process the torrent of data generated at the device level and provide quick insights that customers and businesses can act on, supercharging customer experience.
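
A toy sketch of that split, with an invented sensor threshold and a stand-in uplink function: the latency-critical decision happens locally at the edge, and only a compact summary travels to the cloud.

```python
import statistics

THRESHOLD = 80.0   # e.g., a temperature limit on a machine sensor

def send_to_cloud(payload: dict) -> None:
    print("uplink:", payload)   # placeholder for a real MQTT/HTTP uplink

def on_sensor_batch(readings: list) -> None:
    # Critical path stays at the edge: no cloud round trip before acting.
    if max(readings) > THRESHOLD:
        print("local actuator: shutting down equipment")
    # Non-critical analytics go to the cloud, aggregated to save bandwidth.
    send_to_cloud({"mean": statistics.mean(readings),
                   "max": max(readings), "count": len(readings)})

on_sensor_batch([71.2, 74.8, 83.1, 79.5])
```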

According to a study conducted by Uptime Institute

Global data center outages have increased by 6% since 2017.

The average duration of an outage ranged between 95 and 97 minutes.

The cost of unplanned outages has risen to $8,851 per minute.
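
Taken together, those averages imply a hefty bill per incident:

```python
avg_minutes, cost_per_minute = 96, 8851   # midpoint duration x cost/minute
print(f"${avg_minutes * cost_per_minute:,}")   # $849,696 per average outage
```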

Aside from improving customer experience, Edge Computing reduces outages. Cloud solutions are open to downtime due to power outages or ISP failures, and leaving everything in the cloud also adds latency. Edge Computing makes applications less susceptible to outages by doing the critical computations near the device – on the edge – while leaving the non-critical ones to the cloud.

An Enabler for Business 4.0

Businesses are laying down their plans for unlocking deeper digital experiences for their customers. While working with some of these clients, we developed a deep understanding of their business models and of how Edge Computing helps in building futuristic IoT applications. There are successful use cases of Edge Computing in IoT platforms across industries such as Healthcare, Manufacturing, Automobiles, Retail and Construction. While the business cases vary from predictive maintenance and monitoring solutions to smart wearables, each primarily takes one of three forms:

Mobile – for phones, cars, wearables, drones, etc
Static – for POS in retail stores, machinery, smart homes, cities, etc.
Mobile+Static – Healthcare is a suitable example here where medical records are static while the medical personnel move within the facility.

Each of these business cases, implemented correctly, gives rise to Business 4.0. We at Qentelli develop IoT applications on Edge Computing architectures, bringing applications and analytics closer to devices using a combination of IoT sensors, RFID technologies, Micro Data Centers, hybrid clouds, and new wireless technologies and devices. This supports workloads that compute, store and analyse data at the edge. The architecture varies by project and delivers the end goals of control, visibility across the business chain, and analytics.

Plow back into your Infrastructure Strategy

IoT applications and Edge Computing architectures require organizations to overhaul their old IT infrastructure with hybrid cloud to house ever-growing data and to provide computing, storage and analysis hardware to IT teams. Businesses must look to integrate FPGA- or ASIC-enabled specialized chips for the different devices they want to bring onto their IoT platforms or new edge-architecture applications, and must evaluate many network and wireless technology vendors against their requirements. These new hardware systems have to work with existing and emerging business technology systems, which requires fresh infrastructure investment to prepare for applications built on Edge Computing.

Challenges Ahead

We see a lot of possibilities around IoT applications, but they do not come without challenges. Altering infrastructure is the first and foremost; there is no way around investing in new-age storage and computing devices. The second challenge is implementing Edge Computing with minimal or no alteration to the current technology stack. Amidst these challenges lies a bright opportunity to converge physical assets with digital operations and enhance current business products and services. Organizations should include Edge Computing in their next IoT projects.

We talk to a lot of clients, assessing their current technology stacks and powering their business transformation with new technologies. We recently interacted with two clients – one in the energy space and the other in logistics – who described their plans for massive technology upgrades. Both wanted to optimize their transportation routes and automate financial processes, eliminating manual inputs and the waste associated with them. We assessed their technologies and proposed an IoT platform with an Edge Computing architecture to make the business future-ready. At Qentelli, we believe agile practices and the best engineering approach can help build IoT applications that work as desired. To work with Qentelli on your next IoT project, write to us at info@qentelli.com

Managing the Multi-Cloud: A Review of the new Kids on the Block

As the number and complexity of applications being deployed on the cloud grows, organizations are finding that a single cloud is not sufficient to address all of their needs, such as tech stack, Platform as a Service (PaaS) components, workloads, data management, compliance across regions, performance, and security. Enter the Multi-cloud Strategy: where different applications can be hosted on different clouds but use common interfaces to integrate them.

Sounds complicated? Yes – applications can quickly outgrow their platforms, and deep ties to one platform can make migration difficult. Then there is the challenge of managing application and infrastructure resources and costs across multiple providers, each with its own resourcing models and pricing plans that can be difficult to compare.

But Enterprises have no choice: according to a late-2018 survey by RightScale, close to 80% of Enterprises are adopting a multi-cloud strategy, albeit for different reasons.

How then do you manage your application portfolio distributed across different cloud entities? Where there is a need, there is a Business!

Cloud Management Platforms (CMPs) are the answer. They have been around for half a decade, but supporting many different clouds has been difficult. With rising cloud adoption, however, these CMPs have become critical components.

In this post, we compare different Cloud Management Platforms, their capabilities and pricing tiers, where available.

Let’s define the terms used, so that we have a common understanding

Multi-Cloud is a strategy of using multiple public cloud service providers such as AWS, Azure, Google Cloud Platform, OpenStack etc. to manage multiple applications or different aspects of a single application, including on-premise infrastructure. Hybrid Cloud, on the other hand, involves a mix of on-premises, private cloud and public cloud services with integration between at least two platforms. Hybrid Clouds are Multi-Cloud, but not all Multi-Cloud setups are Hybrid Cloud. Both share the same challenges: managing costs, uniform governance policies, allocation and chargeback, and dependencies on IT teams for provisioning.

Rise of Multi-Cloud

As per a 451 Research survey, 69% of organizations planned to run a Multi-Cloud environment by 2019. As they put it, “the future of IT is multi-cloud and hybrid” – but with this rise, optimizing and managing cloud spend also becomes more of a challenge.

Research firm IDC has predicted that 90 percent of enterprises will use multiple cloud services and platforms by 2020. The Multi-Cloud enthusiasm also comes with speed bumps along the way: orchestration, rising cloud costs, poor visibility of cloud spend, allocation and chargebacks, governance and compliance, and provisioning across different service providers. At the same time, help is on the way in the rapidly emerging Cloud Management Platforms.

Comparison of Cloud Management Platforms

Businesses vary in their challenges around cloud management, and selecting the right Cloud Management Platform requires a laser-focused approach to match those challenges to a platform’s features. We picked eight midsized vendors ($7.5 million to $30 million in revenue) in the cloud management space to compare.

  1. BMC Cloud Lifecycle Management: BMC Cloud Lifecycle Management replaces the existing IT landscape with self-service IT infrastructure for cloud and non-cloud platforms. The platform supports highly complex, large-scale IT initiatives with a self-service portal, full-stack service provisioning, automated ITSM governance, continuous compliance and service health management. BMC’s advantages include support for the leading cloud service providers to avoid vendor lock-in, cost savings through a unified view of all cloud resources, and automated compliance to reduce risk across heterogeneous IT environments.

When to pick – BMC is a good pick for large organizations with significant IT needs and investments, where one cloud service provider will not suffice. These organizations have diverse needs driving a continuous rise in cloud usage, and their IT teams are already pressed by the speed required to deliver. BMC’s Cloud Lifecycle Management fulfils the need for continuous delivery of IT services while maintaining control, improving security and optimizing cost.

Clouds supported: OpenStack, AWS, Azure, Rackspace, SoftLayer
Pricing: Upon request

2. RightScale: RightScale is another popular platform in the Cloud Management space. RightScale claims to optimize cloud costs through auto-scaling and automated scheduling of workloads, leveraging discounts from cloud providers, and automatically downsizing instances based on usage. It offers a comprehensive solution for enforcing self-service IT through governance and compliance automation, maintaining consistent and secure configurations, and ensuring repeatable, standardized architecture across Multi-Cloud environments.

When to pick – RightScale is a great choice for organizations utilizing multiple clouds for running their applications, workloads, disaster recovery etc. As per the RightScale 2019 State of the Cloud report from Flexera, respondents are already running applications in a combination of 3.4 public and private clouds and experimenting with 1.5 more, for a total of 4.9 clouds. RightScale’s advantage is its strong capability of providing a unified view across multiple public and private cloud resources – compute, network and storage – in a single pane of glass. The dashboard provides actionable information to reduce costs, improve infrastructure efficiency and close security holes.

Clouds supported: OpenStack, AWS, Google Cloud Platform, IBM, Azure, Rackspace, VMware
Pricing Details: Upon Request
Free Trial: Yes
Deployment: Cloud, SaaS, Web
Training: Documentation, Webinars, Live Online, In Person
Support: 24/7 (Live Rep)

3. Scalr: The Scalr Enterprise-Grade Cloud Management Platform enables enterprises to achieve cost-effective, automated and standardized application deployments across Multi-Cloud environments. Scalr uses a hierarchical, top-down approach to policy enforcement, empowering administrators to balance the needs of Finance, Security, IT and Development teams. Leading global organizations have selected the Scalr platform, including Samsung, Expedia, NASA JPL, Gannett and the Food & Drug Administration.

When to pick – Scalr is a multipurpose suite that provides benefits in four areas: cost optimization and visibility; governance, security and compliance; business agility; and increased productivity. Scalr is a recommended choice for organizations struggling to implement standardized policies in a Multi-Cloud environment. Scalr’s Policy Engine creates re-usable guardrails around access, workload placement, application lifecycle, integrations and finance.

Clouds supported: OpenStack, AWS, Azure, Google Cloud Platform, Rackspace, Eucalyptus, Nebula
Free Demo: Yes
Deployment: Cloud, SaaS, Web, Installed – Mac, Windows
Training: Documentation, Webinars, In Person
Support: Online, Business Hours, 24/7 (Live Rep)

4. CloudCheckr: CloudCheckr is a comprehensive cloud management solution, helping businesses manage and automate cost as well as security for their public cloud environments. CloudCheckr is an AWS Advanced Technology Partner with Security and Government competencies, as well as a certified Azure Silver Partner, supporting multi- and hybrid-cloud strategies.

When to pick – CloudCheckr’s platform focuses heavily on cost optimization, so if that is your priority, CloudCheckr can be a suitable choice. It has built-in Predictive Analytics for forecasting future cloud spend and a recommendation engine to eliminate unnecessary cloud wastage.

Clouds Supported: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud
Starting Price: $499.00/month, Includes all Cost, Security, and Compliance modules.
Free Version: Yes          
Free Trial: Yes   
Deployment: Cloud, SaaS, Web
Training: Documentation, Webinars, Live Online, In Person
Support: Online, Business Hours, 24/7 (Live Rep)

5. Cloudability: In a world full of Cloud Management solutions, this platform focuses on bridging the gap between IT, business and finance to achieve accountability for cloud spend. With cloud cost as its key differentiator, Cloudability also offers governance and migration solutions for the top cloud service providers.

When to pick – Cloudability is a good pick for organizations looking to control their cloud costs and to have their finance teams support that goal, using the unit economics of cloud for competitive advantage. Cloudability focuses on cost optimization, adoption and the democratization of cloud spend, translating cloud bills for different business units.

Clouds Supported: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud
Starting Price: $499.00/month, Includes all Cost, Security, and Compliance modules.
Free Version: Yes          
Free Trial: Yes   
Deployment: Cloud, SaaS, Web
Training: Documentation, Webinars, Live Online, In Person
Support: Online, Business Hours, 24/7 (Live Rep)

6. Apptio: Apptio is another acclaimed vendor in the Cloud Management Platform market. For most companies, a cloud initiative is a complex landscape, with CIO, Finance, Operations, Infrastructure and Security teams each consuming their share of cloud applications and resources, which restricts the ability to optimize cloud spend by trimming waste. Apptio therefore targets cloud challenges and solutions by role – CIOs, CFOs, Infrastructure and Operations – and by initiative, such as DevOps and Agile, Corporate Shared Services, and Digital Business and Service Transformation.

When to pick – Apptio is a suitable choice for organizations or IT leaders looking to highlight the financial value of their IT departments. Apptio claims to be a one-of-a-kind Technology Business Management (TBM) tool providing visibility into costs, budgeting and forecasting. It offers Apptio Cost Transparency to automatically align costs to peer infrastructure benchmarks, and IT Planning to align IT budgeting and forecasting with business strategy.

Clouds Supported: AWS, Azure, GCP
Pricing Details: Upon Request
Deployment: Cloud, SaaS, Web

7. Embotics: Embotics is a new-age Cloud Management Platform providing solutions for the adoption of DevOps, Microservices, Continuous QA, Kubernetes version management, integrated cloud governance and cloud expense management across all these initiatives. Alongside the conventional capabilities of cloud usage visibility, uniform governance policies and self-service IT infrastructure, Embotics looks like an all-in-one solution for organizations navigating their cloud journey.

When to pick – Organizations unsure which cloud management platform fits their business initiatives – Digital Transformation, DevOps, Microservices, Containers, IT Modernization – should consider Embotics. It addresses the bigger picture of the cloud management space, targeting its solutions at new-age practices like DevOps automation, Microservices and Containers while continuing to manage traditional workloads, IaaS, and development and support methodologies. Embotics provides use cases for each of these practices, helping businesses deliver modern features, services and solutions faster than ever with high quality standards and a consistent user experience.

Free Trial: Yes
Deployment: Cloud, SaaS, Web
Training: Documentation, Webinars, Live Online, In Person
Support: Business Hours

8. Accenture Cloud Management Platform: Accenture Cloud Management Platform boasts patented innovation and Accenture IP built into the solution for cloud resource visibility, management consistency and operations control, and can be scaled to the needs of global organizations. It presents concrete numbers for cloud management efficiency, cloud migration, cost savings and the deployment of SAP, Oracle and DevOps instances.

When to pick – Accenture’s Cloud Management Platform carries a rich legacy of Accenture cloud capabilities and project implementations. It is the right pick for organizations working towards leading technologies like next-generation mobility, advanced analytics, Internet of Things (IoT), cognitive technologies, blockchain, APIs/microservices and natural language interfaces. Global organizations struggling to find scalable, secure and compliant solutions should consider it.

Deployment: Hybrid Cloud
Support: Phone, Web Chat, Live Training, Email and Online Ticketing

Conclusion

Cloud adoption – and specifically multi-cloud adoption – is here to stay. While challenging and expensive, it must be managed properly to give customers, IT staff and internal business stakeholders the best infrastructure to run their applications. A Cloud Management Platform can go a long way in easing those challenges.

Digital Alchemy – Making your Business the gold standard

We live in times where a popular business that has existed for over two decades finds a potential market threat in a nine-month-old digital-native start-up. Irrespective of the line of business, Digital Transformation has become nearly inevitable for staying relevant. While we are at it, let’s admit that the race to the gold star is no small one. Accelerating digital technologies – one-click customer service touch points, immersive VR/AR experiences, voice-based virtual assistants embedded in electronics – are all brought into play. Nothing feels like fantasy anymore!

Finding the right ‘Philosopher’s Stone’ that can create significance, bring in the much-needed digital transmutation and result in a magnum opus is quite a task. So, in this article, we discuss a way to achieve the ultimate gold standard, which is sustainable customer value for a business. Imagine a matrix of digitization states defined by a company’s digital capabilities plotted against its integration capabilities. Based on its current digital readiness, every company lies somewhere within this matrix. To achieve the gold standard, the company has to move upwards from one state to another and eventually attain a state where it can synergistically distill the best out of every asset and technology. Digital Success!

There is no comprehensive how-to guide for Digital Alchemy that works for everyone. Digital Transformation is about emphasizing the uniqueness of a business, using the right amount of technology as an accelerator – and that is impossible with a one-size-fits-all approach. Every company, whether a digital native or a traditional company investing to stay relevant to its millennial customer base, will be in one of the stages below before achieving the elixir of business value.

Digital Entropy

This is the rudimentary stage, where entropic forces that accumulate naturally over time make a business resistant to change and adoption. We work with enterprises every day, and no one wants to be stuck in the primitive stages of technology.

“But these organizations run siloed efforts – business analytics, digitization of customer services, cloud migration and other technology initiatives – without the end goal of achieving gold standards in sight.”

What’s required at this stage is a strong intent towards betterment and the ability to bring all stakeholders on board to achieve the goal. In simple words, the current system doesn’t capture (enough) value for employees or customers, and it requires decomposition and putrefaction – ‘Nigredo’ in alchemy terms. It must be a natural progression.

It is always advisable to believe in ‘Incrementalism’, especially when your company is in this condition. Start small – maybe with IT infrastructure, data integration or internal operations – and slowly find your way up. Choosing technology partners and vendors carefully might require a lot of brainstorming, but it’s worth it to avoid a scenario where the golden opportunity turns into a pitfall.

Digital Enthalpy

In the urge to stay up in the race, companies often fall into this category; we have come across many of them during our assessments and client interactions.

“These companies are hot on Digital: they have Digital Transformation at the top of every agenda, desperately adopt every technology emerging around them, and then struggle to integrate those technologies with their main aim and to get people to adopt the sudden storm of changes.”

This is a path drifting away from Digital Alchemy, one that eventually makes business leaders worry about their return on investment. What it needs is ‘Albedo’ – purification: strategizing and streamlining all that alacrity.

It is always necessary to have a system that constantly reassesses the current Digital Strategy and checks how well new adoptions create synergies with the rest of the infrastructure. Some adoptions that have saved our clients from being disrupted at this stage are app modernization the DevOps way, automating manual processes using Robotic Process Automation (RPA), and improving customer interaction by introducing new digital touch points. The sense of confidence that comes from customers treated to excellent service and experience is long-lasting.

Successful Digital Alchemists aren’t the ones who create gold with Digital Transformation and stop there; they keep reinventing themselves to stay relevant.

Try these approaches to turn your disruptable business into a bi-modal enterprise:

Traditional Product with Digital Service – You can still sell the same product that made you special to your existing customers, but add value by digitally enabling the current business model. Build applications (in-house or third-party) that will serve your customers better.

Traditional Service with Digital Service – While your traditional (offline) service serves your existing customers, make yourself available to new-generation prospects with gadget-friendly services. That way, you don’t move too far from your current base but are still ready with an expandable business model for the new generation. Going all digital overnight would create a lot of unwanted confusion within the organization, which would eventually affect customer value.

Hybrid Enthalpy

Transforming businesses that have settled into this state is the toughest of all.

“Businesses in this state are ahead in their journey towards Digital Alchemy but are struggling with the adoption of data-driven technologies.”

Organizations stuck at this stage hold a lot of unutilized data and need to turn their immediate attention to Artificial Intelligence and Machine Learning algorithms to achieve gold standards. They strongly believe in an extreme level of information and system consolidation, but there is a screaming need for Digital Discovery. As they say, adapt or die. What can save these businesses is ‘Citrinitas’: an awakening to dust off the unmined data.

They are one step closer to the gold standard because system integration is the toughest of all the tasks Digital Transformation demands. Since consolidation is in place, they just need to perform a Readiness Assessment, spend time picking the right technology (or technologies) that can improve value and customer experience, and monitor the progress. There are literally thousands of technologies available, and the number is growing by the day: AR/VR, Artificial Intelligence, Bots, Machine Learning, Internet of Things, Blockchain, and the list goes on.

Digital Alchemy

This is the ultimate position everyone would love to be in, but reaching it is not an overnight journey. This is where enterprises constantly disrupt their own models through continuous innovation, which is nothing but distilling the best out of every asset and technological adoption. It is not a destination but a state of continuous rediscovery for the business. The enthalpy now keeps the system functioning at an optimal level of Digital Success.

Once the unique recipe and modus operandi for Digitally Transforming the business are found, the leadership needs to carry the movement forward by establishing a new agenda of culture, purpose and future, and by articulating it well to the entire workforce. Leaders must act as the change agents who drive both the digital and the physical business transmutation. A Digital Transformation journey map needs to be drafted for every touchpoint, with intent and attribution. With the understanding that Digital Literacy is now the only way of doing business, every part of the organization must feel responsible for Customer Experience and Innovation. When Innovation becomes an integral part of the culture, that is the ‘Magnum Opus’!

Why are we one of those very few successful Alchemists?

For a technology company that is not even a decade old, Qentelli has an impressive collection of illustrative cases demonstrating how Digitization can rebuild a business in more than one way. Consider the giant global food chain that was feeling left behind mainly because of:

  • Their Monolith ERP Application for POS and Back office operations
  • Siloed implementation of CI
  • The time spent to spin up new Environments
  • Multiple versions of the core Front- and Back-of-store Applications
  • Incorrect versions of Software deployed to Environments
  • No streamlined process for Test Data Management
  • Inconsistency in environments, and many other issues.

That is exactly when Qentelli stepped in to introduce production-ready, lightweight containers with standardized images for streamlined CI/CD practices. We used just the right amount of AIOps to complement, and eventually supplant, the human effort IT staff spend on monotonous tasks such as system monitoring, alert response, trouble diagnosis and drafting courses of action. There are many more instances where we have introduced sophisticated DevOps, automation, robotics, state-of-the-art secure eCommerce applications and Application Modernization solutions to digitally enable clients to survive in an ever-competitive marketplace.

So we are all on the same page: the shift businesses make during Digital Transformation is not just about adopting digital technologies to simplify operations, but about carefully choosing those technologies and strategically shaping them to fit the culture, values and business goals. Information Technology leaders play a crucial role in whipping up the magic potion that creates gold. Keeping the organization on par with its peers is only possible when your Digital Transformation makes sense to your employees and customers alike.

Opinions don’t matter, data does – A framework to collect measurable data for DevOps

DevOps started as part of the agile movement and has been making great strides ever since. Taking stock of the progress, we see organizations adopting it at a rapid pace, yet most companies realize only a fraction of DevOps’ potential benefits. To adapt to an era of always-connected, data-driven apps, organizations have to build solid muscle around executing Data-driven DevOps and multiply those benefits. Put plainly, software deliveries can no longer run on guesstimates or instinct; they need to be bolstered by real-time data from machines, customers, tests and IT operations.

In our experience, most organizations fail to distinguish between traditional reporting and advanced analytics involving Machine Learning. One leading financial advisory firm invested heavily in its DevOps program, yet when we carried out a thorough examination, most of its processes were below average: sprints did not match team capacity, and teams were not operating against defined acceptance criteria. They regularly mentioned using DevOps tools, but when we asked for data, they presented manually extracted reports. The question to ask is: how deeply is your data driving DevOps, rather than the opinions of DevOps experts? In this article, we argue that the next phase of DevOps is real data, not more tools, technologies or opinions.

Build a framework to collect data for DevOps

All good in theory, but how? Moving from waterfall to agile and then introducing DevOps was itself no simple proposition for organizations. Enabling data in every software decision now requires two major changes: an organizational mindset shift from rooms full of opinions to data-backed opinions, and IT infrastructure that captures data in real time.

Even in DevOps, Data Beats Opinions

Most of you remember chasing release timelines for features that customers never even used. Or, in another instance, changing environments without understanding the data flow and running into compliance issues. These were everyday scenarios in which people backed their decisions with opinions or human memory, and ran into problems.

The profound shift in development practices and customer demands calls for a re-examination of DevOps practices. While the primary goals of agility, speed, efficiency and collaboration remain the same, data has to be intertwined with every DevOps decision. Building or maintaining data-driven applications with DevOps requires greater collaboration between teams and data from many sources. Development, Operations, Security and Governance, Data Science, Marketing, Customer Service, Product Management and Leadership: everyone has to get involved.

The three important data categories to mine for insights that deliver strategically superior services to customers are data from machines, data from customers and data from tests. Data from these three areas can bring quick improvements to release cycles. Qentelli has worked directly with C-suites to identify which data to capture and how to use it to optimize applications, and we have done it without pausing their development efforts or affecting regular deliveries.
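
To make the three categories concrete, here is a minimal, hypothetical sketch in Python; the record shapes, field names and thresholds are illustrative assumptions, not a prescribed schema.

```python
from statistics import mean

# Illustrative records for the three data categories (fields are made up).
machine_data = [{"build_id": 101, "passed": True,  "duration_min": 12},
                {"build_id": 102, "passed": False, "duration_min": 18}]
customer_data = [{"feature": "checkout", "daily_active_users": 5400},
                 {"feature": "wishlist", "daily_active_users": 310}]
test_data = [{"suite": "regression", "total": 420, "failed": 7}]

def release_health_snapshot():
    """Combine machine, customer and test data into one release view."""
    build_pass_rate = mean(1 if b["passed"] else 0 for b in machine_data)
    low_usage = [c["feature"] for c in customer_data
                 if c["daily_active_users"] < 1000]
    test_fail_rate = (sum(t["failed"] for t in test_data)
                      / sum(t["total"] for t in test_data))
    return {"build_pass_rate": build_pass_rate,
            "low_usage_features": low_usage,
            "test_fail_rate": round(test_fail_rate, 3)}

print(release_health_snapshot())
```

Even a toy snapshot like this puts build health, feature usage and test quality side by side, which is exactly the conversation a release decision needs.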

Automate your Delivery Pipeline

It is no coincidence that organizations are automating more than ever. They have seen their counterparts score big with a data-driven approach instead of flipping through monthly reports or referring to outdated knowledge bases. Automation is essential to support data democratization: it creates a continuous stream of data to feed into applications for learning and improving, and can even produce self-learning and self-healing systems. Radically balanced, future-ready automation initiatives need to integrate with an analytics engine, the core of deriving insights from all that infused data.

Consider an Infrastructure Manager building network infrastructure with manual tools such as Excel sheets: tedious to manage and prone to human error, and the rise in application and network complexity only compounds the challenge. Through an automation-assisted “replicated build” that mimics the required configuration for developers, testers and pre-production, the same manager can cycle through countless configurations, and this becomes the base for infusing AI to learn, improve, suggest and predict with every iteration.
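
As a rough illustration of the “replicated build” idea, the sketch below derives every environment’s configuration from one shared template; the environment names and parameter values are hypothetical.

```python
import copy

# Base template that every environment replicates, then overrides.
BASE_CONFIG = {
    "instance_count": 2,
    "instance_size": "medium",
    "monitoring": True,
    "debug_endpoints": False,
}

# Per-environment overrides (hypothetical values for illustration).
OVERRIDES = {
    "dev":      {"instance_count": 1, "instance_size": "small",
                 "debug_endpoints": True},
    "test":     {"instance_count": 1},
    "pre-prod": {"instance_count": 2},
    "prod":     {"instance_count": 6, "instance_size": "large"},
}

def replicated_build(env: str) -> dict:
    """Build an environment's full configuration by applying its
    overrides on top of the shared base template."""
    config = copy.deepcopy(BASE_CONFIG)
    config.update(OVERRIDES.get(env, {}))
    return config

for env in OVERRIDES:
    print(env, replicated_build(env))
```

Because every environment is generated rather than hand-edited, configurations stay consistent, and each generated build is a clean data point an AI layer can learn from.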

Build or Buy – Analytics Engine

Organizations have realized that the data troves created in the DevOps pipeline hold real worth for improving their delivery operations. But can Development, IT or QA teams really understand what those troves represent and how to mine them for genuine business intelligence? Mostly, no. AI can help here.

With AI taking concrete shape in businesses, building or buying an analytics engine for the DevOps pipeline offers endless benefits. The potential use cases for applying analytics to DevOps data are many: measuring customer engagement after a feature release, real-time visibility of usage and performance, build pass/fail percentages, reliability, error rates, and more.
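
Some of these metrics need very little machinery. As a minimal sketch, assuming a hypothetical export of build statuses from a CI server, the build pass/fail percentage reduces to a few lines:

```python
from collections import Counter

# Hypothetical build records exported from a CI server.
builds = [
    {"id": 1, "status": "passed"},
    {"id": 2, "status": "failed"},
    {"id": 3, "status": "passed"},
    {"id": 4, "status": "passed"},
]

def build_pass_fail_percentage(records):
    """One of the simplest DevOps analytics: build pass/fail rates."""
    counts = Counter(r["status"] for r in records)
    total = sum(counts.values())
    return {status: round(100 * n / total, 1)
            for status, n in counts.items()}

print(build_pass_fail_percentage(builds))  # {'passed': 75.0, 'failed': 25.0}
```

The value of a real analytics engine is doing this continuously, across every tool, and correlating the results; the arithmetic itself is the easy part.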

DevOps Dashboard

After applying analytics to DevOps data, organizations should see the results in a comprehensive dashboard that summarizes day-to-day operations and development activities. With up-to-the-minute insights into how applications are performing and how development activities are progressing, it becomes easy to drive delivery decisions with data rather than gut feeling, intuition or human-decided priorities.

For a DevOps dashboard, plenty of options are mushrooming in the crowded DevOps tool chain market. Most claim to provide full visibility of DevOps data, but selecting one requires a targeted view of what the business expects from DevOps, be it a development philosophy or a strategic differentiator.

Building a dashboard can be an intensive exercise that lies outside most organizations’ area of expertise. It is always advisable to opt for a tool that integrates with the existing, diverse DevOps tool chain and/or can be customized completely. Qentelli has worked with clients who have a plethora of DevOps tools yet struggle to get a complete view across multiple projects and teams.

We developed The Engineering Dashboard (TED), our proprietary tool, to solve these challenges of distributed tool chains, projects and teams, and to remove the complexity of aggregating data that stops organizations from driving their DevOps with data. TED is a one-stop, unified, cognitive dashboard that aggregates, analyses and alerts based on the data produced within the DevOps pipeline.
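
To see why aggregation is the hard part, consider this toy sketch (not TED’s actual implementation): each tool’s data is normalized by an adapter into a common event shape and merged into a single feed. The tool names and fields are hypothetical placeholders.

```python
# Each adapter normalizes one tool's output into a common event shape.
def from_ci_server():
    return [{"source": "ci", "item": "build-314", "status": "failed"}]

def from_issue_tracker():
    return [{"source": "tracker", "item": "STORY-88", "status": "in-progress"}]

def from_test_runner():
    return [{"source": "tests", "item": "regression-suite", "status": "passed"}]

def unified_feed():
    """Merge normalized events from every tool into one stream."""
    events = []
    for adapter in (from_ci_server, from_issue_tracker, from_test_runner):
        events.extend(adapter())
    return events

for event in unified_feed():
    print(f"[{event['source']}] {event['item']}: {event['status']}")
```

Once every tool speaks one schema, analytics and alerting can run over a single stream instead of a dozen disconnected reports.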

Reworking DevOps with Data

As DevOps moves from opinions to data, business leaders can look beyond the rear-view mirror of faster deliveries. They can harness the power of data and Artificial Intelligence to drive application and business decisions based on real-time and predictive insights, and use Data-driven DevOps to up their game: releasing new features effortlessly, assuring IT alignment for new initiatives, blocking security threats and improving application performance. The end result? DevOps is no longer a battle to release faster in the face of growing data complexity. It becomes an opportunity to build a data-driven culture within the organization and to benefit customers with more innovation and faster deliveries. Talk to us in more detail about TED and how it can complement your current development tool chain at info@qentelli.com.

Reimagining Test Automation in the age of Digital Transformation

In the digital age, applications are the primary mode of consumption for nearly all products and services, and brand differentiation lies in providing a seamless, omnichannel customer experience. Organizations have already adopted lean and DevOps ways of development, and they now have to afford new approaches to test a new ecosystem of complex, highly interconnected, API- and cloud-driven applications. IT leaders cannot pick just two out of Speed, Quality and Cost; they need all three to create all-inclusive digital journeys.

Companies like Netflix have already changed how media consumption works, and much of the credit goes to their engineering practices, right from development through testing. The company has gone from manual testing to continuous, fully automated, high-volume testing. We are not suggesting you replicate their engineering practices wholesale.

It is impractical to copy another organization’s automation framework outright, as there are stark differences in applications, technology stacks, leadership styles, and team structures and sizes. Test automation should advance in incremental ways to reach the required maturity, and every IT leader has to develop a unique blueprint for testing their applications to ensure a fully functioning digital customer experience. This blog discusses how teams should re-imagine their test automation approach to complement their digital transformation journey.

The path to test automation in the age of Digital Transformation

Test automation is a buzzword, but it is still not a household practice in organizations. The digital tsunami demands proactive, responsive test strategies focused on delivering differentiated, high-touch services that reinforce brand identity in the digital era. The change in testing strategy needs to touch a myriad of processes, from challenging the status quo of the established testing model to catering to the need for speed when the development course changes with customer and partner feedback. Some of the changes required for test automation in the age of Digital Transformation are –

Focus on business tests – Teams need to zoom out of the code and testing details, take a closer look at the important business-level problems, and write tests that address those problems, thereby delivering a seamless digital experience. One way of doing this is to look beyond the requirements and scope documents and build tests from real-time, operational data on how users actually interact with the application. Yes, it is possible: there are tools on the market that provide metrics about end-user interaction. Teams need to understand these metrics and develop tests accordingly.

Digital transformation revolves around customer-centricity, so test strategies should focus on writing test cases from the customer’s point of view. Behavior Driven Development (BDD) encourages the use of simple language to blur the lines between engineering and business teams. BDD holds particular relevance for a digital future because it focuses on outcomes rather than outputs, which is exactly what the success of digital transformation depends on.
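
As a hedged illustration of writing tests from the customer’s point of view, the sketch below expresses a Given/When/Then scenario as a plain Python test. The Cart class is a hypothetical stand-in for the application under test; dedicated BDD tools such as Cucumber or behave would express the same steps in plain-language feature files.

```python
# Hypothetical stand-in for the application under test.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, item, price):
        self.items.append((item, price))

    def total(self):
        return sum(price for _, price in self.items)

def test_returning_customer_sees_correct_cart_total():
    # Given a customer with two items in the cart
    cart = Cart()
    cart.add("toothbrush", 3.50)
    cart.add("aligner-kit", 80.00)
    # When they view the cart
    total = cart.total()
    # Then the total reflects both items
    assert total == 83.50
```

The test reads as a customer story first and as code second, which is the point: business stakeholders can review the scenario without reading the implementation.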

Introduce Continuous Testing into your development – Digital transformation goes beyond responsiveness and agility; it is proactive adaptation to the future and its outcomes. Continuous testing ensures that the engineering team proactively tests every new feature while it is still in development.

To introduce continuous testing into development, engineering heads have to introduce automation and leverage the tools available for environment provisioning, so that testing happens continuously right at the developer-machine level. Organizations often cite test data management and generation as challenges because they are highly manual and time-consuming; these are exactly the areas where automation should be used heavily so that testing runs in parallel with development. Test automation is required to test new code continuously, ensuring timely feedback about bugs and issues so they can be fixed early in the cycle.

Improving processes with Automation – Human-driven processes are prone to error, forgetfulness and skipped steps when they are repetitive and highly manual. A machine programmed for a specific function, by contrast, will not skip or forget a test to run or data to generate.

Automating tests is pivotal to achieving continuous testing and making it fit for the digital age. Achieving 100% automation is an ambitious target because requirements change constantly, but 85 to 90% test automation is doable even for complex applications. Key areas to automate are test data generation, test data and environment management, running test suites and generating reports.
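
As one small example, test data generation can be automated with nothing more than the standard library; the customer fields below are illustrative assumptions, not a required schema.

```python
import random
import string

def random_customer(seed=None):
    """Generate one synthetic customer record for a test run."""
    rng = random.Random(seed)
    name = "".join(rng.choices(string.ascii_lowercase, k=8))
    return {
        "name": name.title(),
        "email": f"{name}@example.com",
        "age": rng.randint(18, 80),
        "loyalty_member": rng.choice([True, False]),
    }

# A reproducible batch of 100 records (seeding fixes the output).
test_customers = [random_customer(seed=i) for i in range(100)]
print(test_customers[0])
```

Seeding the generator makes every run reproducible, which matters when a failing test has to be replayed with the exact data that triggered it.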

Test automation takes time and effort, so it is very important to identify the right tests to automate at the first attempt to avoid wasting both. Testing teams should create a mini regression suite covering critical user journeys of high business value and run it first. Once teams have confidence in the mini regression suite, the complete test suite is run and the results are collected for the teams to act upon. There should also be a defined acceptance criterion for every user story to ensure the story is complete and functioning as expected.
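
One common way to realize a mini regression suite is to tag the critical user journeys and run them first. The pytest sketch below is a hedged illustration: the marker name and the stub functions are our own choices, not a standard.

```python
import pytest

# Hypothetical stand-ins for real application calls, so the sketch runs.
def place_order(cart, payment):
    return "confirmed" if cart else "empty"

def login(user, password):
    return bool(user and password)

# "mini_regression" is our own marker name (register it in pytest.ini);
# it is not a pytest built-in.
@pytest.mark.mini_regression
def test_checkout_happy_path():
    assert place_order(cart=["aligner-kit"], payment="card") == "confirmed"

@pytest.mark.mini_regression
def test_login_with_valid_credentials():
    assert login("user@example.com", "secret") is True

def test_profile_settings_update():
    # Lower business value: exercised only in the full suite.
    assert True
```

Running `pytest -m mini_regression` executes only the tagged journeys; dropping the `-m` flag runs the complete suite once confidence is established.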

Allocate budget for test automation – Testing deserves the same treatment and importance as development activities. Testing is development-cum-testing: testers are developing code to test the application under test. Businesses have to invest in the right tools and technologies to make every step automated and trouble-free. Teams with a dedicated budget have clear goals for what testing should achieve and how it contributes to the software development lifecycle, which establishes it as a critical function.

Testing smartly with AI – Newer development methodologies centred on user interaction are exposing the limitations of test automation. Testers write test scripts based on guesses about how end users interact with the application. Thinking from the end-user perspective is a good start, but successful test automation scripts must cover end-to-end user journeys accurately, and this is the larger issue with current test automation practice.

Enter AI, which can speed up test automation by applying algorithms to the large amounts of data produced by testing activities. Moving from manual, partial tests to a mature, CI-integrated, end-to-end functional suite involves a lot of manual and repetitive work. Organizations can instead direct human capabilities towards exploring automation, AI and Big Data in application usability, features and integrations, and test data analysis. Further, AI-powered test automation creates a knowledge base for self-learning and proactive action.
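
As a minimal sketch of that idea, even simple frequency analysis over session logs can reveal which end-to-end journeys deserve automated coverage first, replacing guesses with observed behaviour. The log format here is a hypothetical simplification.

```python
from collections import Counter

# Hypothetical analytics logs: each session is an ordered page path.
sessions = [
    ["home", "search", "product", "checkout"],
    ["home", "product", "checkout"],
    ["home", "search", "product", "checkout"],
    ["home", "account"],
]

# Count identical journeys across all sessions.
journey_counts = Counter(tuple(s) for s in sessions)

# The most frequent journeys are the first candidates for
# end-to-end automated tests.
for journey, count in journey_counts.most_common(3):
    print(count, " -> ".join(journey))
```

A production system would cluster near-identical paths and weight them by business value, but the principle is the same: let real usage data decide what to automate.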

A critical function, driving Digital Transformation

Organizations using AI for testing are creating a real advantage over their counterparts by relying on data instead of approximations, and on automation: two imperatives of digital transformation.

Qentelli’s test automation strategies are well suited to DevOps and Agile environments and designed to deliver ROI to clients. Our test automation services cover web and mobile applications to get clients’ new digital services to market faster. Do you feel manual testing is holding you back from releasing on time? It’s time to engage with us to address the existential challenges in your software deliveries at info@qentelli.com.