Modernization Strategy for Sun Life

Transforming a Monolithic Application to Microservices and Micro Frontends

Executive Summary

Qentelli thanks Sun Life for the opportunity to present our solution to the RFP for modernizing your monolithic application into microservices and micro frontends. We understand that Sun Life’s objective is to modernize the existing legacy application in line with your goals and strategy as part of your digitalization journey.

Qentelli is a Digital and Cloud Technology company. Our delivery teams, powered by our Innovation Teams and backed by our Digital Center of Excellence, have deep expertise and experience in delivering Digital Transformation solutions for several Fortune 100 customers. These solutions include Modernization, Cloud Native, Event-Driven, and Micro Architectures that deliver rich digital experiences.

Our Digital Transformation practice has been a key partner to several customers in the BFSI sector, including traditional and digital banks; financial services providers in the residential, commercial, auto, and marine industries; and insurers in life, auto, and other sectors.

Work we have done that is relevant to this proposal includes:

Platform modernization for a global leader in financial services: modernizing a legacy system into a microservices and micro frontend based application on AWS (multi-tenant, multi-lingual), with the aim of improving end-user experience and accelerating time to market.

New application development on microservices and micro frontends for the world’s largest financial consulting organization.

Digital transformation for the largest regional bank in the US: modernizing their revenue-generating monolithic application into a microservices architecture on the Azure platform.

Proposal

This proposal articulates a technical approach to modernization that provides agile, rapid, low-cost migration and transformation capability. The proposal focuses on:

 

Qentelli (pronounced Kwen–Tel–Lee) is a Digital and Cloud Technology company. Our intellectual property, which includes AI-based products and tools, frameworks, methodologies, and process playbooks, helps accelerate and deliver Digital Transformation, Cloud Adoption, DevOps, and Quality Engineering solutions to our customers.

 

 

Application modernization for Sun Life: transforming a monolithic application to microservices and micro frontends.

 

Proposed Technical Approach 

Transforming the Sun Life application (a monolithic application) into microservices is a form of application modernization.

We propose an architecture based on microservices. The fundamental concept is to split functionality into cohesive verticals, not by technological layer but by business domain. The following diagram depicts the overall layout of the solution; the sections below explain the decomposition of the frontend and backend in detail.

 

 

Through Amazon Route 53, we can employ geolocation routing, which selects resources based on the geographic location of the user. Geolocation routing lets us localize content and display some or all of the website (micro frontends) in the user's language. It can also be used to restrict content distribution to only the assigned locations.

Geolocation works by mapping IP addresses to locations. However, not all IP addresses are mapped to geographic locations, so even if geolocation records are created for all seven continents, Amazon Route 53 will still receive DNS queries from locations it cannot identify. We must therefore create a default record that handles queries from IP addresses that are not mapped to any location, as well as queries from locations for which no geolocation record has been established. If there is no default record, Route 53 returns "no answer" for queries from those locations.
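As a minimal sketch of this setup, assuming the AWS SDK for JavaScript v3 (the hosted zone ID, domain, and CloudFront domain names below are placeholders, not actual Sun Life values), a continent-specific record and the default record could be created like this:

    import {
      Route53Client,
      ChangeResourceRecordSetsCommand,
    } from "@aws-sdk/client-route-53";

    // Creates one geolocation record for North America and one default record.
    async function createGeoRecords(): Promise<void> {
      const client = new Route53Client({});
      await client.send(
        new ChangeResourceRecordSetsCommand({
          HostedZoneId: "Z0000000EXAMPLE", // placeholder hosted zone
          ChangeBatch: {
            Changes: [
              {
                // Serve North American users the NA micro frontend.
                Action: "UPSERT",
                ResourceRecordSet: {
                  Name: "app.example.com",
                  Type: "CNAME",
                  SetIdentifier: "north-america",
                  GeoLocation: { ContinentCode: "NA" },
                  TTL: 60,
                  ResourceRecords: [{ Value: "d111111abcdef8.cloudfront.net" }],
                },
              },
              {
                // Default record: answers queries from IPs Route 53 cannot map
                // and from locations without a geolocation record.
                Action: "UPSERT",
                ResourceRecordSet: {
                  Name: "app.example.com",
                  Type: "CNAME",
                  SetIdentifier: "default",
                  GeoLocation: { CountryCode: "*" },
                  TTL: 60,
                  ResourceRecords: [{ Value: "d222222abcdef8.cloudfront.net" }],
                },
              },
            ],
          },
        })
      );
    }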

Micro frontend for different Locations:

Overview

The micro-frontend architecture introduces microservice development principles to frontend applications. In a micro-frontend architecture, development teams independently build and deploy “child” frontend applications. These applications are combined by a “parent” frontend application that acts as a container to retrieve, display, and integrate various child applications. In this parent/child model, the user interacts with what appears to be a single application. In reality, the users are interacting with several independent applications, published by different teams.

The most challenging aspect of the micro-frontend architecture pattern is integrating child applications with their parent. Prioritizing the user experience is critical for any frontend application; in the context of micro frontends, this means ensuring that a user can seamlessly navigate from one child application to another inside the parent application. It is critical to avoid disruptive behavior such as page refreshes or multiple logins.

Parent/child integration entails the parent application retrieving and displaying child applications dynamically when the parent application is loaded.

 

In the proposed architecture, each service team runs a separate but identical stack to build its application, using AWS developer tools to deploy static assets to S3 and Amazon CloudFront to deliver them. The CI/CD pipelines use shared components such as CSS libraries, API wrappers, and custom modules stored in AWS CodeArtifact, which drives consistency across parent and child applications.


When retrieving the parent application, the user is prompted to log in to Okta and obtain JWTs. After a successful login, the parent application retrieves the child applications from CloudFront and renders them within itself. Alternatively, the parent application can render child applications on demand when the user navigates to a particular route. The child applications should not require re-authentication; they must be configured either to use the JWT obtained by the parent application or to silently retrieve a new JWT from Okta.
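A minimal sketch of this parent/child handshake, assuming each child bundle exposes a mount() entry point (the CloudFront URL layout, element IDs, and function names here are illustrative assumptions, not the actual integration contract):

    // Shape each child application is assumed to expose from its bundle.
    interface ChildApp {
      mount: (el: HTMLElement, accessToken: string) => void;
    }

    // Load a child bundle from CloudFront and render it inside the parent,
    // passing the JWT obtained at parent login so the child skips re-auth.
    async function mountChild(name: string, accessToken: string): Promise<void> {
      const url = `https://d111111abcdef8.cloudfront.net/${name}/remote-entry.js`;
      const child = (await import(/* webpackIgnore: true */ url)) as ChildApp;
      const outlet = document.getElementById(`${name}-outlet`);
      if (!outlet) throw new Error(`No outlet element for child app "${name}"`);
      child.mount(outlet, accessToken);
    }

    // Example: render the "claims" child after Okta login completes.
    // mountChild("claims", jwtFromOkta);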

Benefits of Micro frontends

Deployment

This figure represents the high-level recommended infrastructure architecture for deploying and delivering micro frontends on AWS.

a. Hosting 

Amazon Simple Storage Service (S3) is the most effective alternative to containers for hosting micro frontends on AWS. Using Amazon S3, we can store all frontend static assets, including HTML, JavaScript, CSS, and fonts. A key advantage of Amazon S3 is that the underlying infrastructure provides 99.99% availability for serving micro frontends by default.

One frequent constraint does exist: Amazon S3 can only hold static micro frontend artifacts (no server-side rendering). Nevertheless, it is compatible with most modern frontend frameworks, such as AngularJS, ReactJS, and VueJS, which can be used to build micro frontends.

A few best practices for using S3 with micro frontends:
 

b. Serving and Caching

Amazon CloudFront is a core service that plays multiple roles in serving external micro frontends. Its Content Delivery Network (CDN) capability caches micro frontends closer to end-users by default. In addition, Amazon CloudFront provides gateway functionality for routing requests to the various micro frontends. This feature is particularly useful: it eliminates the need for separate gateway services by allowing both micro frontend and microservice requests to be routed through a centralized point.
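For illustration, a CDK sketch of this centralized routing (the bucket, the API Gateway domain, and the /api/* path pattern are assumptions made for the example):

    import { Stack } from "aws-cdk-lib";
    import * as cloudfront from "aws-cdk-lib/aws-cloudfront";
    import * as origins from "aws-cdk-lib/aws-cloudfront-origins";
    import * as s3 from "aws-cdk-lib/aws-s3";
    import { Construct } from "constructs";

    export class EdgeStack extends Stack {
      constructor(scope: Construct, id: string) {
        super(scope, id);

        const assets = new s3.Bucket(this, "MicroFrontendAssets");

        new cloudfront.Distribution(this, "Edge", {
          // Default: serve micro frontend static assets from S3.
          defaultBehavior: { origin: new origins.S3Origin(assets) },
          additionalBehaviors: {
            // Route API traffic through the same distribution to API Gateway.
            "/api/*": {
              origin: new origins.HttpOrigin(
                "abc123.execute-api.us-east-1.amazonaws.com"
              ),
              cachePolicy: cloudfront.CachePolicy.CACHING_DISABLED,
              allowedMethods: cloudfront.AllowedMethods.ALLOW_ALL,
            },
          },
        });
      }
    }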

c. Deployment Life Cycle - High Availability

When combining Amazon S3 and CloudFront, we must consider invalidating the CloudFront cache for each deployment, unless we generate unique filenames for each new deployment package.

If needed, invalidating the CloudFront cache is possible by using CloudFront CLI commands. These commands can be executed in the build pipeline for new deployments.
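The same invalidation can be scripted with the AWS SDK instead of the CLI. A sketch of such a post-deploy pipeline step (the distribution ID and path list are placeholders):

    import {
      CloudFrontClient,
      CreateInvalidationCommand,
    } from "@aws-sdk/client-cloudfront";

    // Run after each deployment to purge cached micro frontend assets.
    async function invalidateCache(distributionId: string): Promise<void> {
      const client = new CloudFrontClient({});
      await client.send(
        new CreateInvalidationCommand({
          DistributionId: distributionId,
          InvalidationBatch: {
            CallerReference: `deploy-${Date.now()}`, // must be unique per request
            Paths: { Quantity: 1, Items: ["/*"] },
          },
        })
      );
    }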

In addition, it is essential to maintain high availability of micro frontends during deployment. CloudFront caching prevents potential downtime if a client requests a micro frontend mid-deployment.

d. Deployment Pipeline

The goal of this setup is to ensure that changes in an individual micro-frontend repository trigger only that repository's pipeline. This encourages team independence: if only one micro frontend is modified, only its related pipeline runs, not all the others. The result is a faster feedback loop, so if anything breaks, the responsible team(s) can address it immediately.
 

An AWS CodePipeline execution starts once a code change is committed. It includes four main steps:

 

Step 1: Decouple by Domain Driven Design (DDD)

Microservices should be designed around business capabilities, not horizontal layers such as data access or messaging. Microservices should also exhibit loose coupling and high functional cohesion. They are loosely coupled if changing one service does not require other services to be updated at the same time.

A microservice is cohesive if it has a single, well-defined purpose, such as managing user accounts or processing payments.
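For example, a cohesive, domain-oriented service boundary might look like the following sketch (the payments domain and method names are illustrative, not taken from the Sun Life application):

    // A bounded context exposes a narrow, business-capability interface;
    // callers never see its tables, ORM entities, or messaging internals.
    interface PaymentsService {
      authorize(orderId: string, amountCents: number): Promise<{ paymentId: string }>;
      capture(paymentId: string): Promise<void>;
      refund(paymentId: string, amountCents: number): Promise<void>;
    }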

The DDD approach can be retroactively applied to an existing application as follows:

Step 2: Prioritize Service/Module for Migration

An ideal starting point for decoupling is to identify the loosely coupled modules in the monolithic application; such a module is a strong first candidate to convert to a microservice.

To complete a dependency analysis of each module, consider the following:

Migrating a module with heavy data dependencies is usually a nontrivial task. If we migrate features first and the related data later, we may temporarily read from and write to multiple databases, which can lead to inconsistency. We must therefore account for data integrity and synchronization challenges.

We recommend extracting modules that have resource requirements different from the rest of the monolith. For example, if a module contains an in-memory database, it can be converted into a service that is deployed on hosts with more RAM. Turning modules with particular resource requirements into services makes the application much easier to scale.

In addition to business criticality, thorough test coverage, the security posture of the application, and organizational buy-in can influence the migration priority of services. Based on the evaluations, you can rank services.

Step 3: Extract a service from the Monolith

After identifying the optimal candidate for a service, we must determine a means for microservice and monolithic modules to coexist. One way to manage this coexistence is to implement an adapter that facilitates the compatibility between the modules. Over time, the microservice absorbs the load and the monolithic component is eliminated. This progressive procedure reduces the risk of migrating from a monolithic application to a microservice as it allows for the gradual detection of bugs and performance concerns.
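One hedged sketch of such an adapter, assuming an Express-based routing layer and the http-proxy-middleware package (the service names, internal URLs, and feature flag are illustrative assumptions):

    import express from "express";
    import { createProxyMiddleware } from "http-proxy-middleware";

    const app = express();

    // Feature flag controls where billing traffic lands during migration.
    const useMicroservice = process.env.BILLING_SERVICE_ENABLED === "true";

    // The adapter routes /api/billing to the new microservice when enabled;
    // otherwise the monolith keeps serving it. Flipping the flag gradually
    // shifts load until the monolithic module can be retired.
    app.use(
      "/api/billing",
      createProxyMiddleware({
        target: useMicroservice
          ? "http://billing-service.internal:8080"
          : "http://monolith.internal:8080",
        changeOrigin: true,
      })
    );

    app.listen(3000);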

Typically, monolithic applications have their own monolithic databases. One of the principles of a microservices architecture is to have one database for each microservice. Therefore, when modernizing monolithic applications into microservices, split the identified monolithic database according to the service boundaries.

To determine where to split a monolithic database, first analyze the database mappings. The service extraction analysis yields insights into the microservices to be created; the same method can be used to analyze database consumption and map tables and other database objects to the new microservices.

This picture depicts the transition from monolithic to microservices database architecture.

However, splitting a monolithic database is complex because there might not be a clear separation between database objects. Additionally, you must address concerns like data synchronization, transactional integrity, joins, and latency.

Pros & Cons of Microservices database are:

Criteria

Pros

Cons

 

 

 

Loosely coupled schema

Deployment of service changes happens independently and rapidly.

If the entire data model is not designed, this might involve changes to already deployed microservices.

Deployment Effort

Since it is a schema with loose coupling, deployment will be optimized to save time and resources.

In certain scenarios, a shift to Production may necessitate additional effort.

Scalability

Scaling individual services is simple.

This might involve redundant data storage.

Optimizing DB environment/ performance tuning

Each DB can have a different configuration.

Increased maintenance tasks and effort.

Coexistence of SQL and No-SQL Architecture

Faster Data Retrieval

Infrastructure overhead needed to implement

Communication / Data integrity

Easy to implement

Redundant data will be needed & Integrity can be easily compromised.

Debugging

Simple to implement for something within the microservices.

Implementing an integrated functionality is challenging.

Testing

Simple to execute in small chunks.

Global testing is not possible.

DB Vendor lock-in

Provides flexibility to have different vendors

Complexity and cost in maintaining multiple vendors.

Approach for Data Migration

Data migration from Monolithic to Microservices DB

As data migration is a crucial aspect of the successful implementation of a new application, the following should be considered:

Typical monolithic applications are built using different layers—a user interface (UI) layer, a business layer, and a persistence layer. A central idea of a microservices architecture is to split functionalities into cohesive verticals — not by technological layers, but by implementing a specific domain.

This figure represents the high-level recommended architecture, based on microservices on AWS. In our case, the micro frontend-based web and mobile apps use REST APIs to communicate with the backend.

a. API Gateway

APIs are the front door of microservices: they serve as the entry point for application logic behind a set of programmatic interfaces, typically RESTful web services. An API accepts and processes calls from clients and may implement functionality such as traffic management, request filtering, routing, caching, authentication, and authorization.


Architecting, deploying, monitoring, continuously improving, and maintaining an API can be a time-consuming task. Sometimes different versions of APIs need to be run to assure backward compatibility for all clients. The different stages of the development cycle (for example, development, testing, and production) further multiply operational efforts.

Authorization is a critical feature for all APIs, but it is usually complex to build and involves repetitive work. When an API is published and becomes successful, the next challenge is managing and monitoring it.

Other important challenges include throttling requests to protect the backend services, caching API responses, handling request and response transformation, and generating API definitions and documentation.

Amazon API Gateway addresses these challenges and reduces the operational complexity of creating and maintaining RESTful APIs. API Gateway lets you create APIs programmatically by importing Swagger (OpenAPI) definitions, using either the AWS API or the AWS Management Console. API Gateway can serve as the front door to any web application running on Amazon EC2 or Amazon ECS.
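A minimal CDK sketch of API Gateway fronting one microservice (the Lambda handler, resource names, and throttling values are assumptions made for the example):

    import { Stack } from "aws-cdk-lib";
    import * as apigateway from "aws-cdk-lib/aws-apigateway";
    import * as lambda from "aws-cdk-lib/aws-lambda";
    import { Construct } from "constructs";

    export class ApiStack extends Stack {
      constructor(scope: Construct, id: string) {
        super(scope, id);

        const accountsFn = new lambda.Function(this, "AccountsFn", {
          runtime: lambda.Runtime.NODEJS_18_X,
          handler: "index.handler",
          code: lambda.Code.fromAsset("lambda/accounts"),
        });

        // The REST API is the single front door; stage-level throttling
        // protects the backend services.
        const api = new apigateway.RestApi(this, "ModernizedApi", {
          deployOptions: { stageName: "dev", throttlingRateLimit: 100 },
        });

        const accounts = api.root.addResource("accounts");
        accounts.addMethod("GET", new apigateway.LambdaIntegration(accountsFn));
      }
    }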

b. Serverless Microservices

A common approach to reducing operational effort for deployments is container-based deployment. In the proposed architecture, Docker containers run on AWS Fargate, so there is no underlying infrastructure to manage.

In addition to DynamoDB, Amazon Aurora Serverless is used, which is an on-demand, auto-scaling configuration for Amazon Aurora, where the database will automatically start up, shut down, and scale capacity up or down based on your application's needs.
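As a sketch of the database-per-service idea with on-demand capacity (the stack, table, and key names are illustrative assumptions):

    import { Stack } from "aws-cdk-lib";
    import * as dynamodb from "aws-cdk-lib/aws-dynamodb";
    import { Construct } from "constructs";

    export class AccountsDataStack extends Stack {
      constructor(scope: Construct, id: string) {
        super(scope, id);

        // One table owned exclusively by the accounts microservice;
        // on-demand billing scales capacity with the application's needs.
        new dynamodb.Table(this, "AccountsTable", {
          partitionKey: { name: "accountId", type: dynamodb.AttributeType.STRING },
          billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
        });
      }
    }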

c. Disaster Recovery

These microservices will be implemented using the Twelve-Factor Application patterns. This means that the main focus for disaster recovery should be on the downstream services that maintain the state of the application. For example, these can be file systems, databases, or queues.

The disaster recovery strategy will be planned based on the recovery time objective and recovery point objective. 

The recovery time objective is the maximum acceptable delay between the interruption of service and restoration of service. This objective determines what is considered an acceptable time window when service is unavailable and is defined by the organization. 

The recovery point objective is the maximum acceptable amount of time since the last data recovery point. This objective determines what is considered an acceptable loss of data between the last recovery point and the interruption of service and is defined by the organization.

d. High Availability

Amazon ECR hosts images in a highly available, high-performance architecture, enabling reliable deployment of container images across Availability Zones.
 

Tech-stack & Tools

Proposed Organizational Structure and Expertise

Qentelli's in-house framework drives every digital transformation initiative. It consists of the following phases, each with its own set of Entry and Exit criteria. The Program Management team is responsible for ensuring that the Entry and Exit criteria deliverables are in place to guarantee the success of the program.

Pre Delivery

Entry Criteria

Customer Business Need:
Requirements understanding Documents/ Legacy Systems from the Client

Exit Criteria

Requirement format as Features / Epics
Estimation (Resources/Timeline)
Budget
Market Readiness Understanding

Immersion

Entry Criteria

SoW (Features/Epics outlined Clearly)

Exit Criteria

Project Kick-off (External) - Completion
Process Identification - SAFe or Standard Agile
App Dev Orientation
Project Plan for Discovery

Discovery

Entry Criteria

Program Kick-off (Internal)

Exit Criteria

Master Program/Project Plan (presented to the client)
Resource Roles (Based on Account need)
Client-approved UX mock-ups
Program Increment (PI) Plan Outcome -
      Estimation Orientation with client
      PI & Sprint Backlog (2 sprints)
      PI Story Point Estimation

Implementation

Entry Criteria

PI Project plan,
User stories (DOR),
Team Readiness - Scrum Teams
Scrum ceremonies

Exit Criteria

Phase/PI/Project Closure report - Signed by Client
Audit & Compliance Report (Internal & External)
Audit Checklist
Risk register

Production

Entry Criteria

Release management plan
     Production Release pre-requisites
     Release cycle cadence

 

Exit Criteria

 Release document

Post Production/Support

Entry Criteria

Prod Support Contract

 

Exit Criteria

Contract End Date/Closure

The program comprises multiple stakeholders, including the Vice President of Engineering, Director of Delivery, Program Managers, Architects, Project Managers / Scrum Masters, and one or more Scrum teams.


The Scrum team comprises:

The Architecture team comprises:

The total number of scrum teams would be determined based on the size of the program.

The program requires the following technology roles:

Role | Skillset
FED developers | ReactJS, AngularJS, VueJS
Automation QA engineers | C# & Selenium
Accessibility Engineer | Functional testing
DevOps Engineer | AWS, CI/CD setup, Docker, Kubernetes, Terraform
UX Engineer | Figma, UX design
UI Engineer | HTML, CSS, Bootstrap

Program Delivery and Governance 

Qentelli will adopt a Continuous Engineering approach, using the Program Increment Model that groups a set of Sprints into an “Increment”, which is a potential “Release”.

Key Notes/Dependencies for the Engineering phase are outlined below:

The overall Engineering Model is shown below: 

Note: Each of the above areas, viz., Continuous Planning, Continuous Development and Integration, Continuous Testing, and Deployment, is addressed in the following sections:

 

Continuous Planning through Product Backlog: 

The below table outlines the Inputs, Activities, Outcomes and Deliverables during the Planning activity 

Inputs | Key Activities | Outcomes | Deliverables

 

Continuous Development and Integration:

At Qentelli, our software developers are best in class, with extensive knowledge of the latest technologies, and are well versed in DevOps, Continuous Integration, and Continuous Delivery. The image in the Pipeline Orchestration section below describes the activities that are part of the development phase. We follow these core principles in our development practices:

Pipeline Orchestration:

A completely automated CI pipeline can significantly reduce the time needed to move a unit of code through the different stages and environments. The following diagram shows a typical pipeline that is fully automated through deployment to Production; a pipeline-as-code sketch follows it.
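One way to express such a fully automated pipeline as code, sketched here with CDK Pipelines (the repository, branch, and build commands are assumptions, not the actual Sun Life configuration):

    import { Stack } from "aws-cdk-lib";
    import * as pipelines from "aws-cdk-lib/pipelines";
    import { Construct } from "constructs";

    export class CiStack extends Stack {
      constructor(scope: Construct, id: string) {
        super(scope, id);

        // Each commit to main moves through build and synth, and (once
        // deployment stages are added) on to automated deployment without
        // manual steps.
        new pipelines.CodePipeline(this, "Pipeline", {
          synth: new pipelines.ShellStep("Synth", {
            input: pipelines.CodePipelineSource.gitHub("org/repo", "main"),
            commands: ["npm ci", "npm run build", "npx cdk synth"],
          }),
        });
      }
    }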

Steps: 

Deployment Strategy:

 

Sample Deployment Pipeline Strategy:

Inputs: User Stories; Mock-ups

Key Activities: Test / Behaviour Driven Development; Branching Strategy; Unit test authoring; Automated unit tests; Code scans; Code coverage; Continuous Integration implementation; Code reviews; Coding standards

Outcomes: Branching Strategy; Continuous Integration

Deliverables: Unit test reports; Code Coverage report; CI and Automation coverage report; Test Readiness report

Continuous Testing

The key to building quality into our software is fast feedback on the impact of changes. Our goal is to run many different types of tests, both manual and automated, continually throughout the delivery process.

Test Types

Qentelli will cover the following test types during the program. Automated functional and non-functional test cases will be integrated with the CI pipeline as part of the sprints. End-to-end performance and security tests will be executed again at the end.

Continuous Testing through Automation

Qentelli will use an Automation-first approach for its QA function. Tests will be written within the sprint using the BDD approach (Gherkin language) and integrated within the CI-CD pipeline.

Non-Functional Testing:

While application functionality is critical, other factors contribute significantly to the user experience. Application performance is essential to a good user experience, and addressing security concerns is essential from both an individual and a legal perspective.

This section demonstrates our approach towards Performance and Security Testing; however, the actual tests are embedded within the CI-CD pipeline described above.

Security Test approach:

All tools mentioned in the images below are representative. The best-suited tools will be recommended for Sun Life at the start of the engagement.

Package/Release

Recommended approach: On-Demand Infrastructure. Qentelli will programmatically provision and configure components such as servers, databases, and firewalls. Such infrastructure must be treated like any other code base. This includes version control and automated testing, but also:

Pipeline as code Architecture:

 

Strategy:


Today, infrastructure code for Docker, AWS, and Kubernetes is not the only code worth versioning: pipelines themselves can be written as code and stored in SCM. GitLab pipelines are defined in a file (.gitlab-ci.yml) placed in the project's repository; as soon as a new commit is found, GitLab pulls the changes and loads the pipeline configuration from that file. Any change to the file takes effect immediately, since it is treated as a new commit on the repository, and can be verified right away.

Application Monitoring and Governance:

Strategy:

DevOps and CI/CD are not complete without the last piece of the puzzle, the phase where the development and operations teams come together: Continuous Monitoring. Monitoring is used across many phases, for example:

 

Key Initiatives:

Deployment:

We will leverage deployment automation, which allows applications to be deployed across the environments used in development as well as the final production environments. This results in more efficient, reliable, and predictable deployments. Automating deployment processes improves the productivity of both the Dev and Ops teams and enables them, and the business, to develop faster, accomplish more, and ultimately build better software that is deployed more frequently and functions more reliably for the end user.

Inputs: Release Candidate; Environments; IaC scripts

Key Activities: Deployment Strategy; Pipeline setup

Outcomes: Artifact Repository; Deployment Scripts

Deliverables: CI Tool

After completion of the program and warranty phase, Sun Life should be able to manage the product on its own. This stage is therefore very important for Sun Life personnel to understand the technology, development model, data management, and so on.

Transition of the product to Sun Life requires transferring three types of activities:

Work Transition

Asset Transition

Knowledge Transition

The transition process will run through three phases:

Our approach to Agile software development follows the 12 principles of the Agile Manifesto for the successful delivery of each increment of the end product. This approach accommodates changes to Sun Life’s requirements at any stage of the development cycle, keeping customer satisfaction at the forefront. We ensure continuous collaboration between our teams and the stakeholders at Sun Life.

We will follow the Scrum development lifecycle; the phases of project management are described below.

User Story Creation:

User stories are short, simple descriptions of a feature told from the perspective of the person who desires the new capability, usually a user or customer of the system. They are written throughout the agile project, and a story-writing workshop is usually held near its start.

To create good user stories, start by remembering to INVEST in them. INVEST is an acronym for the qualities that make up a good user story: Independent, Negotiable, Valuable, Estimable, Small, and Testable.

Sprint planning plays an important role in the universe of Agile development methodologies which prioritize responsiveness, flexibility, and continuous improvement, and seek to help organizations excel at managing work in a world where priorities and markets are shifting rapidly. 

As customer requirements and competition evolve, sprint planning determines what work to tackle next.

In software, sprint planning determines which product changes or features can be delivered and how to roll them out most efficiently in the next iteration. This planning process ensures that the team tackles a realistic amount of work and accomplishes the most important work first. 

This helps the team determine how much work to bring into the sprint. By splitting product backlog items into small, discrete tasks and then roughly estimating them during sprint planning, the team is better able to assess the workload. This increases the likelihood of the team finishing all they say they will.
Identifying tasks and estimating them during sprint planning helps team members better coordinate their work.

Story points: points assign each engineering task a relative amount of effort. Teams commonly use Fibonacci numbers (1, 2, 3, 5, 8) rather than a linear scale (1, 2, 3, 4, 5) to offset the uncertainty that comes with larger stories.

T-shirt sizing: Assigning small, medium or large labels for each engineering initiative to indicate its complexity.

Time buckets: Simply using days and hours to estimate, instead of t-shirt sized buckets or story points.

The Definition of Done ensures everyone on the Team knows exactly what is expected of everything the Team delivers. It ensures transparency and quality fit for the purpose of the product and organization. Each Scrum Team has its own Definition of Done or consistent acceptance criteria across all User Stories. A Definition of Done drives the quality of work and is used to assess when a User Story has been completed.

This is an opportunity for the development team to discuss what they accomplished the day before, what they will be working on for the day and what obstacles are impeding their progress. The Scrum Master is usually the facilitator. The meeting is meant for all team members to share their input on the sprint and get a clear understanding on what work was already completed, what issues need resolving and what is left to do.

This will give good insight into the progress of the sprint and give an early indicator on if the commitments and sprint goal are being met.

At the end of each sprint, the team is responsible for providing a working piece of software that is potentially shippable and get feedback from the product owner and other stakeholders. The result will be weighed against the initial sprint goal and the team can use this time to provide their suggestions on what was accomplished.

This is an opportunity for the team to reflect on how they did in the sprint and determine ways in how they can improve. The team collaboratively suggests what they should start doing, stop doing and continue doing.  Any actions items identified will be fed back into the Sprint Planning meeting to accommodate any new changes into the next Sprint.

 

We believe accurate reporting is a key element in fostering quality communication within an organization, especially large, complex, global enterprises such as Sun Life. Without quality communication, employees have trouble distinguishing important from unimportant business data. Regular status reports, including those that support quality assurance teams and processes, help ensure that everyone is doing work that is up to your company's quality standards. We leverage Project Management tools present in the market where reporting such charts is easier and is part of the Sprint lifecycle. They help provide real-time dashboards that can be visualized by all the personnel on the project.

The following metrics will be measured to support the quality and sustainability of the end product under Agile program management:

Metric | How does it help?
Sprint Burndown | Visualizes how many story points have been completed during the sprint and how many remain, and helps forecast whether the sprint scope will be completed on time.
Velocity | Measures how many story points a team completed, on average, over the past few sprints. It can be used to predict the team’s output in upcoming sprints.
Lead Time | Measures the total time from the moment a story enters the system (in the backlog) until it is completed as part of a sprint or released to customers.
Cycle Time | A simple metric that raises a red flag when items within sprints across the entire system are not moving forward.
Code Coverage | Measures the percentage of code covered by unit tests; it can be measured by the number of methods, statements, branches, or conditions executed as part of a unit test suite.
Static Code Analysis | An automated process that provides insight into code quality and flags issues ranging from simple errors to redundancies.
Failed Deployments | Measures the number of failed deployments (to test environments, production, or both).
Escaped Defects | Identifies the number of bugs discovered only after a build or release enters production.

Qentelli will continuously capture the knowledge generated during program implementation. This covers:

o Discussions on user stories and evaluation of alternatives
o Meeting minutes
o Architectural decisions
o Design decisions
o Specialized technical knowledge
o Documentation within source code
o Review comments/fixes
o Program status reviews
o Retrospectives and demos

Tools to capture and disseminate information collected on the above are: 

o Confluence as the primary repository
o Auto-generation of source code comments
o SharePoint site for larger documents
 

The Governance Model is a baseline of the key elements required for project governance, based on the project's scope, timeline, complexity, risk, and stakeholders. A communication plan will be developed once all stakeholders have been identified and their interests and expectations defined. A well-formulated communication plan delivers concise, efficient, and timely information to all pertinent stakeholders. A high-level overview of the governance layers we follow is outlined in the diagram below:

The table below summarizes the reports that will be published periodically for engagement governance. These reports will be accessible to Sun Life in real time through the project management tools.

Report cadences: Quarterly, Monthly, Weekly, Daily

Report audiences: Executive Management, Project/Program Team, Squads

Communication Plan:

Qentelli will define and implement a communication plan during this project to identify the right audiences, messaging, and tactical communication activities to support the project, and to provide direction that develops and encourages business-wide commitment to the project's vision. The communication plan also includes an escalation process to address issues and risks, assist with remediation, and provide absolute transparency during the engagement. The plan includes:

For timely resolution of all issues, the following Escalation matrix will be used: 

Escalation Level | Qentelli Team | Escalation Categories and Governance
Level 1 | Development Manager; Quality Manager; Enterprise Architect | Project issues; deviation from standards, deliverables, or schedule; resource performance issues
Level 2 | Delivery Manager | Change disputes; performance issues (customer dissatisfaction, deficiencies, issues around expectations); quality issues not resolved at Level 1
Level 3 | Delivery Manager; SVP Delivery | Continuous slippage on agreed milestones; inability to deliver on agreed action items; unresolved deficiencies around service offerings; issues around customer satisfaction / expectation management; quality issues not resolved at Level 2; monthly reviews of service delivery
Level 4 | SVP Delivery; CDO | Executive review; satisfaction surveys; quarterly/half-yearly leadership meetings for service performance review; unresolved quality issues (Level 3 and below); audits and compliance; escalation management; strategy and thought leadership

A systematic approach to OCM is beneficial when change requires people throughout an organization to learn new behaviors and skills. By formally setting expectations, employing tools to improve communication, and proactively seeking ways to reduce misinformation, stakeholders are more likely to buy into a change initially and remain committed to the change throughout.

Successful OCM strategies include: 

 

Some of the risks and the mitigation plan are presented below. We will maintain a risk log and review it weekly with the key stakeholders from Sun Life.

Risk ID | Risk Description | Impact on | Prevention | Mitigation
1 | Knowledge of scope identified during the Discovery phase may be insufficient | Schedule | |
2 | Challenges in decoupling the current architecture | Schedule, Quality | |
3 | Resistance from Sun Life staff to participation | Schedule, Quality | |
4 | Program schedule adherence | Schedule, Quality | |
5 | Regression issues when refactoring code | Quality | |

Commercials

 

The commercials mentioned here are a ballpark based on a high-level understanding of the project scope. Planning has been performed based on capacity and timelines, which serve as two fixed, known constants. Estimates will be revised after the Discovery phase, once the scope is uncovered, to support iterative planning; in the meantime, team capacity and timelines will be used for planning in the unknown areas.

 

DISCOVERY PHASE

Factor | Detail
Duration | 8 weeks or 4 sprints
Number of squads | 2
Number of story points delivered | 320
Total Cost | US $159,680

IMPLEMENTATION PHASE

Factor | Detail
Duration | 24 weeks or 12 sprints
Number of squads | 4
Number of story points delivered | 1,920
Total Cost | US $927,360

TOTAL PROGRAM

Factor | Detail
Duration | 32 weeks or 16 sprints
Number of squads | 2 during Discovery and 4 during Implementation
Number of story points delivered | 2,240
Total Cost | US $1,087,040

 

Team Cost (USD) – Discovery

DISCOVERY (8 weeks or 4 sprints); squad columns show headcount per squad ("–" means the role is not staffed in that squad).

SKILLSET | COST/HR (USD) | COST / 4 WEEKS (USD) | Squad 1 | Squad 2
Application Architect | $35 | $5,600 | 1 | –
Associate Architect | $32 | $5,120 | 1 | –
Delivery Manager | $32 | $5,120 | 1 | –
Scrum Master | $28 | $4,480 | 1 | –
UX Designer | $24 | $3,840 | 1 | 1
UI Developer | $28 | $4,480 | 1 | 1
UI Developer | $28 | $4,480 | 1 | 1
UI Developer | $28 | $4,480 | 1 | 1
Automation Engineer | $25 | $4,000 | 1 | 1
Automation Engineer | $25 | $4,000 | 1 | 1
DevOps / Release Engineer | $28 | $4,480 | 1 | 1
Total Squad Cost / 4 sprints | | | $100,160 | $59,520
Total Cost (2 squads for 4 sprints) | | | $159,680 |
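For reference, each role's 4-week cost equals the hourly rate times 160 hours (4 weeks × 40 hours); for example, $35/hr × 160 = $5,600. Squad 1's 4-week cost of $50,080, doubled over the 8-week Discovery, gives $100,160; Squad 2's $29,760 doubled gives $59,520; and $100,160 + $59,520 = $159,680, matching the Total Cost above.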

 

Team Cost (USD) – Implementation

IMPLEMENTATION (24 weeks or 12 sprints); squad columns show headcount per squad ("–" means the role is not staffed in that squad).

SKILLSET | COST/HR (USD) | COST / 4 WEEKS (USD) | Squad 1 | Squad 2 | Squad 3 (Optional) | Squad 4 (Optional)
Application Architect | $35 | $5,600 | 1 | – | 1 | –
Associate Architect | $32 | $5,120 | 1 | – | 1 | –
Delivery Manager | $32 | $5,120 | 1 | – | – | –
Scrum Master | $28 | $4,480 | 1 | – | 1 | –
UX Designer | $24 | $3,840 | 1 | 1 | 1 | 1
UI Developer | $28 | $4,480 | 1 | 1 | 1 | 1
UI Developer | $28 | $4,480 | 1 | 1 | 1 | 1
UI Developer | $28 | $4,480 | 1 | 1 | 1 | 1
Automation Engineer | $25 | $4,000 | 1 | 1 | 1 | 1
Automation Engineer | $25 | $4,000 | 1 | 1 | 1 | 1
DevOps / Release Engineer | $28 | $4,480 | 1 | 1 | 1 | 1
Total Squad Cost / 12 sprints | | | $300,480 | $178,560 | $269,760 | $178,560
Total Cost (4 squads for 12 sprints) | | | $927,360 | | |
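Similarly, for Implementation: Squad 1 at $50,080 per 4-week period × 6 periods (24 weeks) = $300,480; Squads 2 and 4 at $29,760 × 6 = $178,560 each; Squad 3 at $44,960 × 6 = $269,760; and $300,480 + $178,560 + $269,760 + $178,560 = $927,360, matching the Total Cost above.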

Proposed Release Cycle and Delivery Milestones

Qentelli executes projects using both Standard Agile and Scaled Agile methodologies.

Scaled Agile would be optimal for Sun Life, as this is a large initiative consisting of multiple applications that will eventually require extensive cross-team communication.

Scaled Agile Execution Model:

  • PI duration: 6 sprints (Each phase is a PI)
  • PI planning exercise (2-3 days) – Occurs every quarter
  • Product Backlog: Comprises the Epics/Features with the highest priority & ROI for the quarter
  • Sprint Backlog: User stories for the scrum teams
  • Sprint Execution Model: 5 sprints of development, 1 sprint of hardening
  • Architectural Runway: Architectural solution readiness for the PI.
  • Sprint Ceremonies: Sprint planning meeting, Sprint retro, Grooming, Sprint review, Daily Stand-up.

Key Assumptions

Additional Insights 

Why Qentelli?