Application modernization for Sun Life Asia
to
Transforming a Monolithic Application to Microservices and Micro Frontends
- Dallas, TX

Executive Summary
Qentelli LLC (Qentelli) thanks Leonardo247 for the opportunity to respond to the RFP for the Loss-Run Datamart.
We understand that the objective of the project is to give Leonardo executives, managers and decision-makers access to specific data and critical insights quickly, without wasting time searching through an entire data warehouse.
Engineering Excellence – Architecting and developing a high-quality Datamart solution that can be easily scaled and extended through intelligent and thoughtful design, high code-quality standards and modern software engineering practices
Collaborative culture – Co-creation of Business Value through close collaboration with Business and IT stakeholders at Leonardo through the following shared values:
1. Build Trust through transparent communications
2. Fix issues, learn from them and move quickly – blameless post-mortems
3. Track, learn, Calibrate and Continually improve
Operations mindset – Keep the complexity of operating the system in mind during the design phase. A complex system with bells and whistles is fun to design and develop but can be difficult to operate
Company Overview
Qentelli is a technology company that accelerates digital and cloud transformation journeys through implementation of DevOps, Automation, Agile transformation, AI and Deep learning.
We help clients deliver software faster, more efficiently and affordably. Qentelli is headquartered in Dallas, TX with a global presence. The services teams are powered by the Innovation Group, which provides the thought leadership, problem-solving, tools and plug-ins needed for modern applications.
We believe that our proven expertise in Digital Transformation makes Qentelli the vendor of choice for this initiative. Qentelli's Executive Leadership team assures Leonardo247 that it fully aligns with and will go the extra mile to help achieve the above objectives.
Solution Overview
To meet the goals of this RFP, Qentelli will build a fully managed Software Development Services group to function as an extension of Leonardo247’s IT team. The Qentelli team will work in close collaboration with the various stakeholders involved in this project. The Qentelli teams will formulate and establish standardized processes and methodologies for the various stages of the engineering life cycle, recommend technologies and tools, CI/CD best practices and quality gates to improve the overall quality of the code delivered by the teams. To ensure success, Qentelli will introduce a robust governance model to enable monitoring and control of the overall engagement.
Below is a representation of the logical model of the solution designed for the Datamart.
The above diagram shows the happy-path scenario, where the uploaded files match the Common Data Model and the processing that is set up. Each Lambda handles a specific file (Insurance Carrier, Property and the other parameters necessary for that file).
As part of the RFP process, Qentelli Solution Architects reviewed the scope and the responses to the questions we posed while designing the solution. We request that the solution be reviewed as a reflection of our thought process given the limited knowledge we have at this point – the final solution will evolve during the project.
We assure Leonardo that we will take full ownership and accountability for ensuring that the solution meets the objectives from functional, cost and quality perspectives.

Proposed Technical Approach
The process of transforming the Sun Life application (a monolithic application) into microservices is a form of application modernization.
We propose an architecture based on microservices. The fundamental concept is to split functionality into cohesive verticals — not by technological layers, but by specific domains. The following diagram depicts the overall layout of the solution; detailed explanations of the decomposition of the frontend and backend layouts follow.

Through AWS Route 53, we can employ Geolocation routing, which enables the selection of resources based on the geographic location of the user. By using geolocation routing, we can localize content and display some or all of the website (micro frontends) in the user's language. Additionally, geolocation routing can be used to restrict content distribution to only the assigned locations.
Geolocation works by mapping IP addresses to locations. However, not all IP addresses are mapped to geographic locations, so even if geolocation records are created for all seven continents, Amazon Route 53 will still receive DNS queries from locations it cannot identify. We must therefore create a default record that handles queries both from IP addresses that are not mapped to any location and from locations for which geolocation records have not been created. If there is no default record, Route 53 returns "no answer" for queries from those locations.
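As an illustration, the hedged boto3 sketch below shows how a US geolocation record and the default record could be created; the hosted zone ID, domain, and CloudFront targets are placeholders rather than values from this engagement.

```python
# Hedged sketch: creating a geolocation record plus the default ("*") record
# in Amazon Route 53. Hosted zone ID, domain and targets are placeholders.
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0EXAMPLE"   # placeholder hosted zone
DOMAIN = "app.example.com"     # placeholder domain

def upsert_geo_record(country_code, target_cname, set_identifier):
    """UPSERT a CNAME record with a geolocation routing policy."""
    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": DOMAIN,
                    "Type": "CNAME",
                    "TTL": 300,
                    "SetIdentifier": set_identifier,
                    "GeoLocation": {"CountryCode": country_code},
                    "ResourceRecords": [{"Value": target_cname}],
                },
            }]
        },
    )

# US viewers are routed to the US-specific CloudFront distribution...
upsert_geo_record("US", "d111111abcdef8.cloudfront.net", "us-viewers")
# ...and the default record ("*") answers queries from unmapped locations.
upsert_geo_record("*", "d222222abcdef8.cloudfront.net", "default")
```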
Micro frontend for different Locations:
- The user requests the website domain, which in our instance is the CloudFront distribution domain.
- CloudFront CDN receives the HTTP request and returns the cached version of the resource if it exists.
- CloudFront sets the cache key and geolocation headers upon a cache miss.
- Depending on the cache behavior defined for the CloudFront distribution, a Lambda function verifies the viewer's geographic location. If the viewer is in the United States, the location suffix is appended to the URL and the request is forwarded to the appropriate micro frontend in the S3 bucket (see the sketch after this list).
- The requested resource is returned from the S3 bucket.
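The following is a minimal Python sketch of such a Lambda@Edge origin-request handler, assuming the CloudFront cache behavior forwards the CloudFront-Viewer-Country header; the "/us" prefix is an illustrative folder name in the micro frontend bucket, not a confirmed part of the design.

```python
# Hedged sketch of a Lambda@Edge origin-request handler. Assumes the
# CloudFront cache behavior forwards the CloudFront-Viewer-Country header;
# the "/us" prefix is an illustrative folder in the micro frontend bucket.
def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    headers = request["headers"]

    # CloudFront lower-cases header names in the event payload.
    country_header = headers.get("cloudfront-viewer-country", [])
    country = country_header[0]["value"] if country_header else None

    # Route US viewers to the US-specific micro frontend in S3.
    if country == "US" and not request["uri"].startswith("/us/"):
        request["uri"] = "/us" + request["uri"]

    # All other (or unknown) locations fall through to the default content.
    return request
```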
Overview
The micro-frontend architecture introduces microservice development principles to frontend applications. In a micro-frontend architecture, development teams independently build and deploy “child” frontend applications. These applications are combined by a “parent” frontend application that acts as a container to retrieve, display, and integrate various child applications. In this parent/child model, the user interacts with what appears to be a single application. In reality, the users are interacting with several independent applications, published by different teams.
The most challenging aspect of the micro-frontend architecture pattern is integrating child applications with their parent. Prioritizing the user experience is critical for any frontend application. This refers to ensuring that a user can seamlessly navigate from one child application to another inside the parent application in the context of micro-frontends. It is critical to avoid disruptive behavior such as page refreshes or multiple logins.
Parent/child integration entails the parent application retrieving and displaying child applications dynamically when the parent application is loaded.

In the proposed architecture, each service team runs a separate but identical stack to build its application and uses AWS developer tools and Amazon CloudFront to deploy the application to S3. The CI/CD pipelines use shared components such as CSS libraries, API wrappers, or custom modules stored in AWS CodeArtifact, which drives consistency across parent and child applications.
When retrieving the parent application, the user is prompted to log in to Okta and retrieve JWTs. After a successful login, the parent application retrieves the child applications from CloudFront and renders them within itself. Alternatively, the parent application can render the child applications on demand when the user navigates to a particular route. The child applications should not require re-authentication; they must be configured either to use the JWT obtained by the parent application or to silently retrieve a new JWT from Okta.
Benefits of Micro frontends
- Independent artifacts: Teams can independently deploy their frontend applications with minimum impact on other services in a micro-frontend architecture. These changes will be reflected within the parent application.
- Autonomous squad: Each squad is an authority in its own field. For instance, the policy information team possesses specialized knowledge of the policy information service's data models, business needs, API calls, and user interactions.
- Flexible technology choices: Autonomy allows each team to make technology choices that are independent of other teams. For instance, the policy information service team may build its micro-frontend with Vue.js, while the online transactions team could develop its frontend with Angular.
- Scalable development: Micro-frontend development teams are smaller and can function independently of other teams.
- Easier maintenance: Small, focused frontend repositories facilitate long-term maintenance and testing by making them easier to understand.
Deployment

This figure represents the high-level recommended infrastructure architecture for deploying and delivering micro frontends on AWS.
a. Hosting
Amazon Simple Storage Service (S3) is the most effective alternative to containers for micro frontends on AWS. Using Amazon S3, we can store all frontend static assets, including HTML, JavaScript, CSS, fonts, etc. One of the key advantages of using Amazon S3 is that the underlying infrastructure provides 99.99% availability for serving micro frontends by default.
However, a frequent constraint exists: Amazon S3 can only hold static micro frontend artifacts (no server-side rendering). Nevertheless, it is compatible with the majority of modern frontend frameworks, such as AngularJS, ReactJS, and VueJS, which can be leveraged to build micro frontends.
A few best practices for using S3 with micro frontends:
- Use separate buckets for each micro frontend – doing so isolates the lifecycle of each micro frontend and enables more granular access permissions for each team.
- Do not directly serve micro frontends using Static Site Hosting – serving micro frontends from Amazon S3 via Static Site Hosting exposes the frontend to the public without added controls such as geographic restrictions. In addition, it forces the use of cross-origin resource sharing (CORS) because the application is no longer exposed through a single domain (see the sketch after this list).
- Separate dynamically uploaded artifacts from Amazon S3 buckets hosting micro frontends.
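To illustrate serving the buckets through CloudFront instead of Static Site Hosting, the hedged boto3 sketch below attaches a bucket policy that only allows reads from a specific CloudFront distribution via Origin Access Control; the account ID, bucket name and distribution ID are placeholders.

```python
# Hedged sketch: restrict a micro frontend bucket so that only the CloudFront
# distribution (via Origin Access Control) can read objects, instead of
# exposing the bucket through S3 Static Site Hosting. All IDs are placeholders.
import json
import boto3

s3 = boto3.client("s3")

BUCKET = "catalog-mfe-bucket"
DISTRIBUTION_ARN = "arn:aws:cloudfront::111111111111:distribution/E1EXAMPLE"

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontReadOnly",
        "Effect": "Allow",
        "Principal": {"Service": "cloudfront.amazonaws.com"},
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
        # Only requests originating from this distribution are allowed.
        "Condition": {"StringEquals": {"AWS:SourceArn": DISTRIBUTION_ARN}},
    }],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```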
b. Serving and Caching
Amazon CloudFront is one of the core services that plays multiple roles while serving external Micro frontends. The Content Delivery Network (CDN) capabilities to cache Micro frontends closer to the end-users are one of the prominent features we receive by default. In addition, Amazon CloudFront has gateway functionality for routing requests to various Micro frontends. This feature is particularly useful - it eliminates the need for different gateway services by allowing us to route both micro frontend and microservice requests through a centralized point.
c. Deployment Life Cycle - High Availability
When combining Amazon S3 and CloudFront, we must consider invalidating the CloudFront cache for each deployment, unless we generate unique filenames for each new deployment package.
If needed, invalidating the CloudFront cache is possible by using CloudFront CLI commands. These commands can be executed in the build pipeline for new deployments.
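As a hedged alternative to CLI commands in the build stage, the same invalidation can be issued with a short boto3 call; the distribution ID and paths below are placeholders for the real pipeline values.

```python
# Hedged sketch: invalidating the CloudFront cache from a build step.
# Distribution ID and paths are placeholders for the real pipeline values.
import time
import boto3

cloudfront = boto3.client("cloudfront")

def invalidate(distribution_id: str, paths: list[str]) -> str:
    response = cloudfront.create_invalidation(
        DistributionId=distribution_id,
        InvalidationBatch={
            "Paths": {"Quantity": len(paths), "Items": paths},
            # CallerReference must be unique per invalidation request.
            "CallerReference": str(time.time()),
        },
    )
    return response["Invalidation"]["Id"]

# Example: invalidate only the micro frontend that was just redeployed.
invalidate("E1EXAMPLE", ["/catalog-mfe/*"])
```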
In addition, it is essential to maintain the high availability of micro frontends during deployment. CloudFront caching prevents potential downtime if a client hits the micro frontend during deployment.
d. Deployment Pipeline
The goal of this setup is to ensure that individual micro frontend repo changes trigger individual code pipelines. This encourages team independence – if only one micro frontend is modified, we only want to trigger its related pipeline and not all the others. This results in a faster feedback loop, so if anything breaks, the team(s) can work on it immediately.

An AWS CodePipeline execution starts once a code change is committed. This includes four main steps:
- The Code Pipeline: This manages the Code repo connection and fetches the associated source code.
- The Code Build: This builds the retrieved source code into a build artifact.
- The Code Deploy: This step takes the build artifact from the previous step and deploys it to a single Amazon S3 bucket. Each micro frontend is stored in an independent folder so that it can be deployed individually.
- The Code Build Cache Invalidation: The last step is another CodeBuild step that ensures the CloudFront cache is invalidated every time new artifacts are published and deployed to S3.
- Repo Set Up: There is an option of setting up a single mono repository with each micro frontend contained in a subfolder, or individual repositories for each micro frontend.
Step 1: Decouple by Domain Driven Design (DDD)
Microservices should be designed around business capabilities, not horizontal layers such as data access or messaging. Additionally, microservices must have loose coupling and high functional cohesion. Microservices are loosely coupled - changing one service does not require updating the other services simultaneously.
A microservice is cohesive if it has a single, well-defined purpose, such as managing user accounts or processing payments.
The DDD approach can be retroactively applied to an existing application as follows:
- Establish a shared vocabulary between all stakeholders.
- Identify the relevant modules in the monolithic application and apply the shared vocabulary to those modules.
- Define contexts, where you apply explicit boundaries to the identified modules with clearly defined responsibilities. The identified contexts are candidates for refactoring into smaller microservices.
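As a hedged illustration of a bounded context identified this way (the domain, class and method names are purely illustrative, not taken from the Sun Life application), the sketch below shows a cohesive module that owns its own data and exposes one narrow interface, so other contexts depend on the interface rather than on shared tables.

```python
# Hedged illustration of a bounded context: the policy information context
# owns its data and exposes a single, cohesive interface. Other contexts call
# this interface instead of reaching into its tables. Names are illustrative.
from dataclasses import dataclass

@dataclass
class Policy:
    policy_number: str
    holder_id: str
    premium: float

class PolicyInformationService:
    """Single, well-defined purpose: managing policy information."""

    def __init__(self) -> None:
        self._policies: dict[str, Policy] = {}  # data owned only by this context

    def register_policy(self, policy: Policy) -> None:
        self._policies[policy.policy_number] = policy

    def premiums_for_holder(self, holder_id: str) -> float:
        return sum(p.premium for p in self._policies.values()
                   if p.holder_id == holder_id)
```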

Step 2: Prioritize Service/Module for Migration
An ideal starting point for decoupling is to identify the loosely coupled modules in the monolithic application and choose one of them as one of the first candidates to convert to a microservice.
To complete a dependency analysis of each module, consider the following:
- The type of dependency: dependencies from data or other modules.
- The scale of the dependency: how a change in the identified module might impact other modules.
Migrating a module with heavy data dependencies is usually a nontrivial task. If we migrate features first and migrate the related data later, we might temporarily read from and write to multiple databases, which can lead to inconsistency. Therefore, we must account for data integrity and synchronization challenges.
We recommend extracting modules that have different resource requirements compared to the rest of the monolith. For example, if a module contains an in-memory database, it can be converted into a service that can subsequently be deployed on hosts with more RAM. When turning modules with particular resource requirements into services, you can scale your application much more easily.
In addition to business criticality, thorough test coverage, the security posture of the application, and organizational buy-in can influence the migration priority of services. Based on the evaluations, you can rank services.
Step 3: Extract a service from the Monolith
After identifying the optimal candidate for a service, we must determine a means for microservice and monolithic modules to coexist. One way to manage this coexistence is to implement an adapter that facilitates the compatibility between the modules. Over time, the microservice absorbs the load and the monolithic component is eliminated. This progressive procedure reduces the risk of migrating from a monolithic application to a microservice as it allows for the gradual detection of bugs and performance concerns.
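One way to realize such an adapter, sketched below under the assumption of HTTP-based access to both the monolith and the extracted service (the endpoints and migrated paths are illustrative), is a thin routing layer that sends calls for migrated capabilities to the new microservice while everything else still reaches the monolith.

```python
# Hedged sketch of a strangler-style adapter: route calls for an extracted
# capability to the new microservice, everything else to the monolith.
# Endpoints and the migrated paths are illustrative placeholders.
import requests

MONOLITH_BASE = "http://monolith.internal"
POLICY_SERVICE_BASE = "http://policy-service.internal"

# Capabilities already migrated to the new microservice.
MIGRATED_PATHS = {"/policies", "/policies/quotes"}

def route(path: str, params: dict) -> dict:
    """Forward the call to the microservice if migrated, else to the monolith."""
    base = POLICY_SERVICE_BASE if path in MIGRATED_PATHS else MONOLITH_BASE
    response = requests.get(f"{base}{path}", params=params, timeout=5)
    response.raise_for_status()
    return response.json()
```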

Typically, monolithic applications have their own monolithic databases. One of the principles of a microservices architecture is to have one database for each microservice. Therefore, when modernizing monolithic applications into microservices, split the identified monolithic database according to the service boundaries.
To determine where to split a monolithic database, first analyze the database mappings. As part of the service extraction analysis, gather insights on the microservices that have to be created, and use the same analysis to map tables and other database objects to the new microservices.

This picture depicts the transition from monolithic to microservices database architecture.
However, splitting a monolithic database is complex because there might not be a clear separation between database objects. Additionally, you must address concerns like data synchronization, transactional integrity, joins, and latency.
The pros and cons of the microservices database approach are:
Criteria | Pros | Cons |
Loosely coupled schema | Deployment of service changes happens independently and rapidly. | If the entire data model is not designed, this might involve changes to already deployed microservices. |
Deployment Effort | Since it is a schema with loose coupling, deployment will be optimized to save time and resources. | In certain scenarios, a shift to Production may necessitate additional effort. |
Scalability | Scaling individual services is simple. | This might involve redundant data storage. |
Optimizing DB environment/ performance tuning | Each DB can have a different configuration. | Increased maintenance tasks and effort. |
Coexistence of SQL and No-SQL Architecture | Faster Data Retrieval | Infrastructure overhead needed to implement |
Communication / Data integrity | Easy to implement | Redundant data will be needed & Integrity can be easily compromised. |
Debugging | Simple to implement for something within the microservices. | Implementing an integrated functionality is challenging. |
Testing | Simple to execute in small chunks. | Global testing is not possible. |
DB Vendor lock-in | Provides flexibility to have different vendors | Complexity and cost in maintaining multiple vendors. |
Approach for Data Migration

Data migration from Monolithic to Microservices DB
As data migration is a crucial aspect of the successful implementation of a new application, the following should be considered:
- In addition to data for functionality, reference/lookup data should be transferred.
- Ensure inbound data migration is scheduled to return modified/added Microservice data to the monolithic DB.
- Source data should be directly extracted from DB, REST API, or Excel/CSV.
- In a microservice architecture, an ETL process should be used to ensure data is stored in the most recent DB version (a minimal ETL sketch follows after this list).
- Develop a detailed functional specification document to capture what has to be converted.
- Prepare a document outlining the migration strategy.
- Define the key success factors.
- Collaborate with the business and technical teams over data cleansing.
- Create a comprehensive migration architecture.
- Prepare a detailed migration strategy or plan outlining every aspect of the migration.
- Analyze the tools and strategies for data migration.
- Analyze the data validation details and data conversion testing strategy.
- Determine the Acceptance Criteria for a successful migration.
- Identify potential threats and risk mitigations.
- Identify source data and dependent data from source systems.
- Assess if formats and data sensitivity require encryption.
- Define data volume and the resulting constraints.
- Connect with customers to clean up data or to get core data.
- Develop a data migration project plan.
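The following is a minimal, hedged sketch of the extract-transform-load step described above, using only the Python standard library; the CSV layout, table and column names are purely illustrative and not the actual carrier formats.

```python
# Hedged ETL sketch: extract source rows from CSV, apply a small cleansing
# step, and load them into the target database. File, table and column
# names are illustrative only.
import csv
import sqlite3

def migrate(source_csv: str, target_db: str) -> int:
    conn = sqlite3.connect(target_db)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS claims (claim_id TEXT PRIMARY KEY, "
        "property_id TEXT, claim_amount REAL)"
    )
    loaded = 0
    with open(source_csv, newline="") as handle:
        for row in csv.DictReader(handle):
            # Transform: normalize identifiers and coerce the amount.
            record = (
                row["Claim ID"].strip(),
                row["Property"].strip().upper(),
                float(row["Claim Amount"] or 0),
            )
            # Load: upsert so reruns of the migration stay idempotent.
            conn.execute("INSERT OR REPLACE INTO claims VALUES (?, ?, ?)", record)
            loaded += 1
    conn.commit()
    conn.close()
    return loaded
```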
Typical monolithic applications are built using different layers—a user interface (UI) layer, a business layer, and a persistence layer. A central idea of a microservices architecture is to split functionalities into cohesive verticals — not by technological layers, but by implementing a specific domain.

This figure represents the high-level recommended architecture, based on microservices on AWS. In our case, the micro frontend-based web/mobile apps use REST APIs to communicate with the back end.
a. API Gateway
APIs are the front door of microservices: they serve as the entry point for application logic behind a set of programmatic interfaces, typically a RESTful web services API. An API accepts and processes calls from clients, and might implement functionality such as traffic management, request filtering, routing, caching, authentication, and authorization.
Architecting, deploying, monitoring, continuously improving, and maintaining an API can be a time-consuming task. Sometimes different versions of APIs need to be run to assure backward compatibility for all clients. The different stages of the development cycle (for example, development, testing, and production) further multiply operational efforts.
Authorization is a critical feature for all APIs, but it is usually complex to build and involves repetitive work. When an API is published and becomes successful, the next challenge is managing and monitoring it.
Other important challenges include throttling requests to protect the backend services, caching API responses, handling request and response transformation, and generating API definitions and documentation.
Amazon API Gateway addresses these challenges and reduces the operational complexity of creating and maintaining RESTful APIs. API Gateway allows you to create your APIs programmatically by importing Swagger definitions, using either the AWS API or the AWS Management Console. API Gateway serves as a front door to any web application running on Amazon EC2 or Amazon ECS.
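For instance, a Swagger/OpenAPI definition can be imported programmatically with boto3; the sketch below assumes a definition file already exists, and the file name and stage name are placeholders.

```python
# Hedged sketch: creating an API Gateway REST API from an existing
# Swagger/OpenAPI definition and deploying it to a stage. The file name
# and stage name are placeholders.
import boto3

apigateway = boto3.client("apigateway")

with open("loss-run-api.swagger.json", "rb") as definition:
    api = apigateway.import_rest_api(failOnWarnings=True, body=definition.read())

# Deploy the imported API so clients can call it on the chosen stage.
apigateway.create_deployment(restApiId=api["id"], stageName="dev")
```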
b. Serverless Microservices
A common approach to reducing operational effort for deployment is container-based deployment. In the above architecture, Docker containers are used with Fargate, so there is no need to manage the underlying infrastructure.
In addition to DynamoDB, Amazon Aurora Serverless is used, which is an on-demand, auto-scaling configuration for Amazon Aurora, where the database will automatically start up, shut down, and scale capacity up or down based on your application's needs.
c. Disaster Recovery
These microservices will be implemented using the Twelve-Factor Application patterns. This means that the main focus for disaster recovery should be on the downstream services that maintain the state of the application. For example, these can be file systems, databases, or queues.
The disaster recovery strategy will be planned based on the recovery time objective and recovery point objective.
The recovery time objective is the maximum acceptable delay between the interruption of service and restoration of service. This objective determines what is considered an acceptable time window when service is unavailable and is defined by the organization.
The recovery point objective is the maximum acceptable amount of time since the last data recovery point. This objective determines what is considered an acceptable loss of data between the last recovery point and the interruption of service and is defined by the organization.
d. High Availability
Amazon ECR hosts images in a highly available and high-performance architecture, enabling you to reliably deploy images for container applications across Availability Zones.
Organizational Overview and Technical Strengths
Qentelli LLC is a Digital and Cloud company headquartered in Dallas, Texas. The core of our offerings is automation across the engineering lifecycle – minimizing the need for human intervention and improving the agility and velocity of the overall application engineering lifecycle, thus enabling Continuous Delivery.
What we do is summarized in the infographic below:
Six patents, nine tools, and a book on Continuous Delivery speak for the thought leadership we bring to Quality Engineering, CI/CD and DevOps.
About Client
Founded in 1950, the client is a network of young chief executives with approximately 24,000 members across 130 countries. Qentelli partnered with them on an Engineering Transformation, re-architecting the entire legacy portal.
Tech-stack:
Restful APIs
.NET Stack
Azure Cloud platform
VUE.JS
CI / CD Enablement
Jane.ai
Requirement
UI / UX Refresh – Re-architecture of entire portal with focus on user membership at the core
Omni-platform experience – Leverage modern technologies to ensure cross-browser and multi-screen compatibility across devices
Automation and DevOps – Enable application development, security, infrastructure as code, and operations into a continuous, end-to-end, highly automated delivery cycle
Solution Highlights
Re-architected the entire backend using – Microservices, Cloud-enablement, and Serverless architecture
Designed the User Stories with member centricity
Data store implementation for microservices and golden SQL synchronization
Architecture redesign from scratch with an event driven approach
Modular UI designs across – Application screens, Application actions and Application components and configuration
Artificial Intelligence using Jane.ai that consumes data across the various data sources in the application to enable predictions and recommendations for the members of the portal
Benefits
High – Available environments – enabling business continuity
Increased number of deployments that helped decrease the change failure rate
Analytics Driven Design - Converted all member interactions and formerly unstructured sources into a useful, actionable format to optimize customer experiences
Reduction in operational cost - Introduced on-demand environments
Member centric Approach – Enhanced member experience consistently across all touchpoints and channels of interaction
As per our understanding, the following is the scope of the project considered for our Solution Design:
- Design a data mart where files from Insurance carriers will be loaded into a Common Data Model (CDM).
- If new columns are introduced that are not previously mapped or if there are issues during data load/mapping, the system will inform the uploading user and flag it for manual review by the administrator
- UI Portal to upload files, map data, view reports and perform analytics
- Metadata-driven approach to map source -> target columns, identify new columns, etc.
- Input files would be in Excel, CSV and PDF. Incident files will also be provided in Excel
- Files would be loaded every 6 months or can be on-demand at a higher frequency
- Format of the files will be standard for each carrier
- Each file would have information regarding the property, sub-property and claim details, including claim amount
- As column headings could be different, data mart to have a metadata approach to standardize/cleanse columns as far as possible (per carrier)
- Exception handler during data load, with a UI for various types of exceptions:
- Any file format that is not in Excel, CSV, PDF
- If the same number of columns is found but column names are different: this can be due to a new or replaced column
- Wrong data stored for the data type (i.e., alphanumeric values in a data field)
- Duplicate claims
- Though a loss-run report for a property can cover a very long duration, the Datamart is expected to capture only the first 5 years (i.e., if data arrives for beyond 5 years, it can be handled like any other exception)
- Store data that is not in the main data model (unmapped data, data beyond 5 years, etc.) for future use
Out of Scope
- AWS account and infrastructure for Dev, Test, Staging and Production environments will be provided by Leonardo247
- Non-functional tests such as Performance and Security
- Backup and DR implementation and testing
- User Acceptance Testing (any defect fixes from UAT will be addressed)
- No new Identity and Access Management system will be designed for this platform – existing implementations such as Okta or Cognito will be re-used if possible
Qentelli will use the Well-Architected Framework published by AWS for the Technical Architecture and Development. Some of the best practices from our experience are also incorporated in the above Solution Architecture, as described below:
- Use of AWS-native services as much as possible to reduce Infra & operational complexity
- Decentralized Data Management
- Smart Endpoints and Dumb Pipes
- Loose Coupling and High Cohesion
- Integrated Authentication
- Eventual Consistency
- High Fault Tolerance
Key Technology components used in the Architecture:
- Serverless architecture – the platform usage is not continuous and will rarely hit peak loads. During certain periods in the year, there might be a need to scale the architecture
- React-based UI for simplicity and extensibility. Can be extended for mobile apps easily in the future
- API gateway to invoke the right services as well as manage Authentication. Extensibility for normal microservices can also be achieved easily with this approach
A high-level functional architecture for the Solution is shown below:
Datamart Design Considerations
- Create a Common Data Model with a common schema that can be extended dynamically
- A master list of columns will be created. For each column, a list of probable mapping names will be created
- Users will be able to configure the column list for each insurance carrier from the above master column list. At this point, it might be possible to add category/sub-category details at the column level, so that users can use this for slicing and dicing in the future
- When a file from a specific carrier is loaded, the source file columns will be compared
- If the columns match, the “mapping” is considered successful. If any columns do not match, the new column(s) are moved to the Exception log, where human intervention is expected for approval
- A UI will be provided for the user to intervene and approve any new columns (for each carrier). Approved columns are added to the master list of columns as well as to the carrier-specific metadata
- Loading of files will have 2 stages. When the columns are a 100% match, the data will be loaded automatically; user intervention will be needed when there is any deviation (a sketch of this check follows below)
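The following is a hedged sketch of that column-matching check; the carrier metadata structure, carrier name and column names are illustrative only.

```python
# Hedged sketch of the metadata-driven column check: columns in an uploaded
# carrier file are compared with the carrier's configured mapping; a full
# match loads automatically, any deviation goes to the exception log for
# admin review. Metadata structure and names are illustrative.
CARRIER_METADATA = {
    "acme-insurance": {
        "Policy #": "policy_number",
        "Loss Date": "loss_date",
        "Paid Amount": "claim_amount",
    }
}

def check_columns(carrier: str, file_columns: list[str]) -> dict:
    mapping = CARRIER_METADATA.get(carrier, {})
    unknown = [c for c in file_columns if c not in mapping]
    missing = [c for c in mapping if c not in file_columns]
    return {
        # Load automatically only when the match is 100%.
        "auto_load": not unknown and not missing,
        # Anything else is routed to the exception log for approval.
        "exceptions": {"unmapped_columns": unknown, "missing_columns": missing},
    }

result = check_columns("acme-insurance", ["Policy #", "Loss Date", "Payout"])
# result["auto_load"] is False; "Payout" awaits admin approval as a new column.
```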
Workflow
A simplified workflow for a typical user is shown below:
Solution Architecture
Based on our initial understanding, an indicative Solution Architecture is visualized in the following diagram. Please note that as the project is started, there will be a more detailed evaluation of the needs and an updated Architecture will be developed.
If an exception such as the below occurs, then the exception is routed through a separate handler and is displayed for the admin to determine what action should be taken.
Potential types of exceptions:
- Metadata changes – additions, deletions, updates to existing metadata (such as column names, order of columns etc.)
- Value Changes – changes in how values are sent in the file (such as full values for state name instead of abbreviations)
For metadata changes, the admin will be able to add new columns from the UI. Deleted columns are typically ignored to avoid loss of data previously captured (no changes are made to the Common Data Model, but ingestion file is updated to ignore the missing columns).
The red lines/arrows represent a typical exception workflow.
The core considerations for the above architecture are:
- Python or similar language for file processing, data extraction and mapping to the storage schema
- Use of Lambdas for the following reason:
- The data ingestion is typically done infrequently, so use of other architectures like an API model would be operationally expensive
- The platform needs to perform data-intensive processing with very few workflows and little business logic, so a microservice architecture would not be suitable
- Exception handling can be achieved via flat files with context or by posting to SQS -> SNS. An exception handler will need to be wired to the UI for approvals (see the sketch after this list)
- All ingested files to be stored in an S3 Bucket for future reference before any processing
- Using AWS Aurora or other DBs preferred by Leonardo247 for relational data storage helps reduce license costs and operational complexity. In the future, it will be easier to push the data into Redshift for large datasets
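The following is a minimal sketch of the Lambda-based ingestion flow outlined above; the queue URL, bucket layout and the process_file() step are placeholders, and the real parsing and mapping logic would replace the stub.

```python
# Hedged sketch of the ingestion Lambda: read the uploaded file from S3,
# attempt to process it, and publish any exception to SQS so the handler
# wired to the UI can surface it for admin approval. The queue URL and
# process_file() are placeholders.
import json
import boto3

s3 = boto3.client("s3")
sqs = boto3.client("sqs")

EXCEPTION_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/111111111111/loss-run-exceptions"

def process_file(body: bytes) -> None:
    """Placeholder for parsing, column mapping and loading into the Datamart."""
    raise NotImplementedError

def handler(event, context):
    record = event["Records"][0]["s3"]
    bucket, key = record["bucket"]["name"], record["object"]["key"]
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    try:
        process_file(body)
    except Exception as exc:  # route every failure to the exception queue
        sqs.send_message(
            QueueUrl=EXCEPTION_QUEUE_URL,
            MessageBody=json.dumps({"bucket": bucket, "key": key, "error": str(exc)}),
        )
```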
An indicative timeline based on the scope and complexity is outlined below. The timeline is liable to change during the project based on complexity and changes in business or technical requirements – as we complete workshops and review the user stories, we may present an updated timeline.
All necessary access to Leonardo systems, detailed user stories [with Definition of Done], along with all file types, business rules, current and future reports, existing documentation and any other information identified by Qentelli, will be provided before the project start date. Any delays from Leonardo in providing requirements, content, clarifications, approvals, UAT, defect prioritization or any other activity may lead to delays in the schedule – such delays will also increase the estimated pricing.
This section highlights our delivery approach and all the vital activities that we do in terms of phases.
A standard Team (squad) has members from Product/Business, Project Manager/Scrum Master, Architect, Developers, SDETs/QA and Ops
Members of this team can be from Leonardo or Qentelli, based on functional and technical knowledge
Qentelli will staff the Architecture, Engineering and QA roles, while Leonardo will provide Product Ownership/Functional SME, Infrastructure and Operational support
The Team structure for this project from Qentelli:
Role | Location |
Solution Architect | Offshore |
UI Designer | Offshore |
Web Developer | Offshore |
DB Developer | Offshore |
QA Engineers | Offshore |
Continuous Development and Integration
At Qentelli, our software developers have extensive knowledge of the latest technologies and are well versed in DevOps, Continuous Integration and Continuous Delivery. The image below describes the various activities that will be part of the development phase. We follow the core principles below for our development practices:
- Test-Driven Development
- Rigorous, regular refactoring
- Continuous integration
- SOLID principles for Design/Development
- S - Single-Responsibility
- O - Open-closed
- L - Liskov substitution
- I - Interface segregation
- D - Dependency Inversion
- Pair programming
- Single Repository
- Secure Development Practices based on guidelines from Microsoft and OWASP
- Git based branching strategy
Pipeline Orchestration:
A completely automated CI Pipeline can significantly reduce the time needed to move a unit of code through different stages and environments. The following diagram shows the typical Pipeline that is fully automated till deployment to Production.
NOTE: Due to time and cost considerations, fully automated pipelines will not be available in Phase 1.
Program Delivery & Governance
Program Delivery Model
Our approach to Agile software development follows the 12 principles of the Agile Manifesto for successful delivery of each increment of the end product. This approach allows Leonardo247 to change requirements at any stage in the development cycle, keeping customer satisfaction at the forefront. We make sure to enable continuous collaboration between our teams and the stakeholders at Leonardo247.
We will follow the philosophy of SCRUM development lifecycle and the various phases of the project management are described below.
User Story Creation: User stories are short, simple descriptions of a feature told from the perspective of the person who desires the new capability, usually a user or customer of the system. They are written throughout the agile project. Usually a story-writing workshop is held near the start of the agile project.


Governance Model
The Governance Model is a baseline of key elements that are required for project governance based on the project's scope, timeline, complexity, risk, and stakeholders. A communication plan will be developed once all the stakeholders have been identified and their interests and expectations have been defined.
A well-formulated communication plan delivers concise, efficient and timely information to all pertinent stakeholders. A high-level overview of the Governance layers we follow is outlined in the diagram below:
Proposed Organizational Structure and Expertise
The program will be staffed with the following technology-based resources:
Role | Skillset |
FED developers | ReactJS, AngularJS, VueJS |
Automation QA engineers | C# & Selenium |
Accessibility Engineer | Functional testing |
DevOps Engineer | AWS, CI/CD setup, Docker, Kubernetes, Terraform |
UX Engineer | Figma, UX designing |
UI Engineer | HTML, CSS, Bootstrap |
The program comprises multiple stakeholders - including Vice President of Engineering, Director of Delivery, Program managers, Architects, Project managers / Scrum masters, and one or more Scrum teams.
The Scrum team comprises:
- 3 BED developers (Sun Life)
- 3 FED developers
- 3 Automation QA engineers
- 1 Accessibility engineer
- 2 DevOps engineers
- 1 Product Owner (Can be shared among 2 Scrum teams)
The Architecture team comprises:
- Database Architects
- Application Architects
- Infrastructure / DevOps Architect (Sun Life)
- NFR (Performance & Security Architect - Sun Life)
The total number of scrum teams would be determined based on the size of the program.
Dependency | Support needed from Leonardo247 | Stakeholders in Leonardo247 |
Knowledge Transition for Qentelli team | KT workshops on | Platform Architect, Platform Specialist, Solution Architects |
Access to workspace, systems, servers for Qentelli personnel in the US and India | | Leonardo247 IT Team, Product Team approvals |
Feedback for all deliverables and approvals to proceed | Timely review, feedback and approvals for deliverables from Qentelli | All Leonardo247 stakeholders, including but not limited to: |
We have a proven framework for successful Digital Innovation
Technology: We assess the current technology stack before upgrading or introducing new. Our efforts empower clients to develop digitally mature, sustainable, and scalable modern applications.
People: Acknowledging the most important part of the organization (talent) and giving them a silo-free work environment creates miracles. Our areas of expertise include skill building and strategic sourcing.
Culture: We believe it is important for Digital firms to have Culture and Leadership that promotes creativity. Over a period, we have developed capability to fill the culture gaps to accelerate digital turns for clients.
AI/ML: We make existing systems AI/ML receptive and add smart systems with the potential for scalability and the capacity to create automated and intelligent processes. This reduces the average time spent on day-to-day activities and magnifies efficiency.
Process: Digital processes created by Qentelli are Sustainable, scalable, hyper customer-focused and well-defined. We introduce these practices after clear integration between data and relevant processes to speed up the cycles and standardize them.
Metrics: Measuring and improving the right aspects of the development life cycle drive the customer value. We help clients to choose the right metrics to measure their progress and continuously use feedback as the input for improved outcomes.
Program Delivery and Governance
Qentelli will adopt a Continuous Engineering approach, using the Program Increment Model that groups a set of Sprints into an “Increment”, which is a potential “Release”.
Key Notes/Dependencies for the Engineering phase are outlined below:
- The scope of work outlined is organized into multiple streams of work that are addressed by Cross-Functional Teams
- Continuous Integration across teams will be adopted to resolve any integration issues from the start instead of a large Product Integration Phase at the end
- Existing tools for Engineering and Management will be used. Additional tools needed for the Development and QA phases will be discussed and procured by Sun Life.
- New Environments will be needed to avoid conflicts with existing repositories and deployments – Sun Life will provide new environments for Product Engineering as per the Plan provided during the initial Planning phase
- Early and timely feedback from Sun Life is critical to achieve the milestones of this Program – Time will be requested in advance from the respective stakeholders for such reviews
The overall Engineering Model is shown below:

Note: Each of the above areas (Continuous Planning, Development and Integration, Testing and Deployment) is addressed in the following sections:
Continuous Planning through Product Backlog:
- Qentelli will setup one or more Product Backlogs to capture Epics and user stories across all workstreams.
- Focused workshops will be held with Product Owners and other key staff from Sun Life to write the user stories.

The below table outlines the Inputs, Activities, Outcomes and Deliverables during the Planning activity
Inputs | Key Activities | Outcomes | Deliverables |
Continuous Development and Integration:
At Qentelli, we have best-in-class software developers with extensive knowledge of the latest technologies, well versed in DevOps, Continuous Integration and Continuous Delivery. The image shown in the Pipeline Orchestration section below describes the various activities that will be part of the development phase. We follow the core principles below for our development practices:
- Test-Driven Development
- Rigorous, regular refactoring
- Continuous integration
- SOLID principles for Design/Development
- Pair programming
- Single Repository
- Secure Development Practices based on guidelines from Microsoft and OWASP
- Git based branching strategy
Pipeline Orchestration:
A completely automated CI Pipeline can significantly reduce the time needed to move a unit of code through different stages and environments. The following diagram shows the typical Pipeline that is fully automated till deployment to Production.

Steps:
- Once the developer checks in code to the version control repository, GitLab (for example) checks for new commits at an interval of 5 minutes
- When a new commit is found, the CI tool kicks off a build. Once the build passes, a set of unit tests is run. After this, static analysis comes into play
- Once the static analysis stage passes, single-user performance tests are run; the build is then packaged and pushed to an artifact repository of choice
- Deployment scripts are authored by the Infra/Ops teams, who preconfigure them, define essential infrastructure and security policies, create infrastructure-as-code scripts and push them to version control
- These IaC scripts are then run on the CI server, and the build is promoted to the Dev environment, where a suite of smoke tests is run. The outcome of the smoke suite determines whether further tests need to be run
- After the smoke suite passes, the build is automatically deployed into the QA environment, where system, security and performance tests are run. These scripts are configured as jobs within the CI server
- After this, the build is promoted to the UAT environment, where Sun Life will test the application and provide any necessary feedback
- After acceptance, the build is promoted to the pre-prod and Prod environments automatically
Deployment Strategy:
- In the CI tool, the deployment pipeline is set up so that the build is promoted to the various environments automatically
- Deployment scripts are configured in the CI tool to provision the various environments
- The entire environment configuration will be version controlled
- The CI tool is configured to poll every 5 minutes for new commits; as soon as it finds a change, the deployment pipeline is kickstarted
Sample Deployment Pipeline Strategy:

Inputs | Key Activities | Outcomes | Deliverables |
· User Stories · Mock-ups | · Test / Behaviour Driven Development · Branching Strategy · Unit Tests authoring · Automated Unit tests · Code Scans · Code Coverage · Continuous Integration Implementation · Code reviews | · Coding standards · Branching Strategy · Continuous Integration | · Unit test reports · Code Coverage report · CI and Automation coverage report · Test Readiness report |
Continuous Testing
The key to building quality into our software is making sure we can get fast feedback on the impact of changes. In order to build quality into software, we need to adopt a different approach. Our goal is to run many different types of tests—both manual and automated—continually throughout the delivery process.
Test Types
Qentelli will cover the following test types during the Program. Automated functional and non-functional test cases will be integrated with the CI pipeline as part of the sprints. End-to-end performance and security tests will be executed again at the end.

Continuous Testing through Automation
Qentelli will use an Automation-first approach for its QA function. Tests will be written within the sprint using the BDD approach (Gherkin language) and integrated within the CI-CD pipeline.
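For illustration, a hedged sketch of what a sprint-level BDD test could look like with Python's behave library follows; the scenario wording, step names and the ingest_file() helper are hypothetical, not part of the actual test suite.

```python
# Hedged sketch of BDD step definitions using behave. The matching Gherkin
# scenario, kept in a .feature file, might read:
#
#   Scenario: Upload a loss-run file with a known layout
#     Given a carrier file with only mapped columns
#     When the file is ingested
#     Then the data is loaded without exceptions
#
from behave import given, when, then

def ingest_file(file_descriptor):
    """Placeholder for the real ingestion call exercised by the test."""
    return {"exceptions": []}

@given("a carrier file with only mapped columns")
def step_prepare_file(context):
    context.file = {"carrier": "acme-insurance", "columns": ["Policy #", "Loss Date"]}

@when("the file is ingested")
def step_ingest(context):
    context.result = ingest_file(context.file)

@then("the data is loaded without exceptions")
def step_verify(context):
    assert context.result["exceptions"] == []
```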

After completion of the Program and warranty Phase, Sun Life should be able to manage the Product on its own. Hence this stage is very important for Sun Life personnel to understand the technology, development model, Data management etc.
Transition of the Product to Sun Life requires transferring 3 types of Activities as below:
Work Transition
- Work-in-Progress feature requests/defect fixes
- Features planned for short term release
- Support requests from internal and client services teams
Asset Transition
- Existing APIs, DBs, Plug-ins, Front-end Code, API gateway operation
- Unit tests, Functional and non-functional tests, previous test results
- Tools for SCM, Test Management, Knowledge Management
- CI-CD Pipeline with all jobs, integrations, scripts etc.
Knowledge Transition
- Documentation of Features, architecture, design and source-code
- Processes for User story creation, Planning, Development, QA, deployment and Release
The process for Transition will be through 3 phases:
- KT Workshops, where dedicated time will be used to provide knowledge Transfer on defined areas
- High-Guidance Parallel Phase, where Qentelli team will own the responsibility of the development lifecycle, with participation from Sun Life Team
- Low-guidance Parallel Phase, where Sun Life team takes over the ownership of the Product with support from Qentelli team

Our approach to Agile software development follows the 12 principles of the Agile Manifesto for successful delivery of each increment of the end product. This approach allows Sun Life to change requirements at any stage in the development cycle, keeping customer satisfaction at the forefront. We make sure to enable continuous collaboration between our teams and the stakeholders at Sun Life.

We will follow the philosophy of SCRUM development lifecycle and the various phases of the project management are described below.
User Story Creation:
User stories are short, simple descriptions of a feature told from the perspective of the person who desires the new capability, usually a user or customer of the system. They are written throughout the agile project. Usually a story-writing workshop is held near the start of the agile project.
In order to create good user stories, start by remembering to INVEST in good user stories. INVEST is an acronym for the qualities that make up a good user story: Independent, Negotiable, Valuable, Estimable, Small, and Testable.
- Planning:
Sprint planning plays an important role in the universe of Agile development methodologies which prioritize responsiveness, flexibility, and continuous improvement, and seek to help organizations excel at managing work in a world where priorities and markets are shifting rapidly.
As customer requirements and competition evolve, sprint planning determines what work to tackle next.
In software, sprint planning determines which product changes or features can be delivered and how to roll them out most efficiently in the next iteration. This planning process ensures that the team tackles a realistic amount of work and accomplishes the most important work first.
- Estimation:
This helps the team determine how much work to bring into the sprint. By splitting product backlog items into small, discrete tasks and then roughly estimating them during sprint planning, the team is better able to assess the workload. This increases the likelihood of the team finishing all they say they will.
Identifying tasks and estimating them during sprint planning helps team members better coordinate their work.
- Estimation Techniques
Story points: Points are used to assign each engineering task a relative amount of effort. Teams commonly use Fibonacci series numbers (1, 2, 3, 5, 8 instead of 1, 2, 3, 4, 5) to help offset the uncertainty that comes with larger stories.
T-shirt sizing: Assigning small, medium or large labels for each engineering initiative to indicate its complexity.
Time buckets: Simply using days and hours to estimate, instead of t-shirt sized buckets or story points.
- Definition of Done
The Definition of Done ensures everyone on the Team knows exactly what is expected of everything the Team delivers. It ensures transparency and quality fit for the purpose of the product and organization. Each Scrum Team has its own Definition of Done or consistent acceptance criteria across all User Stories. A Definition of Done drives the quality of work and is used to assess when a User Story has been completed.
- Daily Standups:
This is an opportunity for the development team to discuss what they accomplished the day before, what they will be working on for the day and what obstacles are impeding their progress. The Scrum Master is usually the facilitator. The meeting is meant for all team members to share their input on the sprint and get a clear understanding on what work was already completed, what issues need resolving and what is left to do.
This will give good insight into the progress of the sprint and give an early indicator on if the commitments and sprint goal are being met.
- Sprint Demos:
At the end of each sprint, the team is responsible for providing a working, potentially shippable piece of software and for getting feedback from the product owner and other stakeholders. The result will be weighed against the initial sprint goal, and the team can use this time to share their observations on what was accomplished.
- Sprint Retrospectives:
This is an opportunity for the team to reflect on how they did in the sprint and determine ways in how they can improve. The team collaboratively suggests what they should start doing, stop doing and continue doing. Any actions items identified will be fed back into the Sprint Planning meeting to accommodate any new changes into the next Sprint.
- Reporting & Tracking:
We believe accurate reporting is a key element in fostering quality communication within an organization, especially large, complex, global enterprises such as Sun Life. Without quality communication, employees have trouble distinguishing important from unimportant business data. Regular status reports, including those that support quality assurance teams and processes, help ensure that everyone is doing work that is up to your company's quality standards. We leverage market project management tools where such reporting is easier and is part of the sprint lifecycle. They provide real-time dashboards that can be visualized by all personnel on the project.
Metrics that will be measured that empowers quality and sustainability of the end product using the Agile Program Management:
Metric | How does it help? |
Sprint Burndown | Visualizes how many story points have been completed during the sprint and how many remain, and helps forecast if the sprint scope will be completed on time |
Velocity | Measures how many story points were completed by a team, on average, over the past few sprints. It can be used to predict the team’s output in the upcoming sprints |
Lead Time | Measures the total time from the moment a story enters the system (in the backlog), until it is completed as part of a sprint, or released to customers |
Cycle Time | A very simple metric that can raise a red flag when items within sprints across your entire system are not moving forward |
Code Coverage | Measures the percentage of your code which is covered by unit tests. It can be measured by the number of methods, statements, branches or conditions which are executed as part of a unit test suite |
Static Code Analysis | An automated process that provides insights into code quality and helps keep code clean of simple errors and redundancies |
Failed Deployments | Measures the number of failed deployments (to test environments, production environments, or both) |
Escaped Defects | Identifies the number of bugs discovered only after a build or release enters production |
Reporting & Tracking:
o Qentelli will continuously capture knowledge generated during the Program Implementation. This would cover:
o Discussions on user stories, evaluation of alternatives
o Meeting minutes
o Architectural decisions
o Design decisions
o Specialized technical knowledge
o Documentation within source code
o Review comments/fixes
o Program status reviews
o Retrospectives and Demos
Tools to capture and disseminate information collected on the above are:
o Confluence as the primary repository
o Auto-generation of source code comments
o SharePoint site for larger documents
The Governance Model is a baseline of key elements that are required for project governance based on the project's scope, timeline, complexity, risk, and stakeholders. A communication plan will be developed once all the stakeholders have been identified and their interests and expectations have been defined. A well-formulated communication plan delivers concise, efficient and timely information to all pertinent stakeholders. A high-level overview of the Governance layers we follow is outlined in the diagram below:

The table below summarizes the various reports that will be published periodically for engagement governance. These reports will be made accessible to Sun Life on a real time basis through the project management tools.
Quarterly | Monthly | Weekly | Daily |
Executive Management | Project/Program Team | Squads | |
Communication Plan:
Qentelli will define and implement a communication plan during this project to identify the right audiences, messaging, and tactical communication activities; the plan supports the project and provides direction to develop and encourage business-wide commitment to the vision of the project. The communication plan also includes an escalation process to address issues and risks, assist with remediation, and provide full transparency during the engagement. The plan includes:
- Identifying Stakeholder information needs and information channels.
- Identifying key messages necessary to effectively communicate the project strategy, goals and benefits.
- Identifying tactical communication needs including audiences, messages, channels, responsible parties, escalation channel and reporting.
For timely resolution of all issues, the following Escalation matrix will be used:
Escalation Level | Qentelli Team | Escalation Categories and Governance |
Level 1 | Development Manager Quality Manager Enterprise Architect | · Project Issues · Deviation from Standards, deliverables, adherence to schedule, resource performance issues |
Level 2 | Delivery Manager | · Change Disputes · Performance Issues (customer dissatisfaction, deficiencies, issues around expectations) · Quality Issues (not resolved in Level 1) |
Level 3 | Delivery Manager SVP Delivery | · Continuous slippage on agreed milestones · Unable to deliver on the agreed action items · Unresolved deficiencies around service offerings · Issues around customer satisfaction / expectation management · Quality Issues (not resolved in Level 2) · Monthly reviews on service delivery |
Level 4 | SVP Delivery CDO | · Executive Review · Satisfaction Surveys · Quarterly/Half yearly Leadership meet for service performance review · Unresolved issues around quality (Level 3 and below) · Audits and compliance · Escalation Management · Strategy and thought leadership |
A systematic approach to Organizational Change Management (OCM) is beneficial when change requires people throughout an organization to learn new behaviors and skills. By formally setting expectations, employing tools to improve communication, and proactively seeking ways to reduce misinformation, stakeholders are more likely to buy into a change initially and remain committed to it throughout.
Successful OCM strategies include:
- Agreement on a common vision for change -- no competing initiatives.
- Strong executive leadership to communicate the vision and sell the business case for change.
- A strategy for educating employees about how their day-to-day work will change.
- A concrete plan for how to measure whether the change is a success -- and follow-up plans for both successful and unsuccessful results.
- Rewards, both monetary and social, that encourage individuals and groups to take ownership for their new roles and responsibilities.
Some of the risks and their mitigation plans are presented below. We will maintain a risk log and review it every week with the key stakeholders from Sun Life.
Risk ID | Risk Description | Impact on | Prevention | Mitigation |
1 | Knowledge of scope identified during Discovery Phase may be insufficient | Schedule | | |
2 | Challenges in decoupling the current architecture | Schedule, Quality | | |
3 | Resistance from Sun Life staff to participation | Schedule, Quality | | |
4 | Program schedule adherence | Schedule, Quality | | |
5 | Regression issues when refactoring code | Quality | | |

Our team brings the relevant experience and knowledge
Qentelli thanks Sun Life for providing us the opportunity to present our solution to the RFP for Modernizing your Monolithic Application to Microservices and Micro Frontend. We understand that Sun Life’s objective is to modernize the existing legacy application in line with the Goals and Strategy as part of the Digitalization journey.
Qentelli is a Digital and Cloud Technology company. Our teams, powered by our Innovation Teams and backed by our Digital Center of Excellence, have deep expertise and experience in delivering Digital Transformation solutions for several Fortune 100 customers. These solutions include Modernization, Cloud Native, Event Driven and Micro Architectures that deliver rich Digital Experiences.
Our Digital Transformation practice has been a key partner to several customers in the BFSI sector, including Traditional and Digital Banks; Financial Services providers for the Residential, Commercial, Auto and Marine industries; and Insurance providers in Life, Auto and other sectors.
Some of the work we have done that is relevant to this proposal includes:
- Platform Modernization for a global leader in Financial Services, modernizing their legacy application to a Microservices & Micro Frontend based application on AWS (multi-tenant, multi-lingual) with the aim of improving end-user experience and accelerating time to market.
- Digital Transformation for the largest regional bank in the US, modernizing their revenue-generating monolithic application into a microservices architecture on the Azure platform.
- New Application Development on Microservices & Micro Frontend for the world's largest financial consulting organization.
What differentiates us
Qentelli is headquartered in Dallas, TX with a global presence. The Services teams are powered by the Innovation Group that provides the thought leadership, problem solving, tools and plug-ins that not only bridge the gap conventional tools leave, but also accelerate value addition by enhancing the overall service delivery journey.
Our Intellectual Property which includes AI based products/tools, frameworks, methodology and process playbooks help accelerate and deliver Digital Transformation, Cloud Adoption, DevOps and Quality Engineering solutions to our customers.
We are excited to bring our proven expertise in Legacy Modernization for Financial Services organizations, and we hope to be the vendor of choice for Sun Life. Along with my Executive Leadership team, I assure Sun Life that we will go the extra mile to help achieve your digital objectives.
Sanjay Jupudi, President & Founder, Qentelli
sanjayj@qentelli.com | +1 469 600 0696
This proposal articulates the technical approach to a modernization strategy that provides an agile, rapid, low-cost migration and transformation capability. The proposal focuses on:
- Development and delivery services, in tandem with a comprehensive agile organizational design structure, that will deliver the envisioned modernized digital tools to replace the monolith approach across Sun Life.
- An end-to-end process consisting of automated tools, methods, and delivery - able to support a diverse array of technologies, geographies, regulatory requirements, operating models, and target environments.
- A strategy to implement Micro Frontend Development that includes a broad set of technical skills to support front-end and back-end developments including reusable components that can be leveraged across the web and mobile platforms.
- Creating a single codebase through a Progressive Web App (PWA), so there is a single build regardless of the device and across the different businesses (a minimal sketch of this approach follows this list).
- Implement patterns of modernization that lower total costs and mitigate functional and technological risks.
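To make the Micro Frontend and PWA points above concrete, below is a minimal TypeScript sketch of how a shell application could register a service worker for a single installable build and compose separately deployed business modules at runtime. The module names, URLs and the mount(element) contract are illustrative assumptions, not part of the current Sun Life landscape; the actual composition mechanism (for example, Module Federation) would be finalized during the Discovery Phase.

```typescript
// Minimal micro frontend shell for a PWA, assuming each business module is
// built and hosted separately and exposes a mount(element) function.
// Module names and URLs below are illustrative only.

interface MicroFrontend {
  name: string;
  entryUrl: string; // URL of the remote ES module bundle (assumed layout)
  route: string;    // path prefix the module owns
}

const registry: MicroFrontend[] = [
  { name: 'policies', entryUrl: '/mfe/policies/entry.js', route: '/policies' },
  { name: 'claims',   entryUrl: '/mfe/claims/entry.js',   route: '/claims' },
];

// Register the service worker that makes the single build installable and
// cache-enabled on any device (the PWA part of the approach).
async function registerServiceWorker(): Promise<void> {
  if ('serviceWorker' in navigator) {
    await navigator.serviceWorker.register('/service-worker.js');
  }
}

// Lazily load and mount the micro frontend that owns the current route.
async function mountForRoute(pathname: string, outlet: HTMLElement): Promise<void> {
  const target = registry.find(mfe => pathname.startsWith(mfe.route));
  if (!target) {
    outlet.textContent = 'Not found';
    return;
  }
  // Each remote bundle is assumed to export a mount(element) function;
  // webpackIgnore keeps the bundler from trying to resolve the URL at build time.
  const module = await import(/* webpackIgnore: true */ target.entryUrl);
  module.mount(outlet);
}

// Shell bootstrap: one codebase, one build, composed at runtime.
async function bootstrap(): Promise<void> {
  await registerServiceWorker();
  const outlet = document.getElementById('app');
  if (outlet) {
    await mountForRoute(window.location.pathname, outlet);
  }
}

bootstrap().catch(console.error);
```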
Qentelli will define and track Metrics and KPIs to make sense of the data. Teams can get started with the out-of-the-box or recommended metrics, create custom formulas through a simple drag-and-drop UI, and define cause-effect relationships and dependencies with the derived data. Visualizing metrics across time or categories supports rapid decision-making and surfaces the metrics that matter most to the business. An illustration of what such a custom formula could look like is shown below.
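The following sketch derives two quality KPIs from the base metrics listed earlier in this section; the field names and thresholds are assumptions for illustration only and are not definitions from any specific tool.

```typescript
// Illustrative derived KPIs built from base metrics; names and thresholds
// are assumptions for this sketch, not a specific tool's formulas.

interface PeriodMetrics {
  deployments: number;       // total deployments in the period
  failedDeployments: number; // deployments that failed or were rolled back
  defectsFound: number;      // defects found before release
  escapedDefects: number;    // defects found only in production
}

// Change failure rate: share of deployments that failed.
const changeFailureRate = (m: PeriodMetrics): number =>
  m.deployments === 0 ? 0 : m.failedDeployments / m.deployments;

// Defect escape rate: share of all defects that reached production.
const defectEscapeRate = (m: PeriodMetrics): number => {
  const total = m.defectsFound + m.escapedDefects;
  return total === 0 ? 0 : m.escapedDefects / total;
};

// Example: flag a period for review when either KPI crosses a threshold.
const needsReview = (m: PeriodMetrics): boolean =>
  changeFailureRate(m) > 0.15 || defectEscapeRate(m) > 0.1;
```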