NelsonHall: Banking Operations & Transformation blog feed https://research.nelson-hall.com//sourcing-expertise/banking-operations-transformation/?avpage-views=blog NelsonHall's Banking Operations & Transformation Program is designed for organizations considering, or actively engaged in, the outsourcing of banking industry-specific processes such as payments, loans, or securities processing. <![CDATA[RPA Operating Model Guidelines, Part 3: From Pilot to Production & Beyond – The Keys to Successful RPA Deployment]]>

As well as conducting extensive research into RPA and AI, NelsonHall is also chairing international conferences on the subject. In July, we chaired SSON’s second RPA in Shared Services Summit in Chicago, and we will also be chairing SSON’s third RPA in Shared Services Summit in Braselton, Georgia on 1st to 2nd December. In the build-up to the December event we thought we would share some of our insights into rolling out RPA. These topics were the subject of much discussion in Chicago earlier this year and are likely to be the subject of further in-depth discussion in Atlanta (Braselton).

This is the third and final blog in a series presenting key guidelines for organizations embarking on an RPA project, covering project preparation, implementation, support, and management. Here I take a look at the stages of deployment, from pilot development, through design & build, to production, maintenance, and support.

Piloting & deployment – it’s all about the business

When developing pilots, it’s important to recognize that the organization is addressing a business problem and not just applying a technology. Accordingly, organizations should consider how they can make a process better and achieve service delivery innovation, and not just service delivery automation, before they proceed. One framework that can be used in analyzing business processes is the ‘eliminate/simplify/standardize/automate’ approach.

While organizations will probably want to start with some simple and relatively modest RPA pilots to gain quick wins and acceptance of RPA within the organization (and we would recommend that they do so), it is important as the use of RPA matures to consider redesigning and standardizing processes to achieve maximum benefit. So begin with simple manual processes for quick wins, followed by more extensive mapping and reengineering of processes. Indeed, one approach often taken by organizations is to insert robotics and then use the metrics available from robotics to better understand how to reengineer processes downstream.

For early pilots, pick processes where the business unit is willing to take a ‘test & learn’ approach, and live with any need to refine the initial application of RPA. Some level of experimentation and calculated risk taking is OK – it helps the developers to improve their understanding of what can and cannot be achieved from the application of RPA. Also, quality increases over time, so in the medium term, organizations should increasingly consider batch automation rather than in-line automation, and think about tool suites and not just RPA.

Communication remains important throughout, and the organization should be extremely transparent about any pilots taking place. RPA does require a strong emphasis on, and appetite for, management of change. In terms of effectiveness of communication and clarifying the nature of RPA pilots and deployments, proof-of-concept videos generally work a lot better than the written or spoken word.

Bot testing is also important, and organizations have found that bot testing is different from waterfall UAT. Ideally, bots should be tested using a copy of the production environment.

Access to applications is potentially a major hurdle, with organizations needing to establish virtual employees as a new category of employee and give the appropriate virtual user ID access to all applications that require a user ID. The IT function must be extensively involved at this stage to agree access to applications and data. In particular, they may be concerned about the manner of storage of passwords. What’s more, IT personnel are likely to know about the vagaries of the IT landscape that are unknown to operations personnel!

Reporting, contingency & change management key to RPA production

At the production stage, it is important to implement an RPA reporting tool to:

  • Monitor how the bots are performing
  • Provide an executive dashboard with one version of the truth
  • Ensure high license utilization.

There is also a need for contingency planning to cover situations where something goes wrong and work is not allocated to bots. Contingency plans may include co-locating a bot support person or team with operations personnel.

The organization also needs to decide which part of the organization will be responsible for bot scheduling. This can either be overseen by the IT department or, more likely, the operations team can take responsibility for scheduling both personnel and bots. Overall bot monitoring, on the other hand, will probably be carried out centrally.

It remains common practice, though not universal, for RPA software vendors to charge on the basis of the number of bot licenses. Accordingly, since an individual bot license can be used in support of any of the processes automated by the organization, organizations may wish to centralize an element of their bot scheduling to optimize bot license utilization.
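The licence-pooling argument can be illustrated with a toy calculation. All figures below (shift length, job hours, licence counts) are invented for illustration and are not from the source:

```python
# Toy illustration of the licence-pooling argument: a single bot licence can
# run any automated process, so a centrally scheduled pool absorbs one
# department's overflow with another department's slack. All figures (shift
# length, job hours, licence counts) are invented for illustration.

def work_completed(jobs_hours, licenses, shift_hours=8):
    """Bot-hours of queued work that fit into the available licence-hours."""
    capacity = licenses * shift_hours
    return min(sum(jobs_hours), capacity)

busy_dept = [10, 9]     # 19 bot-hours queued
quiet_dept = [3, 2]     # 5 bot-hours queued

# Siloed: each department owns 2 licences, so the busy department overflows.
siloed = (work_completed(busy_dept, licenses=2)
          + work_completed(quiet_dept, licenses=2))

# Pooled: the same 4 licences scheduled centrally across both queues.
pooled = work_completed(busy_dept + quiet_dept, licenses=4)

# siloed == 21 bot-hours, pooled == 24: the shared pool clears the backlog.
```

The same four licences clear three extra bot-hours per shift when scheduled centrally, which is the utilization gain the text describes.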

At the production stage, liaison with application owners is very important to proactively identify changes in functionality that may impact bot operation, so that these can be addressed in advance. Maintenance is often centralized as part of the automation CoE.

Find out more at the SSON RPA in Shared Services Summit, 1st to 2nd December

NelsonHall will be chairing the third SSON RPA in Shared Services Summit in Braselton, Georgia on 1st to 2nd December, and will share further insights into RPA, including hand-outs of our RPA Operating Model Guidelines. You can register for the summit here.

Also, if you would like to find out more about NelsonHall’s extensive program of RPA & AI research, and get involved, please contact Guy Saunders.

Plus, buy-side organizations can get involved with NelsonHall’s Buyer Intelligence Group (BIG), a buy-side only community which runs regular webinars on RPA, with your buy-side peers sharing their RPA experiences. To find out more, contact Matthaus Davies.  

This is the final blog in a three-part series. See also:

Part 1: How to Lay the Foundations for a Successful RPA Project

Part 2: How to Identify High-Impact RPA Opportunities

]]>
<![CDATA[RPA Operating Model Guidelines, Part 2: How to Identify High-Impact RPA Opportunities]]>

 

As well as conducting extensive research into RPA and AI, NelsonHall is also chairing international conferences on the subject. In July, we chaired SSON’s second RPA in Shared Services Summit in Chicago, and we will also be chairing SSON’s third RPA in Shared Services Summit in Braselton, Georgia on 1st to 2nd December. In the build-up to the December event we thought we would share some of our insights into rolling out RPA. These topics were the subject of much discussion in Chicago earlier this year and are likely to be the subject of further in-depth discussion in Atlanta (Braselton).

This is the second in a series of blogs presenting key guidelines for organizations embarking on an RPA project, covering project preparation, implementation, support, and management. Here I take a look at how to assess and prioritize RPA opportunities prior to project deployment.

Prioritize opportunities for quick wins

An enterprise-level governance committee should be involved in the assessment and prioritization of RPA opportunities, and this committee needs to establish a formal framework for project/opportunity selection. For example, a simple but effective framework is to evaluate opportunities based on their:

  • Potential business impact, including RoI and FTE savings
  • Level of difficulty (preferably low)
  • Sponsorship level (preferably high).

The business units should be involved in the generation of ideas for the application of RPA, and these ideas can be compiled in a collaboration system such as SharePoint prior to their review by global process owners and subsequent evaluation by the assessment committee. The aim is to select projects that have a high business impact and high sponsorship level but are relatively easy to implement. As is usual when undertaking new initiatives or using new technologies, aim to get some quick wins and start at the easy end of the project spectrum.
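As an illustrative sketch only, the three-criteria evaluation above could be expressed as a simple scoring function. The 1-5 scales and the weights below are assumptions for illustration, not part of the guidelines:

```python
# Illustrative sketch of the three-criteria framework above. The 1-5 scales
# and the weights are assumptions for illustration, not part of the
# guidelines; difficulty is inverted because low difficulty is preferred.

def score_opportunity(impact, difficulty, sponsorship, weights=(0.5, 0.3, 0.2)):
    """Composite score for an RPA candidate; higher is better."""
    w_impact, w_difficulty, w_sponsor = weights
    return (w_impact * impact
            + w_difficulty * (6 - difficulty)   # low difficulty scores high
            + w_sponsor * sponsorship)

# Rank a hypothetical backlog of ideas collected from the business units.
backlog = {
    "invoice matching":   score_opportunity(impact=5, difficulty=2, sponsorship=4),
    "report compilation": score_opportunity(impact=3, difficulty=1, sponsorship=3),
    "claims triage":      score_opportunity(impact=5, difficulty=5, sponsorship=5),
}
ranked = sorted(backlog, key=backlog.get, reverse=True)
# High-impact, low-difficulty "invoice matching" ranks first.
```

Scores like these can then feed the governance committee's review alongside qualitative judgment; they are an input to prioritization, not a substitute for it.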

However, organizations also recognize that even those ideas and suggestions that have been rejected for RPA are useful in identifying process pain points, and one suggestion is to pass these ideas to the wider business improvement or reengineering group to investigate alternative approaches to process improvement.

Target stable processes

Other considerations that need to be taken into account include the level of stability of processes and their underlying applications. Clearly, basic RPA does not readily adapt to significant process change, and so, to avoid excessive levels of maintenance, organizations should only choose relatively stable processes based on a stable application infrastructure. Processes that are subject to high levels of change are not appropriate candidates for the application of RPA.

Equally, it is important that the RPA implementers have permission to access the required applications from the application owners, who can initially have major concerns about security, and that the RPA implementers understand any peculiarities of the applications and know about any upgrades or modifications planned.

The importance of IT involvement

It is important that the IT organization is involved, as their knowledge of the application operating infrastructure and any forthcoming changes to applications and infrastructure need to be taken into account at this stage. In particular, it is important to involve identity and access management teams in assessments.

Also, the IT department may well take the lead in establishing RPA security and infrastructure operations. Other key decisions that require strong involvement of the IT organization include:

  • Identity security
  • Ownership of bots
  • Ticketing & support
  • Selection of RPA reporting tool.

Find out more at the SSON RPA in Shared Services Summit, 1st to 2nd December

NelsonHall will be chairing the third SSON RPA in Shared Services Summit in Braselton, Georgia on 1st to 2nd December, and will share further insights into RPA, including hand-outs of our RPA Operating Model Guidelines. You can register for the summit here.

Also, if you would like to find out more about NelsonHall’s extensive program of RPA & AI research, and get involved, please contact Guy Saunders.

Plus, buy-side organizations can get involved with NelsonHall’s Buyer Intelligence Group (BIG), a buy-side only community which runs regular webinars on sourcing topics, including the impact of RPA. The next RPA webinar will be held later this month: to find out more, contact Guy Saunders.  

In the third blog in the series, I will look at deploying an RPA project, from developing pilots, through design & build, to production, maintenance, and support.

]]>
<![CDATA[RPA Operating Model Guidelines, Part 1: Laying the Foundations for Successful RPA]]>

 

As well as conducting extensive research into RPA and AI, NelsonHall is also chairing international conferences on the subject. In July, we chaired SSON’s second RPA in Shared Services Summit in Chicago, and we will also be chairing SSON’s third RPA in Shared Services Summit in Braselton, Georgia on 1st to 2nd December. In the build-up to the December event we thought we would share some of our insights into rolling out RPA. These topics were the subject of much discussion in Chicago earlier this year and are likely to be the subject of further in-depth discussion in Atlanta (Braselton).

This is the first in a series of blogs presenting key guidelines for organizations embarking on RPA, covering establishing the RPA framework, RPA implementation, support, and management. First up, I take a look at how to prepare for an RPA initiative, including establishing the plans and frameworks needed to lay the foundations for a successful project.

Getting started – communication is key

Essential action items for organizations prior to embarking on their first RPA project are:

  • Preparing a communication plan
  • Establishing a governance framework
  • Establishing an RPA center of excellence
  • Establishing a framework for allocation of IDs to bots.

Communication is key to ensuring that use of RPA is accepted by both executives and staff alike, with stakeholder management critical. At the enterprise level, the RPA/automation steering committee may involve:

  • COOs of the businesses
  • Enterprise CIO.

Start with awareness training to get support from departments and C-level executives. Senior leader support is key to adoption. Videos demonstrating RPA are potentially much more effective than written papers at this stage. Important considerations to address with executives include:

  • How much control am I going to lose?
  • How will use of RPA impact my staff?
  • How/how much will my department be charged?

When communicating to staff, remember to:

  • Differentiate between value-added and non value-added activity
  • Communicate the intention to use RPA as a development opportunity for personnel. Stress that RPA will be used to facilitate growth, to do more with the same number of people, and give people developmental opportunities
  • Use the same group of people to prepare all communications, to ensure consistency of messaging.

Establish a central governance process

It is important to establish a strong central governance process to ensure standardization across the enterprise, and to ensure that the enterprise is prioritizing the right opportunities. It is also important that IT is informed of, and represented within, the governance process.

An example of a robotics and automation governance framework established by one organization was to form:

  • An enterprise robotics council, responsible for the scope and direction of the program, together with setting targets for efficiency and outcomes
  • A business unit governance council, responsible for prioritizing RPA projects across departments and business units
  • An RPA technical council, responsible for RPA design standards, best practice guidelines, and principles.

Avoid RPA silos – create a center of excellence

RPA is a key strategic enabler, so use of RPA needs to be embedded in the organization rather than siloed. Accordingly, the organization should consider establishing an RPA center of excellence, encompassing:

  • A centralized RPA & tool technology evaluation group. It is important not to assume that a single RPA tool will be suitable for all purposes and also to recognize that ultimately a wider toolset will be required, encompassing not only RPA technology but also technologies in areas such as OCR, NLP, machine learning, etc.
  • A best-practice function establishing standards, such as naming conventions, to be applied to RPA across processes and business units
  • An automation lead for each tower, to manage the RPA project pipeline and priorities for that tower
  • IT liaison personnel.

Establish a bot ID framework

While establishing a framework for allocation of IDs to bots may seem trivial, it has proven not to be so for many organizations where, for example, including ‘virtual workers’ in the HR system has proved insurmountable. In some instances, organizations have resorted to basing bot IDs on the IDs of the bot developer as a short-term fix, but this approach is far from ideal in the long-term.
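As one hypothetical illustration of such a framework, a structured, clearly non-human bot ID might be generated and validated as below. The `RPA-<unit>-<process>-<sequence>` format and the pattern are assumptions for illustration, not a requirement of any particular HR or identity-management system:

```python
import re

# Hypothetical illustration of a structured bot ID convention, as an
# alternative to basing bot IDs on a developer's personal ID. The format
# RPA-<unit>-<process>-<sequence> and the pattern below are assumptions
# for illustration only.

BOT_ID_PATTERN = re.compile(r"^RPA-[A-Z]{2,4}-[A-Z0-9]{2,8}-\d{3}$")

def make_bot_id(business_unit: str, process_code: str, sequence: int) -> str:
    """Build a clearly non-human bot ID such as 'RPA-FIN-AP01-007'."""
    bot_id = f"RPA-{business_unit.upper()}-{process_code.upper()}-{sequence:03d}"
    if not BOT_ID_PATTERN.match(bot_id):
        raise ValueError(f"invalid bot ID: {bot_id}")
    return bot_id

bot = make_bot_id("fin", "ap01", 7)   # -> 'RPA-FIN-AP01-007'
```

A convention like this also makes it easy for application owners and auditors to see at a glance which sessions are bot-driven.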

Organizations should also make centralized decisions about bot license procurement, and here the IT department, which has experience in software selection and purchasing, should be involved. In particular, the IT department may be able to play a substantial role in RPA software procurement/negotiation.

Find out more at the SSON RPA in Shared Services Summit, 1st to 2nd December

NelsonHall will be chairing the third SSON RPA in Shared Services Summit in Braselton, Georgia on 1st to 2nd December, and will share further insights into RPA, including hand-outs of our RPA Operating Model Guidelines. You can register for the summit here.

Also, if you would like to find out more about NelsonHall’s extensive program of RPA & AI research, and get involved, please contact Guy Saunders.

Plus, buy-side organizations can get involved with NelsonHall’s Buyer Intelligence Group (BIG), a buy-side only community which runs regular webinars on sourcing topics, including the impact of RPA. The next RPA webinar will be held in November: to find out more, contact Matthaus Davies.  

 

In the second blog in this series, I will look at RPA need assessment and opportunity identification prior to project deployment.

 

]]>
<![CDATA[HCL Targets Industry-Specific Processes with RPA - Significant Presence Developing in Banking Sector]]> HCL began its robotics program in late 2013. Since then, HCL has invested ~$1.5m in robotics (its ToscanaBot Automation Framework) via its HCL ToscanaBot center of excellence, which currently employs a team of ~25 personnel and is planned to grow to 50+ personnel by 2016. HCL estimates that its robotics practice currently has an FTE impact of around 2,000, with this expected to grow to ~8,000 by 2016.

HCL is offering robotics, in the form of robotics software plus operations, to both its new & existing BPO contracts. HCL is typically deploying robotics in two forms:

  • Virtualized workforce, directly replacing the agent with robotics (~70% of current activity by value). Here HCL estimates that 50%-70% efficiency gains are achievable

  • Assisted decisioning, empowering the agent by providing them with additional information through non-invasive techniques (~30% of current activity by value) and achieving estimated efficiency gains of 20%-30%.

In general, HCL aims to co-locate its robots with client systems to avoid the wait times inherent in robots accessing client systems using surface integration techniques through a virtual desktop infrastructure (VDI).

The principal sectors currently being targeted by HCL for RPA are retail banking, investment banking, insurance, and telecoms, with the company also planning to apply robotics to the utilities sector, supply chain management, and finance & accounting. Overall, origination support is a major theme in the application of RPA by HCL. In addition, the company has applied robotics to track-and-trace in support of the logistics sector.

HCL currently has ~10 RPA implementations & pilots underway. Examples of where RPA has been applied by HCL include:

Account Opening for a European Bank

Prior to the application of robotics, the agent, having checked that the application data was complete and that the application was eligible, was required to enter duplicate application data separately into the bank’s money laundering and account opening systems.

With robotics implemented, agents still handle AML and checklist verification manually, but automated data entry by ToscanaBot robots, with presentation-layer integration thereafter, removed the subsequent data entry by agents. This refocused the agents on QC-related activities and reduced agent headcount by 42% overall.

Change of Address for a European Bank

This bank’s “change of address” process involved a number of mandatory checks including field checks and signature verification. However, it then potentially involved the agent in accessing a range of systems covering multiple banking products such as savings accounts, credit card, mortgage, and loan. This led to a lengthy agent training cycle since the agent needed to be familiar with each system supporting each of the full range of products offered by the bank.

As in the first example, the agent is still required to perform the initial verification checks on the customer, but robotics is then used to poll the various systems and present the relevant information to the agent. Once the agent authorizes the change, the robot updates the systems. This has led to a 54% reduction in agent headcount.

Financial Reporting for a Large U.S. Bank

HCL has carried out a pilot with a large U.S. bank to address the challenges inherent in the financial reporting process. Through this pilot, HCL proposes to replace manual activities covering data acquisition, data validation, and preparation of the financial reporting templates. In this pilot, HCL estimates that it has achieved a 54% reduction in human effort and a double-digit reduction in error rates. FTEs are now largely responsible for making the manual adjustments (subject to auditor, client, and fund specifications) and reviewing the robot output, instead of the usual maker/checker activities.

Fund Accounting for U.S. Bank

HCL has also carried out a pilot to address fund accounting processes with a U.S. bank. The principle was again to concentrate agent activity on review and exception handling and to use robots for data input where possible. Once RPA was implemented, with workflow introduced to facilitate hand-offs between agents and robots, the following steps were handled by agents:

  • Upload investor transactions

  • Review cash reconciliation

  • Review monetary value reconciliation

  • Review net asset value package

Robotics now handles the following steps:

  • Book trade & non-trade

  • Prepare cash reconciliation

  • Price securities

  • Prepare monetary value reconciliation

  • Book accruals

  • Prepare net asset value package.

This shows the potential to automate 60% of those activities formerly handled by agents.
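The 60% figure follows directly from the step counts above, as this small sketch shows (treating each step as equal weight is a simplification for illustration):

```python
# The 60% figure is simply the robot-handled share of the process steps
# listed above (treating each step as equal weight is a simplification
# for illustration).

agent_steps = [
    "upload investor transactions",
    "review cash reconciliation",
    "review monetary value reconciliation",
    "review net asset value package",
]
robot_steps = [
    "book trade & non-trade",
    "prepare cash reconciliation",
    "price securities",
    "prepare monetary value reconciliation",
    "book accruals",
    "prepare net asset value package",
]

automated_share = len(robot_steps) / (len(robot_steps) + len(agent_steps))
# 6 of 10 steps -> 0.6, i.e. 60% automated
```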

In addition, HCL has implemented assisted decisioning for a telecoms operator, with robots accessing information from three systems (call manager, knowledge management, & billing), and has applied robotics in support of order management for a telecoms operator. In the latter case, order management data entry required knowledge of a different system for each region, again making agent training a significant issue for the company.

HCL’s robotic automation software is branded ToscanaBot and is an integral part of the Toscana Suite, which also includes HCL’s BPM/workflow software.

ToscanaBot is based on partner robotic software. The current partners used are Blue Prism and Jacada, with Automation Anywhere currently being onboarded in addition. In the future, HCL plans to also partner with IPsoft and Celaton as the market becomes more sophisticated and increasingly embraces artificial cognition within RPA.

HCL aims to differentiate its robotics capability by:

  • Combining robotics within a portfolio of transformational tools including for example ICR/OCR, BPM, text mining & analytics, and machine learning. In particular, HCL is looking to incorporate more intelligence into its robotics offerings, including enhancing its ability to convert non-digital documents to digital format and convert unstructured data to structured data

  • Process and domain knowledge: HCL has so far largely targeted specialist, industry-specific processes requiring significant domain knowledge rather than horizontal services, and is working on creating add-ons for specific core software applications/ERPs to facilitate integration between ToscanaBot and these core domain-specific applications

  • Creation of IP on top of partner software products.

Within BPO contracts, HCL is aiming to offer outcome-based pricing in conjunction with robotics, but in some instances the company has just sold the tools to the client organization or provided robotics as part of a wider ADM service.

Overall, HCL may lag some of its competitors in applying RPA to horizontal processes such as F&A (though it is applying RPA to its own in-house finance & accounting processes), but it is at the forefront of applying RPA to industry-specific processes where the company has strong domain knowledge, in areas such as banking and supply chain management.

]]>
<![CDATA[Disruptive Forces and Their Impact on BPO: Part 7 - High Velocity BPO - What the Client Always Wanted]]> This is the final in a series of short blogs that look at various disruptive forces and their impact on BPO. The impact of all these disruptive factors is that BPO is now changing into something that the client has always wanted, namely “High Velocity BPO”.

In its early days, BPO was a linear and lengthy process with knowledge transfer followed by labor arbitrage, followed by process improvement and standardization, followed by application of tools and automation. This process typically took years, often the full lifetime of the initial contract. More recently, BPO has speeded up with standard global process models, supported by elements of automation, being implemented in conjunction with the initial transition and deployment of global delivery. This timescale for “time to value” is now being speeded up further to enable a full range of transformation to be applied in months rather than years. Overall, BPO is moving from a slow-moving mechanism for transformation to High Velocity BPO. Why take years when months will do?

Some of the key characteristics of High Velocity BPO are shown in the table below:

  Attribute | Traditional BPO | High-Velocity BPO
  Objective | Help the purchaser fix their processes | Help the purchaser contribute to wider business goals
  Measure of success | Process excellence | Business success, faster
  Importance of cost reduction | High | Greater, faster
  Geographic coverage | Key countries | Global, now
  Process enablers & technologies | High dependence on third parties | Own software components supercharged with RPA
  Process roadmaps | On paper | Built into the components
  Compliance | Reactive compliance | Predictive GRC management
  Analytics | Reactive process improvement | Predictive & driving the business
  Digital | A front-office “nice-to-have” | Multi-channel and sensors fundamental
  Governance | Process-dependent | GBS, end-to-end KPIs

As a start point, High Velocity BPO no longer focuses on process excellence targeted at a narrow process scope. Its ambitions are much greater, namely to help the client achieve business success faster, and to help the purchaser contribute not just to their own department but to the wider business goals of the organization, driven by monitoring against end-to-end KPIs, increasingly within a GBS operating framework.

However, this doesn’t mean that the need for cost reduction has gone away. It hasn’t. In fact the need for cost reduction is now greater and faster than ever. And in terms of delivery frameworks, the mish-mash of third-party tools and enablers is increasingly likely to be replaced by an integrated combination of proprietary software components, probably built on Open Source software, with built in process roadmaps, real-time reporting and analytics, and supercharged with RPA.

Furthermore, the role of analytics will no longer be reactive process improvement but predictive and driving real business actions, while compliance will also become even more important.

But let’s get back to the disruptive forces impacting BPO. What forms will the resulting disruption take in both the short-term and the long-term?

  Disruption | Short-term impact | Long-term impact
  Robotics | Gives buyers 35% cost reduction fast; faster introduction of non-FTE-based pricing | No significant impact on process models or technology
  Analytics | Already drives process enhancement | Becomes much more instrumental in driving business decisions; potentially makes BPO vendors more strategic
  Labor arbitrage on labor arbitrage | Ongoing reductions in service costs and employee attrition; improved business recovery | “Domestic BPO markets” within emerging economies become major growth opportunity
  Digital | Improved service at reduced cost | Big opportunity to combine voice, process, technology, & analytics in a high-value end-to-end service
  BPO “platform components” | Improved process coherence | BPaaS service delivery without the third-party SaaS
  The Internet of Things | Slow build into areas like maintenance | Huge potential to expand the BPO market in areas such as healthcare
  GBS | Help organizations deploy GBS | Improved end-to-end management and increased opportunity; reduced friction of service transfer

Well, robotics is here now, moving at speed, and giving a short-term impact of around 35% cost reduction where applied. It is also fundamentally changing the underlying commercial models away from FTE-based pricing. However, robotics does not involve change in process models or underlying systems and technology, and so is largely short-term in its impact: it is a cost play.

Digital and analytics are much more strategic and longer-lasting in their impact, enabling vendors to become more strategic partners by delivering higher-value services and driving next-best actions and operational business decisions with very high levels of revenue impact.

BPO services around the Internet of Things will be a relatively slow burn in comparison but with the potential to multiply the market for industry-specific BPO services many times over and to enable BPO to move into critical services with real life or death implications.

So what is the overall impact of these disruptive forces on BPO? Well, while two of the seven listed above have the potential to reduce BPO revenues in the short term, the other five have the potential to make BPO more strategic in the eyes of buyers and to significantly increase the size and scope of the global BPO market.

 

Part 1 The Robots are Coming - Is this the end of BPO?

Part 2 Analytics is becoming all-pervasive and increasingly predictive

Part 3 Labor arbitrage is dead - long live labor arbitrage

Part 4 Digital renews opportunities in customer management services

Part 5 Will Software Destroy the BPO Industry? Or Will BPO Abandon the Software Industry in Favor of Platform Components?

Part 6 The Internet of Things: Is this a New Beginning for Industry-Specific BPO?

]]>
<![CDATA[Disruptive Forces and Their Impact on BPO: Part 6 - The Internet of Things: Is this a New Beginning for Industry-Specific BPO?]]> In our discussion, we’ve missed out lots of fashionable disruptors like mobile and cloud, and these are indeed important elements within BPO. However, let’s be more futuristic still and consider the impact of the Internet of Things. Some examples of current deployment of the Internet of Things are as follows:

  Sector | Examples
  Telemedicine | Monitoring heart operation patients post-op
  Insurance | Monitoring driver behavior for policy charging
  Energy & utilities | Identifying pipeline leakages
  Telecoms | Home monitoring/management - the “next big thing” for the telecoms sector
  Plant & equipment | Predictive maintenance
  Manufacturing | Everything-as-a-service
So, for example, sensors are already being used to monitor U.S. heart operation patients post-op from India to detect warning signs in their pulses, while a number of insurance companies are using telematics to monitor driver behavior in support of policy charging. Elsewhere, sensors are increasingly being linked to analytics to provide predictive maintenance in support of machinery from aircraft to mining equipment, and home monitoring seems likely to be the next “big thing” for the telecoms sector. And in the manufacturing sector, there is an increasing trend to sell “everything as a service” as an alternative to selling products in their raw form.
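As a minimal sketch of the "sensors linked to analytics" pattern behind predictive maintenance: flag any reading that drifts beyond a rolling baseline. The thresholds and readings below are invented for illustration; real deployments use far richer models:

```python
# Minimal sketch of threshold-based anomaly detection on sensor readings:
# flag any reading that exceeds the recent rolling average by more than a
# tolerance. Thresholds and readings are invented for illustration; real
# predictive-maintenance deployments use far richer models.

def maintenance_alerts(readings, window=3, tolerance=0.2):
    """Indices where a reading exceeds the rolling baseline by > tolerance."""
    alerts = []
    for i in range(window, len(readings)):
        baseline = sum(readings[i - window:i]) / window
        if readings[i] > baseline * (1 + tolerance):
            alerts.append(i)
    return alerts

engine_temps = [70, 71, 70, 72, 71, 95, 72]   # one anomalous spike at index 5
alerts = maintenance_alerts(engine_temps)      # -> [5]
```

In a BPO context, alerts like these would feed the real-time, 24x7 service queues the text goes on to describe.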

This is a major opportunity with the potential to massively increase the market for industry-specific or middle-office BPO, far beyond its traditional, more administrative role.

However, it has a number of implications for BPO vendors: the buyers of these sensor-dependent services are often not the traditional BPO buyers; the services are often real-time in nature, with a strong requirement for 24x7 delivery; and strong analytics capability is likely to be a prerequisite. In addition, these services arising from the Internet of Things potentially take the meaning of risk/reward to a whole new level, as many of them have real life-or-death implications. Some work for the lawyers on both sides here.

Coming next: High-Velocity BPO – What the client always wanted!

Previous blogs in this series:

Part 1 The Robots are Coming - Is this the end of BPO?

Part 2 Analytics is becoming all-pervasive and increasingly predictive

Part 3 Labor arbitrage is dead - long live labor arbitrage

Part 4 Digital renews opportunities in customer management services

Part 5 Will Software Destroy the BPO Industry? Or Will BPO Abandon the Software Industry in Favor of Platform Components?

]]>
<![CDATA[HCL Launches Enterprise Function as a Service to Support Financial Services Firms in Creation of Utility Models]]> HCL has launched Enterprise Function as a Service (EFaaS), a service designed to reduce organizations’ operating costs through the creation of specialized utilities. The service is initially targeted at capital markets firms, retail banks, and insurance companies, and at the finance, procurement, HR, risk & compliance, legal, and marketing functions.

The EFaaS service has arisen from HCL’s Next Gen BPO tenets, namely: domain orientation; a focus on innovation and improvement; output-, outcome-, and flexibility-based constructs; use of HCL’s Integrated Global Delivery Model (IGDM); and addressing risk and compliance. In particular, EFaaS aims to deliver business function services as utilities by undertaking elements of business operations transformation, IT standardization (e.g. SAP/Oracle transformation, a unified chart of accounts, fewer reporting platforms, data warehouses, etc.), platform transformation, and infrastructure consolidation, and to achieve 25%-35% cost reduction within each utility. Accordingly, HCL is:

  • Looking to create domain-specific global shared centers
  • Using business outcome-based constructs to put “skin in the game” in transforming the organization’s enterprise functions
  • Focusing on enhancing risk and compliance, engaging with global accounting firms for SAS compliance.

In addition to cost reduction benefits, the carve-out of a business function utility aims to deliver increased business agility, enhanced controls, and faster scalability.

HCL has a five-step approach, typically spread over 24-30 months, to implementing EFaaS, namely:

  • Due diligence and risk assessment, in conjunction with a Big 4 consulting partner, including developing process maps, an integrated IT-BPO roadmap, and a co-governance model
  • Process consolidation, including functional alignment, adjusting grade mix and location mix, and shared services utility creation
  • Commercialization, including market assessment, asset monetization, and revenue-sharing arrangements
  • Carve-out and transition, including carve-out, transition, rebadging, and organization change management
  • Platform transformation, including creating a common data model, data & platform consolidation, new platform implementation, and analytics.

HCL is working with global strategic partners in the development of these utilities, with partners assisting in:

  • Benchmarking with world class enterprise functions
  • Cost/benefits evaluation
  • Performance and change management frameworks
  • Stakeholder assessments and leadership alignment
  • Communications strategy.

HCL initially targeted a number of major banks, all of which are looking to achieve multi-billion dollar cost take-outs from their operations. In particular, these banks typically face the following issues:

  • How to carve out non-core business functions
  • How to strengthen their controls and put in place a strong compliance & control environment
  • How to manage complex IT environments, typically involving the major ERPs plus a number of regulatory point solutions.

HCL has so far signed two contracts for EFaaS, both in the banking sector. In HCL’s initial contract for EFaaS, the contract scope covered four principal business processes within the client organization:

  • External reporting, for example to the FCA and Bank of England
  • Management reporting
  • Cost utility, covering allocations/adjustments/accruals
  • Regional and Group reporting & consolidation under U.S. GAAP, IFRS, U.K. GAAP, and multiple local GAAPs.
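The “cost utility” above covers allocations, adjustments, and accruals. As an illustration only, the core of a driver-based cost allocation can be sketched as follows; the unit names and figures are hypothetical, not taken from the client engagement:

```python
def allocate_costs(pool, drivers):
    """Allocate a shared cost pool across business units pro rata
    to an activity driver (e.g. headcount or transaction volume)."""
    total = sum(drivers.values())
    if total == 0:
        raise ValueError("allocation driver totals zero")
    # Each unit receives its share of the pool, rounded to cents.
    return {unit: round(pool * share / total, 2)
            for unit, share in drivers.items()}

# Hypothetical example: allocate a 1.2m shared-services cost pool by headcount
headcount = {"Retail": 300, "Markets": 150, "Wealth": 50}
print(allocate_costs(1_200_000, headcount))
# → {'Retail': 720000.0, 'Markets': 360000.0, 'Wealth': 120000.0}
```

A production cost utility would layer adjustments and accruals on top of this basic pro-rata step, but the allocation driver logic is the replicable core.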

Across these four process areas, HCL signed a multi-year contract, committing to take out 35% of cost while simplifying the IT environment, with no up-front IT investment required from the client organization. In addition, the client organization was looking to establish a private utility or utilities across these functions that could then be taken to wider banking organizations.

In response, HCL established a private utility for the client organization across all four of these process areas and identified external reporting as the area which could be most readily replicated and taken to market. In addition, the process knowledge but not the technology aspects of the “cost utility” processes could be replicated, whereas management reporting is typically very specific to each bank and can’t be readily replicated. Accordingly, while private utilities have been established for the initial banking client organization across all four target process areas, only external reporting is being commercialized to other organizations at this stage.

Process improvement and service delivery location shifts have been made across all four process areas. For example, prior to the contract with HCL, 60% of the bank’s reporting was done in Excel. HCL has standardized much of this reporting using various report-writing tools. In addition, HCL has implemented workflow in support of the close process, enabling the life-cycle of the close process to be managed in an online tool and increasing transparency on a global basis.

Within the external reporting function, the approach taken by HCL has been to use Axiom software to establish and pre-populate templates for daily, monthly, and quarterly external reporting, extracting the appropriate data from SAP and Oracle ERPs.
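Axiom’s actual configuration is proprietary, but the general pattern of pre-populating report templates from extracted ERP balances can be sketched as below. The template line items, GL account codes, and figures are all hypothetical, introduced purely for illustration:

```python
def prepopulate_template(template, erp_figures):
    """Fill a reporting template's line items from extracted ERP balances,
    leaving unmatched items as None for manual review."""
    return {line: erp_figures.get(source_account)
            for line, source_account in template.items()}

# Hypothetical mapping of report lines to ERP GL accounts
template = {"Tier 1 capital": "GL-4010", "Risk-weighted assets": "GL-7300"}
erp_figures = {"GL-4010": 5_400_000}  # GL-7300 not yet extracted
print(prepopulate_template(template, erp_figures))
# → {'Tier 1 capital': 5400000, 'Risk-weighted assets': None}
```

Keeping the template-to-account mapping as data, rather than hard-coding it per report, is what makes the same extraction step reusable across daily, monthly, and quarterly reporting cycles.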

In terms of delivery, HCL is creating global hubs in India (~80% of activity), with regional centers in Cary in the U.S. and Krakow in Europe. HCL has also put in place training in support of local country regulations, for example the differences between U.S. GAAP and U.K. GAAP.

HCL is continuing to take EFaaS to market by targeting major banking and insurance firms, initially approaching existing accounts. In terms of geographies, HCL is selectively targeting major banks and insurers in the U.S., U.K., and Continental Europe.

The banks and insurers are expected to retain their existing ERPs. However, HCL perceives that it can assist banks and insurers in adoption of best-in-class chart-of-accounts design and governance and best practices around data management and simplifying the various instances of ERPs.

In general, within its EFaaS offering, HCL is prepared to fund projects for banks and insurers that involve cost take-out, earning fees downstream based on outcomes that HCL controls.

HCL perceives “speed of replication” to be a key differentiator of the EFaaS approach, and the EFaaS framework initially used has now been replicated for another banking institution in support of its finance operations and external reporting processes.

This service is a timely response to the needs of capital markets firms in particular, which have been seeking to take considerable cost out of their operations and to carve out and commercialize non-core functions into separate third-party-owned utilities. Capital markets firms are likely to carve out a relatively large number of narrowly-focused utilities, with some of these being successfully commercialized by third parties. Retail banks are likely to follow this pattern subsequently, though probably to a lesser extent than capital markets firms.

In addition to finance, HCL’s EFaaS model will subsequently be developed to target other enterprise functions, such as procurement, HR, risk & compliance, legal, and marketing.

]]>