NelsonHall: IT Services blog feed
https://research.nelson-hall.com//sourcing-expertise/it-services/?avpage-views=blog

NelsonHall's IT Services program is a research service dedicated to helping organizations understand, adopt, and optimize adaptive approaches to IT services that underpin and enable digital transformation within the enterprise.

<![CDATA[Capgemini Enterprise Automation Fabric Moves Beyond IT Incident Management to Driving Business KPIs]]>

While more efficient management of IT KPIs and incidents remains highly important, and Capgemini’s Enterprise Automation Fabric addresses these challenges, it now goes further and enables organizations to relate the impact of missed IT KPIs and incidents to individual business KPIs.

Not all IT KPIs are created equal, so Enterprise Automation Fabric incorporates a 3-level CMDB linking business processes, applications, and IT infrastructure. This mapping of business KPIs to application KPIs to infrastructure KPIs enables organizations to identify the potential business consequences of particular IT incidents. For example, for a retailer in the Netherlands, Enterprise Automation Fabric can predict the impact on shipping volumes if a particular issue happens with the IT infrastructure and this is not addressed within, say, 48 hours.

In general, Enterprise Automation Fabric can be set up to trigger automation or generate an alert to a business owner if a particular business KPI is identified as being at risk.
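
To make the idea concrete, here is a minimal sketch (in Python, with invented names rather than Capgemini's actual schema) of how a three-level dependency mapping can be walked upward from an infrastructure incident to identify the business KPIs at risk, which could then trigger an automation or an alert to the business owner:

```python
# Minimal sketch of a 3-level CMDB mapping: business KPIs -> application
# KPIs -> infrastructure components. All names are illustrative.
from dataclasses import dataclass, field

@dataclass
class ConfigItem:
    name: str
    layer: str                      # "business" | "application" | "infrastructure"
    depends_on: list = field(default_factory=list)

# Dependency chain for the retailer example: shipping volume depends on the
# order application, which depends on a database server.
db_server = ConfigItem("db-server-01", "infrastructure")
order_app = ConfigItem("order-management-app", "application", [db_server])
shipping_kpi = ConfigItem("daily-shipping-volume", "business", [order_app])

def business_kpis_at_risk(incident_ci, kpis):
    """Walk the dependency graph upward to find business KPIs that
    transitively depend on the configuration item with an open incident."""
    def depends_transitively(ci):
        return incident_ci in ci.depends_on or any(
            depends_transitively(dep) for dep in ci.depends_on)
    return [kpi for kpi in kpis if depends_transitively(kpi)]

# An unresolved incident on the database server puts the shipping KPI at
# risk, which could trigger an automation or an alert to the business owner.
for kpi in business_kpis_at_risk(db_server, [shipping_kpi]):
    print(f"ALERT: business KPI '{kpi.name}' at risk")
```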

So, what is Capgemini Enterprise Automation Fabric?

As the name implies, Enterprise Automation Fabric is a reference toolset and framework consisting of a series of components that can be integrated with an organization’s current IT management investments. The toolset consists of interwoven third-party and Capgemini assets. It supports the management of the entire IT estate across cloud infrastructure, on-premise data center infrastructure, end-user computing, and applications. As appropriate, it links with the client’s existing monitoring solutions or utilizes its own preferred options.

Key components of the Enterprise Automation Fabric architecture include:

  • Its CMDB (configuration management database)
  • AIOps. This is central to “observability”, with the CMDB structure captured by Capgemini’s AIOps solution, typically Splunk, augmented by proprietary Capgemini assets. AIOps covers functions such as anomaly detection, event correlation, and service impact assessment
  • ITSM. Infrastructure management across multi-layer components, covering functions such as ticket management, incident management, and service request management
  • Automation utilizing unattended and attended bots.

Once an anomaly is identified, Capgemini’s ITSM layer, built on ServiceNow, creates an incident. Capgemini data-driven assets augment ServiceNow in areas such as assisted resolution and intelligent dispatcher.
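
As an illustration of this AIOps-to-ITSM handoff, the sketch below creates an incident record via ServiceNow's standard Table API when an anomaly is detected; the instance name, credentials, and field choices are placeholders, not details of Capgemini's implementation:

```python
# Illustrative sketch: on anomaly detection, create an incident through
# ServiceNow's Table API. Instance, credentials, and fields are placeholders.
import requests

def create_incident(anomaly):
    response = requests.post(
        "https://example-instance.service-now.com/api/now/table/incident",
        auth=("api_user", "api_password"),            # placeholder credentials
        headers={"Content-Type": "application/json",
                 "Accept": "application/json"},
        json={
            "short_description": anomaly["summary"],
            "urgency": "1" if anomaly["service_impact"] == "high" else "3",
            "cmdb_ci": anomaly["config_item"],        # link back to the CMDB
        },
    )
    response.raise_for_status()
    return response.json()["result"]["sys_id"]

incident_id = create_incident({
    "summary": "Anomalous response time on order-management-app",
    "service_impact": "high",
    "config_item": "order-management-app",
})
```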

Once the incident is captured in the ITSM, it can be addressed using automation solutions. These include both intrusive automation, such as RPA and managing infrastructure as code, and human-in-the-loop automation. In infrastructure, autonomous automation can currently handle around 50% of incidents without human involvement, and typically more than 50% of service requests. For application-related incidents, the level of autonomous resolution is typically in the range of 20%-40%.

Enterprise Automation Fabric includes infrastructure-related automation bots for health checks, service requests, remediation, and reporting across:

  • Servers (228 bots used across client engagements)
  • SAP (82 bots)
  • Storage & backup (66 bots)
  • Network (~20 bots).

Enterprise Automation Fabric Capture is a Capgemini asset that can be deployed to speed up incident identification and resolution in SAP environments. It allows SAP users to capture all the details on the screen, including the error code, in a structured Excel format and create an incident with extensive pre-populated structured data in the ITSM.
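
The sketch below illustrates the general pattern, with invented field names and error codes: captured screen details are written out as a structured worksheet and reused to pre-populate an incident record:

```python
# Hedged sketch of the screen-capture-to-incident idea: captured SAP fields
# (including the error code) become a structured worksheet plus a
# pre-populated incident payload. All field values are illustrative.
from openpyxl import Workbook

captured = {
    "transaction_code": "VA01",         # hypothetical SAP transaction
    "error_code": "V1 801",             # hypothetical error code
    "screen": "Create Sales Order",
    "user": "sap_user_42",
}

wb = Workbook()
ws = wb.active
ws.append(list(captured.keys()))        # header row
ws.append(list(captured.values()))      # one structured row per capture
wb.save("sap_incident_capture.xlsx")

# The same structured fields can pre-populate the ITSM incident record
incident_payload = {
    "short_description": f"SAP error {captured['error_code']} in {captured['screen']}",
    **captured,
}
```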

Enterprise Automation Fabric is cloud-native but has an on-premise option. This latter option is particularly relevant for organizations requiring business measurement data to remain within their onsite environments.

Capgemini Reduced Alerts by 86% for Consumer Electronics Company

The complexity of this company’s IT estate had steadily increased over time, leading to high-priority production incidents (P1 alerts), particularly impacting the company’s SAP Order Management applications and infrastructure.

Capgemini deployed an AIOps solution to integrate various monitoring tools across applications and IT infrastructure and introduced a single dashboard for improved visibility across all the monitoring and custom application alerts. This significantly reduced the number of alerts by, for example, identifying and suppressing false alerts and avoiding duplication of alerts, enabling the team to focus on a smaller number of genuine alerts.
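
The sketch below illustrates two of these techniques in simplified form, suppressing known false alerts and de-duplicating repeats within a time window; the patterns and thresholds are invented for illustration:

```python
# Minimal sketch of alert suppression and de-duplication. The false-alert
# patterns, fingerprint scheme, and window length are illustrative.
from datetime import datetime, timedelta

FALSE_ALERT_PATTERNS = {"heartbeat-test", "scheduled-maintenance"}
DEDUP_WINDOW = timedelta(minutes=15)
recently_seen = {}   # fingerprint -> last time this alert was forwarded

def should_forward(alert):
    if alert["rule"] in FALSE_ALERT_PATTERNS:
        return False                       # suppress known false alerts
    fingerprint = (alert["host"], alert["rule"])
    last = recently_seen.get(fingerprint)
    if last and alert["time"] - last < DEDUP_WINDOW:
        return False                       # duplicate within the window
    recently_seen[fingerprint] = alert["time"]
    return True

alerts = [
    {"host": "sap-app-01", "rule": "cpu-high", "time": datetime(2024, 1, 1, 9, 0)},
    {"host": "sap-app-01", "rule": "cpu-high", "time": datetime(2024, 1, 1, 9, 5)},
    {"host": "sap-app-01", "rule": "heartbeat-test", "time": datetime(2024, 1, 1, 9, 6)},
]
genuine = [a for a in alerts if should_forward(a)]   # only the first survives
```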

At the start of the engagement, the company was experiencing around eight P1 alerts per month, and Capgemini was able to eliminate P1 alerts over six months. Overall, an 86% reduction in alerts was achieved.

Capgemini also created a catalog of automation scripts to assist in resolving the issues identified by AIOps and developed knowledge articles to augment the capability of the team to resolve issues that could not be automated and required manual resolution.

In a similar exercise for a major airport, which had previously managed its systems manually, Capgemini streamlined its IT operations and cut alert queues by half. The auto-healing solutions helped boost efficiency and achieved a 10% improvement in SLA response times, significantly reducing the manual workload. The net result was a 98% reduction in incident turnaround time, largely because 88% of the previously unrelated alerts are now correlated.

While organizations can adopt Capgemini’s Enterprise Automation Fabric on an incremental basis, the keys to its successful application lie in its multi-tier observability capability, its ability to resolve incidents autonomously before they impact users, and its ability to link business performance to application and infrastructure KPIs. The overall Enterprise Automation Fabric also meshes with existing client investments, reusing key assets from the client where appropriate.

Capgemini is now enhancing the existing Fabric components with GenAI capabilities, applying GenAI widely from incident routing to automated response and alert resolution to deliver enhanced efficiency within the framework. NelsonHall will bring you more updates as Capgemini incorporates the capabilities of GenAI into Enterprise Automation Fabric.

]]>
<![CDATA[The Move to B2B Platforms: Q&A with Manuel Sevilla, Capgemini CDO]]>


Platforms have been increasingly important in B2C digital transformation in recent years and have been used to disintermediate and create a whole raft of well-known consumer business opportunities. B2B platforms have been less evident during this period outside the obvious ecosystems built up in the IT arena by the major cloud and software companies. However, with blockchain now emerging to complement the increasing power of cognitive and automation technologies, the B2B platform is now once again on the agenda of major corporations.

One IT services vendor assisting corporations in establishing B2B platforms to reimagine certain of their business processes is Capgemini, where B2B platform development is a major initiative alongside smart automation. In this interview, NelsonHall CEO John Willmott talks with Manuel Sevilla, Capgemini’s Chief Digital Officer, about the company’s B2B platform initiatives.


JW: Manuel, welcome. As Chief Digital Officer of Capgemini, what do you regard as your main goals in 2019?

MS: I have two main goals:

  • First, automation. We’re looking to automate all our clients’ businesses in a smart way, transforming their services using combinations of RPA, AI, and APIs to move their processes to smart automation
  • Second, to build B2B platforms that enable customers to explore new business models. I see this as a key development in the industry over the next few years, fueled by the need for third-party involvement in establishing peer-to-peer blockchain-based B2B platforms.

JW: What do you see as the keys to success in building a B2B platform?

MS: The investment required to establish a B2B platform is significant by nature and has to be viewed over the long term. This significant and long-term investment is required across the following three areas:

  • Obviously, building the platform requires a significant investment since, in a B2B environment, the platform must have the ability to scale and have a sufficient number of functionalities to provide enough value to the customers
  • Governance is critical to provide mechanisms for establishing direction and priorities in both the short and long term
  • Building the ecosystem is absolutely critical for widespread platform adoption and maintaining the platform’s longevity.

JW: How do the ecosystem requirements differ for a B2B platform as opposed to a B2C platform?

MS: B2B and B2C are very different. In B2C environments, a partial solution is often sufficient for consumers to start using it. In B2B, corporates will not use a partial platform. For example, for corporates to input their private data, the platform has to be fully secured. Also, it is important to bring a service that delivers enough value either by simplifying and reducing process costs or by providing access to new markets, or both. For example, a B2B supply chain platform with a single auto manufacturer will undoubtedly fail. The big components suppliers will only join a platform that provides access to a range of auto manufacturers, not a separate platform for each manufacturer.

Building the ecosystem is perhaps the most difficult task when creating a B2B platform. The value of Capgemini is that the company is neutral and can take the lead in driving the initiatives to make the platform happen. Capgemini recognizes humbly that for a platform to scale, it needs not only a diverse range of partners but also that Capgemini cannot be the only provider; it is critical to involve Capgemini’s partners and competitors.

JW: How does governance differ for a B2B platform?

MS: In a fast-moving B2B environment, defining the governance has to proceed alongside building the ecosystem, and it is essential to have processes in place for taking decisions regarding the platform roadmap in both the short and long term.

B2B platform governance is not the usual two-way client/vendor governance; it is much more complex. For a B2B platform, you need to have a clear definition of who is a member and how members take decisions. It then needs enough large corporates as founder members to drive initial functionalities and to ensure that the platform will bring value and will be able to scale. Once the platform has critical mass, then the governance mechanism needs to adapt itself to support the future scaling of the platform, often with an accompanying dilution of the influence of the founder members.

The governance for a B2B platform often involves creating a separate legal entity, which can be a consortium, a foundation, or even multiple legal entities.

JW: Can you give me an example of where Capgemini is currently developing a B2B platform?

MS: Capgemini is currently developing four B2B platforms, including one with the R3 consortium to build a B2B platform called KYC Trust that aims to solve the corporate KYC problem between corporates and banks. Capgemini started work on KYC Trust in early 2016 and it is expected to go into scaled production in the next 12-24 months.

JW: What is the corporate KYC problem and how is Capgemini addressing this?

MS: Corporate KYC starts with the data collection process, with, at present, each bank typically asking the corporate several hundred questions. As each bank typically asks its own unique questions, this creates a substantial workload for the corporate across banks. Typically, it takes a month to collect the information for each bank. Then, once a bank has collected the information on the corporate, it needs to check it, which means paying third parties to validate the data. The bank then typically uses an algorithm to score the acceptability of the corporate as a customer. This process needs to be repeated regularly. Also, the corporate typically has to wait, say, 30 days for its account to be opened.

To simplify and speed up this process, Capgemini is now building the KYC Trust B2B platform. This platform incorporates a standard KYC taxonomy to remove redundancy from, and standardize, data requests and submission, and each corporate will store the documents required for KYC in its own nodes on the platform. Based on the requests received from banks, a corporate can then decide which documents will be shown to whom and when. All these transactions will be traceable in blockchain so that the usage of each document can be tracked in terms of which bank accessed it and when.
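
The sketch below illustrates the two mechanisms described, with invented names and a simplified hash chain standing in for the blockchain: the corporate grants a bank access to a document, and every access is recorded in an append-only, tamper-evident log:

```python
# Sketch of per-document access grants plus a hash-chained audit trail.
# Names are illustrative; this is not KYC Trust's actual design.
import hashlib, json
from datetime import datetime, timezone

grants = set()      # (document_id, bank_id) pairs the corporate has approved
audit_log = []      # append-only chain of access records

def grant_access(document_id, bank_id):
    grants.add((document_id, bank_id))

def access_document(document_id, bank_id):
    if (document_id, bank_id) not in grants:
        raise PermissionError(f"{bank_id} not authorized for {document_id}")
    prev_hash = audit_log[-1]["hash"] if audit_log else "0" * 64
    record = {
        "document": document_id,
        "bank": bank_id,
        "at": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,         # each record chains to the previous one
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    audit_log.append(record)

grant_access("certificate-of-incorporation", "bank-a")
access_document("certificate-of-incorporation", "bank-a")   # traceable access
```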

The advantage for a bank in onboarding a new corporate using this platform is that a significant proportion of the information required from a corporate will already exist, having already been supplied to another bank. The benefits to corporates include reducing the effort in submitting information and in being able to identify which information has been used by which bank and when, where, and how.

This will speed up the KYC process and simplify data collection operations. It will also simplify how corporates manage their own data such as shareholder information and information on new beneficial owners.

JW: How does governance work in the case of KYC Trust?

MS: A foundation will be established in support of the governance of KYC Trust. The governance has two main elements:

  • Establishing the basic rules, in particular, defining how a node can be operated and specifying the applications that can be run on top of the platform to create questionnaires and how the platform will integrate with banks’ own KYC platforms
  • Providing the means for corporates to submit information, enabling the mixing of data from multiple countries while respecting local regulations. This includes splitting the information submission between the various legal entities of each corporation with data potentially only hosted locally for each legal entity.

Key principles of the foundation are respect for openness and interoperability, since there cannot be a single B2B platform that meets all the business needs. In order to build scale, it is important to encourage interoperability with other B2B platforms, such as (in this case) the Global Legal Entity Identifier Foundation (GLEIF), to maximize the usefulness and adoption of the platform.

JW: How generally applicable is the approach that Capgemini has taken to developing KYC Trust?

MS: There are a lot of commonalities. Sharing of documents in support of certification & commitments is the first step in many business processes. This lends itself to a common solution that can be applied across processes and industries. Capgemini is building a structure that would allow platforms to be built in support of a wide range of B2B processes. For example, the structure used within KYC Trust could be used to support various processes within supply chain management. Starting with sourcing, it could be used to ensure, for example, that no children are being employed in a factory by asking the factory to submit a document certified by an NGO to this effect every six months. Further along the supply chain, it could also be used, for example, to support the correct use of clinical products sold by pharmaceutical companies.

And across all four B2B platforms currently being developed by Capgemini, the company is incorporating interoperability, openness, and a taxonomy as standard features.

JW: Thank you Manuel, and good luck. The emergence of B2B platforms will be a key development over the next few years as organizations seek to reimagine and digitalize their supply chains, and I look forward to hearing more about these B2B platform initiatives as they mature.

]]>
<![CDATA[Genpact Cora: A Unifying Framework to Accelerate Industrialization of New Digital Process Models]]>


Many of the pureplay BPS vendors have been moving beyond individual, often client-specific implementations of RPA and AI and building new digital process models to present their next generation visions within their areas of domain expertise. So, for example, models of this type are emerging strongly in the BFSI space and in horizontals such as source-to-pay.

A key feature of these new digital process models is that they are based on a design thinking-centric approach to the future and heavily utilize new technologies, with the result that the “to-be” process embodied within the new digital process model typically bears little relation to the current “as-is” process whether within a BPS service or within a shared service/retained entity.

These new digital process models are based on a number of principles, emphasizing straight-through processing, increased customer-centricity and proactivity, use of both internal and external information sources, and in-built process learning. They typically encompass a range of technologies, including cloud system-of-engagement platforms, RPA, NLP, machine learning and AI, computer vision, and predictive and prescriptive analytics, held together by BPM/workflow and command & control software.

However, while organizations are driving hard towards identifying new digital process models and next generation processes, there are a relatively limited number of examples of these in production right now, their implementations use differing technologies and frameworks, and the rate of change in the individual underlying technology components is potentially very high. Similarly, organizations currently focusing strongly on adoption of, say, RPA in the short-term realize that their future emphasis will be more cognitive and that they need a framework that facilitates this change in emphasis without a fundamental change in framework and supporting infrastructure.

Aiming for a Unifying Framework for New Digital Process Models

In response to these challenges, and in an attempt to demonstrate longevity of next generation digital process models, Genpact has launched a platform called “Genpact Cora” to act as a unifying framework and provide a solid interconnect layer for its new digital process models.

Genpact Cora is organized into three components:

  • Digital Core: dynamic workflow (based on the PNMsoft acquisition), cloud-based systems of engagement, blockchain, mobility & ambient computing, and RPA
  • Data Analytics: advanced visualization, Big Data, data engineering, IoT
  • AI: conversational AI, computational linguistics, computer vision, machine learning & data science AI.

One of the aims of this platform is to provide a framework into which technologies and individual products can be swapped in and out as technologies change, without threatening the viability of the overall process or the command and control environment, or necessitating a change of framework. Accordingly, the Genpact Cora architecture also encompasses an application programming interface (API) design and an open architecture.
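
The sketch below illustrates the general swap-in/swap-out pattern, with invented component names: process "products" depend on a stable interface, so the underlying technology can be replaced without changing the process built on top:

```python
# Sketch of the swappable-component idea: products depend on a stable
# interface, so an underlying technology can be replaced without changing
# the process built on top. Component names are invented for illustration.
from abc import ABC, abstractmethod

class DocumentExtractor(ABC):
    """Stable contract the platform exposes to its process 'products'."""
    @abstractmethod
    def extract(self, document: bytes) -> dict: ...

class VendorAExtractor(DocumentExtractor):
    def extract(self, document):
        return {"fields": "parsed by vendor A"}      # stand-in logic

class VendorBExtractor(DocumentExtractor):
    def extract(self, document):
        return {"fields": "parsed by vendor B"}

def order_management_product(extractor: DocumentExtractor, document: bytes):
    # The process only sees the interface; swapping vendors is a config change
    return extractor.extract(document)

result = order_management_product(VendorBExtractor(), b"...")
```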

Genpact is then building its new digital process models in the form of “products” on top of this platform. Genpact new digital process model “products” powered by Cora currently support a number of processes, including wealth management, commercial lending, and order management.

However, in the many process areas where these “products” are not yet formed, Genpact will typically take a consulting approach, initially building client-specific digital transformations. Then, as the number of assignments in any specific process area gains critical mass, Genpact is aiming to use the resulting cumulative knowledge to build a more standardized new digital process model “product” with largely built-in business rules that just require configuring for future clients. And each of these “products” (or new digital process models) will be built on top of the Genpact Cora platform.

Launching “Digital Solutions” Service in Support of Retained Operations

Another trend started by the desire for digital process transformation and the initial application of RPA is that organizations are keen to apply new digital process models not just to outsourced services but to their shared services and retained organizations. However, there is currently a severe shortage of expertise and capability to meet this need. Accordingly, Genpact intends to offer its Genpact Cora platform not just within BPS assignments but also in support of transformation within client retained services. Here, Genpact is launching a new “Digital Solutions” service that implements new digital process models on behalf of the client shared services and retained organizations and complements its “Intelligent Operations” BPS capability. In this way, Genpact is aiming to industrialize and speed up the adoption of new digital process models across the organization by providing a consistent and modular platform, and ultimately products, for next generation process adoption.

]]>
<![CDATA[Wipro & Automation Anywhere: Extending Beyond Rule-Based RPA into New Digital Business Process Models]]>


Wipro began partnering with Automation Anywhere in 2014. Here I examine the partnership, looking at examples of RPA deployments across joint clients, at how the momentum behind the partnership has continued to strengthen, and at how the partners are now going beyond rule-based RPA to build new digital business process models.

Partnership Already has 44 Joint Clients

Wipro initially selected Automation Anywhere based on the flexibility and speed of deployment of the product and the company’s flexibility in terms of support. The two companies also have a joint go-to-market, targeting named accounts with whom Wipro already has an existing relationship, plus key target accounts for both companies. 

To date, Wipro has worked with 44 clients on automation initiatives using the Automation Anywhere platform, representing ~70% of its RPA client base. Of these, 17 are organizations where Wipro was already providing BPS services, and 27 are clients where Wipro has assisted in-house operations or provided support in applying RPA to processes already outsourced to another vendor.

In terms of geographies, Wipro’s partnership with Automation Anywhere is currently strongest in the Americas and Australia. However, Automation Anywhere has recently been investing in a European presence, including the establishment of a services and support team in the U.K., and the two companies are now focusing on breaking into the major non-English-speaking markets in Continental Europe.

So let’s look at a few examples of deployments.

For an Australian telco, where Wipro is one of three vendors supporting the order management lifecycle, Wipro had ~330 FTEs supporting order entry to order provisioning. Wipro first applied RPA to these process areas, deploying 45 bots, replacing 143 FTEs. The next stage looked across the order management lifecycle. Since the three BPS vendors were handling different parts of the lifecycle, an error or missing information at one stage would result in the transaction being rejected further downstream. In order to eliminate, as far as possible, exceptions from the downstream BPS vendors, Wipro implemented "checker" bots which carry out validation checks on each transaction before it is released to the next stage in the process, sending failed transactions and the reasons for failure back to the processing bots for reprocessing and, where appropriate, the collection of missing information. This reduced the number of kick-backs by 73%.
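
The sketch below illustrates the checker-bot pattern in simplified form, with invented field requirements: each transaction is validated before release, and failures are returned with reasons so the processing bots can reprocess them:

```python
# Minimal sketch of a "checker" bot: validate a transaction before it is
# released downstream, returning the reasons for failure so the processing
# bots can reprocess it. Field requirements are illustrative.
REQUIRED_FIELDS = ["customer_id", "service_address", "product_code"]

def check_transaction(transaction):
    """Return a list of validation failures; empty means release downstream."""
    failures = [f"missing {f}" for f in REQUIRED_FIELDS if not transaction.get(f)]
    if transaction.get("product_code", "").startswith("LEGACY"):
        failures.append("legacy product codes rejected downstream")
    return failures

transaction = {"customer_id": "C123", "service_address": "", "product_code": "NBN-50"}
failures = check_transaction(transaction)
if failures:
    # send back to the processing bots with reasons, rather than letting the
    # transaction be kicked back by a downstream vendor
    print("reprocess:", failures)
else:
    print("release to next stage")
```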

Other clients where Wipro has used Automation Anywhere in RPA implementations include:

  • A U.S. bank, automating bank account statement reconciliation (~94% time-saving), the bounce-back process (~60% time-saving), and account activation and day 2 check (~50% time-saving)
  • A U.S.-based clothing manufacturer, automating journal processing: auto-selecting the template, auto data entry into the document, auto emails with offer, and auto data entry into ERP. Led to a 38% FTE reduction
  • A steel manufacturing company, automating invoice processing. Led to 50% FTE reduction
  • A European network equipment provider: order management (supporting CDR creation, invoice, OD creation, order entry, & order entry changes), achieving a 41% productivity improvement; also procurement across P2P and MDM processes, a 40% productivity improvement
  • A U.K. based telco, automating order management; achieved a £1.4m cost reduction and an 80% reduction in wait time
  • A multi-national medical devices company: automating 10 processes within P2P; replaced 61 FTEs and produced a ~13% productivity benefit.

Using The Partnership to Enhance Speed-to-Benefit within Rule-Based Processes

The momentum behind the partnership has continued to strengthen, with Wipro achieving a number of significant wins in conjunction with Automation Anywhere over the past three months, including a contract which will result in the deployment of in excess of 100 bots within a single process area over the next 6 months. In the last quarter, as organizations begin to scale their RPA roll-outs, Wipro has seen average RPA deal sizes increase from 25-40 bots to 75-100 bots.

Key targets for Wipro and Automation Anywhere are banking, global media & telecoms, F&A, and increasingly healthcare. Wipro has recently been involved in discussions with new organizations across the manufacturing, retail, and consumer finance sectors in areas such as F&A, order management, and industry-specific processing.

Out of its team of ~450 technical and functional RPA FTEs (~600 FTEs if we include cognitive), Wipro has ~200 FTEs dedicated to Automation Anywhere implementations. This concentration of expertise is assisting Wipro in enhancing speed-to-benefit for clients, particularly in areas where Wipro has conducted multiple assignments, for example in:

  • Banking: payments, new account opening, and account maintenance
  • Insurance: accounts reconciliation and policy servicing
  • Capital markets: payable charges, trade reporting, verifications, trade settlements, static data maintenance.

Overall, Wipro has ~400 curated and non-curated bots in its library. This has assisted in halving implementation cycle times in these areas, to around four weeks.

Wipro also perceives that the ease of deployment and debugging of Automation Anywhere, facilitated by the structuring of its platform into separate orchestration and task execution bots, is another factor that has helped enhance speed-to-benefit.

Wipro’s creation of a sizeable team of Automation Anywhere specialists means it has the bandwidth to respond rapidly to new opportunities and to initiate new projects within 1-2 weeks.

Speed of support for architecture queries is another important factor, both in architecting in the right way and in speed-to-market. Around a third (~100) of Automation Anywhere personnel are within its support and services organization, providing 24x7 support by phone and email and ensuring a two-day resolution time. This is of particular importance to Wipro in support of its multi-geography RPA projects.

Extending the Partnership: Tightening Integration between Automation Anywhere & Wipro Platforms to Build New Digital Business Process Models

In addition to standard rule-based RPA deployments of Automation Anywhere, Wipro is also increasingly:

  • Using Automation Anywhere bots to handle unstructured data (currently deployed with ~25% of clients), and to provide process analytics
  • Integrating Automation Anywhere with Wipro platforms such as Base))), its BPM process interaction design and execution platform, via the API functionality within Automation Anywhere’s metabots. To date, Wipro has eight clients where it is using Automation Anywhere bots combined with its Base))) platform, in processes such as A/P, order management, A/R, and GL.

In an ongoing development of the partnership, Wipro will use Automation Anywhere cognitive bots to complement Wipro HOLMES, using Automation Anywhere for rapid deployments, probably linked to OCR, and HOLMES to support more demanding cognitive requirements using a range of customized statistical techniques for more complicated extraction and understanding of data and for predictive analytics.

Accordingly, Wipro is strengthening its partnership with Automation Anywhere both to deliver tighter execution of rule-based RPA implementations and as a key platform component in the creation of future digital business process models.

]]>
<![CDATA[RPA Operating Model Guidelines, Part 3: From Pilot to Production & Beyond – The Keys to Successful RPA Deployment]]>

As well as conducting extensive research into RPA and AI, NelsonHall is also chairing international conferences on the subject. In July, we chaired SSON’s second RPA in Shared Services Summit in Chicago, and we will also be chairing SSON’s third RPA in Shared Services Summit in Braselton, Georgia on 1st to 2nd December. In the build-up to the December event we thought we would share some of our insights into rolling out RPA. These topics were the subject of much discussion in Chicago earlier this year and are likely to be the subject of further in-depth discussion in Atlanta (Braselton).

This is the third and final blog in a series presenting key guidelines for organizations embarking on an RPA project, covering project preparation, implementation, support, and management. Here I take a look at the stages of deployment, from pilot development, through design & build, to production, maintenance, and support.

Piloting & deployment – it’s all about the business

When developing pilots, it’s important to recognize that the organization is addressing a business problem and not just applying a technology. Accordingly, organizations should consider how they can make a process better and achieve service delivery innovation, and not just service delivery automation, before they proceed. One framework that can be used in analyzing business processes is the ‘eliminate/simplify/standardize/automate’ approach.

While organizations will probably want to start with some simple and relatively modest RPA pilots to gain quick wins and acceptance of RPA within the organization (and we would recommend that they do so), it is important as the use of RPA matures to consider redesigning and standardizing processes to achieve maximum benefit. So begin with simple manual processes for quick wins, followed by more extensive mapping and reengineering of processes. Indeed, one approach often taken by organizations is to insert robotics and then use the metrics available from robotics to better understand how to reengineer processes downstream.

For early pilots, pick processes where the business unit is willing to take a ‘test & learn’ approach, and live with any need to refine the initial application of RPA. Some level of experimentation and calculated risk taking is OK – it helps the developers to improve their understanding of what can and cannot be achieved from the application of RPA. Also, quality increases over time, so in the medium term, organizations should increasingly consider batch automation rather than in-line automation, and think about tool suites and not just RPA.

Communication remains important throughout, and the organization should be extremely transparent about any pilots taking place. RPA does require a strong emphasis on, and appetite for, management of change. In terms of effectiveness of communication and clarifying the nature of RPA pilots and deployments, proof-of-concept videos generally work a lot better than the written or spoken word.

Bot testing is also important, and organizations have found that bot testing is different from waterfall UAT. Ideally, bots should be tested using a copy of the production environment.

Access to applications is potentially a major hurdle, with organizations needing to establish virtual employees as a new category of employee and give the appropriate virtual user ID access to all applications that require a user ID. The IT function must be extensively involved at this stage to agree access to applications and data. In particular, they may be concerned about the manner of storage of passwords. What’s more, IT personnel are likely to know about the vagaries of the IT landscape that are unknown to operations personnel!

Reporting, contingency & change management key to RPA production

At the production stage, it is important to implement an RPA reporting tool to:

  • Monitor how the bots are performing
  • Provide an executive dashboard with one version of the truth
  • Ensure high license utilization.

There is also a need for contingency planning to cover situations where something goes wrong and work is not allocated to bots. Contingency plans may include co-locating a bot support person or team with operations personnel.

The organization also needs to decide which part of the organization will be responsible for bot scheduling. This can either be overseen by the IT department or, more likely, the operations team can take responsibility for scheduling both personnel and bots. Overall bot monitoring, on the other hand, will probably be carried out centrally.

It remains common practice, though not universal, for RPA software vendors to charge on the basis of the number of bot licenses. Accordingly, since an individual bot license can be used in support of any of the processes automated by the organization, organizations may wish to centralize an element of their bot scheduling to optimize bot license utilization.
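
The sketch below illustrates why centralized scheduling helps, using invented process names and durations: because any licensed bot can run any automated process, a single queue can assign work to the earliest-free license and keep utilization high:

```python
# Sketch of centralized bot-license scheduling: a single queue assigns
# queued work items to the earliest-free license. Names and durations
# are invented for illustration.
from collections import deque

def schedule(pending, num_licenses):
    """Greedy assignment of queued work items across bot licenses."""
    queue = deque(pending)
    assignments = [[] for _ in range(num_licenses)]  # work per license
    busy_until = [0] * num_licenses                  # minutes of queued work
    while queue:
        item = queue.popleft()
        i = busy_until.index(min(busy_until))        # earliest-free license
        assignments[i].append(item["process"])
        busy_until[i] += item["minutes"]
    return assignments, max(busy_until)

pending = [{"process": "invoice-matching", "minutes": 30},
           {"process": "account-opening", "minutes": 45},
           {"process": "payroll-check", "minutes": 20}]
assignments, makespan = schedule(pending, num_licenses=2)
```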

At the production stage, liaison with application owners is very important to proactively identify changes in functionality that may impact bot operation, so that these can be addressed in advance. Maintenance is often centralized as part of the automation CoE.

Find out more at the SSON RPA in Shared Services Summit, 1st to 2nd December

NelsonHall will be chairing the third SSON RPA in Shared Services Summit in Braselton, Georgia on 1st to 2nd December, and will share further insights into RPA, including hand-outs of our RPA Operating Model Guidelines. You can register for the summit here.

Also, if you would like to find out more about NelsonHall’s extensive program of RPA & AI research, and get involved, please contact Guy Saunders.

Plus, buy-side organizations can get involved with NelsonHall’s Buyer Intelligence Group (BIG), a buy-side only community which runs regular webinars on RPA, with your buy-side peers sharing their RPA experiences. To find out more, contact Matthaus Davies.  

This is the final blog in a three-part series. See also:

Part 1: How to Lay the Foundations for a Successful RPA Project

Part 2: How to Identify High-Impact RPA Opportunities

]]>
<![CDATA[RPA Operating Model Guidelines, Part 2: How to Identify High-Impact RPA Opportunities]]>


As well as conducting extensive research into RPA and AI, NelsonHall is also chairing international conferences on the subject. In July, we chaired SSON’s second RPA in Shared Services Summit in Chicago, and we will also be chairing SSON’s third RPA in Shared Services Summit in Braselton, Georgia on 1st to 2nd December. In the build-up to the December event we thought we would share some of our insights into rolling out RPA. These topics were the subject of much discussion in Chicago earlier this year and are likely to be the subject of further in-depth discussion in Atlanta (Braselton).

This is the second in a series of blogs presenting key guidelines for organizations embarking on an RPA project, covering project preparation, implementation, support, and management. Here I take a look at how to assess and prioritize RPA opportunities prior to project deployment.

Prioritize opportunities for quick wins

An enterprise-level governance committee should be involved in the assessment and prioritization of RPA opportunities, and this committee needs to establish a formal framework for project/opportunity selection. For example, a simple but effective framework is to evaluate opportunities based on their:

  • Potential business impact, including RoI and FTE savings
  • Level of difficulty (preferably low)
  • Sponsorship level (preferably high).

The business units should be involved in the generation of ideas for the application of RPA, and these ideas can be compiled in a collaboration system such as SharePoint prior to their review by global process owners and subsequent evaluation by the assessment committee. The aim is to select projects that have a high business impact and high sponsorship level but are relatively easy to implement. As is usual when undertaking new initiatives or using new technologies, aim to get some quick wins and start at the easy end of the project spectrum.
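
As a simple illustration of such a framework, the sketch below scores opportunities on the three criteria listed above; the weightings and 1-5 scales are invented, and a real governance committee would set its own:

```python
# Illustrative scoring of RPA opportunities on business impact, difficulty,
# and sponsorship. Weightings and 1-5 scales are invented.
def score(opportunity):
    # reward high impact and sponsorship, penalize difficulty
    return (2 * opportunity["impact"]
            + opportunity["sponsorship"]
            - opportunity["difficulty"])

opportunities = [
    {"name": "invoice data entry",  "impact": 4, "difficulty": 2, "sponsorship": 5},
    {"name": "claims adjudication", "impact": 5, "difficulty": 5, "sponsorship": 3},
]
ranked = sorted(opportunities, key=score, reverse=True)   # quick wins first
```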

However, organizations also recognize that even those ideas and suggestions that have been rejected for RPA are useful in identifying process pain points, and one suggestion is to pass these ideas to the wider business improvement or reengineering group to investigate alternative approaches to process improvement.

Target stable processes

Other considerations that need to be taken into account include the level of stability of processes and their underlying applications. Clearly, basic RPA does not readily adapt to significant process change, and so, to avoid excessive levels of maintenance, organizations should only choose relatively stable processes based on a stable application infrastructure. Processes that are subject to high levels of change are not appropriate candidates for the application of RPA.

Equally, it is important that the RPA implementers have permission to access the required applications from the application owners, who can initially have major concerns about security, and that the RPA implementers understand any peculiarities of the applications and know about any upgrades or modifications planned.

The importance of IT involvement

It is important that the IT organization is involved, as their knowledge of the application operating infrastructure and any forthcoming changes to applications and infrastructure need to be taken into account at this stage. In particular, it is important to involve identity and access management teams in assessments.

Also, the IT department may well take the lead in establishing RPA security and infrastructure operations. Other key decisions that require strong involvement of the IT organization include:

  • Identity security
  • Ownership of bots
  • Ticketing & support
  • Selection of RPA reporting tool.

Find out more at the SSON RPA in Shared Services Summit, 1st to 2nd December

NelsonHall will be chairing the third SSON RPA in Shared Services Summit in Braselton, Georgia on 1st to 2nd December, and will share further insights into RPA, including hand-outs of our RPA Operating Model Guidelines. You can register for the summit here.

Also, if you would like to find out more about NelsonHall’s extensive program of RPA & AI research, and get involved, please contact Guy Saunders.

Plus, buy-side organizations can get involved with NelsonHall’s Buyer Intelligence Group (BIG), a buy-side only community which runs regular webinars on sourcing topics, including the impact of RPA. The next RPA webinar will be held later this month: to find out more, contact Guy Saunders.  

In the third blog in the series, I will look at deploying an RPA project, from developing pilots, through design & build, to production, maintenance, and support.

]]>
<![CDATA[RPA Operating Model Guidelines, Part 1: Laying the Foundations for Successful RPA]]>


As well as conducting extensive research into RPA and AI, NelsonHall is also chairing international conferences on the subject. In July, we chaired SSON’s second RPA in Shared Services Summit in Chicago, and we will also be chairing SSON’s third RPA in Shared Services Summit in Braselton, Georgia on 1st to 2nd December. In the build-up to the December event we thought we would share some of our insights into rolling out RPA. These topics were the subject of much discussion in Chicago earlier this year and are likely to be the subject of further in-depth discussion in Atlanta (Braselton).

This is the first in a series of blogs presenting key guidelines for organizations embarking on RPA, covering establishing the RPA framework, RPA implementation, support, and management. First up, I take a look at how to prepare for an RPA initiative, including establishing the plans and frameworks needed to lay the foundations for a successful project.

Getting started – communication is key

Essential action items for organizations prior to embarking on their first RPA project are:

  • Preparing a communication plan
  • Establishing a governance framework
  • Establishing an RPA center of excellence
  • Establishing a framework for allocation of IDs to bots.

Communication is key to ensuring that use of RPA is accepted by both executives and staff alike, with stakeholder management critical. At the enterprise level, the RPA/automation steering committee may involve:

  • COOs of the businesses
  • Enterprise CIO.

Start with awareness training to get support from departments and C-level executives. Senior leader support is key to adoption. Videos demonstrating RPA are potentially much more effective than written papers at this stage. Important considerations to address with executives include:

  • How much control am I going to lose?
  • How will use of RPA impact my staff?
  • How/how much will my department be charged?

When communicating to staff, remember to:

  • Differentiate between value-added and non-value-added activity
  • Communicate the intention to use RPA as a development opportunity for personnel. Stress that RPA will be used to facilitate growth, to do more with the same number of people, and to give people developmental opportunities
  • Use the same group of people to prepare all communications, to ensure consistency of messaging.

Establish a central governance process

It is important to establish a strong central governance process to ensure standardization across the enterprise, and to ensure that the enterprise is prioritizing the right opportunities. It is also important that IT is informed of, and represented within, the governance process.

An example of a robotics and automation governance framework established by one organization was to form:

  • An enterprise robotics council, responsible for the scope and direction of the program, together with setting targets for efficiency and outcomes
  • A business unit governance council, responsible for prioritizing RPA projects across departments and business units
  • An RPA technical council, responsible for RPA design standards, best practice guidelines, and principles.

Avoid RPA silos – create a center of excellence

RPA is a key strategic enabler, so use of RPA needs to be embedded in the organization rather than siloed. Accordingly, the organization should consider establishing an RPA center of excellence, encompassing:

  • A centralized RPA & tool technology evaluation group. It is important not to assume that a single RPA tool will be suitable for all purposes and also to recognize that ultimately a wider toolset will be required, encompassing not only RPA technology but also technologies in areas such as OCR, NLP, machine learning, etc.
  • A best-practice function establishing standards, such as naming conventions, to be applied to RPA across processes and business units
  • An automation lead for each tower, to manage the RPA project pipeline and priorities for that tower
  • IT liaison personnel.

Establish a bot ID framework

While establishing a framework for allocation of IDs to bots may seem trivial, it has proven not to be so for many organizations where, for example, including ‘virtual workers’ in the HR system has proved insurmountable. In some instances, organizations have resorted to basing bot IDs on the IDs of the bot developer as a short-term fix, but this approach is far from ideal in the long-term.

Organizations should also make centralized decisions about bot license procurement, and here the IT department, which has experience in software selection and purchasing, should be involved. In particular, the IT department may be able to play a substantial role in RPA software procurement/negotiation.

Find out more at the SSON RPA in Shared Services Summit, 1st to 2nd December

NelsonHall will be chairing the third SSON RPA in Shared Services Summit in Braselton, Georgia on 1st to 2nd December, and will share further insights into RPA, including hand-outs of our RPA Operating Model Guidelines. You can register for the summit here.

Also, if you would like to find out more about NelsonHall’s extensive program of RPA & AI research, and get involved, please contact Guy Saunders.

Plus, buy-side organizations can get involved with NelsonHall’s Buyer Intelligence Group (BIG), a buy-side only community which runs regular webinars on sourcing topics, including the impact of RPA. The next RPA webinar will be held in November: to find out more, contact Matthaus Davies.  


In the second blog in this series, I will look at RPA need assessment and opportunity identification prior to project deployment.


]]>
<![CDATA[Increased Adoption of Platform-Based Services and Emergence of DevOps within Resurgent SAP Outsourcing Point to Increasing Sophistication of U.K. Outsourcing Market]]>

Analysis of NelsonHall’s U.K. outsourcing contracts for 2014 shows that outsourcing is becoming more sophisticated in a number of key areas across both service delivery and contracting. Within service delivery:

  • The role of Cloud was increasingly evident, with, for example, the first HR BPO contracts based on Workday software emerging, and IaaS contracts in support of SMEs becoming commonplace
  • DevOps also began to appear, particularly in support of transformational IT outsourcing contracts where application re-engineering is starting to be combined with simultaneous cloud-based IT infrastructure adoption
  • Within front-office services such as customer management services, multi-channel service delivery is now the norm with web chat and even social media channels commonplace.

The public sector remains at the forefront in driving more sophisticated commercial arrangements in the U.K., with public sector bodies increasingly protecting themselves from administrative over-payments, flexing payments to adjust to levels of transactional activity, using third-party investment to drive transformation, and sharing access to contracts via framework agreements.

Starting with BPO, the rise in sophistication in HR outsourcing is demonstrated by:

  • The emergence of the first U.K. HR outsourcing contracts involving the implementation of the Workday HR SaaS platform and integration of proprietary payroll services with Workday. This software facilitates organizations in managing their global workforces and providing consistent employee and manager self-service via a single standardized platform across business units and geographies
  • The increasing inclusion of RPO within multi-process HR outsourcing contracts. Multi-process HR outsourcing contracts in the U.K. in recent years had typically reverted to just payroll services and employee administration. However, 2014 saw talent management functions led by RPO once again starting to be included within wider HR outsourcing contracts
  • Within standalone RPO, the increasing use of employer branding and the increasing dominance of social media channels such as LinkedIn and Twitter, rather than recruitment agencies, for candidate attraction
  • The increasing sophistication of occupational health services, with, for example, multishore delivery combining onshore medics with web-based health surveillance software in the provision of occupational health services for a North Sea drilling company
  • The increasing use of benefits portals both for cross-country benefits administration and also in support of employee selection of flexible benefits.

The increasing sophistication of customer management outsourcing is demonstrated by the increasing adoption of multi-channel delivery. Whereas relatively recently contact center outsourcing contracts in the U.K. were typically for voice-only services, in 2014 it was the norm for customer management services contracts to be multi-channel in nature, with email, web chat, and even social media support commonplace. This was true across both the private and public sectors, with the e-government initiative ensuring that all local government customer services contracts announced were multi-channel in nature. At the same time, the move to digital is leading to the emergence of marketing BPO services, with the provision of onshore creative design services becoming more commonplace.

U.K. organizations are continuing to adopt procurement outsourcing services as the pressure on costs continues. While procurement outsourcing remains concentrated around indirect procurement categories, it has expanded in scope to place increasing emphases on supplier relationship management and supplier and procurement performance management.

Within industry-specific BPO, the financial services and government sectors have been the mainstay of the U.K. outsourcing industry for many years. Here, 2014 saw an increasing emphasis on platform-based services, such as within the major mortgage BPO contract awarded by the Co-operative Bank and within policy administration contracts in the insurance sector.

Within local government, the emphasis on local job creation continues to be a major feature of contracts outside London. However, in 2014 London authorities were increasingly adopting service delivery from outside London, typically from the North-East. Regardless of region, local authorities were becoming more sophisticated in their commercial approaches, including for example protecting themselves from administrative over-payments and flexing payments to adjust based on levels of transactional activity. Supplier investment is also increasingly being leveraged to fund transformations that will reduce service costs. Transformation to achieve ongoing and significant cost reduction remains even more firmly on the agenda.

Within IT outsourcing, use of both Cloud and DevOps became more prevalent during 2014.

IT infrastructure outsourcing contracts are increasingly being based around private and hybrid cloud transformation, with notable examples of adoption by WPP (in support of the increasing digitization of its business), Amey, and Unipart Automotive.

At the same time, the level of adoption of IT infrastructure management was at a 4-year high within local government in 2014, with migrations to cloud-based infrastructure also beginning to take place in this sector. As in BPO, the commercial management of these local government IT infrastructure management contracts was also showing increased maturity, with contracts continuing to include local job creation, apprenticeships, and training initiatives but also including the option to purchase services on behalf of additional public sector entities such as the emergency, education, and health services. Framework contracts were also evident in the purchasing of network management services by regional public sector groupings.

Mobile-enabling of apps also continued to gather pace in both the public and private sectors.

In the SME sector, particularly in high-tech businesses, the adoption of Infrastructure-as-a-Service (IaaS) contracts is accelerating as SMEs take advantage of the speed-to-market and scalability of third-party cloud infrastructure. Elsewhere, SaaS continued to be adopted in support of non-core and specialist processes such as CRM, laboratory information management, and housing management.

Elsewhere in IT infrastructure management, end-user workplace contracts continued apace, with these contracts frequently now including tablets and thin clients in infrastructure refreshes, and with email and office applications increasingly being provided via the Cloud.

Within application management, SAP application management was noticeably back in fashion during 2014. While IT outsourcing in the U.K. typically remains unbundled, with separate contracts and suppliers used for application management and IT infrastructure management, these SAP outsourcing contracts are increasingly going beyond standard application maintenance to combine application re-engineering with infrastructure re-engineering, with, for example, a number of these contracts in 2014 including upgrading of SAP systems and also providing SAP hosting using private cloud infrastructures. U.K. companies are typically not yet ready to use public cloud for core applications such as SAP, but they are increasingly adopting third-party private cloud implementation and hosting in conjunction with SAP application management. These contracts potentially mark the introduction of DevOps thinking into the U.K., with application and infrastructure transformations starting to be coordinated within a single contract with transformational intent.

So what does this mean for 2015? On the whole, more of the same. Most of the trends described above are at early stages of development. For example, in cloud, core systems typically need to complete their migration to private cloud, from where they will increasingly incorporate elements of hybrid cloud. At the same time, the role of cloud-based platforms will continue to rise in importance in non-core areas of BPO, extending beyond HR, which is currently the prime example.

DevOps will increase in importance, not in support of minor application upgrades and maintenance, but where businesses are transforming their applications, and particularly in support of transformations to digital businesses.

Commercially, the trend to transaction-based and usage-based pricing will continue, with this continuing to be supplemented by gainshare based on supplier investment, particularly by organizations such as those in the public sector that have a strong need for cost reduction but lack the means to finance the required process and IT transformations themselves.

For further details, see http://www.arvato.co.uk/sites/default/files/index-2014.pdf  

]]>
<![CDATA[IBM Removes Major Inhibitor to Hybrid Cloud Adoption by Patenting Technique to Manage Location of Cloud Data]]>

IBM has patented a technique to allow users to dynamically choose or change the location where public and private cloud data is stored. This is clearly a very important development in legitimizing cloud use for production applications, where country and industry regulators can take a very strong view on the geographies in which data, particularly customer data, can be stored.

Potentially this could have a major impact on the rate of hybrid cloud adoption in highly regulated industries such as financial services, utilities, and the government sector.

The technique allows companies to mark or tag their data and use an intelligent cloud management system to store files in the appropriate location. For example, if a business needs to ensure that all of its financial data is stored in a specific cloud data center, the associated files are tagged appropriately and the cloud management system ensures that the files are stored in the correct location(s).
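
The sketch below illustrates the general tag-driven placement idea, with an invented policy rather than the mechanism claimed in IBM's patent: each tag constrains the set of permitted locations, and a file is stored only where all of its tags allow:

```python
# Sketch of tag-driven data placement: a policy maps tags to permitted
# storage locations, and a file is placed only where every tag allows.
# Policy contents and location names are illustrative.
PLACEMENT_POLICY = {
    "financial": ["eu-frankfurt"],            # regulator requires EU residency
    "customer-pii": ["eu-frankfurt", "eu-dublin"],
    "public": ["us-east", "eu-frankfurt", "ap-sydney"],
}

def choose_location(file_tags, preferred):
    """Pick the preferred data center if policy allows it for every tag."""
    allowed = set.intersection(*(set(PLACEMENT_POLICY[t]) for t in file_tags))
    if not allowed:
        raise ValueError("no location satisfies all tags")
    return preferred if preferred in allowed else sorted(allowed)[0]

location = choose_location({"financial", "customer-pii"}, preferred="eu-dublin")
# -> "eu-frankfurt": the financial tag restricts storage to Frankfurt
```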

]]>
<![CDATA[CSC Expands Partnership with IBM to Enhance Capability as Cloud Service Integrator and Manager]]>

CSC has expanded its partnership with IBM to enhance its ability to offer hybrid cloud and mobility services. In particular, the partnership emphasizes CSC's continuation and development of its role as a cloud service integrator and manager, rather than as a direct provider of capital-intensive IaaS services. CSC has a large client base for traditional data center management, frequently using IBM servers, and now needs to assist these organizations in moving to hybrid cloud and as-a-Service environments.

Indeed, CSC is looking to reduce its revenue contribution from traditional IT infrastructure management services and instead drive revenue growth from an increased ‘as a service’ orientation. ‘Emerging Services’ of importance to CSC include:

  • Cloud enablement
  • Cyber security
  • Mobility based platforms
  • Application rationalization and modernization: CSC is looking for growth here to offset the cannibalization of traditional IT IM revenues as IaaS activity takes off
  • Service integration & orchestration
  • Virtualization services/Unified Communications.

In particular, the latest extension to CSC's partnership with IBM builds on CSC's acquisition of ServiceMesh, its subsequent partnership with Microsoft to integrate its ServiceMesh Agility Platform with Microsoft Systems Center, and its global partnership with AT&T. It involves incorporating IBM's SoftLayer IaaS service and Bluemix web and mobile application development platform into CSC's ServiceMesh Agility Platform. In return, IBM will add CSC's ServiceMesh Agility Platform to the IBM Cloud Marketplace. The new partnership speeds up CSC's implementation of its strategy in a number of key areas, including:

  • Cloud enablement and service integration: within its CSC Center of Excellence for IBM Products and Solutions, CSC plans to certify up to 2,000 consultants, project managers, architects and developers over three years on selected IBM technologies such as SoftLayer and Bluemix cloud services, as well as IBM developer and integration tools such as WebSphere and Cast Iron
  • Mobility-based platforms, where CSC will offer elements of the IBM MobileFirst application development portfolio service, combining the IBM MobileFirst portfolio with CSC's mobility services to offer a mobile healthcare program for its Lorenzo healthcare software product, and similar mobile programs in the banking and insurance markets
  • Application management and testing, where CSC will extend its app development services to IBM mainframe Z environments. Additionally, CSC will add DevOps tools to its portfolio for app development, and establish a global testing practice for IBM Rational service virtualization.
]]>