Cognizant has a long history of continuous improvement and innovation. However, its grassroots delivery teams were reluctant to take minor innovations to clients, so the company underperformed in idea and innovation generation, and its activity ran beneath the radar, with clients typically unaware of the innovations being undertaken.
Then, in 2023, following the arrival of its new CEO, the company decided to focus on service improvement at scale, rebranding its innovation program as Bluebolt and relaunching it on April 21, 2023.
Cognizant believes its Bluebolt innovation program is already leading to greater levels of grassroots innovation activity, greater client recognition of innovation being carried out in their accounts, and increased associate engagement.
Indeed, the company’s ten key priorities in its 2024 Bluebolt roadmap include increasing client visibility, planning innovation days with client-impacting themes, increasing the pace and quality of ideas generated, and launching its innovation-as-a-service offering in conjunction with Cognizant Consulting.
Using Bluebolt Platform & Enabling Ecosystem to Ensure Scalability
Cognizant has developed a proprietary platform and enabling ecosystem to manage innovation processes, metrics, and documentation to ensure that innovation can be handled in a standardized and reliable manner at scale across its organization. The enabling ecosystem includes:
In addition, Cognizant has recently launched its generative AI-enabled innovation assistant, which assists associates in identifying frequent problems and exploring solutions. Its library includes all Cognizant’s innovations undertaken in the past five years, abstracting the details and removing client attribution. It has been used by ~10,000 associates so far and is now being enhanced to address RFP responses.
Using Innovation Days to Boost Joint Ideation
Cognizant classifies the types of innovation from its Bluebolt program into:
An example of a recent incremental solution is shortening the KYC process for a banking client; an example of an adjacency innovation is applying analytics to produce harvest timing recommendations and germination and yield prediction models for a biotechnology client; and an example of a transformative innovation is implementing digital chassis car-to-cloud services for a semiconductor client.
Hackathons and ideathons are key mechanisms in boosting these innovations. In 2023, Cognizant carried out 41 hackathons and 18 ideathons involving clients, and 70 and 87, respectively, involving solely Cognizant associates.
In addition, Cognizant is increasingly using innovation days to generate immersive innovation experiences for clients. These joint ideation sessions can include client-centric, domain-specific product-aligned themes; account priorities to accelerate performance, e.g., by showcasing all Cognizant platforms and enablers; and global themes and challenges, including sustainability.
Within sustainability, the major themes are net zero pathways, sustainability and ESG reporting, sustainable product development and circular economy, sustainable manufacturing and operations, and sustainable supply chains. These sustainability themes have 235 sub-themes and 200 specific prompts that can be used with clients and prospects.
Achieving Client Satisfaction Uptick with Innovations & Improvements Following Bluebolt Introduction
Cognizant is experiencing an uptick in client satisfaction as a result of Bluebolt. Satisfaction with innovations and improvements delivered by associates has increased over the past 12 months from 4.18 to 4.36.
In the 12 months following its launch, the Bluebolt program has achieved:
Of the 130k ideas generated, ~5% are transformative, ~35% are adjacent, and ~60% are incremental. 8,927 of the 23k ideas implemented impacted client value streams and 321 were directly funded by clients. Elsewhere, clients may be willing to co-invest using the innovation funds typically incorporated within Cognizant’s larger contracts.
Cognizant is now targeting a major increase in impact from its Bluebolt initiative in 2024. It achieved 104k ideas generated by December 2023 and was initially targeting 200k ideas generated in 2024. This target has recently been reset to 500k. In addition, its 2024 targets include:
To increase the chances of achieving these targets and ensure that innovation is firmly embedded in the company, these goals have been included in the bonus plans of ~900 senior delivery leaders.
Further pressure is placed on account teams, and greater visibility of innovation within client accounts is achieved, through client Bluebolt journey summaries provided to the CEO before his client meetings. These summaries show the key innovation statistics, including the numbers of ideas generated and implemented, the top ideas generated by ideathons, and client feedback.
The impact measures used include:
Bluebolt garage projects typically involve 8-10 associates for an 8-week period. So far, 133 ideas have been evaluated and shortlisted, 70 garage projects are in progress, and 13 MVPs have been produced.
Not all IT KPIs are created equal, so Enterprise Automation Fabric incorporates a 3-level CMDB linking business processes, applications, and IT infrastructure. This mapping of business KPIs to application KPIs to infrastructure KPIs enables organizations to identify the potential business consequences of particular IT incidents. For example, for a retailer in the Netherlands, Enterprise Automation Fabric can predict the impact on shipping volumes if a particular issue happens with the IT infrastructure and this is not addressed within, say, 48 hours.
In general, Enterprise Automation Fabric can be set up to trigger automation or generate an alert to a business owner if a particular business KPI is identified as being at risk.
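To illustrate the mechanics of such a linkage, here is a minimal sketch under stated assumptions: the component names, KPI names, and thresholds are invented for illustration and are not Capgemini's actual data model. It walks from an infrastructure incident up the 3-level CMDB to the business KPIs it threatens and returns the actions to trigger.

```python
# Minimal sketch of a 3-level CMDB linkage (hypothetical names and thresholds;
# not Capgemini's actual data model): infrastructure -> application -> business KPI.

CMDB = {
    # infrastructure component -> applications it supports
    "wms-db-cluster-01": ["warehouse-mgmt-app"],
    # application -> business KPIs it underpins
    "warehouse-mgmt-app": ["daily_shipping_volume"],
}

KPI_RULES = {
    # business KPI -> outage tolerated before the KPI is at risk, and the action to take
    "daily_shipping_volume": {"tolerated_outage_hours": 48,
                              "action": "alert_business_owner"},
}

def assess_incident(infra_component: str, projected_outage_hours: float) -> list[str]:
    """Walk up the CMDB from an infrastructure incident to the business KPIs
    it threatens, and return the actions to trigger."""
    actions = []
    for app in CMDB.get(infra_component, []):
        for kpi in CMDB.get(app, []):
            rule = KPI_RULES[kpi]
            if projected_outage_hours > rule["tolerated_outage_hours"]:
                actions.append(f"{rule['action']}:{kpi}")
    return actions

# A database incident projected to last 72 hours puts shipping volume at risk
print(assess_incident("wms-db-cluster-01", projected_outage_hours=72))
# -> ['alert_business_owner:daily_shipping_volume']
```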
So, what is Capgemini Enterprise Automation Fabric?
As the name implies, Enterprise Automation Fabric is a reference toolset and framework consisting of a series of components that can be integrated with an organization’s current IT management investments. The toolset consists of interwoven third-party and Capgemini assets. It supports the management of the entire IT estate across cloud infrastructure, on-premise data center infrastructure, end-user computing, and applications. As appropriate, it links with the client’s existing monitoring solutions or utilizes its own preferred options.
Key components of the Enterprise Automation Fabric architecture include:
Once an anomaly is identified, Capgemini’s ITSM layer, built on ServiceNow, creates an incident. Capgemini data-driven assets augment ServiceNow in areas such as assisted resolution and intelligent dispatcher.
Once the incident is captured in the ITSM, it can be addressed using automation solutions. These include both intrusive automation, such as RPA and managing infrastructure as code, and human-in-the-loop automation. In infrastructure, autonomous automation can currently handle around 50% of incidents without human involvement, typically exceeding 50% in the service request space. For application-related incidents, the level of autonomous resolution is typically in the range of 20%-40%.
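As a hedged sketch of the triage logic described above (the incident categories and runbook names are invented; this is not the actual Enterprise Automation Fabric implementation), an incident can be routed to autonomous automation where a matching runbook exists, and to human-in-the-loop handling otherwise:

```python
# Hedged sketch of incident triage (invented categories and runbooks; not the
# actual Enterprise Automation Fabric implementation).

AUTONOMOUS_RUNBOOKS = {
    # incident category -> automated remediation (e.g., an RPA or infra-as-code job)
    "disk_space_exhausted": "run_cleanup_playbook",
    "service_restart_request": "restart_service_job",
}

def route_incident(category: str) -> str:
    """Resolve autonomously when a runbook exists; otherwise queue the incident
    for human-in-the-loop handling with automation assistance."""
    if category in AUTONOMOUS_RUNBOOKS:
        return f"autonomous:{AUTONOMOUS_RUNBOOKS[category]}"
    return "human_in_the_loop:assisted_resolution"

print(route_incident("disk_space_exhausted"))    # autonomous:run_cleanup_playbook
print(route_incident("custom_app_logic_error"))  # human_in_the_loop:assisted_resolution
```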
Enterprise Automation Fabric includes infrastructure-related automation bots for health checks, service requests, remediation, and reporting across:
Enterprise Automation Fabric Capture is a Capgemini asset that can be deployed to speed up incident identification and resolution in SAP environments. It allows SAP users to capture all the details on the screen, including the error code, in a structured Excel format and create an incident with extensive pre-populated structured data in the ITSM.
Enterprise Automation Fabric is cloud-native but has an on-premise option. This latter option is particularly relevant for organizations requiring business measurement data to remain within their onsite environments.
Capgemini Reduced Alerts by 86% for Consumer Electronics Company
The complexity of this company’s IT estate had steadily increased over time, leading to high-priority production incidents (P1 alerts), particularly impacting the company’s SAP Order Management applications and infrastructure.
Capgemini deployed an AIOps solution to integrate various monitoring tools across applications and IT infrastructure and introduced a single dashboard for improved visibility across all the monitoring and custom application alerts. This significantly reduced the number of alerts by, for example, identifying and suppressing false alerts and avoiding duplication of alerts, enabling the team to focus on a smaller number of genuine alerts.
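The alert-reduction mechanics can be approximated in a few lines; this is a simplified sketch with invented fingerprinting rules and source names, not Capgemini's actual AIOps logic. Duplicate alerts collapse onto a shared fingerprint, and alerts from known false-positive sources are suppressed:

```python
# Simplified sketch of alert deduplication and false-alert suppression
# (invented fingerprinting rules and source names; not Capgemini's AIOps logic).

KNOWN_FALSE_SOURCES = {"legacy-heartbeat-monitor"}  # sources whose alerts are suppressed

def reduce_alerts(alerts: list[dict]) -> list[dict]:
    seen_fingerprints = set()
    genuine = []
    for alert in alerts:
        if alert["source"] in KNOWN_FALSE_SOURCES:
            continue  # suppress known false alerts
        fingerprint = (alert["host"], alert["check"])  # collapse duplicate alerts
        if fingerprint in seen_fingerprints:
            continue
        seen_fingerprints.add(fingerprint)
        genuine.append(alert)
    return genuine

raw = [
    {"source": "app-monitor", "host": "sap-om-01", "check": "order_queue_depth"},
    {"source": "app-monitor", "host": "sap-om-01", "check": "order_queue_depth"},  # duplicate
    {"source": "legacy-heartbeat-monitor", "host": "sap-om-01", "check": "ping"},  # false alert
]
print(len(reduce_alerts(raw)))  # -> 1 genuine alert
```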
At the start of the engagement, the company was experiencing around eight P1 alerts per month, and Capgemini was able to eliminate P1 alerts over six months. Overall, an 86% reduction in alerts was achieved.
Capgemini also created a catalog of automation scripts to assist in resolving the issues identified by AIOps and developed knowledge articles to augment the capability of the team to resolve issues that could not be automated and required manual resolution.
In a similar exercise for a major airport, which had previously managed its systems manually, Capgemini streamlined its IT operations and cut alert queues by half. The auto-healing solutions helped boost efficiency and achieved a 10% improvement in SLA response times, significantly reducing the manual workload. The net result was a 98% reduction in incident turnaround time, largely due to correlating 88% of the previously unrelated alerts.
While organizations can adopt Capgemini’s Enterprise Automation Fabric on an incremental basis, the keys to its successful application lie in its multi-tier observability capability, its ability to resolve incidents autonomously before they impact users, and its ability to link business performance to application and infrastructure KPIs. The overall Enterprise Automation Fabric also meshes with existing client investments, reusing key assets from the client where appropriate.
Capgemini is now enhancing the existing Fabric components with GenAI proficiency, applying GenAI widely from incident routing to automated response and alert resolution to deliver enhanced efficiency within the framework. NelsonHall will bring you more updates as Capgemini incorporates the capabilities of GenAI into Enterprise Automation Fabric.
Platforms have been increasingly important in B2C digital transformation in recent years and have been used to disintermediate and create a whole raft of well-known consumer business opportunities. B2B platforms have been less evident during this period, outside the obvious ecosystems built up in the IT arena by the major cloud and software companies. However, with blockchain now emerging to complement the increasing power of cognitive and automation technologies, the B2B platform is once again on the agenda of major corporations.
One IT services vendor assisting corporations in establishing B2B platforms to reimagine some of their business processes is Capgemini, where B2B platform development is a major initiative alongside smart automation. In this interview, NelsonHall CEO John Willmott talks with Manuel Sevilla, Capgemini’s Chief Digital Officer, about the company’s B2B platform initiatives.
JW: Manuel, welcome. As Chief Digital Officer of Capgemini, what do you regard as your main goals in 2019?
MS: I have two main goals:
JW: What do you see as the keys to success in building a B2B platform?
MS: The investment required to establish a B2B platform is significant by nature and has to be viewed as long-term. This investment is required across the following three areas:
JW: How do the ecosystem requirements differ for a B2B platform as opposed to a B2C platform?
MS: B2B and B2C are very different. In B2C environments, a partial solution is often sufficient for consumers to start using it. In B2B, corporates will not use a partial platform. For example, for corporates to input their private data, the platform has to be fully secured. Also, it is important to bring a service that delivers enough value either by simplifying and reducing process costs or by providing access to new markets, or both. For example, a B2B supply chain platform with a single auto manufacturer will undoubtedly fail. The big components suppliers will only join a platform that provides access to a range of auto manufacturers, not a separate platform for each manufacturer.
Building the ecosystem is perhaps the most difficult task when creating a B2B platform. The value of Capgemini is that the company is neutral and can take the lead in driving the initiatives to make the platform happen. Capgemini recognizes humbly that for a platform to scale, it needs not only a diverse range of partners but also that Capgemini cannot be the only provider; it is critical to involve Capgemini’s partners and competitors.
JW: How does governance differ for a B2B platform?
MS: In a fast-moving B2B environment, defining the governance has to proceed alongside building the ecosystem, and it is essential to have processes in place for taking decisions regarding the platform roadmap in both the short and long term.
B2B platform governance is not the usual two-way client/vendor governance; it is much more complex. For a B2B platform, you need to have a clear definition of who is a member and how members take decisions. It then needs enough large corporates as founder members to drive initial functionalities and to ensure that the platform will bring value and will be able to scale. Once the platform has critical mass, then the governance mechanism needs to adapt itself to support the future scaling of the platform, often with an accompanying dilution of the influence of the founder members.
The governance for a B2B platform often involves creating a separate legal entity, which can be a consortium, a foundation, or even multiple legal entities.
JW: Can you give me an example of where Capgemini is currently developing a B2B platform?
MS: Capgemini is currently developing four B2B platforms, including one with the R3 consortium to build a B2B platform called KYC Trust that aims to solve the corporate KYC problem between corporates and banks. Capgemini started work on KYC Trust in early 2016 and it is expected to go into scaled production in the next 12-24 months.
JW: What is the corporate KYC problem and how is Capgemini addressing this?
MS: Corporate KYC starts with the data collection process, with, at present, each bank typically asking the corporate several hundred questions. As each bank typically asks its own unique questions, this creates a substantial workload for the corporate across banks. Typically, it takes a month to collect the information for each bank. Then, once a bank has collected the information on the corporate, it needs to check it, which means paying third parties to validate the data. The bank then typically uses an algorithm to score the acceptability of the corporate as a customer. This process needs to be repeated regularly. Also, the corporate typically has to wait, say, 30 days for its account to be opened.
To simplify and speed up this process, Capgemini is now building the KYC Trust B2B platform. This platform incorporates a standard KYC taxonomy to remove redundancy from, and standardize, data requests and submission, and each corporate will store the documents required for KYC in its own nodes on the platform. Based on the requests received from banks, a corporate can then decide which documents will be shown to whom and when. All these transactions will be traceable in blockchain so that the usage of each document can be tracked in terms of which bank accessed it and when.
The advantage for a bank in onboarding a new corporate using this platform is that a significant proportion of the information required from a corporate will already exist, having already been supplied to another bank. The benefits to corporates include reducing the effort in submitting information and in being able to identify which information has been used by which bank and when, where, and how.
This will speed up the KYC process and simplify data collection operations. It will also simplify how corporates manage their own data such as shareholder information and information on new beneficial owners.
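A minimal sketch of the data structures this description implies (hypothetical field names; the actual KYC Trust schema is not public): each corporate holds its documents in its own node, grants banks access per document, and every access is appended to an audit trail that, on the real platform, would be written to blockchain rather than an in-memory list.

```python
# Minimal sketch of per-document access grants with an append-only audit trail
# (hypothetical field names; not the actual KYC Trust schema). On the real
# platform the trail would be written to a blockchain rather than a list.
from datetime import datetime, timezone

class CorporateNode:
    def __init__(self, corporate_id: str):
        self.corporate_id = corporate_id
        self.documents = {}     # doc_id -> document content/metadata
        self.grants = set()     # (doc_id, bank_id) pairs permitted to read
        self.audit_trail = []   # append-only record of every access

    def store(self, doc_id: str, content: str):
        self.documents[doc_id] = content

    def grant(self, doc_id: str, bank_id: str):
        self.grants.add((doc_id, bank_id))

    def read(self, doc_id: str, bank_id: str) -> str:
        if (doc_id, bank_id) not in self.grants:
            raise PermissionError(f"{bank_id} has no grant for {doc_id}")
        # record which bank accessed which document, and when
        self.audit_trail.append(
            {"doc": doc_id, "bank": bank_id,
             "at": datetime.now(timezone.utc).isoformat()}
        )
        return self.documents[doc_id]

node = CorporateNode("acme-corp")
node.store("articles-of-incorporation", "document contents")
node.grant("articles-of-incorporation", "bank-a")
node.read("articles-of-incorporation", "bank-a")  # allowed, and audited
print(node.audit_trail[0]["bank"])                # -> bank-a
```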
JW: How does governance work in the case of KYC Trust?
MS: A foundation will be established in support of the governance of KYC Trust. The governance has two main elements:
Key principles of the foundation are respect for openness and interoperability, since there cannot be a single B2B platform that meets all the business needs. In order to build scale, it is important to encourage interoperability with other B2B platforms, such as (in this case) the Global Legal Entity Identifier Foundation (GLEIF), to maximize the usefulness and adoption of the platform.
JW: How generally applicable is the approach that Capgemini has taken to developing KYC Trust?
MS: There are a lot of commonalities. Sharing of documents in support of certification & commitments is the first step in many business processes. This lends itself to a common solution that can be applied across processes and industries. Capgemini is building a structure that would allow platforms to be built in support of a wide range of B2B processes. For example, the structure used within KYC Trust could be used to support various processes within supply chain management. Starting with sourcing, it could be used to ensure, for example, that no children are being employed in a factory by asking the factory to submit a document certified by an NGO to this effect every six months. Further along the supply chain, it could also be used, for example, to support the correct use of clinical products sold by pharmaceutical companies.
And across all four B2B platforms currently being developed by Capgemini, the company is incorporating interoperability, openness, and a taxonomy as standard features.
JW: Thank you Manuel, and good luck. The emergence of B2B platforms will be a key development over the next few years as organizations seek to reimagine and digitalize their supply chains, and I look forward to hearing more about these B2B platform initiatives as they mature.
Many of the pureplay BPS vendors have been moving beyond individual, often client-specific implementations of RPA and AI and building new digital process models to present their next generation visions within their areas of domain expertise. So, for example, models of this type are emerging strongly in the BFSI space and in horizontals such as source-to-pay.
A key feature of these new digital process models is that they are based on a design thinking-centric approach to the future and heavily utilize new technologies, with the result that the “to-be” process embodied within the new digital process model typically bears little relation to the current “as-is” process, whether within a BPS service or within a shared service/retained entity.
These new digital process models are based on a number of principles, emphasizing straight-through processing, increased customer-centricity and proactivity, use of both internal and external information sources, and in-built process learning. They typically encompass a range of technologies, including cloud system-of-engagement platforms, RPA, NLP, machine learning and AI, computer vision, and predictive and prescriptive analytics, held together by BPM/workflow and command & control software.
However, while organizations are driving hard towards identifying new digital process models and next generation processes, relatively few examples are in production right now, their implementations use differing technologies and frameworks, and the rate of change in the individual underlying technology components is potentially very high. Similarly, organizations currently focusing strongly on adoption of, say, RPA in the short term realize that their future emphasis will be more cognitive, and that they need a framework that facilitates this change in emphasis without a fundamental change in framework and supporting infrastructure.
Aiming for a Unifying Framework for New Digital Process Models
In response to these challenges, and in an attempt to demonstrate longevity of next generation digital process models, Genpact has launched a platform called “Genpact Cora” to act as a unifying framework and provide a solid interconnect layer for its new digital process models.
Genpact Cora is organized into three components:
One of the aims of this platform is to provide a framework into which technologies and individual products can be swapped in and out as technologies change, without threatening the viability of the overall process or the command and control environment, or necessitating a change of framework. Accordingly, the Genpact Cora architecture also encompasses an application programming interface (API) design and an open architecture.
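The swap-in/swap-out property described here is essentially interface-driven design. Below is a hedged sketch using generic names (not Genpact's actual API) of how a process can replace one underlying component with another without changing the process logic:

```python
# Hedged sketch of swappable components behind a stable interface
# (generic names; not Genpact Cora's actual API).
from typing import Protocol

class TextExtractor(Protocol):
    """Stable interface the process depends on, regardless of vendor."""
    def extract(self, document: bytes) -> str: ...

class VendorAExtractor:
    def extract(self, document: bytes) -> str:
        return document.decode("utf-8")  # stand-in for vendor A's OCR call

class VendorBExtractor:
    def extract(self, document: bytes) -> str:
        return document.decode("utf-8").upper()  # stand-in for vendor B's OCR call

def process_invoice(doc: bytes, extractor: TextExtractor) -> str:
    # The process logic never changes when the underlying component is swapped.
    return extractor.extract(doc)

print(process_invoice(b"invoice 42", VendorAExtractor()))
print(process_invoice(b"invoice 42", VendorBExtractor()))  # swapped in, same process
```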
Genpact is then building its new digital process models in the form of “products” on top of this platform. Genpact new digital process model “products” powered by Cora currently support a number of processes, including wealth management, commercial lending, and order management.
However, in the many process areas where these “products” are not yet formed, Genpact will typically take a consulting approach, initially building client-specific digital transformations. Then, as the number of assignments in any specific process area gains critical mass, Genpact is aiming to use the resulting cumulative knowledge to build a more standardized new digital process model “product” with largely built-in business rules that just require configuring for future clients. And each of these “products” (or new digital process models) will be built on top of the Genpact Cora platform.
Launching “Digital Solutions” Service in Support of Retained Operations
Another trend, prompted by the desire for digital process transformation and the initial application of RPA, is that organizations are keen to apply new digital process models not just to outsourced services but also to their shared services and retained organizations. However, there is currently a severe shortage of expertise and capability to meet this need. Accordingly, Genpact intends to offer its Genpact Cora platform not just within BPS assignments but also in support of transformation within client retained services. Here, Genpact is launching a new “Digital Solutions” service that implements new digital process models on behalf of client shared services and retained organizations and complements its “Intelligent Operations” BPS capability. In this way, Genpact is aiming to industrialize and speed up the adoption of new digital process models across the organization by providing a consistent and modular platform, and ultimately products, for next generation process adoption.
Wipro began partnering with Automation Anywhere in 2014. Here I examine the partnership, looking at examples of RPA deployments across joint clients, at how the momentum behind the partnership has continued to strengthen, and at how the partners are now going beyond rule-based RPA to build new digital business process models.
Partnership Already has 44 Joint Clients
Wipro initially selected Automation Anywhere based on the flexibility and speed of deployment of the product and the company’s flexibility in terms of support. The two companies also have a joint go-to-market, targeting named accounts with whom Wipro already has an existing relationship, plus key target accounts for both companies.
To date, Wipro has worked with 44 clients on automation initiatives using the Automation Anywhere platform, representing ~70% of its RPA client base. Of these, 17 are organizations where Wipro was already providing BPS services, and 27 are clients where Wipro has assisted in-house operations or provided support in applying RPA to processes already outsourced to another vendor.
In terms of geographies, Wipro’s partnership with Automation Anywhere is currently strongest in the Americas and Australia. However, Automation Anywhere has recently been investing in a European presence, including the establishment of a services and support team in the U.K., and the two companies are now focusing on breaking into the major non-English-speaking markets in Continental Europe.
So let’s look at a few examples of deployments.
For an Australian telco, where Wipro is one of three vendors supporting the order management lifecycle, Wipro had ~330 FTEs supporting order entry to order provisioning. Wipro first applied RPA to these process areas, deploying 45 bots, replacing 143 FTEs. The next stage looked across the order management lifecycle. Since the three BPS vendors were handling different parts of the lifecycle, an error or missing information at one stage would result in the transaction being rejected further downstream. In order to eliminate, as far as possible, exceptions from the downstream BPS vendors, Wipro implemented "checker" bots which carry out validation checks on each transaction before it is released to the next stage in the process, sending failed transactions and the reasons for failure back to the processing bots for reprocessing and, where appropriate, the collection of missing information. This reduced the number of kick-backs by 73%.
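The checker-bot pattern described above can be sketched as a validation gate. This is a hypothetical illustration (the required fields and routing names are invented; Wipro's actual rules are client-specific): each transaction is validated before release downstream, and failures are returned with the reasons attached for reprocessing.

```python
# Sketch of a "checker" bot validation gate (hypothetical checks and field names;
# the actual rules in Wipro's deployment are client-specific).

REQUIRED_FIELDS = ["customer_id", "service_type", "install_address"]

def check_transaction(txn: dict) -> tuple[bool, list[str]]:
    """Validate a transaction before releasing it to the next lifecycle stage."""
    reasons = [f"missing:{f}" for f in REQUIRED_FIELDS if not txn.get(f)]
    return (len(reasons) == 0, reasons)

def route(txn: dict):
    ok, reasons = check_transaction(txn)
    if ok:
        return ("release_downstream", [])
    # send back to the processing bots with the failure reasons for rework
    return ("return_for_reprocessing", reasons)

print(route({"customer_id": "C1", "service_type": "fibre", "install_address": "1 High St"}))
# -> ('release_downstream', [])
print(route({"customer_id": "C2", "service_type": "fibre", "install_address": ""}))
# -> ('return_for_reprocessing', ['missing:install_address'])
```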
Other clients where Wipro has used Automation Anywhere in RPA implementations include:
Using The Partnership to Enhance Speed-to-Benefit within Rule-Based Processes
The momentum behind the partnership has continued to strengthen, with Wipro achieving a number of significant wins in conjunction with Automation Anywhere over the past three months, including a contract which will result in the deployment of in excess of 100 bots within a single process area over the next 6 months. In the last quarter, as organizations begin to scale their RPA roll-outs, Wipro has seen average RPA deal sizes increase from 25-40 bots to 75-100 bots.
Key targets for Wipro and Automation Anywhere are banking, global media & telecoms, F&A, and, increasingly, healthcare. Wipro has recently been involved in discussions with new organizations across the manufacturing, retail, and consumer finance sectors in areas such as F&A, order management, and industry-specific processing.
Out of its team of ~450 technical and functional RPA FTEs (~600 FTEs if we include cognitive), Wipro has ~200 FTEs dedicated to Automation Anywhere implementations. This concentration of expertise is assisting Wipro in enhancing speed-to-benefit for clients, particularly in areas where Wipro has conducted multiple assignments, for example in:
Overall, Wipro has ~400 curated and non-curated bots in its library. This has assisted in halving implementation cycle times in these areas, to around four weeks.
Wipro also perceives that the ease of deployment and debugging of Automation Anywhere, facilitated by the structuring of its platform into separate orchestration and task execution bots, is another factor that has helped enhance speed-to-benefit.
Wipro’s creation of a sizeable team of Automation Anywhere specialists means it has the bandwidth to respond rapidly to new opportunities and to initiate new projects within 1-2 weeks.
Speed of support for architecture queries is another important factor, both in architecting in the right way and in speed-to-market. Around a third (~100) of Automation Anywhere personnel are within its support and services organization, providing 24x7 support by phone and email and ensuring a two-day resolution time. This is of particular importance to Wipro in support of its multi-geography RPA projects.
Extending the Partnership: Tightening Integration between Automation Anywhere & Wipro Platforms to Build New Digital Business Process Models
In addition to standard rule-based RPA deployments of Automation Anywhere, Wipro is also increasingly:
In an ongoing development of the partnership, Wipro will use Automation Anywhere cognitive bots to complement Wipro HOLMES, using Automation Anywhere for rapid deployments, probably linked to OCR, and HOLMES to support more demanding cognitive requirements using a range of customized statistical techniques for more complicated extraction and understanding of data and for predictive analytics.
Accordingly, Wipro is strengthening its partnership with Automation Anywhere both to deliver tighter execution of rule-based RPA implementations and as a key platform component in the creation of future digital business process models.
As well as conducting extensive research into RPA and AI, NelsonHall is also chairing international conferences on the subject. In July, we chaired SSON’s second RPA in Shared Services Summit in Chicago, and we will also be chairing SSON’s third RPA in Shared Services Summit in Braselton, Georgia on 1st to 2nd December. In the build-up to the December event we thought we would share some of our insights into rolling out RPA. These topics were the subject of much discussion in Chicago earlier this year and are likely to be the subject of further in-depth discussion in Atlanta (Braselton).
This is the third and final blog in a series presenting key guidelines for organizations embarking on an RPA project, covering project preparation, implementation, support, and management. Here I take a look at the stages of deployment, from pilot development, through design & build, to production, maintenance, and support.
Piloting & deployment – it’s all about the business
When developing pilots, it’s important to recognize that the organization is addressing a business problem and not just applying a technology. Accordingly, organizations should consider how they can make a process better and achieve service delivery innovation, and not just service delivery automation, before they proceed. One framework that can be used in analyzing business processes is the ‘eliminate/simplify/standardize/automate’ approach.
While organizations will probably want to start with some simple and relatively modest RPA pilots to gain quick wins and acceptance of RPA within the organization (and we would recommend that they do so), it is important as the use of RPA matures to consider redesigning and standardizing processes to achieve maximum benefit. So begin with simple manual processes for quick wins, followed by more extensive mapping and reengineering of processes. Indeed, one approach often taken by organizations is to insert robotics and then use the metrics available from robotics to better understand how to reengineer processes downstream.
For early pilots, pick processes where the business unit is willing to take a ‘test & learn’ approach, and live with any need to refine the initial application of RPA. Some level of experimentation and calculated risk-taking is OK – it helps the developers to improve their understanding of what can and cannot be achieved from the application of RPA. Also, quality increases over time, so in the medium term, organizations should increasingly consider batch automation rather than in-line automation, and think about tool suites and not just RPA.
Communication remains important throughout, and the organization should be extremely transparent about any pilots taking place. RPA does require a strong emphasis on, and appetite for, management of change. In terms of effectiveness of communication and clarifying the nature of RPA pilots and deployments, proof-of-concept videos generally work a lot better than the written or spoken word.
Bot testing is also important, and organizations have found that bot testing is different from waterfall UAT. Ideally, bots should be tested using a copy of the production environment.
Access to applications is potentially a major hurdle, with organizations needing to establish virtual employees as a new category of employee and give the appropriate virtual user ID access to all applications that require a user ID. The IT function must be extensively involved at this stage to agree access to applications and data. In particular, they may be concerned about the manner of storage of passwords. What’s more, IT personnel are likely to know about the vagaries of the IT landscape that are unknown to operations personnel!
Reporting, contingency & change management key to RPA production
At the production stage, it is important to implement an RPA reporting tool to:
There is also a need for contingency planning to cover situations where something goes wrong and work is not allocated to bots. Contingency plans may include co-locating a bot support person or team with operations personnel.
The organization also needs to decide which part of the organization will be responsible for bot scheduling. This can either be overseen by the IT department or, more likely, the operations team can take responsibility for scheduling both personnel and bots. Overall bot monitoring, on the other hand, will probably be carried out centrally.
It remains common practice, though not universal, for RPA software vendors to charge on the basis of the number of bot licenses. Accordingly, since an individual bot license can be used in support of any of the processes automated by the organization, organizations may wish to centralize an element of their bot scheduling to optimize bot license utilization.
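Because a single bot license can run any automated process, centralized scheduling is essentially a load-balancing exercise across a fixed license pool. A minimal sketch follows, with invented job names and durations (no vendor's actual scheduler is being shown):

```python
# Minimal sketch of centralized bot scheduling against a fixed license pool
# (invented workloads; not any vendor's actual scheduler).
import heapq

def schedule(jobs: list[tuple[str, float]], licenses: int) -> dict[int, list[str]]:
    """Greedily assign jobs (name, est_hours) to the least-loaded license,
    maximizing utilization of a limited number of bot licenses."""
    # heap of (current_load_hours, license_id)
    pool = [(0.0, i) for i in range(licenses)]
    heapq.heapify(pool)
    assignment = {i: [] for i in range(licenses)}
    for name, hours in sorted(jobs, key=lambda j: -j[1]):  # longest job first
        load, lic = heapq.heappop(pool)
        assignment[lic].append(name)
        heapq.heappush(pool, (load + hours, lic))
    return assignment

jobs = [("invoice_posting", 3.0), ("payroll_checks", 2.0),
        ("order_entry", 4.0), ("report_pack", 1.0)]
print(schedule(jobs, licenses=2))
# -> {0: ['order_entry', 'report_pack'], 1: ['invoice_posting', 'payroll_checks']}
```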
At the production stage, liaison with application owners is very important to proactively identify changes in functionality that may impact bot operation, so that these can be addressed in advance. Maintenance is often centralized as part of the automation CoE.
Find out more at the SSON RPA in Shared Services Summit, 1st to 2nd December
NelsonHall will be chairing the third SSON RPA in Shared Services Summit in Braselton, Georgia on 1st to 2nd December, and will share further insights into RPA, including hand-outs of our RPA Operating Model Guidelines. You can register for the summit here.
Also, if you would like to find out more about NelsonHall’s extensive program of RPA & AI research, and get involved, please contact Guy Saunders.
Plus, buy-side organizations can get involved with NelsonHall’s Buyer Intelligence Group (BIG), a buy-side only community which runs regular webinars on RPA, with your buy-side peers sharing their RPA experiences. To find out more, contact Matthaus Davies.
This is the final blog in a three-part series. See also:
Part 1: How to Lay the Foundations for a Successful RPA Project
As well as conducting extensive research into RPA and AI, NelsonHall is also chairing international conferences on the subject. In July, we chaired SSON’s second RPA in Shared Services Summit in Chicago, and we will also be chairing SSON’s third RPA in Shared Services Summit in Braselton, Georgia on 1st to 2nd December. In the build-up to the December event we thought we would share some of our insights into rolling out RPA. These topics were the subject of much discussion in Chicago earlier this year and are likely to be the subject of further in-depth discussion in Atlanta (Braselton).
This is the second in a series of blogs presenting key guidelines for organizations embarking on an RPA project, covering project preparation, implementation, support, and management. Here I take a look at how to assess and prioritize RPA opportunities prior to project deployment.
Prioritize opportunities for quick wins
An enterprise-level governance committee should be involved in the assessment and prioritization of RPA opportunities, and this committee needs to establish a formal framework for project/opportunity selection. For example, a simple but effective framework is to evaluate opportunities based on their:
The business units should be involved in the generation of ideas for the application of RPA, and these ideas can be compiled in a collaboration system such as SharePoint prior to their review by global process owners and subsequent evaluation by the assessment committee. The aim is to select projects that have a high business impact and high sponsorship level but are relatively easy to implement. As is usual when undertaking new initiatives or using new technologies, aim to get some quick wins and start at the easy end of the project spectrum.
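As a minimal sketch of such a prioritization (the weights, scale, and example opportunities below are invented; the framework above does not prescribe specific values), each opportunity can be scored on business impact, sponsorship level, and ease of implementation, then ranked:

```python
# Minimal sketch of opportunity prioritization (invented weights and 1-5 scale;
# the governance framework described above does not prescribe specific values).

def priority_score(opportunity: dict) -> float:
    """Combine 1-5 ratings for business impact, sponsorship level, and ease of
    implementation; higher is better on all three axes."""
    return (opportunity["impact"] * 0.4
            + opportunity["sponsorship"] * 0.3
            + opportunity["ease"] * 0.3)

opportunities = [
    {"name": "invoice_matching", "impact": 5, "sponsorship": 4, "ease": 4},
    {"name": "contract_review",  "impact": 5, "sponsorship": 2, "ease": 1},
    {"name": "password_resets",  "impact": 2, "sponsorship": 4, "ease": 5},
]
for opp in sorted(opportunities, key=priority_score, reverse=True):
    print(opp["name"], round(priority_score(opp), 2))
# invoice_matching ranks first: high impact, high sponsorship, easy to implement
```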
However, organizations also recognize that even those ideas and suggestions that have been rejected for RPA are useful in identifying process pain points, and one suggestion is to pass these ideas to the wider business improvement or reengineering group to investigate alternative approaches to process improvement.
Target stable processes
Other considerations that need to be taken into account include the level of stability of processes and their underlying applications. Clearly, basic RPA does not readily adapt to significant process change, and so, to avoid excessive levels of maintenance, organizations should only choose relatively stable processes based on a stable application infrastructure. Processes that are subject to high levels of change are not appropriate candidates for the application of RPA.
Equally, it is important that the RPA implementers have permission to access the required applications from the application owners, who can initially have major concerns about security, and that the RPA implementers understand any peculiarities of the applications and know about any upgrades or modifications planned.
The importance of IT involvement
It is important that the IT organization is involved, as their knowledge of the application operating infrastructure and any forthcoming changes to applications and infrastructure need to be taken into account at this stage. In particular, it is important to involve identity and access management teams in assessments.
Also, the IT department may well take the lead in establishing RPA security and infrastructure operations. Other key decisions that require strong involvement of the IT organization include:
Find out more at the SSON RPA in Shared Services Summit, 1st to 2nd December
NelsonHall will be chairing the third SSON RPA in Shared Services Summit in Braselton, Georgia on 1st to 2nd December, and will share further insights into RPA, including hand-outs of our RPA Operating Model Guidelines. You can register for the summit here.
Also, if you would like to find out more about NelsonHall’s extensive program of RPA & AI research, and get involved, please contact Guy Saunders.
Plus, buy-side organizations can get involved with NelsonHall’s Buyer Intelligence Group (BIG), a buy-side only community which runs regular webinars on sourcing topics, including the impact of RPA. The next RPA webinar will be held later this month: to find out more, contact Guy Saunders.
In the third blog in the series, I will look at deploying an RPA project, from developing pilots, through design & build, to production, maintenance, and support.
As well as conducting extensive research into RPA and AI, NelsonHall is also chairing international conferences on the subject. In July, we chaired SSON’s second RPA in Shared Services Summit in Chicago, and we will also be chairing SSON’s third RPA in Shared Services Summit in Braselton, Georgia on 1st to 2nd December. In the build-up to the December event we thought we would share some of our insights into rolling out RPA. These topics were the subject of much discussion in Chicago earlier this year and are likely to be the subject of further in-depth discussion in Atlanta (Braselton).
This is the first in a series of blogs presenting key guidelines for organizations embarking on RPA, covering establishing the RPA framework, RPA implementation, support, and management. First up, I take a look at how to prepare for an RPA initiative, including establishing the plans and frameworks needed to lay the foundations for a successful project.
Getting started – communication is key
Essential action items for organizations prior to embarking on their first RPA project are:
Communication is key to ensuring that use of RPA is accepted by both executives and staff alike, with stakeholder management critical. At the enterprise level, the RPA/automation steering committee may involve:
Start with awareness training to get support from departments and C-level executives. Senior leader support is key to adoption. Videos demonstrating RPA are potentially much more effective than written papers at this stage. Important considerations to address with executives include:
When communicating to staff, remember to:
Establish a central governance process
It is important to establish a strong central governance process to ensure standardization across the enterprise, and to ensure that the enterprise is prioritizing the right opportunities. It is also important that IT is informed of, and represented within, the governance process.
An example of a robotics and automation governance framework established by one organization was to form:
Avoid RPA silos – create a centre of excellence
RPA is a key strategic enabler, so use of RPA needs to be embedded in the organization rather than siloed. Accordingly, the organization should consider establishing an RPA center of excellence, encompassing:
Establish a bot ID framework
While establishing a framework for allocation of IDs to bots may seem trivial, it has proven not to be so for many organizations where, for example, including ‘virtual workers’ in the HR system has proved insurmountable. In some instances, organizations have resorted to basing bot IDs on the IDs of the bot developer as a short-term fix, but this approach is far from ideal in the long-term.
Organizations should also make centralized decisions about bot license procurement, and here the IT department, which has experience in software selection and purchasing, should be involved. In particular, the IT department may be able to play a substantial role in RPA software procurement/negotiation.
Find out more at the SSON RPA in Shared Services Summit, 1st to 2nd December
NelsonHall will be chairing the third SSON RPA in Shared Services Summit in Braselton, Georgia on 1st to 2nd December, and will share further insights into RPA, including hand-outs of our RPA Operating Model Guidelines. You can register for the summit here.
Also, if you would like to find out more about NelsonHall’s extensive program of RPA & AI research, and get involved, please contact Guy Saunders.
Plus, buy-side organizations can get involved with NelsonHall’s Buyer Intelligence Group (BIG), a buy-side only community which runs regular webinars on sourcing topics, including the impact of RPA. The next RPA webinar will be held in November: to find out more, contact Matthaus Davies.
In the second blog in this series, I will look at RPA need assessment and opportunity identification prior to project deployment.
The public sector remains at the forefront in driving more sophisticated commercial arrangements in the U.K., with public sector bodies increasingly protecting themselves from administrative over-payments, flexing payments to adjust to levels of transactional activity, using third-party investment to drive transformation, and sharing access to contracts via framework agreements.
Starting with BPO, the rise in sophistication in HR outsourcing is demonstrated by:
The increasing sophistication of customer management outsourcing is demonstrated by the increasing adoption of multi-channel delivery. Whereas relatively recently contact center outsourcing contracts in the U.K. were typically for voice-only services, in 2014 it was the norm for customer management services contracts to be multi-channel in nature, with email, web chat, and even social media support commonplace. This was true across both the private and public sectors, with the e-government initiative ensuring that all local government customer services contracts announced were multi-channel in nature. At the same time, the move to digital is leading to the emergence of marketing BPO services, with the provision of onshore creative design services becoming more commonplace.
U.K. organizations are continuing to adopt procurement outsourcing services as the pressure on costs continues. While procurement outsourcing remains concentrated around indirect procurement categories, it has expanded in scope to place increasing emphases on supplier relationship management and supplier and procurement performance management.
Within industry-specific BPO, the financial services and government sectors have been the mainstay of the U.K. outsourcing industry for many years. Here, in 2014, there was an increasing emphasis on platform-based services, such as within the major mortgage BPO contract awarded by the Co-operative Bank and within policy administration contracts in the insurance sector.
Within local government, the emphasis on local job creation continues to be a major feature of contracts outside London. However, in 2014 London authorities were increasingly adopting service delivery from outside London, typically from the North-East. Regardless of region, local authorities were becoming more sophisticated in their commercial approaches, including, for example, protecting themselves from administrative over-payments and flexing payments based on levels of transactional activity. Supplier investment is also increasingly being leveraged to fund transformations that will reduce service costs. Transformation to achieve ongoing and significant cost reduction remains even more firmly on the agenda.
Within IT outsourcing, use of both Cloud and DevOps became more prevalent during 2014.
IT infrastructure outsourcing contracts are increasingly being based around private and hybrid cloud transformation, with notable examples of adoption by WPP (in support of the increasing digitization of its business), Amey, and Unipart Automotive.
At the same time, the level of adoption of IT infrastructure management was at a 4-year high within local government in 2014, with migrations to cloud-based infrastructure also beginning to take place in this sector. As in BPO, the commercial management of these local government IT infrastructure management contracts was also showing increased maturity, with contracts continuing to include local job creation, apprenticeships, and training initiatives but also including the option to purchase services on behalf of additional public sector entities such as the emergency, education, and health services. Framework contracts were also evident in the purchasing of network management services by regional public sector groupings.
Mobile-enabling of apps also continued to gather pace in both the public and private sectors.
In the SME sector, particularly in high-tech businesses, the adoption of Infrastructure-as-a-Service (IaaS) contracts is accelerating as SMEs take advantage of the speed-to-market and scalability of third-party cloud infrastructure. Elsewhere, SaaS continued to be adopted in support of non-core and specialist processes such as CRM, laboratory information, and housing management.
Elsewhere in IT infrastructure management, end-user workplace contracts continued apace with these contracts frequently now including tablets and thin clients in infrastructure refreshes, and with email and office applications increasingly being provided via the Cloud.
Within application management, SAP application management was noticeably back in fashion during 2014. While IT outsourcing in the U.K. typically remains unbundled, with separate contracts and suppliers used for application management and IT infrastructure management, these SAP outsourcing contracts are increasingly going beyond standard application maintenance to combine application re-engineering with infrastructure re-engineering; for example, a number of 2014 contracts included upgrading SAP systems and providing SAP hosting using private cloud infrastructure. U.K. companies are typically not yet ready to use public cloud for core applications such as SAP, but they are increasingly adopting third-party private cloud implementation and hosting in conjunction with SAP application management. These contracts potentially mark the introduction of DevOps thinking into the U.K., with application and infrastructure transformations starting to be co-ordinated within a single contract where there is transformational intent.
So what does this mean for 2015? On the whole, more of the same. Most of the trends described above are at early stages of development. For example, in cloud, core systems typically need to complete their migration to private cloud, from where they will increasingly incorporate elements of hybrid cloud. At the same time, the role of cloud-based platforms will continue to rise in importance in non-core areas of BPO, extending beyond HR, which is currently the prime example.
DevOps will increase in importance, not in support of minor application upgrades and maintenance, but where businesses are transforming their applications, and particularly in support of transformations to digital businesses.
Commercially, the trend to transaction-based and usage-based pricing will continue, supplemented by gainshare based on supplier investment, particularly for organizations, such as those in the public sector, that have a strong need for cost reduction but lack the means to finance the required process and IT transformations themselves.
For further details, see http://www.arvato.co.uk/sites/default/files/index-2014.pdf
Potentially this could have a major impact on the rate of hybrid cloud adoption in highly regulated industries such as financial services, utilities, and the government sector.
The technique allows companies to mark or tag their data and use an intelligent cloud management system to store files in the appropriate location. For example, if a business needs to ensure that all of its financial data is stored in a specific cloud data center, the associated files are tagged appropriately and the cloud management system ensures that the files are stored in the correct location(s).
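A hedged sketch of the tagging mechanism just described (the tags, locations, and policy below are invented; the article does not name the underlying cloud management product): files carry a compliance tag, and the management layer resolves the tag to an approved storage location before writing.

```python
# Hedged sketch of tag-driven data placement (invented tags and locations;
# the underlying cloud management product is not named in the article).

PLACEMENT_POLICY = {
    # data tag -> storage locations permitted for that class of data
    "financial": ["eu-west-frankfurt-dc"],
    "general":   ["eu-west-frankfurt-dc", "public-cloud-region-1"],
}

def store_file(filename: str, tag: str) -> str:
    """Resolve a file's tag to a compliant storage location and 'write' it."""
    allowed = PLACEMENT_POLICY.get(tag)
    if not allowed:
        raise ValueError(f"no placement policy for tag '{tag}'")
    target = allowed[0]  # simplest policy: first approved location
    return f"stored {filename} in {target}"

print(store_file("q3-ledger.xlsx", tag="financial"))
# -> stored q3-ledger.xlsx in eu-west-frankfurt-dc
```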
Indeed, CSC is looking to reduce its revenue contribution from traditional IT infrastructure management services and instead drive revenue growth from an increased ‘as a service’ orientation. ‘Emerging Services’ of importance to CSC include:
In particular, the latest extension to CSC's partnership with IBM builds on CSC's acquisition of ServiceMesh, its subsequent partnership with Microsoft to integrate its ServiceMesh Agility Platform with Microsoft Systems Center, and its global partnership with AT&T, and involves incorporating IBM's SoftLayer IaaS service and Bluemix web and mobile application development platform into CSC's ServiceMesh Agility Platform. In return, IBM will add CSC's ServiceMesh Agility Platform to the IBM Cloud Marketplace. In particular, the new partnership speeds up CSC's implementation of its strategy in a number of key areas, including: