<![CDATA[UiPath Launches Autopilot for Enhanced Bot Generation & Management]]>

 

NelsonHall recently attended UiPath’s Forward VI event in Las Vegas, where the company launched its Autopilot capabilities. While Autopilot spans all of UiPath’s platform components, including Studio, Assistant, Apps, mining, and Test Manager, this blog focuses on its use within Studio and Assistant.

Autopilot for Studio

A constant inhibitor to automation has been organizations’ capacity to build the automations themselves. In the early days, automation relied on skilled developers writing code. Platform vendors such as UiPath sought to tackle this inhibitor by launching low- and no-code IDEs such as UiPath Studio. However, while that development launched the current wave of automation, companies remained hamstrung by the limited number of employees trained to deliver results on these platforms.

Seeking to increase the number of employees who could develop automations, each of the platform vendors focused on increasing support for citizen developers, with UiPath launching StudioX as a simplified version of Studio. Indeed, at Forward VI, Deloitte announced that it is committing to having 10% of its 415k employees trained on UiPath as part of a citizen developer drive.

Still, these efforts have not been enough to overcome clients’ capacity constraints. To address them, UiPath has launched Autopilot for Studio, which lets users write requirements for bots in natural language, from which Autopilot then generates an automation in Studio.
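
As a rough illustration of the pattern (and only that; UiPath has not published Autopilot’s internals), text-to-workflow generation can be thought of as a constrained generation step validated against a catalog of known activities. The "llm" callable and the activity names below are hypothetical stand-ins:

```python
# Minimal sketch of natural-language-to-workflow generation.
# The `llm` callable and the activity catalog are hypothetical stand-ins,
# not UiPath's implementation.
import json

ACTIVITY_CATALOG = {"OpenBrowser", "TypeInto", "Click", "ReadRange", "SendEmail"}

def generate_workflow(requirement: str, llm) -> list:
    prompt = (
        "Translate this requirement into a JSON list of steps, each with an "
        f"'activity' from {sorted(ACTIVITY_CATALOG)} and its 'args':\n{requirement}"
    )
    steps = json.loads(llm(prompt))
    for step in steps:  # only catalogued activities reach the designer
        if step["activity"] not in ACTIVITY_CATALOG:
            raise ValueError(f"Unknown activity: {step['activity']}")
    return steps

# Example with a stubbed model response:
fake_llm = lambda _: '[{"activity": "OpenBrowser", "args": {"url": "https://example.com"}}]'
print(generate_workflow("Open the vendor portal", fake_llm))
```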

This will not only support citizen developers. Experienced developers are often handed citizen-developed bots to bring up to enterprise grade; with Autopilot, we can expect the quality of those citizen-developed bots to increase. The demonstrations of Autopilot at the event also showed bot development faster than that of a typical automation developer.

Autopilot for Assistant

On the second day of Forward VI, we also saw a demo of Autopilot for Assistant. Assistant is the interface that lets desktop users see and run all their automations. To date, this interface has largely been used to select automations to run from a list of bots, or to leverage the beta Clipboard AI functionality to quickly and automatically copy information from one interface or image into fields within applications.

With the launch of Autopilot for Assistant, users can now interact through natural language with the interface. This chat interface allows Autopilot to find bots that can answer users’ requests and run desktop automations live on the machine.

Importantly, this is not where the magic ends: if a user asks for something that cannot be completed using existing bots, Autopilot can use the same intelligence as Autopilot for Studio to create bots that leverage connectors within Autopilot for Studio to accomplish the task, prompting the human user for confirmation along the way. If the process is successful and the user believes the automation would be useful, they can add it to the automation hub along with an automatically generated description of the process and the steps performed, so that developers can perform any ‘last mile’ work to build the bot, which can then be reused in the future.
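
A minimal sketch of that find-or-generate routing logic, with a naive keyword matcher standing in for real intent classification (assumed behavior, not UiPath’s code):

```python
# Minimal sketch (assumed logic, not UiPath's implementation): route a chat
# request to an existing automation if one matches; otherwise generate a
# candidate workflow and ask the user to confirm before running it.
def handle_request(request: str, published_bots: dict, generate, confirm) -> str:
    # Naive keyword match stands in for real intent classification.
    for name, bot in published_bots.items():
        if all(word in request.lower() for word in bot["keywords"]):
            return bot["run"]()
    candidate = generate(request)  # reuse Studio-style text-to-workflow generation
    if confirm(f"No existing bot found. Run generated steps {candidate}?"):
        return f"executed {len(candidate)} generated steps"
    return "cancelled by user"
```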

In the demo, we saw Assistant being asked to connect with a specific person on LinkedIn. Autopilot interpreted the request, launched a browser, searched for the person, and, before requesting the connection, asked the user to confirm the action. In this manner, desktop automation through UiPath Assistant is far less hamstrung by the availability of developers or even citizen developers.

How these developments compare to the competition

UiPath isn’t the first automation platform provider to launch a copilot for its IDE: in May this year we saw Automation Anywhere launch its copilot capabilities, and last month Microsoft showed its copilot for Power Automate; both strive to offer bot generation through natural language.

Where UiPath is ahead of the competition is in its secondary use case above, building bots on the fly to support digital assistants. This capability could not only boost the capability and quality of citizen developers, but reduce the need for them entirely.

Another difference we see with these offerings is their pervasiveness, with UiPath launching variants of its copilot across each platform component, including document understanding, test automation, apps, and mining. 

UiPath also shared a vision for the future that we did not hear from the competition: using these generative AI capabilities to build auto-healing robots, i.e. automatically fixing broken elements of a bot using its description and the AI behind Autopilot, thereby reducing the ongoing management cost of bots.

]]>
<![CDATA[Capgemini Looks to Accelerate Process Transformation with the "Frictionless Enterprise"]]>

 

The value of automation using tools such as RPA, and more recently intelligent automation, has been accepted for years. However, there is still a danger in many automation projects that while each project is valuable in its own right, they become disconnected islands of automation with limited connectivity and lifespans. Accordingly, while elements of process friction have been removed, the overall end-to-end process can remain anything but friction-free.

Capgemini has developed the "Frictionless Enterprise" approach in response to this challenge, an approach the company is now applying across all Capgemini’s Business Services accounts.

What is the Frictionless Enterprise?

The Frictionless Enterprise is essentially a framework and set of principles for achieving end-to-end digital transformation of processes. The aim is to minimize friction in processes for all participants, including customers, suppliers, and employees across the entire value chain of a process.

However, most organizations today are far from frictionless. In most organizations, processes were designed years ago, before AI achieved its current maturity level. Similarly, teams were traditionally designed to break people into manageable groups organized by silo rather than around the horizontal operation they are there to deliver. Consequently, automation is currently often used to address pain points in small process elements rather than to transform the end-to-end process.

The Frictionless Enterprise approach requires organizations to be more radical in their process reengineering mindsets by addressing whole process transformation and by designing processes optimized for current and emerging technology.

Capgemini’s Business Services uses this approach to assist enterprises in end-to-end transformation from conception and design through to implementation and operation, with the engine room of Capgemini Business Services now focused on technology, rather than people, for transaction processing.

A change in mindset is critical for this to succeed. Capgemini is increasingly encouraging its clients to move from customer-supplier relationships to partnerships around shared KPIs and adopt dedicated innovation offices.

The five fundamentals of the Frictionless Enterprise

Capgemini views the Frictionless Enterprise as depending on five fundamentals: hyperscale automation, cloud agility, data fluidity, sustainable planet, and secure business.

Hyperscale automation

This ultimately means the ability to reach full touchless automation. Hyperscale automation depends on exploiting artificial intelligence and building a scalable and flexible architecture based on microservices and APIs.

Cloud agility

While the frictionless transformation approach is designed to work at the sub-process level and the overall process level, it is important that any sub-process changes are a compatible part of the overall journey.

Cloud agility emphasizes improving the process in ways that can be reused in conjunction with future process changes as part of an overall transformation. So any changes made to sub-processes addressing immediate pain points should be steps on the journey towards the final target end-to-end operating model rather than temporary throwaway fixes.

Accordingly, Capgemini aims to bring the client the tools, solutions, and skills that are compatible with the final target transformation. For example, tools must be ready to scale, and at present, API-based architectures are regarded as the best way to implement cloud-native integration. This has meant a change in emphasis in the selection of, and nature of relationships with, partners. Capgemini now spends much more time than it used to with vendors, and Capgemini’s Business Services has a global sales officer with a mandate to work with partners. In addition, this effort is now much more focused, with Capgemini concentrating on a limited set of strategic partners. All the solutions chosen are API-native, fully scalable, cloud-based, and have AI at the core. One example of a Capgemini partner is Kryon in RPA, chosen because it can record processes as well as automate them.

Data fluidity

It's important within process transformations to use both internal and external data, such as IoT and edge data, efficiently and have a single version of the truth that is widely accessible. Accordingly, data lakes are a key foundational component in frictionless transformations.

However, while most enterprises have lots of data to leverage, they also have lots of data points that need to be fixed. Master data management is critical to successful transformation and remains an important part of transformed operations.

Digital twins are key to removing process friction and are used as the interface between how the business currently operates and how it needs to operate in the future. As well as providing an accurate view of the reality of current process execution, process mining also speeds up process transformation, enabling transformation consultants to focus on evaluation and prioritization of opportunities for change rather than collecting process data. Process mining can also help with maintaining best practice compliance post-transformation by monitoring how individual agents are using their systems, with the potential to guide them through proactive online training and removing the need to compensate for agent inefficiencies with automation.

Sustainable planet

It's also becoming extremely important when reviewing end-to-end processes to consider their impact on the planet across the whole value chain, including suppliers. This covers both carbon impact and social aspects such as diversity, including ensuring a lack of bias in AI models. Sustainability is becoming increasingly important in financial reporting, and in response, Capgemini has added sustainability to its integrated architecture framework.

Secure business

Enterprises cannot undertake massive transformations unless they are guaranteed to be secure, and so the Frictionless Enterprise approach encompasses account security operations and cybersecurity compliance. Similarly, change management is of overwhelming importance within any transformation project, and the Frictionless Enterprise approach focuses on building trust and transparency with customers and partners to facilitate the transformation of the value chain.

A client example of Frictionless Enterprise adoption

Capgemini is helping a CPG company to apply the Frictionless Enterprise approach to its sales & distribution planning. The company was already upper-quartile at each of the individual process elements, such as supply planning and distribution planning, in isolation, but the overall performance of its end-to-end planning process was inadequate. Accordingly, the company looked to improve its overall inventory and sales KPIs dramatically by reengineering its end-to-end order forecasting process. For example, improved prediction would help achieve fuller trucks, and improved inventory management has a direct impact on sustainability and levels of CO2 production.

The CPG company undertook planning quarterly, centrally forecasting orders. However, half of these central forecasts were subsequently changed by the company's local planners, firstly because the local planners had more detailed account information and did not believe the centrally generated forecasts, and secondly because quarterly forecasts were unable to keep up with day-to-day account developments.

So there was a big disconnect between the plan and the reality. To address this, Capgemini undertook a process redesign and proposed daily planning, entailing:

  • Planning overnight daily, with machine learning used to forecast orders based on the actual orders received up to that point (a simplified sketch of such a forecast follows this list)
  • Removing local planners' ability to change order forecasts but making them responsible for improving the quality of the master data underpinning the automated forecasts, such as identifying the correct warehouse used to deliver to a particular customer.
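
As a simplified sketch of what such an overnight forecasting job might look like (the model choice, feature names, and data shapes here are our assumptions, not disclosed details of the engagement):

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Illustrative overnight job (assumed features/shapes, not Capgemini's model):
# predict final period orders per account from orders booked so far.
def nightly_forecast(history: pd.DataFrame, today: pd.DataFrame) -> pd.Series:
    """history: past periods with columns [orders_to_date, day_of_period,
    final_orders]; today: current accounts with the first two columns."""
    features = ["orders_to_date", "day_of_period"]
    model = GradientBoostingRegressor().fit(history[features], history["final_orders"])
    return pd.Series(model.predict(today[features]), index=today.index)
```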

This process redesign involved comprehensive automation of the value chain and the use of a data lake built on Azure as the source of data for all predictions.

Capgemini has now been awarded a 5-year contract with a contractual goal of completing the transformation in three years.

]]>
<![CDATA[Capgemini’s Intelligent Process Automation, Part 2: Intelligent Transaction Routing & Digital Twins Maximize RoI Delivery]]>

 

Part 1 of this blog focused on Capgemini’s structured approach to workforce motivation and upskilling when transitioning to a Frictionless Enterprise that leverages a digitally augmented workforce. This second part looks at how, when adopting a digitally augmented workforce, it is critical to ensure optimized routing of incoming queries and transactions between humans and machines, and to ensure that the expected RoI is delivered from automation projects.

Intelligent Routing of Transactions between Workforce and Machines

Intelligent query access and routing is essential to successfully deploy a hybrid human/machine workforce to achieve the optimal allocation of transactions between personnel and machines. For example:

  • For a North American manufacturer, Capgemini combined RPA with multiple microservices from AWS and Google, plus Capgemini code, to classify 41 categories of incoming accounts payable queries. If classification is possible, the query is allocated either to a human or to a machine. Queries that go the machine route have their text analyzed using NLP, and actions are then triggered to collect the information necessary to answer the query. If the confidence level in the response exceeds 95%, the answer is sent automatically; if not, the query and response are sent to a human for review and confirmation (see the sketch after this list). This is an example of a digitally augmented workforce
  • For another client, Capgemini reduced the cost per query of procure-to-pay queries from 180 cents to 17 cents by using a digitally augmented workforce. The company’s AI Query Classifier uses NLP and ICR to extract the relevant information from the unstructured text, validate the query and automate ticket creation. Its AI Workload Distribution then orchestrates the process and decides whether each case goes through automated or human resolution
  • Elsewhere, a client had a large team serving billable transactions in 24 languages, but 30%-40% of the transactions it received were not relevant to this team. Capgemini implemented 90% automated identification & indexing for 21 of these 24 languages. The data is validated, further data is retrieved where necessary, and the data is then revalidated. Business rules are then applied to identify whether each transaction is handled manually or automatically. Savings of ~75% of the total effort were achieved.
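
The 95% confidence threshold in the first example is a classic human-in-the-loop pattern. A minimal sketch, with hypothetical classify and draft_answer functions (not Capgemini’s code):

```python
# Minimal sketch of confidence-based routing between automated and human
# resolution; the 95% threshold is the one cited in the first example above.
AUTO_SEND_THRESHOLD = 0.95

def route_query(query: str, classify, draft_answer) -> dict:
    category, confidence = classify(query)   # e.g. one of 41 AP categories
    answer = draft_answer(query, category)
    if confidence >= AUTO_SEND_THRESHOLD:
        return {"route": "auto", "answer": answer}      # send automatically
    return {"route": "human_review", "answer": answer}  # queue for review
```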

The use of machine translation is becoming increasingly important in these situations, and Capgemini is now working on machine language translation to reduce its dependency on nearshore centers employing large numbers of native speakers in multiple languages.

Pre-built automation assets are also important in combining best practices and intelligent automation. Here, Capgemini has introduced 890 by Capgemini, a catalog of analytics services that enables organizations to access analytical and AI solutions and datasets from within their own organization, from multiple curated third-party providers, and from Capgemini. Capgemini has focused on the provision of sector-specific solutions and currently offers ~110 of them.

Introduction of Digital Twins Ensures Delivery of RoI from Technology Deployment

Capgemini’s approach to data-driven process discovery and excellence is based on combining process mining using process logs, task capture and task mining using desktop recorders, productivity analytics for each individual, and use of digital twins.

Tools used include FortressIQ, Celonis, and Capgemini’s proprietary Prompt tool. These tools are combined with Capgemini’s Digital Global Enterprise Model (D-GEM) platform to incorporate best-in-class processes and frictionless processing.

Digital twins are used to progress process discovery beyond digital snapshots, providing ongoing process watching, assessment, and definition of opportunities. They also allow Capgemini to simulate the real returns that will be achieved by the introduction of technology, highlighting any other process constraints that would otherwise be exposed and limit the expected RoI from automation initiatives.
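
A toy model illustrates why such simulation matters: the realized saving from automating one step is capped by the next process constraint, not by the step’s own capacity gain.

```python
# Toy model (not Capgemini's digital twin): end-to-end throughput is set by
# the slowest step, so automating one step only helps up to the next bottleneck.
def throughput(capacities: dict) -> float:
    return min(capacities.values())  # items/hour limited by the slowest step

steps = {"intake": 100, "validate": 60, "approve": 80}
before = throughput(steps)   # 60, constrained by 'validate'
steps["validate"] = 200      # automate validation
after = throughput(steps)    # 80, now constrained by 'approve'
print(before, after)         # the gain is 60 -> 80, not 60 -> 200
```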

Capgemini’s approach to process digital twin introduction is:

  • To start with business mining, a combination of process mining, task mining, and Capgemini’s D-GEM platform
  • This is followed by benchmarking the processes against D-GEM
  • Then simulating the impact of introducing technology, calculating the business case, and ensuring that the result achieved is close to what was anticipated by identifying any potential process bottlenecks that might reduce the technology deployment’s savings. These simulations also help in accelerating the approval of intelligent automation projects and the scaling of digital transformation within the enterprise, since they increase management confidence in the certainty of project outcomes
  • This is followed by continuous improvement and identifying ongoing areas for improvement.

Also, during the pandemic it has become increasingly difficult to run onsite workshops for automation opportunity identification, making it increasingly necessary to use digital twin process mining of individuals’ machines to build business cases remotely. This development may become standard practice post-pandemic if it proves to be a faster and more reliable basis for opportunity identification than interviewing SMEs.

Conclusion

In conclusion, the deployment of technology is arguably the easy part of intelligent process automation projects. Two more challenging elements have always been interpreting and routing unstructured transactions and queries and identifying and delivering RoI. Capgemini’s Frictionless Enterprise approach – that leverages a digitally augmented workforce – addresses both these challenges by combining technologies for classification and routing unstructured transactions and queries, and introducing process digital twins to ensure RoI delivery.

You can read Part 1 of this blog here.

]]>
<![CDATA[Capgemini's Intelligent Process Automation, Part 1: Significant Growth From Frictionless Enterprise Approach]]>

 

This is Part 1 of a two-part blog looking at Capgemini’s Intelligent Process Automation practice. Here I examine Frictionless Enterprise, Capgemini’s framework for intelligent process automation that focuses on the adoption of a digitally augmented workforce.

Digital transformation has been high on enterprise agendas for some years. However, COVID-19 has given the drive to digital transformation even greater impetus as organizations have increasingly looked to reduce cost, implement frictionless processing, and decouple their increasingly unpredictable business volumes from the number of servicing personnel required.

For Capgemini, this has resulted in unprecedented increases in Intelligent Process Automation bookings and revenue in 2020.

Frictionless Enterprise & the hybrid workforce

There is always a danger in intelligent automation projects of regarding people as secondary considerations and addressing the workforce through reactive change management. As part of its Frictionless Enterprise approach, Capgemini’s framework for intelligent process automation stresses the adoption of a digitally augmented workforce and aims to avoid this pitfall by maintaining high workforce-centricity, involving employees in the automation journey through a structured approach to workforce communication and upskilling.

Capgemini's Intelligent Automation Practice emphasizes the workforce communication and reskilling needed to achieve a digitally augmented or hybrid workforce. This involves putting humans at the center of the hybrid workforce and motivating and reskilling them.

The personnel-related stages in the journey towards a Frictionless Enterprise that leverages a digitally augmented workforce used by Capgemini are:

  • Design of the augmented workforce. On the design side, it is important to ask, "what is the impact of technology on the workforce and how should the organization's competency model change?" How is the workforce of the future defined?
  • Building the augmented workforce
  • Creating the right context.

Client cases

In one client example, Capgemini assisted a major capital markets firm in designing and building its digitally augmented workforce, using a four-step process:

  1. Resource profiling
  2. Dedicated curriculum creation
  3. Pilot on 15% of resources
  4. Augmented workforce scaling.

Step 1 involved identifying personnel with a statistics or mathematics background who could be potential candidates for, say, ML data analysis. These potential candidates were then interviewed and tested to ensure their ability, for example, to run a Monte Carlo simulation.

Having established the desired job profiles, these personnel were allocated to various job families, such as automation business analysts, data analysts, power users, and developers, with developers split into low code/no-code developers and advanced developers.
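
For context, the kind of Monte Carlo exercise mentioned in Step 1 can be very small. An illustrative example (all parameters invented): estimating the probability that a portfolio loses more than 10% over a year under normally distributed daily returns.

```python
import random

# Illustrative Monte Carlo test: probability of a >10% portfolio loss over a
# trading year, assuming i.i.d. normal daily returns (parameters invented).
def loss_probability(mu=0.0003, sigma=0.01, days=252, trials=10_000) -> float:
    breaches = 0
    for _ in range(trials):
        value = 1.0
        for _ in range(days):
            value *= 1 + random.gauss(mu, sigma)
        breaches += value < 0.9
    return breaches / trials

print(loss_probability())
```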

Step 2: A dedicated curriculum was created in support of each of the job families. However, to ensure the training was focused, and to increase employee engagement and retention, each employee was tasked up-front with clearly defined projects to be undertaken following training. This kept the training relevant and avoided a demotivating disconnect between training and deployment.

Step 3: 15% of the entire team was then trained and deployed in their new roles. This figure ranges between 5% and 15% depending on the client, but it is important to deploy on a subset of the workforce before rolling out more widely across the organization. This has the dual advantages of testing the deployment and creating an aspirational group that other employees wish to join.

Step 4: Roll-out to the wider labor force. The speed of roll-out typically depends on the sector and company culture.

Capgemini has also helped a wealth management company improve its ability to supply information from various sources to its traders by enhancing its capabilities in data management and automation. In particular, this required upskilling its workforce to address shortages of data, automation, and AI skillsets.

This involved a 3-year MDM Ops modernization program with dedicated workforce augmentation and upskilling for digitally displaced personnel, starting with three personnel groups.

This resulted in an average processing speed increase of 64% and an estimated data quality increase of 50%, and the approach was subsequently adopted more widely within the company's in-house operations.

AI Academy Practitioner's Program

Capgemini has created its AI Academy Practitioner's Program, an "industrialized approach" to AI training to support workforce upskilling. This program is mentor-led and customizable by sector and function to ensure that it supports the organization's current challenges.

The program's technical elements include:

  • "Qualifying" (6 hours over 3 days) for personnel who only need to be aware of the potential of AI
  • "Professional" (10-hours per week for 4 weeks), where personnel are provided with low code tools to start developing something
  • "Expert" (10-hours per week for 4 weeks), incorporating custom AI & ML model building.

The program's functional courses include:

  • Data literacy (4 hours over 4 days)
  • Business functional (10 hours per week for 4 weeks)
  • Business influencer (CXO) (15 hours over 3 days)
  • Intelligent process automation (15 hours over 3 days), highlighting how to combine the automation stack with AI.

Conclusion

In conclusion, the deployment of technology is arguably the easy part of intelligent process automation projects. A more challenging element has always been to motivate the workforce to come forward with ideas and enthusiastically adopt change. Capgemini's Frictionless Enterprise approach – that leverages a digitally augmented workforce – addresses this challenge by adopting an aspirational approach to upskilling the workforce and removing the disconnect between training and deployment.

]]>
<![CDATA[Automation Anywhere Launches AARI to Facilitate Bot Access to Employees]]>

 

NelsonHall recently attended Automation Anywhere's 2020 innovation day, where the company launched its Automation Anywhere Robotic Interface (AARI) digital assistant focused on making bot usage easier and more accessible to employees.

Automation Anywhere Robotic Interface

AARI “aims to elevate employees' workflows in the same manner as at-home digital assistants such as Alexa and Siri have enhanced their home life” and increase the adoption of RPA in the front and back office.

The AARI application allows users to:

  • Launch bots providing integrations to, for example, Salesforce, Google Sheets, and Microsoft Excel through a chat-based interface, in addition to the desktop application, mobile, and web interfaces. Automation Anywhere will also add voice support for accessing bots
  • Provide form-style entry into the bot, with the information then disseminated to the client’s business applications
  • Manage escalation scenarios.

Automation Anywhere expects CoEs to use AARI to create attended bots triggered using natural language conversations for workgroups or business users across front and back offices.

For example, in a contact center handling customer loans, once the CoE has established the logic behind loan terms and conditions, the workflow with AARI could be optimized to:

  • Collect the customer data from across platforms before the call, and present it in the contact center’s CRM platform of choice (e.g., Salesforce)
  • Provide forms during the call for the CX agent to populate with information from the conversation, which can then be used to populate the appropriate platforms, reducing the need for re-entry of information
  • Extract unstructured information from emailed PDFs using IQ Bot and run credit checks in the background
  • Suggest context-specific next-best actions incorporating business rules
  • On a natural language command from the CX agent to AARI, such as ‘send over the new loan terms to this customer,’ use the previously established logic to create a set of terms and conditions and email them to the customer.

Early adopter client examples include:

  • Colombian financial services firm Bancolombia using AARI to reduce in-branch wait times. The deployment of AARI was completed in one month, resulted in a $19m reduction in provision costs and a 59% reduction in response time, and delivered a 1300% ROI in its first year
  • CX BPO firm TaskUs using AARI to improve employee experience, shorten training cycles, and improve agent performance for a San Antonio-based client, resulting in a 20-second reduction in AHT, a 3.4% improvement in CSAT, and a 2.7% improvement in call quality.

Interactions with AARI are created using standard drag-and-drop task items from the toolbox and can leverage Automation Anywhere's Discovery Bot and other features. AARI will be charged on a $35 per user per month basis.

How distinctive is the AARI concept?

  • UiPath’s Forms feature provides input-form functionality similar to AARI’s, allowing users to design forms for data input and then disseminate that data across business applications, but it does not allow users to launch bots through conversational input with a digital assistant
  • The NICE Employee Virtual Assistant (NEVA) acts as an automation finder to launch pre-existing processes and conversational AI-based scenarios but lacks form-style entry
  • In the front office space, the use of bots to integrate platforms to reduce data entry and swivel chair activities is not new. Many of the CX Services vendors have had this form of capability for some years, in addition to handling escalation scenarios as a hygiene factor. These platforms do not, however, offer the automation capabilities of an RPA implementation. The CX vendors also have roadmaps to include features such as chatbots to capture sensitive information and assess the customer's tone to provide answers tailored to their emotional response.

Automation Anywhere differs in bringing together the form data entry capabilities of the RPA providers and CX vendors, and the more niche ability to interact with bots through more conversational means.

]]>
<![CDATA[AntWorks Targets Breadth & Depth in Client Engagements, Partners & Curation Capabilities]]>

 

Last week, NelsonHall attended ANTENNA2020, AntWorks’ yearly analyst retreat. AntWorks has made considerable progress since its last analyst retreat, experiencing strong growth (estimated at ~260%) in the three quarters ending January 2020, and employing 604 personnel at the end of this period.

By geography, AntWorks’ strongest region remains APAC, closely followed by the Americas, with the company having an increasingly balanced presence across APAC, the Americas, and EMEA. By sector, AntWorks’ client base remains largely centered on BFSI and healthcare, which together account for ~70% of revenues.

The company’s success continues to be based on its ability to curate unstructured data, with all its clients using its Cognitive Machine Reading (CMR) platform and only 20% using its wider “RPA” functionality. Accordingly, AntWorks is continuing to strengthen its document curation functionality while starting to build point solutions and building depth into its partnerships and marketing.

Ongoing Strengthening of Document Curation Functionality

The company is aiming to “go deep” rather than “shallow and wide” with its customers, citing the example of one client which started with a single unstructured document use case and has, over the past year, introduced ten additional unstructured document use cases, resulting in revenues of $2.5m.

Accordingly, the company continues to strengthen its document curation capability, and recent CMR enhancements include signature verification, cursive handwriting, language extension, sentiment analysis, and hybrid processing. The signature verification functionality can be used to detect the presence of a signature in a document and verify it against signatures held centrally or on other documents. It is particularly applicable in KYC and fraud avoidance where, for example, a signature on a passport or driving license can be matched with those on submitted applications.
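
As an illustration of the underlying task (CMR’s actual verifier is proprietary and far more sophisticated), signature matching can be sketched as feature comparison between a reference and a submitted crop:

```python
import cv2

# Minimal sketch (not CMR's verifier): compare a submitted signature crop
# against a reference using ORB features; a production system would use a
# trained model and carefully tuned thresholds.
def signatures_match(ref_path: str, sub_path: str, min_good: int = 25) -> bool:
    orb = cv2.ORB_create()
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    _, des1 = orb.detectAndCompute(cv2.imread(ref_path, cv2.IMREAD_GRAYSCALE), None)
    _, des2 = orb.detectAndCompute(cv2.imread(sub_path, cv2.IMREAD_GRAYSCALE), None)
    if des1 is None or des2 is None:
        return False  # no usable features in one of the images
    good = [m for m in matcher.match(des1, des2) if m.distance < 40]
    return len(good) >= min_good
```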

This strategy of depth in document curation functionality resonated strongly with the clients speaking at the event. In one case, it was the platform’s ability to analyze cursive handwriting and typed text together that led a number of competitors, tasked with building a POC that could extract cursive writing, to drop out early.

AntWorks also continues to extend the range of languages in which it can curate documents; currently, 17 languages are supported. The company has changed its learning process to allow quicker training on new languages, with support for Mandarin and Arabic available soon.

Hybrid processing enables multi-format documents containing, for example, text, cursive handwriting, and signatures to be processed in a single step.

Elsewhere, AntWorks has addressed a number of hygiene factors with QueenBOT, enhancing its business continuity management, auto-scaling, and security. Auto-scaling in QueenBOT allows bots to switch between processes if one process requires extra assistance to meet SLAs, effectively allowing bots to be “carpenters in the morning and electricians in the evening,” increasing both SLA adherence and bot utilization.
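
A minimal sketch of the scheduling idea (assumed logic; AntWorks has not published QueenBOT’s algorithm): idle bots are pulled toward whichever queue is most at risk of breaching its SLA.

```python
# Minimal sketch (assumed scheduling logic, not AntWorks' implementation):
# pull idle bots toward the queue most at risk of missing its SLA.
def most_at_risk(queues: dict) -> str:
    # Risk proxy: backlog relative to time remaining before the SLA deadline.
    return max(queues, key=lambda q: queues[q]["backlog"] / queues[q]["minutes_left"])

queues = {
    "invoices": {"backlog": 120, "minutes_left": 60},
    "claims":   {"backlog": 40,  "minutes_left": 90},
}
print(most_at_risk(queues))  # 'invoices' -> reassign bots from 'claims'
```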

Another key hygiene factor addressed in the past year has been training material. AntWorks began 2019 with a thin training architecture, with just two FTEs supporting the rapidly expanding company; over the past year, the number of FTEs supporting training has grown to 25, supporting the creation of thousands of hours of training material. AntWorks also launched its internship program, starting in India, which added 43 FTEs in 2019. The ambition this year is to take this program global.

Announcement of Process Discovery, Email Agent & APaaS Offerings

Process discovery is an increasingly important element in intelligent automation, helping to remove the up-front cost involved in scaling use cases by identifying and mapping potential use cases.

AntWorks’ process discovery module enables organizations either to record the keystrokes taken by one or more users across multiple transactions or to import keystroke data from third-party process discovery tools. From these recordings, it uses AI to identify the cycles of the process, i.e. the individual transactions, and presents the user with the details of the workflow, which can then be grouped into process steps for ease of use. The process discovery module can also be used to help identify the business rules of the process and assist in semi-automatic creation of the identified automations (aka AutoBOT).
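
One simple way to picture the cycle-detection step (a crude stand-in for the AI-based detection described, not AntWorks’ algorithm) is splitting an event log wherever the user returns to the starting action:

```python
# Crude stand-in for cycle detection (not AntWorks' algorithm): start a new
# cycle each time the log returns to the opening action.
def split_into_cycles(events: list) -> list:
    cycles, current = [], []
    for event in events:
        if current and event == current[0]:
            cycles.append(current)
            current = []
        current.append(event)
    if current:
        cycles.append(current)
    return cycles

log = ["open_case", "copy_id", "paste_crm", "close_case",
       "open_case", "copy_id", "paste_crm", "close_case"]
print(split_into_cycles(log))  # two four-step transactions
```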

The process discovery module aims to offer ease of use compared to competitive products and can, besides identifying transaction steps, be used to assist organizations in calculating the RoI on business cases and in estimating the proportions of processes that can be automated, though AntWorks is understandably reluctant to underwrite these estimates.

One of the challenges for AntWorks over the coming year is to develop standardized use cases/point solutions based on its technology components, initially in horizontal form, and ultimately verticalized. Two of these just announced are Email Agent and Accounts Payable as-a-Service (APaaS).

Email Agent is a natural progression for AntWorks given its differentiation in curating unstructured documents, built on components from the ANTstein full stack and packaged for ease of consumption. It is a point solution designed solely to automate email traffic and encompasses ML-based email classification, sentiment analysis to support email prioritization, and extraction of actionable data. Email Agent can also respond contextually via templated static or dynamic content. AntWorks estimates that 40-50 emails are sufficient to train each use case, such as HR-related email.

The next step in the development of Email Agent is the production of verticalized solutions, training the model on specific verticals to understand the front-office relationships that organizations (such as those in the travel industry) have with their clients.

APaaS is a point solution consisting of a pre-trained configuration of CMR that extracts relevant information from invoices, which can then be passed via API into accounting systems such as QuickBooks. Through these point solutions offered on the cloud, AntWorks hopes to open up the SME market.
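
The integration half of APaaS amounts to posting extracted fields to an accounting API. A minimal sketch with a hypothetical endpoint and field names (QuickBooks’ real API differs):

```python
import json
import urllib.request

# Hypothetical endpoint and field names, for illustration only; QuickBooks'
# real API differs. Pushes CMR-extracted invoice fields to an accounting system.
def post_invoice(extracted: dict, base_url: str, token: str) -> int:
    payload = json.dumps({
        "vendor": extracted["vendor_name"],
        "amount": extracted["total"],
        "due_date": extracted["due_date"],
    }).encode()
    req = urllib.request.Request(
        f"{base_url}/invoices", data=payload, method="POST",
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status  # e.g. 201 on success
```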

Focusing on Quality of Partnerships, Not Quantity

Movement on AntWorks’ partner ecosystem (now ~66 partners) has been slower than expected, with only a handful of partners added since last year’s ANTENNA event, despite its expansion being a priority. Instead, AntWorks has been ensuring that the partnerships it does have, and those it signs, are deep and constructive. Examples include Bizhub and Accenture, two recently added partners that are helping train CMR in Korean and Thai respectively, in exchange for a period of exclusivity in those countries.

AntWorks is also partnering with SBI Group to penetrate the South East Asia marketplace, with SBI assisting AntWorks in implementing the ability to carry out data extraction in Japanese. Elsewhere, AntWorks has partnered with the SEED Group based in Dubai and chaired by Sheikh Saeed Bin Ahmed Al Maktoum to access the MENA (Middle East & North Africa) region.

Hugo Walkinshaw was recently brought in to lead the partnership ecosystem, and he has his work cut out for him: CEO Ash Mehra targets a ratio of direct sales to sales through partners of between 60:40 and 50:50, an ambitious target given the current 90:10 ratio. The aim is to achieve this through the current strategy of working very closely with partners, signing exclusive partnerships where appropriate, and targeting less mature geographies and emerging use cases, such as IoT, where AntWorks can establish a major presence.

In the coming year, expect AntWorks to add more deep partnerships focused on specific geographic presence in less mature markets and targeted verticals, and possibly with technology players to support future plans for running bots on embedded devices such as ships.

Continuing to Ramp Up Marketing Investment

AntWorks was relatively unknown 18 months ago but has made a major investment in marketing since then. AntWorks attended ~50 major events in 2019, and possibly 90 events in total counting all minor events. However, AntWorks’ approach to events is arguably even more important than the number attended, with the company keen to establish a major presence at each event it attends. AntWorks does not wish to be merely another small booth in the crowd, instead opting for larger spaces in which it can run demos to support interest from clients and partners.

This appears to have had the desired impact. Overall, AntWorks states that in the past year it has gone from being invited to RFIs/RFPs in 20% of cases to 80% and that it intends to continue to ramp up its marketing budget.

A series B round of funding, currently underway, is targeted at expanding its marketing investments as well as its platform capabilities. Should AntWorks utilize this second round of funding as effectively as its first with SBI Investments two years ago, we expect it to act as a springboard for exponential growth and deeper relationships, and for AntWorks to continue to lead in middle- and back-office intelligent automation use cases with high volumes of complex or hybrid unstructured documents.

]]>
<![CDATA[Genpact Acquires Rightpoint to Strengthen 'Experience' Capability]]>

 

Enterprise operations transformation requires three critically important capabilities:

  • Domain process expertise and the ability to identify new “digital” target operating models
  • Transformational technology capability, leveraging technologies such as cloud platforms and intelligent automation to elevate straight-through processing and self-service principles ahead of agent-based processing
  • Experience design and implementation, now highly important to optimize the experience across entire customer, employee, and partner populations.

Genpact has strong domain process expertise and, in recent years, has developed strong transformational technology capability but, despite its acquisition of TandemSeven, has historically possessed lower levels of capability in “experience” design and development.

However, TandemSeven’s experience capability was becoming highly important to Genpact even in core activities such as order management and collections, and Genpact recognized that “experience” was potentially a key differentiating factor for the company. Accordingly, having seen the benefits of integrating TandemSeven, Genpact increasingly looked to go up the value chain in experience capability by both enhancing and scaling its existing capabilities.

Rightpoint Judged to be Highly Complementary to TandemSeven

Rightpoint was then identified as a possible acquisition target by the Genpact M&A team, with Genpact judging that Rightpoint’s assets and capabilities were highly complementary to those of TandemSeven.

Rightpoint currently employs ~450 personnel and positions as a full-service digital agency offering multidisciplinary teams across strategy, design, content, engineering, and insights. The company was formed on the thesis that employee experience is paramount, initially focusing on employee experience (a key area for Genpact) and subsequently developing an increasing emphasis on consumer experience in recent years.

Genpact perceives that Rightpoint can make a significant contribution to helping organizations “define the creative, define the interactive, and hence define a higher experience.” The company’s clients include Aon, Sanofi, M Health, Grant Thornton, Flywheel, and Walgreens. For example, Rightpoint has defined and designed the entire employee experience for Grant Thornton, where the company developed an employee information sharing and knowledge management platform. In addition, Rightpoint has assisted a large pharmaceutical company in creating a patient engagement application to encourage patients to monitor their insulin and sugar levels.

In addition to a complementary skillset, Rightpoint is also complementary to TandemSeven in industry presence. TandemSeven has a strong focus on financial services, with Rightpoint having a significant presence in healthcare and clients in consumer goods, auto, and insurance.

Maximizing the Synergies Between Genpact & Rightpoint

Genpact expects to grow both Rightpoint’s and its own revenues by exploiting the synergies between the two organizations.

One initial synergy being targeted by Genpact is providing end-to-end and “closed loop” services to its clients. Rightpoint employs both creative and technology personnel, with its creative personnel typically having a blend of technology capability allowing them to go from MVP to first product to roll-out. Rightpoint is a Microsoft Customer Engagement Alliance National Solution Provider, a Sitecore Platinum Partner, a certified Google Developer Agency, and also has partnerships with Episerver and Salesforce.

However, the company lacks the process and domain expertise that Genpact can bring to improve process target models and process controls & management. For example, for the medical company example above, Rightpoint could develop the app, while Genpact could run the app and provide the analytics to improve patient engagement, with Rightpoint then modifying the app accordingly.

Secondly, Genpact will support Rightpoint’s growth by bringing financial muscle to Rightpoint, facilitating:

  • An ability to invest in new technology capability in platforms such as Shopify and Adobe
  • The financial means to be able to spend a significant amount of time doing discovery work with clients and prospects, and hence targeting larger-scale assignments.

However, Genpact is being careful not to overstretch Rightpoint. The company intends to be highly disciplined in introducing Rightpoint to its accounts, initially targeting just those champion accounts where Rightpoint will enable Genpact to create a significant level of differentiation.

Genpact also perceives that it can learn from Rightpoint delivery methodologies. Rightpoint has a strong methodology in driving agile delivery and makes extensive use of gig workers (with ~10-15% of its workforce being gig workers) and these are both areas where Genpact perceives it can apply Rightpoint practice to its wider business.

Rightpoint Will Retain its Identity, Culture & Management

Rightpoint and TandemSeven will be integrated, with a porting of expertise and resources between the two companies, and with Ross Freedman heading an expanded Rightpoint capability reporting into Genpact’s transformation services lead.

In terms of the current organization, Rightpoint has an experience practice and a digital operations practice. This includes an offshore delivery center in Jaipur and technology practice groups. However, while the practices are national, most of Rightpoint’s client delivery work is carried out in regional centers to give strong client proximity. The company’s HQ is in Chicago, with regional centers in Atlanta, Boston, Dallas, Denver, Detroit, Los Angeles, New York, and Oakland.

In due course, Genpact will likely further restructure some of the delivery, with a greater proportion of non-client-facing activity being moved into offshore CoEs.

]]>
<![CDATA[D-GEM: Capgemini’s Answer to the Problem of Scaling Automation]]> Finance & accounting is at the forefront of the application of RPA, with organizations attracted by its high volumes of transactional activity. Consequently, activities such as the movement and matching of data within purchase-to-pay have been a frequent start-point for organizational automation initiatives.

Organizations starting on RPA are initially faced with the challenges of understanding RPA tools and approaches and typically lack the internal skills necessary to undertake automation initiatives. Once these skills have been acquired, RPA is then often applied in a piecemeal fashion, with each use case considered by a governance committee on its own merits. However, once a number of deployments have been achieved, organizations then look to scale their automation initiatives across the finance function and are confronted by the sheer complexity, and impossibility, of managing the scaling of automation while maintaining a ‘piecemeal’ approach. At this point, organizations realize they need to modify their approach to automation and adopt a guiding framework and target operating model if they are to scale automation successfully across their finance & accounting processes.

In response to these needs, Capgemini has introduced its Digital Global Enterprise Model (D-GEM) to assist organizations in scaling automation across processes such as finance & accounting more rapidly and effectively.

Introducing D-GEM

The basic premise behind D-GEM is that organizations need both a vision and a detailed roadmap if they are to scale their application of automation successfully. Capgemini takes an automation-first approach to solutioning, with the client vision initially developed in “Five Senses of Intelligent Automation” workshops. In these workshops, Capgemini demos the various technologies and the possibilities from automation to clients, and establishes their new target operating model, taking into account:

  • The key outcomes sought within finance & accounting under the new target operating model. For example, key outcomes sought could be reduced DSO, increased working capital, and reduced close days
  • How the existing processes could be configured and connected better using “five senses”:
    • Act (RPA)
    • Think (analytics)
    • Remember (knowledge base)
    • Watch (machine vision & machine learning)
    • Talk (chatbot technology).

However, while the vision, goals, and technology are important, implementing this target operating model at scale requires an understanding of the underlying blueprint, and here Capgemini has developed D-GEM as the “practitioners’ guidebook”: a repository showing (e.g., for finance & accounting) what can be achieved and how to achieve it at a granular level (process level 4).

D-GEM essentially aims to provide the blueprint to support the use of automation and deliver the transformation. It is now being widely used within Capgemini and is being made available not just to the company’s BPO clients but for wider application by non-BPO clients within their SSCs and GBS organizations.

From GEM to D-GEM

Capgemini’s original GEM (Global Enterprise Model) was used for solutioning and driving transformation within BPO clients prior to the advent of intelligent automation technologies. Its transformation focus was on improving the end-to-end process and eliminating exceptions. It aimed to introduce best-in-class processes while optimizing the location mix and improving domain competencies and reflected the need to drive standardization and lean processes to deliver efficiency.

While the focus of D-GEM remains the introduction of “best-in-class” processes, best-in-class has now been updated to take into account intelligent automation technologies, and the transformation focus has changed to the application of automation to facilitate best-in-class. For example, industrialization of the inputs needs to be taken into account at an early stage if downstream processes are to be automated at scale. Alongside the efficiency focus on eliminating waste, it also looks to use technology to improve the user experience. For instance, rather than eliminating non-standard reporting, as has often been the focus in the past, deploying reporting tools and services on top of standardized inputs and data can enhance the user experience by allowing users to produce their own one-off reports based on consistent and accurate information.

D-GEM provides a portal for practitioners using the same seven levers as GEM, namely:

  • Grade Mix
  • Location Mix
  • Competencies
  • Digital Global Process Model
  • Technology
  • Pricing and Cost Allocations
  • Governance.

However, the emphasis within each of these levers has now changed, as explained in the following sections.

Role of the Manager Changes from Managing Throughput to Eliminating Exceptions

Within Grade Mix, Capgemini evaluates the impact of automation on the grade mix, including how to increase the manager’s span of control by adding bots as well as people, how to use knowledge to increase the capability at different grades, and how to optimize the team structure.

Under D-GEM, the role of the manager fundamentally changes. With the emphasis on automation-first, the primary role of the manager is now to assist the team in eliminating exceptions rather than managing the throughput of team members. Essentially, managers now need to focus on changing the way invoices are processed rather than managing the processing of invoices.

The needs of the agents also change as the profile of work changes with increased levels of task automation. Typically, agents now need a level of knowledge that will enable them to act as problem-solvers and trainers of bots. Millennials typically have great problem-solving skills, and Capgemini is using Transversal and the process knowledge base within D-GEM to skill people up faster and to ensure that Process Champions grow within each delivery team. Knowledge management tools thus have a key role to play in ensuring that knowledge is effectively dispersed and that able junior team members can expand their responsibilities more quickly.

The required changes in competency are key considerations within digital transformations, and it is important to understand how the competencies of particular roles or grades change in response to automation and how to ensure that the workforce knows how automation can enrich and automate their capabilities.

The resulting team structure is often portrayed as a diamond. However, Capgemini believes it is important not to end up with a top-heavy organization as a result of process automation. The basic pyramid structure doesn’t necessarily change, but the team now includes an army of robots, so while the span of managers will typically be largely unchanged in terms of personnel, they are now additionally managing bots. In addition, tools such as Capgemini’s “prompt” facilitate the management of teams across multiple locations.

Within Location Mix, as well as evaluating that the right processes are in the right locations and how the increased role of automation impacts the location mix, it is now important to consider how much work can be transitioned to a Virtual Delivery Center.

Process & Technology Roadmaps Remain Important

Within Digital Global Process Model, D-GEM provides a roadmap for best-practice processes powered by automation with integrated control and performance measures. Capgemini firmly believes that if an organization is looking to transform and automate at scale, then it is important to apply ESOAR (eliminate, standardize, optimize, automate, and then apply RPA and other intelligent automation technologies) first, not just RPA.

Finance & accounting processes haven’t massively changed in terms of the key steps, but D-GEM now includes a repository for each process, based on ESOAR, which shows which steps can be eliminated, what can be standardized, how to optimize, how to automate, how to robotize, and how to add value.

Within the Technology lever, D-GEM then provides a framework for identifying suitable technologies and future-proofing technology. It also indicates what technologies could potentially be applied to each process tower, showing a “five senses” perspective. For example, Capgemini is now undertaking some pilots applying blockchain to intercompany accounting to create an internal network. Elsewhere, for one German organization, Capgemini has applied Tradeshift and RPA on top of the organization’s ERP to achieve straight-through processing.

In addition, as would be expected, D-GEM includes an RPA catalog, listing the available artifacts by process, together with the expected benefits from each artifact, which greatly facilitates the integration of RPA into best practices.

Governance is also a critical part of transformation, and the Governance lever within D-GEM suggests appropriate structures to drive transformation, what KPIs should be used to drive performance, and how roles in the governance model change in the new digital environment.

Summary

Overall, D-GEM has taken Capgemini’s Global Enterprise Model and updated it to address the world of digital transformation, applying automation-first principles. While process best practice remains key, best practice is now driven by a “five senses” perspective and how AI can be applied in an interconnected fashion across processes such as finance and accounting.

]]>
<![CDATA[AntWorks Positioning BOT Productivity and Verticalization as Key to Intelligent Automation 2.0]]> Last week, AntWorks provided analysts with a first preview of its new product ANTstein SQUARE, to be officially launched on May 3.

AntWorks’ strategy is based on developing full-stack intelligent automation, built for modular consumption, and the company’s focus in 2019 is on:

  • BOT productivity, defined as data harvesting plus intelligent RPA
  • Verticalization.

In particular, AntWorks is trying to dispel the idea that intelligent automation needs to consist of three separate products from three separate vendors across machine vision/OCR, RPA, and AI in the form of ML/NLP, and to show that AntWorks can offer a single, though modular, end-to-end automation across these areas.

Overall, AntWorks positions Intelligent Automation 2.0 as consisting of:

  • Multi-format data ingestion, incorporating both image and text-based object detection and pattern recognition
  • Intelligent data association and contextualization, incorporating data reinforcement, natural language modelling using tokenization, and data classification. One advantage claimed for fractal analysis is that it facilitates the development of context from images such as company logos and not just from textual analysis and enables automatic recognition of differing document types within a single batch of input sheets
  • Smarter RPA, incorporating low code/no code, self-healing, intelligent exception handling, and dynamic digital workforce management.

Cognitive Machine Reading (CMR) Remains Key to Major Deals

AntWorks’ latest release, ANTstein SQUARE, is aimed at delivering BOT productivity by combining intelligent data harvesting with cognitive responsiveness and intelligent real-time digital workforce management.

ANTstein data harvesting covers:

  • Machine vision, including (to name a modest subset) fractal machine learning, a fractal image classifier, a format converter, a knowledge mapper, a document classifier, a business rules engine, and workflow
  • A pre-processing image inspector, where AntWorks demonstrated the ability of its pre-processor to sharpen text and images, invert white text on a black background, remove grey shapes, and adjust skewed and rotated inputs, typically giving an 8%-12% uplift (a minimal sketch of this kind of pre-processing follows this list)
  • Natural language modelling.
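
The pre-processing operations described above map onto standard image-manipulation steps. Below is a minimal sketch of this kind of pre-processing using OpenCV; it is illustrative only, not AntWorks’ implementation, and the sharpening kernel and deskew heuristic are our own assumptions.

```python
# Minimal sketch of document image pre-processing (illustrative only, not
# AntWorks' code): sharpen text, invert white-on-black pages, correct skew.
import cv2
import numpy as np

def preprocess(path: str) -> np.ndarray:
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)

    # Sharpen text and images with a simple unsharp-mask style kernel
    kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]])
    img = cv2.filter2D(img, -1, kernel)

    # Invert pages that are mostly dark (white text on a black background)
    if img.mean() < 127:
        img = cv2.bitwise_not(img)

    # Estimate skew from the minimum-area rectangle around dark pixels,
    # then rotate the page back to horizontal
    coords = np.column_stack(np.where(img < 128)).astype(np.float32)
    angle = cv2.minAreaRect(coords)[-1]
    angle = -(90 + angle) if angle < -45 else -angle
    h, w = img.shape
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    return cv2.warpAffine(img, m, (w, h), flags=cv2.INTER_CUBIC,
                          borderMode=cv2.BORDER_REPLICATE)
```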

Clearly, one of the major issues in the industry over the last few years has been the difficulty organizations have experienced in introducing OCR to supplement their initial RPA implementations in support of handling unstructured data.

Here, AntWorks has for some time been positioning its “cognitive machine reading” technology strongly against traditional OCR (and traditional OCR plus neural network-based machine learning), stressing its “superior” capabilities using pattern-based Content-based Object Retrieval (CBOR) to “lift and associate all the content” and achieve high accuracy of captured content, higher processing speeds, and the ability to train in production. AntWorks also takes a wide definition of unstructured data, covering not just typed text but also, for example, handwritten documents, signatures, and notary stamps.

AntWorks' Cognitive Machine Reading encompasses multi-format data ingestion, fractal network-driven learning for natural language understanding (using combinations of supervised, deep, and adaptive learning), and accelerators, e.g. for input of data into SAP.

Accuracy has so far been found to be typically around 75% for enterprise “back-office” processes, but the level of accuracy depends on the nature of the data, with fractal technology most appropriate where past data strongly correlates with future data and data variances are relatively modest. Fractal techniques are regarded by AntWorks as totally inappropriate in use cases where the data has high variance, e.g. crack detection on aircraft or analysis of mining data. In such cases, where access to neural networks is required, AntWorks plans to open up APIs to third-party services such as AWS.

Several examples of the use of AntWorks’ CMR were provided. In one of these, AntWorks’ CMR is used in support of sanction screening within trade finance for an Australian bank to identify the names of the parties involved and look for banned entities. The bank estimates that 89% of entities could be identified with a high degree of confidence using CMR with 11% having to be handled manually. This activity was previously handled by 50 FTEs.

Fractal analysis also makes its own contribution to one of ANTstein’s USPs: ease of use. The business user uses “document designer” to train ANTstein on a batch of documents for each document type; fractal analysis requires fewer training cases than neural networks, and its datasets also have inherently lower memory requirements, since the system uses data localization and does not extract unnecessary material.

RPA 2.0 “QueenBOTs” Offer “Bot Productivity” through Cognitive Responsiveness, Intelligent Digital Automation, and Multi-Tenancy

AntWorks is positioning to compete against the established RPA vendors with a combination of intelligent data harvesting, cognitive bots, and intelligent real-time digital workforce management. In particular, AntWorks is looking to differentiate at each stage of the RPA lifecycle, encompassing:

  • Design, process listener and discoverer
  • Development, aiming to move towards low code business user empowerment
  • Operation, including self-learning and self-healing in terms of exception handling to become more adaptive to the environment
  • Maintenance, incorporating code standardization into pre-built components
  • Management, based on “central intelligent digital workforce management”.

Beyond CMR, much of this functionality is delivered by QueenBOTs. Once the data has been harvested, it is orchestrated by the QueenBOT, with each QueenBOT able to orchestrate up to 50 individual RPA bots, referred to as AntBOTs.

The QueenBOT incorporates:

  • Cognitive responsiveness
  • Intelligent digital automation
  • Multi-tenancy.

“Cognitive responsiveness” is the ability of the software to adjust automatically to unknown exceptions in the bot environment, and AntWorks demonstrated the ability of ANTstein SQUARE to adjust in real-time to situations where non-critical data is missing or the portal layout has changed. In addition, where a bot does fail, ANTstein aims to support diagnosis on a more granular basis by logging each intermediate step in a process and providing a screenshot to show where the process failed.
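
As a rough illustration of this kind of granular diagnosis, a bot runner can log every step it executes and capture a screenshot at the point of failure. The sketch below is generic, not ANTstein’s internals; the step structure and the use of pyautogui for screenshots are assumptions.

```python
# Generic sketch of per-step bot logging with a failure screenshot
# (illustrative only, not ANTstein's implementation).
import logging
import pyautogui  # assumption: used here purely for desktop screenshots

logging.basicConfig(filename="bot_run.log", level=logging.INFO)

def run_with_diagnostics(steps):
    """steps: ordered list of (name, callable) pairs making up the process."""
    for idx, (name, action) in enumerate(steps, start=1):
        logging.info("step %d started: %s", idx, name)
        try:
            action()
            logging.info("step %d completed: %s", idx, name)
        except Exception:
            shot = f"failure_step_{idx}.png"
            pyautogui.screenshot(shot)  # capture the screen at the failure point
            logging.exception("step %d failed: %s (screenshot: %s)",
                              idx, name, shot)
            raise
```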

AntWorks is aiming to put use case development into the hands of the business user rather than data scientists. For example, ANTstein does its own model selection and so doesn’t require the data science expertise typically needed when using neural network-based technologies.

AntWorks also stressed ANTstein’s ease of use through pre-built components and through the recorder facility, which generates its own code; one client speaking at the event is aiming to handle simple use cases in-house and to outsource only the building of complex use cases.

AntWorks also makes a major play on reducing the cost of infrastructure compared to traditional RPA implementations. In particular, ANTstein addresses the issue of servers or desktops being allocated to, or controlled by, an individual bot by incorporating dynamic scheduling of bots based on SLAs rather than timeslots, and by enabling multi-tenancy, so that a user can work on a desktop while it simultaneously runs an AntBOT, or several AntBOTs can run simultaneously on the same desktop or server.
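
The contrast between timeslot-based and SLA-based dispatch can be illustrated with a priority queue ordered by SLA deadline, so that the most urgent job runs on whichever machine frees up first. This is a simplified sketch of the general technique, not AntWorks’ scheduler, and all names are hypothetical.

```python
# Simplified sketch of SLA-driven (rather than timeslot-driven) bot dispatch.
# No bot "owns" a machine: jobs run wherever capacity is free, in SLA order.
import heapq
import time
from dataclasses import dataclass, field

@dataclass(order=True)
class Job:
    sla_deadline: float            # epoch seconds by which the job must finish
    name: str = field(compare=False)

queue: list[Job] = []

def submit(name: str, sla_seconds: float) -> None:
    heapq.heappush(queue, Job(time.time() + sla_seconds, name))

def dispatch_next(free_machines: list[str]) -> None:
    # The most urgent job goes to whichever machine is free first
    if queue and free_machines:
        job = heapq.heappop(queue)
        machine = free_machines.pop(0)
        print(f"running {job.name} on {machine}, due by {job.sla_deadline:.0f}")

submit("invoice_bot", sla_seconds=3600)  # must finish within the hour
submit("report_bot", sla_seconds=600)    # tighter SLA, so dispatched first
dispatch_next(["desktop-07"])
```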

Building Out Vertical Point Solutions

A number of the AntWorks founders came from a BPO background, which gave them a focus on automating middle- and back-office processes and the recognition that bringing domain and technology together is critical to process transformation and to building a significant business case.

Accordingly, verticalization is a major theme for AntWorks in 2019. In addition to support for a number of horizontal solutions, AntWorks will be focusing on building point solutions in nine verticals in 2019, namely:

  • Banking: trade finance, retail banking account maintenance, and anti-money laundering
  • Mortgage (likely to be the first area targeted): new application processing, title search, and legal description
  • Insurance: new account set up, policy maintenance, claims handling, and KYC
  • Healthcare & life sciences: BOB reader, PRM chat, payment posting, and eligibility
  • Transportation & logistics: examination evaluation
  • Retail & CPG: no currently defined point solutions
  • Telecom: customer account maintenance
  • Media & entertainment: no currently defined point solutions
  • Technology & consulting: no currently defined point solutions.

The aim is to build point solutions (initially in conjunction with clients and partners) that will be 80% ready for consumption with a further 20% of effort required to train the bot/point solution on the individual company’s data.

Building a Partner Ecosystem for RPA 2.0

The company claims to have missed the RPA 1.0 bus by design (it commenced development of “full-stack” ANTstein in 2017) and is now trying to get out the message that the next generation of Intelligent Automation requires more than OCR combined with RPA to automate unstructured data-heavy, industry-specific processes.

The company is not targeting companies with small numbers of bot implementations but is ideally seeking dozens of clients, each with the potential to build into a $10m relationship. Accordingly, the bulk of the company’s revenues currently comes from, and is likely to continue to come from, CMR-centric sales to major enterprises, either direct or through relationships with major consultancies.

Nonetheless, AntWorks is essentially targeting three market segments:

  • Major enterprises with CMR-centric deals
  • RPA 2.0, through channels
  • Point solutions.

In the case of major enterprises, CMR is typically pulling AntWorks’ RPA products through to support the same use cases.

AntWorks is trying to dissociate itself from RPA 1.0, positioning strongly against the competition on the basis of “full stack”, and is somewhat ambivalent about utilizing a partner ecosystem that is already tied to the mainstream RPA products. Nonetheless, the company is in the early stages of building a partner ecosystem for its RPA product based on:

  • Referral partners
  • Authorized resellers
  • Managed Services Program, where partners such as EXL build their own solutions incorporating AntWorks
  • Technology Alliance partners
  • Authorized training partners
  • University partners, to develop a critical mass of entry-level automation personnel with experience in AntWorks and in Intelligent Automation in general.

Great Unstructured Data Accuracy but Needs to Continue to Enhance Ease of Use

A number of AntWorks’ clients presented at the event and it is clear that they perceive ANTstein to deliver superior capture and classification of unstructured data. In particular, clients liked the product’s:

  • Superior natural language-based classification using limited datasets
  • Ability to use codeless recorders
  • Ability to deliver greater than 70% accuracy at the PoC stage.

However, despite the product’s advantages in terms of ease of use, clients would like further fine-tuning of the product in areas such as:

  • The CMR UI/UX, which is not particularly user-friendly; the very long list of options is hard for business users to understand, and they require a shorter, more structured UI
  • Improved ease of workflow management, including the ability to connect to popular workflows.

So, overall, while users should not yet consider mass replacement of their existing RPA deployments, particularly where these are being used for simple rule-based process joins and data movement, ANTstein SQUARE is well worth evaluating for major organizations that have high-volume, industry-specific or back-office processes involving multiple types of unstructured documents in typed or handwritten form, and where achieving accuracy of 75%+ will have a major impact on business outcomes. Here, and in the industry solutions being developed by AntWorks, it probably makes sense to use the full ANTstein stack, utilizing both CMR and RPA functionality. In addition, CMR could be used in standalone form to extend an existing RPA-enabled process to handle large volumes of unstructured text.

Secondly, major organizations that have a major RPA roll-out still to conduct at scale, are becoming frustrated at their level of bot productivity, and are prepared to introduce a new RPA technology, should consider evaluating AntWorks' QueenBOT functionality.

The Challenge of Differentiating from RPA 1.0

If it is to take advantage of its current functionality, AntWorks urgently needs to differentiate its offerings from those of the established RPA software vendors, and its founders are clearly unhappy with the company’s past positioning on the majority of analyst quadrants. The company aimed to achieve a turnaround in analyst mindset by holding a relatively intimate event with a high level of interaction in the setting of the Maldives. No complaints there!

The company is also using “shapes” rather than numbers to designate succeeding versions of its software – quirky, and potentially incomprehensible downstream.

However, these marketing actions are probably insufficient in themselves. To complement the merits of its software, the company needs to improve its messaging to its prospects and channel partners in a number of ways:

  • Firstly, the company’s tagline “reimagining, rethink, recreate” reflects the founders’ backgrounds and is arguably more suitable for a services company than for a product company
  • Secondly, establishing an association with Intelligent Automation 2.0 and RPA 2.0 is probably too incremental to attract serious attention.

Here the company needs to think big and establish a new paradigm to signal a significant move beyond, and differentiation from, traditional RPA.

]]>
<![CDATA[A First Look at Blue Prism’s New RPA Initiatives]]>

 

Today’s announcement from Blue Prism covers new product capabilities, new service and design support services, and a new go-to-market framework that underscores the importance of automation as a means to enable legacy organizations to compete with 'born-digital' startups. Blue Prism’s announcement is equal parts perspective, product, and process. Let’s examine each in turn.

Perspective

The perspective Blue Prism is bringing to the table today is the notion of empowering digital entrepreneurs within an organization (under the flag ‘connected RPA’) with the intent of either disruption-proofing that organization or at least enabling self-disruption as part of a deliberate strategy.

In Blue Prism’s view, this is best accomplished through a package of three organizational automation design concepts. The first is the federation of the center of excellence concept – which is not to say that existing CoEs are obsolete, but rather that they now serve as a lighthouse for other disciplinary CoEs within, for example, finance, production, and customer care. Pushing more organizational automation authority and responsibility outward into the organization, in Blue Prism’s view, enables legacy organizations to begin acting more like ‘born-digital’ disruptors.

The second such principle, enabled by the first, is the concept of significantly accelerating the process of moving from proof of concept to at-scale testing to enterprise deployment. Again, the company positions this as a means to emulate born-digital firms and build both proactive and reactive organizational change speed through rapid automation technology deployment.

And third, Blue Prism is emphasizing the value of peer-to-peer interaction among organizational automation executives, a plank of its strategy that is being served through the rollout of Blue Prism Community – an area in Blue Prism Digital Exchange for sharing best practices and collaborating on automation challenges.

Product

The product announcements supporting this new go-to-market perspective include a process discovery capability, which will be available on the Blue Prism website. For those readers who recall seeing Blue Prism announce a partner relationship with Celonis in September of 2018, this may come as a surprise, but the firm has every intention of maintaining that relationship; this new software offering is intended as a lighter process exploration tool with the ability to visualize and contextualize process opportunities.

Blue Prism is careful to distinguish here between process discovery – the identification of processes representing a good fit for automation – and process mining, a deeper capability offered by Celonis that includes analysis of the specific stepwise work done within those processes.

Blue Prism also announced today the opening of its London-based Blue Prism AI Research Lab and an accompanying AI roadmap strategy, which focuses on three areas: understanding and ingesting data in a broader variety of formats, simplifying automation design, and improving the relationship between humans and digital workers in assisted automations.

In addition, in an effort to put its expanded product set in the hands of more organizations, Blue Prism is also going to open up access to its RPA software, making it easy for people to get started, learn more, and explore what’s possible with an intelligent digital workforce.

Process

Finally, the process of engaging Blue Prism is changing as well. The company has established, through its experience in deployments, that the early stages of organizational automation initiatives are critical to the long-term success of such efforts, and has staged more support services and personnel into this period in response. Far from being a rebuke of channel partner efforts, this packaged service offering will actually increase the need for delivery partner resources ‘on the ground’ to service customers’ automation capabilities.

Blue Prism’s own customer success and services organization will offer to provide Blue Prism expertise into customer programs through a series of pre-defined interventions that complement and augment the customers’ and partners’ efforts. The offering, entitled Success Accelerator, is designed around Blue Prism’s Robotic Operating Model (ROM), the company’s design and deployment framework. The intent of this new product is to accelerate and accentuate client ROI by establishing sound automation delivery principles based on lessons Blue Prism has learned in its deployment history to date.

Summary

Blue Prism’s suite of product, process and perspective announcements today underscore an emerging trend in the sector – namely, the awareness that automation offers real improvements in organizational speed and agility, two characteristics that will be important for legacy organizations to develop if they are to compete with fast, reactive, born-digital disruptive startups.

The connected RPA vision that Blue Prism has outlined highlights the evolving power of automation. It extends beyond the limits of traditional RPA, giving users a compelling automation platform which includes AI and cognitive features. Furthermore, the new roadmap, capabilities, and features being introduced today enable Blue Prism’s growing community of developers, customers, channel partners, and technology alliances.

]]>
<![CDATA[6 Ways to Prepare for Cognitive Automation During RPA Implementation]]>

 

2017 brought a surge of RPA deployments across industries, and in 2018 that trend has accelerated as more and more firms begin exploring the many benefits of a digital workforce. But even as some firms are just getting their RPA projects started, others are beginning to explore the next phase: cognitive automation. And a common challenge for firms is the desire to begin planning for a more intelligent digital workforce while automating simpler rule-based processes today.

Having spoken with organizations at different stages of their journeys from BI to RPA and on to cognitive, we have identified tasks that companies can begin during RPA implementation to ensure that they are well positioned for the machine learning-intensive demands of cognitive automation:

Design insight points into the process for machine learning

Too often, the concept of STP (straight-through processing) gets conflated with the idea of measuring task automation only on completion. But for learning platforms, it is vital to understand exactly where variance and exceptions arise in the process – so allow your RPA platform to document its progress in detail from task inception to task completion.

At each stage, provide a data outlet to track the task’s variance on a stage-by-stage basis. A cognitive platform can then learn where, within each task, variance is most likely to arise – and it may be the case that the work can be redesigned to give straightforward subtasks to a lower-cost RPA platform while cognitive automation handles the more complex subtasks.
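
A minimal way to create such insight points is to wrap each process stage so that it emits a structured record of its outcome. The sketch below illustrates the idea; the decorator, log format, and stage name are hypothetical rather than tied to any particular RPA platform.

```python
# Minimal sketch of stage-by-stage instrumentation: each stage emits a record
# so a learning platform can later see where variance and exceptions cluster.
import json
import time

def instrumented(stage_name, log_path="task_stages.jsonl"):
    """Hypothetical decorator recording an outcome record per process stage."""
    def wrap(fn):
        def inner(task_id, *args, **kwargs):
            record = {"task": task_id, "stage": stage_name, "ts": time.time()}
            try:
                result = fn(task_id, *args, **kwargs)
                record["outcome"] = "ok"
                return result
            except Exception as exc:
                record["outcome"] = "exception"
                record["detail"] = str(exc)
                raise
            finally:
                with open(log_path, "a") as f:
                    f.write(json.dumps(record) + "\n")
        return inner
    return wrap

@instrumented("validate_invoice_fields")
def validate(task_id, invoice):
    ...  # the stage's actual logic goes here
```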

Build a robot with pen & paper first

One of the basic measures for determining whether a process can be managed by BPM, by RPA, or by cognitive automation is the degree to which it can be expressed as a function of rigorous rules. So, begin by building a pen-and-paper robot – a list of the rules by which a worker, human or digital, is expected to execute against the task.

Consider ‘borrowing’ an employee with no familiarity with the involved task to see if the task is genuinely as straightforward and rule-bounded as it seems – or whether, perhaps, it involves a higher order of decision-making that could require cognitive automation or AI.
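
One way to make the pen-and-paper robot concrete is to express the task purely as condition/action rules and check whether every case resolves without a judgment call. The rule set below is hypothetical and purely illustrative.

```python
# A hypothetical "pen-and-paper robot": the task expressed purely as rules.
# If every case resolves deterministically, the task suits RPA; cases that
# fall through the rules are the signal that cognitive automation is needed.
from types import SimpleNamespace

RULES = [
    ("invoice.amount <= 1000 and invoice.po_matched", "auto_approve"),
    ("invoice.amount > 1000 and invoice.po_matched", "route_to_supervisor"),
    ("not invoice.po_matched", "return_to_vendor"),
]

def decide(invoice) -> str:
    for condition, action in RULES:
        if eval(condition, {}, {"invoice": invoice}):  # toy evaluation only
            return action
    # Falling through means the rule list is incomplete: a human would have
    # used judgment here, which is the cognitive-automation signal
    return "undefined: requires judgment"

print(decide(SimpleNamespace(amount=800, po_matched=True)))  # auto_approve
```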

Use the process to revisit the existing work design

In many organizations, tasks have ‘grown up’ inorganically around inputs from multiple stakeholders and have been amended and revised on the fly as the pace of business has demanded. But the migration first to RPA and then on to cognitive automation is a gift-wrapped opportunity to revisit how, where, and when work is done within an organization.

Can key task components be time-shifted to less expensive computing cycles overnight or on weekends? Can whole tasks be re-divided into simpler and more complex components and allocated to the lowest-cost tool for the job?

Dock the initiative with in-house ML & data initiatives

Cognitive automation does not have to remain isolated to individual task areas or divisions within an organization. Often, ML initiatives produce better results when given access to other business areas to learn from. What can cognitive automation learn about customer service tasks from paying a ‘virtual visit’ to the manufacturing floor via IoT? Much, potentially: if specific products or parts are difficult to machine to tolerance within an allowed margin of error, they may be more common sources of customer complaints and RMAs.

Similarly, a credit risk-scoring ML platform can learn from patterns of exception management in credit applications being managed in a cognitive automation environment. For ML initiatives, enabling one implementation to learn from others is a key success factor in producing ‘brilliant’ organizational AI.

Revisit the organizational data hygiene & governance models

Data scientists will be the first to underscore the importance of introducing clean data into any environment in which decision-making will be a task stage. Data with poor hygiene, and with low levels of governance surrounding the data cleaning and taxonomy management function, will create equally poor results from cognitive automation technology that utilizes it to make decisions.

Cognitive software is no different from humans in this respect: garbage in, garbage out, as the old saying goes. As a result, a comprehensive review of organizational data hygiene and governance models will pay dividends down the road in cognitive work.

Discuss your vendor’s existing technology & roadmap in cognitive & AI

Across the RPA sector, cognitive is a central concept for most vendors’ 2018-2020 roadmaps. Scheduling a working session now on migrating the organization from RPA to cognitive automation provides clients with insight on their vendor’s strengths and capability set. It also enables vendors to get a close look at ‘on the ground’ cognitive automation needs in different organizational task areas.

That’s win/win – and it helps ensure that an existing investment in vendor technology is well-positioned to take the organization forward into cognitive based on a sound understanding of client needs.

 

NelsonHall conducts continuous research into all aspects of RPA and AI technologies and services as part of its RPA & Cognitive Services research program. A major report on RPA & AI Technology Evaluation by Dave Mayer has just been published, and coming soon is a major report on Business Process Transformation through RPA & AI by John Willmott. To find out more, contact Guy Saunders.

]]>
<![CDATA[Application of RPA & AI to Unstructured Data Processing: The Next Big Milestone for Shared Services]]>

 

Shared Services Centers (SSCs) have made progress in the initial application of RPA, gained some experience in its application, and are typically now looking to scale their use of RPA widely across their operations. However, although organizations have often undertaken some level of standardization and simplification of their processes to facilitate RPA adoption, one stumbling block that still frequently inhibits greater levels of automation and straight-through processing is an inability to process unstructured data. And this is limiting the value organizations are currently able to realize from automation initiatives.

NelsonHall recently interviewed 127 SSC executives across industries in the U.S. and Europe to understand the progress made in adopting RPA & AI, along with their satisfaction levels and future expectations. To quote from one executive interviewed, “I think the main strategy in the past has been to avoid unstructured data or pre-process it to make it structured. Now we are beginning to embrace the challenge of unstructured data and are growing an internal understanding of how to piece together automation.”

Low Satisfaction in Handling Unstructured Data Widespread in SSCs

This is an important next step. Unstructured data remains rife in organizations within customer and supplier emails and documents, with supplier invoices, for example, taking on a myriad of supplier-dependent formats, and with handwritten material far from extinct within customer applications.

This need to process unstructured data impacts not just mailroom document management, but a wide range of shared services processes. By industry sector, the processes that have a combination of high levels of unstructured data and a significant level of dissatisfaction with its capture and processing are:

  • Retail & Commercial Banking: new account set-up and customer service
  • P&C Insurance: fraud detection, claims processing, mailroom document management, policy maintenance, and customer service
  • Telecoms: customer service.

Within finance & accounting shared services, the same issues are found within supplier & catalog management, purchase invoice processing, and 3-way matching.

So, it is highly important that SSCs get to grips with handling unstructured documents and data within these process areas. However, this is unknown territory for many SSCs; they are typically in the early stages of automating the handling of unstructured data and lack expertise in effectively identifying and implementing suitable technologies. In addition, SSCs often lack the necessary experience in process change management, and in the speed of process change required, when handling RPA & AI projects. Indeed, SSCs have often struggled in the early stages of automation with the challenge of realizing the expected cost savings from this technology. Applying automation is one thing; realizing its benefits through effective process change management, and ensuring that unexpected exceptions don’t derail the process and the associated cost realization, has sometimes been a significant issue.

Combining OCR & Machine Learning is Critical to Processing Unstructured Data

Accordingly, it is critical that SSCs now automate data classification and extraction from their unstructured documents. At present, 80% of SSCs across sectors are still manually classifying documents, with OCR only used modestly and not to its full potential. However, there are strong levels of intention to adopt OCR and RPA & AI technologies in support of processing unstructured data within SSCs during 2018 and 2019, as shown below:

 

SSCs are considering a broad range of technologies for processing unstructured data, with OCR clearly a key technology, but further supported by machine learning in its various forms for effective text classification and extraction. To quote from one executive interviewed, “We want to speed up deployment of automation within the mailroom, we want more OCR and natural language processing in place.”
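
As a minimal sketch of how these technologies combine, the example below runs OCR on a scanned document and feeds the extracted text to a machine-learning classifier. It assumes pytesseract and scikit-learn, with toy training data, and is not any particular vendor’s pipeline.

```python
# Minimal sketch of OCR feeding machine-learning text classification
# (assumes pytesseract and scikit-learn; illustrative, not a vendor pipeline).
import pytesseract
from PIL import Image
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Train a document-type classifier on previously labelled OCR output (toy data)
train_texts = ["invoice number 4711 total due ...",
               "claim form policyholder reference ..."]
train_labels = ["invoice", "claim"]
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)

def classify_scan(path: str) -> str:
    text = pytesseract.image_to_string(Image.open(path))  # OCR step
    return clf.predict([text])[0]                         # classification step
```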

Need for Improved Turnaround Times Now the Main Driving Force

However, in terms of benefits achievement, there is currently quite a significant difference between organizations’ current automation aspirations and what they have already achieved. While organizations placed a high initial emphasis within their automation initiatives on cost savings, and the achievement of cost savings remains very important to SSCs, the focus of executives within SSCs has now increasingly turned to improving process turnaround times.

Within the telecoms sector, this leads to a high expectation of improved customer satisfaction. However, executives within property & casualty and finance & accounting SSCs tend to attach an equal or higher importance to the impact of these technologies on employee satisfaction – by automating some of the least satisfying types of work within the organization, thus allowing personnel to focus on more value-added aspects of the process (i.e. other than finding and entering data from customer documents and invoices).

The principal benefits sought by SSCs from implementing RPA & AI in support of processing of unstructured data are shown below:

 

70% of SSCs Highly Likely to Purchase Operational Service Covering Unstructured Data Processing

While automation is often depicted as having an adverse impact on the outsourcing industry, the reality is often quite the opposite, and organizations seek help in effectively deploying new digital technologies. Indeed, this is certainly the case with unstructured data processing.

SSCs will tend to implement unstructured data handling in-house where the information being handled is highly sensitive, where security is critically important, and where regulation or the set-up of internal systems inhibits use of a third-party service. Elsewhere, where these constraints do not apply, SSC executives express a high level of intent to purchase external services: ~70% of SSCs are highly likely to purchase operational services for document processing, including document classification and extraction of unstructured data, while only a minority express a high intent to implement in-house or via a systems integrator.

 

NelsonHall conducts continuous research into all aspects of RPA and AI technologies and services as part of its RPA & Cognitive Services research program. A major report on RPA & AI Technology Evaluation by Dave Mayer has just been published, and coming soon is a major report on Business Process Transformation through RPA & AI by John Willmott. To find out more, contact Guy Saunders.

]]>
<![CDATA[7 Essential Tasks Prior to Any RPA Implementation]]>

 

With every new software release from RPA sector leaders, there is always much to be excited about as vendors continue to push the technological boundaries of workplace automation. Whether those new capabilities focus on cognition, or security, or scalability, the technology available to us continues to be a source of inspiration and innovative thinking in how those new capabilities can be applied.

But success in an RPA deployment is not entirely dependent just on the technology involved. In fact, the implementation design framework for RPA is often just as important – if not more so – in determining whether a deployment is successful. Install the most cutting-edge platform available into a subpar implementation design framework, and no amount of technological innovation can overcome that hindrance.

With this in mind, here are seven tasks that should be part of any RPA implementation plan before organizations put pen to paper to sign up with an RPA platform vendor.

Create a cohesive vision of what automation will achieve

Automation is the ultimate in strict interpretation: it does precisely as it’s told, at speed, and in volume. But it must be pointed at the right corporate challenges, with a long-term vision for what it is (and is not) expected to do in order to be successful in that mission. That process involves asking some broad-ranging questions up-front:

  • What stakeholders are involved – internally and externally – in the automation initiative?
  • What are our organization’s expectations of the initiative?
  • How will we know if we have succeeded or failed?
  • What metrics will drive those assessments?
  • Where will this initiative go next within our organization?
  • Will we involve our supply chain partners or technology allies in this process?

Ensure a staff model that can scale at the speed of enterprise automation

We tend to spend so much time talking about FTE reduction in the automation sector that we overlook the very real issue of FTE sourcing (in volume!) in relation to the implementation of automation at enterprise scale. Automation needs designers, coders, project managers, and support personnel, all familiar with the platform and able to contribute new code and thoughtware assets at speed.

Some vendors are addressing this issue head-on with initiatives like Automation Anywhere University, UiPath Academy, and Blue Prism Learning and Accreditation, and others have similar initiatives in the works. It is also important that organizational HR professionals be briefed on the specific skillsets necessary for automation-related hires; this is a relatively new field, and partnering up-front on talent acquisition can yield meaningful benefits down the road.

Plan in detail for a labor outage

The RPA sector is rife with reassurances about digital workers: they never go on strike; they don’t sleep or require breaks; they don’t call in sick. But things do go wrong. And while RPA vendors offer impressive SLAs with respect to getting clients back online quickly, sometimes it’s necessary to handle hours, or even days, of automated work manually. Having mature high-availability and disaster recovery capability built into the platform – as Automation Anywhere included in Enterprise Release 11 – mitigates these concerns to a degree, but planning for the worst means just that.

Connect with the press and the labor community

Don’t skip this section because it sounds like organized labor management only, although that’s a factor too. Automation stories get out, and local and national press alike are eager to cover RPA initiatives at large organizations. It’s a hot-button topic and an easily accessible story.

Unfortunately, it’s also all too easy to take an automation story and run with the sensationalist aspects of FTE displacement and cost reduction. By interacting with journalists and labor leaders in advance of launching an automation initiative, you’re owning the story before it can be owned elsewhere in the content chain.

Have a retraining and upskilling initiative parallel to your automation CoE

Automation can quickly reduce the number of humans necessary in a work area by half or even more. What is your organization’s plan for redeployment of that human capital to other, higher-value tasks? Who occupies those task chairs now – and what will they be doing?

Once the task of automation deployment is complete, there is still process work to be done in finding value-added work for humans who have a reduced workload due to automation. Some organizations are finding and unlocking new sources of enterprise value in doing so – for example, front-line workers who have their workloads reduced through automation can often ‘see the forest’ better and can advise their superiors on ways to streamline and improve processes.

Similarly, automation can bring together working groups on tasks that have connected automations between departments, allowing for new conversations, strategies, and processes to take shape.

Have an articulation plan for RPA and other advanced technologies

RPA and cognitive automation do more than improve the quality and consistency of work – they also improve the quality and consistency of task-related data. That is an invaluable characteristic of RPA from the organizational data and analytics perspective, and one that is often overlooked in the planning process.

While it might take days for a service center to spot a trend in common product complaints, RPA platforms could see the same trend in hours, combine that data in an organizational data discovery environment with IoT data from the production line, and identify a product fault faster and more efficiently than a traditional workforce might. When designing an automation initiative, it is vital to take these opportunities into account and plan for them.

Create a roadmap to cognitive automation and beyond

RPA is no more a destination than business rules engines were, or CRM, or ERP. These were all enabling technologies that oriented and guided organizations towards greater levels of agility, awareness and capability. Similarly, deploying RPA provides organizations with insight into the complexity, structure and dependencies of specific tasks. Working towards task automation yields real clarity, on a workflow-by-workflow basis, of what level of cognition will be necessary to achieve meaningful automation levels.

While many tasks can be achieved by current levels of vendor RPA capability, others will require more evolved cognitive automation, and some will be reserved for the future, when new AI capabilities become available. By designating relevant work processes to their automation ‘containers’, an enterprise roadmap to cognitive automation and AI begins to take shape.

]]>
<![CDATA[7 Predictions for RPA in 2018]]>

 

The RPA sector is defined as one of rapid technological evolution, and every year it seems like what we thought to be bleeding-edge capability in January turns out to be proven and deployed technology long before year’s end. With this rapid pace of growth and maturation in mind, where might the RPA sector be by the end of 2018? Here are seven predictions.

The first wave of automation-inclusive UI design

To date, RPA has been adaptive in nature – automation software has done the interpretive labor to ‘see’ the application screen as humans do. But as more and more repetitive-task work becomes automated, software designers will begin taking the strengths and weaknesses of computer vision into account in designing applications that will be shared between human and digital workers. This will show up in small ways at first, particularly in interface areas that are challenging for RPA software to learn quickly, but over the course of 2018, ‘hybrid workforce UI design’ will become a new standard for enterprise software vendors.

Process mining makes RPA more accessible for midmarket & emerging large market segments

Early adopters of RPA have already established that detailed process mapping is key to successful task automation across the extended enterprise. For Fortune 1000 firms, that can be fairly straightforward, with retained consulting and systems integration partners on hand to assist in the process of mapping task flows for RPA implementation. Smaller firms, however, don’t always have the luxury of engaging large consulting firms to assist in this process – so vendors developing their own automated process mapping technology, or partnering with third-party providers like Celonis, will find demand booming in the midmarket.

Human skill bottleneck hits providers without education/certification plans

It’s ironic that human skill capital will end up as the limiting factor in the growth rate of successful RPA implementations, but 2018 will close with a clear shortage of qualified automation designers and deployment management professionals. Those organizations (like UiPath, Blue Prism, and Automation Anywhere) that saw this coming early on and established academic settings for the education and certification of on-platform skilled practitioners, will thrive. But those lacking these programs may find themselves in a skill bottleneck in the market – one that will begin to materially inhibit growth.

RPA becomes a designed-in factor for disruptors

In conversations I had with organizations implementing RPA during 2H17, one common factor came to the fore: that their initial FTE rationalization gains had already been realized, and going forward, they were looking to RPA as a means to manage significant growth in their operations.

For organizations coming to market as disruptors, this trend is even more pronounced, and organizations with designs on being disruptive forces are increasingly building automation capabilities into their growth plans from the ground up. Building an organization on a foundation of a hybrid human-digital workforce is a different endeavor entirely from retrofitting an existing company with automation – and as a result, we should begin seeing some real innovation in organizational design beginning this year.

Japan becomes the adoption template geo for big bets

To date, Japan has produced some of the largest implementations of RPA, with UiPath’s late 2017 deployment at SMBC pushing the envelope still further. Japan is betting big on RPA to become a sustainable source of competitive differentiation, and as more large organizations there implement large-scale RPA projects, the best practices library for RPA deployment at scale will expand in kind.

Companies worldwide looked to Japan for guidance in implementing robotics once before, during the rise of robotic manufacturing in the automotive sector. 2018 will see a second such wave.

RPA proves its case as a source of compliance gains

RPA has been marketed with a number of different value creation characteristics already, with the obvious cost reduction and quality improvement factors taking center stage. But RPA has significant benefits to offer organizations in regulated industries, most notably the ability to secure access to sensitive information, systematize the process of accessing and modifying that information, and standardize the documentation and audit logging work associated with it.

2018 will be the year that organizations begin to see meaningful returns from adopting RPA as a solution to compliance task challenges.
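
The compliance argument rests on bots producing uniform audit trails as a by-product of doing the work. The sketch below illustrates the pattern; the function names, log format, and stubbed system-of-record call are all hypothetical.

```python
# Sketch: every change to sensitive data goes through one narrowly-scoped
# function that writes a uniform audit record as it works (all names
# hypothetical).
import getpass
import json
import time

AUDIT_LOG = "audit.jsonl"

def apply_update(record_id: str, fields: dict) -> None:
    ...  # hypothetical system-of-record call, stubbed for illustration

def audited_update(record_id: str, fields: dict, credential: str) -> None:
    entry = {
        "ts": time.time(),
        "actor": credential,               # the bot's narrowly-defined credential
        "host_user": getpass.getuser(),
        "record": record_id,
        "fields_changed": sorted(fields),  # which fields changed, never the values
    }
    apply_update(record_id, fields)
    with open(AUDIT_LOG, "a") as f:        # append-only, uniform audit trail
        f.write(json.dumps(entry) + "\n")
```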

Demand for specialist implementation navigators grows significantly

RPA implementation has been a partnered endeavor since the technology first arrived on the scene, with software vendors allying themselves closely with large consulting firms and systems integrators to optimize their client deployments. But demand is emerging for focused, automation-centric services, and right on time, the industry is seeing a surge of new RPA specialist service providers like Symphony and Agilify.

As buying organizations begin to ask more of their new – or revamped – RPA implementations, demand for these providers’ services will grow swiftly during 2018.

]]>
<![CDATA[Intelligent Automation Summit Takeaways: Four Alternative Gain Frameworks for RPA]]>

 

At the Intelligent Automation (IA) event in New Orleans, December 6-8, snow in the Big Easy air was not the only surprise. As expected, there was plenty of technological innovation on show in the exhibition hall, but the event also played host to some energized discussions on human-centric gains to be realized from RPA implementation – suggesting that we are indeed moving into the next phase of considering automation holistically in the enterprise.

Specifically, many presentations and conversations shared a theme of human enablement within the enterprise – positioning the organization for greater long-term success, rather than focusing on the short-term fiscal gains of reductions in force and reduced cost to serve specific processes. Here are four automation gain frameworks I took away from the event that are focused on areas other than raw FTE reduction.

Automation as a disruption buffer

‘Disrupt or be disrupted’ has become a mantra for many change management executives across industries, and it was invoked numerous times during the IA event in relation to automation’s role as a buffer to disruptive change – in both directions. An automated workforce can quickly scale up (or down) as needed without costly and time-consuming facility management and workforce rationalization tasks. While there was some discussion regarding the downside containment role of RPA, far more participants at the event were looking to RPA as a tool to effectively manage explosive growth in their sectors.

Automation as a ‘hazmat bot’

The idea of using bots to handle sensitive processes and data emerged as a strong theme for the near-term RPA sector roadmap. Where bots were once less trusted with data in ‘low-touch’ environments in highly-regulated industries like BFSI and healthcare, the dialog is beginning to turn in favor of sending bots, rather than humans, to touch and manipulate that data.

The rationale is sound: bots can be coded with very narrowly-defined rights and credentials, self-document their own work without exception, and produce their own audit trails. Expect to see this trend gain steam in 2018 and beyond. ‘We send bots into nuclear reactors and onto other planets,’ one attendee told me. ‘We treat the data core in card issuance with no less of a hazmat perspective – where we can minimize human contact, we will, for everyone's benefit.'

Automation as a workflow stress diagnostic

The very process of automating workflows within the organization produces a wealth of usable data, and nowhere is that more evident than in analyzing those workflows for exception management stress points. In a given workflow, there are usually clearly defined and straightforward task components, and those that produce more than an average volume of exceptions. By mapping these workflows and using them to understand similar tasks in other areas of the organization, companies can leverage automation data to identify those phases of a workflow that are creating exception management stress for employees, and add support via process redesign, digitization, or assisted automation.
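
A minimal sketch of this kind of diagnostic: given per-stage execution logs (the JSON-lines format here is an assumption), rank stages by exception rate to surface the stress points worth redesigning or assisting.

```python
# Rank workflow stages by exception rate to locate exception-management
# stress points (log format is an assumption: one JSON record per stage run,
# e.g. {"stage": "match_po", "outcome": "exception"}).
import json
from collections import Counter

total, failed = Counter(), Counter()
with open("task_stages.jsonl") as f:
    for line in f:
        rec = json.loads(line)
        total[rec["stage"]] += 1
        if rec["outcome"] == "exception":
            failed[rec["stage"]] += 1

for stage in sorted(total, key=lambda s: failed[s] / total[s], reverse=True):
    rate = failed[stage] / total[stage]
    print(f"{stage}: {rate:.1%} exceptions ({failed[stage]}/{total[stage]})")
```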

Automation as human capital churn ‘coolant’

Related to the previous point is the idea that RPA is beginning to serve as a very real source of ‘coolant’ for burnout-prone repetitive task areas in the organization by continuously separating work into automation-relevant and human-relevant components. Eliminating the most burnout-causing task stages from the human workday reduces the proclivity for turnover and the total cost to the organization of managing the human side of the workforce.

Summary

Productivity, quality, and fiscal gains are often the first three topics of conversation when organizations discuss launching an RPA initiative. But automation has much more to offer, not only to the organizational bottom line, but to the human employees in the enterprise as well. As this sector’s technology offerings evolve and mature, so too do the use cases and benefit frameworks within customer organizations.

]]>
<![CDATA[In RPA Deployment, Slow Down... To Go Faster]]>

 

RPA software offers users the tantalizing possibility of being able to simply 'hit record and go' at the beginning of an enterprise automation initiative. But organizations that are seeing the greatest returns are slowing the initial process down, and framing their initiatives as they would treat any major technology migration.

At UiPath’s recent User Summit in New York City, one of the hottest topics was the right pace of RPA implementation, with UiPath’s customer and partner panels devoting a considerable amount of time to the topic. And the message was clear: RPA is a technology that encourages an implementation rate faster than the customer might want to sign up for.

That very idea is a strange one for most veteran IT and business executives, who are used to IT project implementations going slower than expected, with fiscal returns further in the future than they might have hoped. So when a technology like RPA does come along that promises to enable users to ‘hit record and go’, why shouldn’t beleaguered line of business heads take those promises at face value and get moving with automation today? After all, automation is often part of a larger digital transformation initiative, with expectations that projects will be self-funding through savings. Shouldn’t technologies, like RPA, that generate material cost reductions be implemented as quickly as possible?

It’s a fair question. But there are four simple reasons why RPA projects should still be managed in a stepwise fashion, like any other IT or business project:

  • Technical debt mounts quickly in too-quick RPA implementations. The ‘hit record and go’ philosophy might offer some minimal return in a short period of time, but federating the automation creation process means that multiple users often create similar automations for similar tasks, wasting time and resources consumed later in consolidating different versions of the same robot down to a single bot. In addition, individual users often create related-task bots based on their original automation scripts, multiplying the task of bot consolidation later. Often, organizations find that they have to start over completely, and only then do they undertake a more formal approach
  • Installing RPA through a traditional project framework brings stakeholders together. Automation is a technology that has the potential to bring IT and business stakeholders together in an enterprise service delivery partnership – or drive them apart with turf battles and finger-pointing. Establishing rules up front for which business units should be involved in automation design, which in automation coding, which in automation governance, and which in automation innovation establishes ground rules that all parties involved can respect and buy into for the long term, avoiding larger-scale conflict that can emerge when the process is entered into too quickly up front
  • Designing for scale demands both innovation and centralization. As automation demand scales both in terms of breadth of services within the organization and the number of workers involved, the need for centralization of automation design and deployment increases commensurately. Innovation can actually proceed faster in many organizations being managed from a CoE or automation ‘lighthouse’ than through trial and error at the desktop level. Add in the additional demands on automation systems that result from global organizations demanding localized automations and in-language service, and that scale factor becomes a critical component in achieving peak fiscal return from an RPA initiative
  • Most RPA providers rely on integration partners for ‘right-speed’ deployment and support. Across the RPA sector, strong partnerships have evolved between RPA software developers and major integrators and consulting service providers, and for good reason – the latter bring experience in change management, process design, and implementation at scale to the former’s technological innovations. This has quickly become a proven combination, and one that is returning significant fiscal and operational value to enterprise-scale organizations. Short-circuiting that value return chain by cutting partner perspective and capability out of the equation might again save some dollars and time in the short run, but will end up being more costly as RPA is scaled up.

RPA presents IT and business leaders with an alluring combination of immediacy of access, significant potential fiscal returns, and low to non-existent stack requirements on deployment. Organizations that have jumped into the deep end of enterprise automation from the ‘hit record and go’ perspective might see some immediate fiscal returns, but ultimately, they are selling short the full promise of professionally-managed automation projects executed in partnership between lines of business and IT. Providers like UiPath that are emphasizing speeding up implementation are doing so with a structured framework in mind – so that once the process is designed for scale, and implementation rules and procedures are put in place, the actual software component of the solution can proceed into deployment as quickly as possible.

But in the end, a few additional weeks or even months spent in up-front work can better enable enterprise-level organizations to achieve their peak automation return. Moreover, this approach saves costly rework and redesign stages that inevitably stretch a ‘hit record and go’ implementation out to the same project timeline, or often much longer, than a more structured approach. As strange as it may sound, the best practices in RPA deployment involve slowing down… in order to go faster. 

]]>
<![CDATA[Nvidia Draws on Gaming Culture to Compete for AI Chip Leadership]]>

 

Nvidia faces stiff new competition for the leadership position in the AI processing chip market. But the firm has a significant competitive advantage: a culture of innovation and production efficiency that was developed to address the demanding needs of a wholly different market.

Intel and Google have been making waves in the AI processing chip market, the former with the acquisitions of Nervana Systems and Mobileye, the latter with the new Tensor Processing Unit (TPU) announcement. Both are moves intended to compete more directly with Nvidia in the burgeoning market for AI processing chips.

James Wang of investment firm ARK recently set forth his long-term bet on the industry – and it favors Nvidia. Wang posits that products like TPU will be less efficient than Nvidia GPUs for the foreseeable future, arguing that “…until TPUs demonstrate an unambiguous lead over GPUs in independent tests, Nvidia should continue to dominate the deep-learning data center.”

Wang is right, but his opinion may not actually go far enough in explaining why Nvidia should enjoy a sustainable advantage over other relative newcomers, despite their resources and experience in chipbuilding. That advantage, by the way, doesn’t have a thing to do with Google’s chip fabrication expertise, or Intel’s understanding of the needs of the AI market. It’s a deeper factor that’s seated firmly in Nvidia’s culture.

Cutting-edge engineering & savvy pricing: key strengths forged in the gaming cauldron

By the time 2017 dawned, Nvidia owned just over three-quarters of the graphics card segment (76.7%), compared with main competitor AMD’s roughly one-quarter (23.2%). But that wasn’t always the case. In fact, for much of the past decade, Nvidia held an uncomfortable leadership position in the marketplace against AMD, sometimes leading by as few as ten points of market share (2Q10).

During that time, Nvidia understood that a misstep against AMD in bringing new products forth could yield the market leader position, and even send the company into an unrecoverable decline if gamers – a tough audience to say the least – lost confidence in Nvidia’s vision.

As such, Nvidia learned many of the principles of design thinking the hard way. They learned to fail fast, to find new segments in the market and exploit them – as they did with the GTX 970, a product that stunned the marketplace by being priced underneath its predecessor at launch – and to take and hold ground with innovation and rapid-cycle development. More importantly, they learned how to demonstrate value to a gamer community that wanted to buy long-term performance security when it was time for a hardware refresh. In short, they learned to understand the wants and needs of an extraordinarily demanding consumer public, in the form of gamers, and relentlessly squeezed their competition out with a combination of cutting-edge engineering and savvy segment pricing.

Much of the real-world output from that cultural core of relentless engineering improvement is the remarkable pace of platform efficiency that Nvidia has achieved in its GPU chips. The company maintained close ties with leading game publishing houses, and as a result kept clearly in mind what sort of processing speed – as well as heat output and energy draw – cutting-edge games were going to require. At multiple points in time, the standards for supporting new games have meaningfully advanced inside eighteen months. This often mandated that Nvidia turn over a new top-end GPU processing platform on a blistering production timeline.

In response, Nvidia turned to parallel computing, an ideal fit for GPUs, which already offered significantly more cores than their CPU cousins. As it turned out, Nvidia had put itself on the fast track to dominating the AI hardware market, since GPUs are far better suited for applications, like AI, that demand computing tasks work in parallel. In serving one market, Nvidia built a long-term engineering and fabrication roadmap nearly perfectly suited for another.

The competition is hot, but is Nvidia poised to win?

Fast forward to 2017, and some are questioning whether Nvidia is in the fight of its life now with new, aggressive competitors seeking to take away part – or all – of its AI GPU business. While Wang has pushed his chips into the center of the table on Nvidia, others are unconvinced that Nvidia can hold its lead – especially with fifteen other firms actively developing Deep Learning chips. That roster includes such notable brands as Bitmain, a leading manufacturer of Bitcoin mining chips; Cambricon, a startup backed by the Chinese government; and Graphcore, a UK startup that hired a veritable ‘who’s who’ of AI talent. 

There’s no shortage of innovation and talent at these organizations, but hardware is a business that rewards sustained performance improvement over time at a steadily reducing cost per incremental GFLOPS (where one GFLOPS is one billion floating-point operations per second). The first of these components is certainly an innovation-centric factor, but the second rewards organizations that have kept pace not only with the march of performance demands, but also with the need to justify hardware refresh through lower operating costs. Given that this is an area where Nvidia shines, as a function of its cultural evolution under identical circumstances in gaming, the sector’s long-term bet on Nvidia is the correct call.
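To make that second factor concrete, here is a minimal sketch (in Python, using entirely hypothetical prices and throughput figures, not actual Nvidia specifications) of how a buyer might compare successive GPU generations on cost per incremental GFLOPS:

    # Hypothetical figures for illustration only -- not actual pricing or specs.
    generations = [
        {"name": "Gen N",   "price_usd": 700, "gflops": 5000},
        {"name": "Gen N+1", "price_usd": 750, "gflops": 9000},
    ]

    prev = None
    for gpu in generations:
        if prev:
            extra_gflops = gpu["gflops"] - prev["gflops"]
            extra_cost = gpu["price_usd"] - prev["price_usd"]
            # Cost per incremental GFLOPS: what the buyer pays for each
            # additional billion floating-point operations per second.
            print(f'{gpu["name"]}: ${extra_cost / extra_gflops:.3f} per incremental GFLOPS')
        prev = gpu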

 

Dave Mayer is currently working on a major global project evaluating RPA & AI technology. To find out more, contact Guy Saunders.

]]>
<![CDATA[Amelia Enhances its Emotional, Contextual, and Process Intelligence to Outwit Chatbots]]>

IPSoft's Amelia

 

NelsonHall recently attended the IPSoft analyst event in New York, with a view to understanding the extent to which the company’s shift into customer service has succeeded. It immediately became clear that the company is accelerating its major shift in focus of recent years from autonomics to cognitive agents. While IPSoft began in autonomics in support of IT infrastructure management, and many Amelia implementations are still in support of IT service activities, IPSoft now clearly has its sights on the major prize in the customer service (and sales) world, positioning its Amelia cognitive agent as “The Most Human AI”, with a much greater range of emotional, contextual, and process “intelligence” than the perceived competition in the form of chatbots.

Key Role for AI is Human Augmentation Not Human Replacement

IPSoft was at pains to point out that AI was the future and that human augmentation was a major trend that would separate the winners from the losers in the corporate world. In support of this point, Nick Bostrom from the Future of Humanity Institute at Oxford University discussed the results of a survey of ~300 AI experts to identify the point at which high-level machine intelligence (the point at which unaided machines can accomplish any task better and more cheaply than human workers) would be achieved. This survey concluded that there was a 50% probability that this will be achieved within 50 years and a 25% probability that it will happen within 20-25 years.

On a more conciliatory basis, Dr. Michael Chui suggested that AI was essential to maintaining living standards and that the key role for AI for the foreseeable future was human augmentation rather than human replacement.

According to McKinsey Global Institute (MGI), “about half the activities people are paid almost $15tn in wages to do in the global economy have the potential to be automated by adapting currently demonstrated technology. While less than 5% of all occupations can be automated entirely, about 60% of all occupations have at least 30% of constituent activities that could be automated. More occupations will change than can be automated away.”

McKinsey argues that automation is essential to maintaining GDP growth and standards of living, estimating that of the 3.5% per annum GDP growth achieved on average over the past 50 years, half was derived from productivity growth and half from growth in employment. Assuming that growth in employment largely ceases as populations age over the next 50 years, an approximate doubling of automation-driven productivity growth will be required to maintain historical levels of GDP growth.
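The arithmetic behind that required doubling, using only the figures quoted above:

    # Historical average GDP growth split per McKinsey's estimate.
    historical_gdp_growth = 3.5                       # % per annum over 50 years
    productivity_share = historical_gdp_growth / 2    # 1.75% from productivity
    employment_share = historical_gdp_growth / 2      # 1.75% from employment growth

    # If employment growth falls to ~zero as populations age, productivity
    # growth alone must supply the full 3.5% -- roughly double its
    # historical contribution.
    required_productivity_growth = historical_gdp_growth
    print(required_productivity_growth / productivity_share)  # -> 2.0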

Providing Empathetic Conversations Rather than Transactions

The guiding principles behind Amelia are to provide conversations rather than transactions, to understand customer intent, and to deliver a to-the-point and empathetic response. Overall, IPSoft is looking to position Amelia as a cognitive agent at the intersection of systems of engagement, systems of record, and data platforms, incorporating:

  • Conversational intelligence, encompassing intelligent understanding, empathetic response, & multi-channel handling. IPSoft has recently added further machine learning and deep learning capabilities
  • Advanced analytics, encompassing performance analytics, decision intelligence, and data visualization
  • Smart workflow, encompassing dynamic process execution and integration hub, with UI integration (planned)
  • Experience management, to ensure contextual awareness
  • Supervised automated learning, encompassing automated training, observational learning, and industry solutions.

For example, it is possible to upload documents and SOPs in support of automated training, and Amelia will advise on the best machine learning algorithms to be used. Using supervised learning, Amelia submits what it has learned to the SME for approval, but only uses this new knowledge once approved by the SME, ensuring high levels of compliance. Amelia also learns from escalations to agents, and automated consolidation of these new learnings will be built into the next Amelia release.
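As a hedged illustration of such an approval gate (a generic sketch, not IPSoft’s implementation), new learnings can be quarantined until an SME signs them off:

    from dataclasses import dataclass, field

    @dataclass
    class KnowledgeBase:
        approved: list = field(default_factory=list)   # usable in live conversations
        pending: list = field(default_factory=list)    # awaiting SME sign-off

        def propose(self, learned_item: str):
            # New learnings (e.g. from uploaded SOPs or agent escalations)
            # are queued for review, never used directly.
            self.pending.append(learned_item)

        def review(self, item: str, sme_approves: bool):
            self.pending.remove(item)
            if sme_approves:
                self.approved.append(item)   # only now available to the agent

    kb = KnowledgeBase()
    kb.propose("Password resets require two-factor verification")
    kb.review("Password resets require two-factor verification", sme_approves=True)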

IPSoft is continuing to develop an even greater range of algorithms by partnering with universities. These algorithms remain usable across all organizations, with the introduction of customer data leading to the development of client-specific customer service models.

Easier to Teach Amelia Banking Processes than a New Language

An excellent example of the use of Amelia was discussed by a Nordic bank. The bank initially applied Amelia to its internal service desk, starting with a pilot in support of 600 employees in 2016 covering activities such as unlocking accounts and password guidance, before rolling out to 15,000 employees in Spring 2017. This was followed by the application of Amelia to customer service, with a silent launch taking place in December 2016 and Amelia being rolled out in support of branch office information, booking meetings, banking terms, products and services, mobile bank IDs, and account opening. The bank had considered using offshore personnel, but chose Amelia based on its potential ability to roll out in a new country within a month and its 24x7 availability. Amelia is currently used by ~300 customers per day over chat.

The bank was open with its customers about its use of AI, indicating on its website that its new chat stream was based on the use of “digital employees with artificial intelligence”. The bank found that while customers, in general, seemed pleased to interact via chat, use of AI unexpectedly led to totally new customer behaviors, both good and bad, with some people who hated the idea of robots acting much more aggressively. On the other hand, Amelia was highly successful with individuals who were reluctant to phone the bank or visit a bank branch.

Key lessons learnt by the bank included:

  • The high level of acceptance of Amelia by customer service personnel, who regarded Amelia as taking away boring “Monday-morning” tasks and allowing them to focus on more meaningful conversations with customers, rather than as threatening their livelihoods
  • It was easier than expected to teach Amelia the banking processes, but harder than expected to convert to a new language such as Swedish, with the bank perceiving that each language is essentially a different way of thinking. Amelia was perceived to be optimized for English and converting Amelia to Swedish took three months, while training Amelia on the simple banking processes took a matter of days.

Amelia is now successfully handling ~90% of requests, though ~30% of these are intentionally routed to a live agent, for example for deeper mortgage discussions.

Amelia Avatar Remains Key to IPSoft Branding

While the blonde, blue-eyed nature of the Amelia avatar is likely to be highly acceptable in Sweden, this stereotype could potentially be less acceptable elsewhere, and the tradition within contact centers is to try to match the nature of the agent with that of the customer. While Amelia is clearly designed to be highly empathetic in terms of language, it may be more discordant in terms of appearance.

However, the appearance of the Amelia avatar remains key to IPSoft’s branding. While IPSoft is redesigning the Amelia avatar to capture greater hand and arm movements for greater empathy, and some adaptation of clothing and hairstyle are permitted to reflect brand value, IPSoft is not currently prepared to allow fundamental changes to gender or skin color, or to allow multiple avatars to be used to develop empathy with individual customers. This might need to change as IPSoft becomes more confident of its brand and the market for cognitive agents matures.

Partnering with Consultancies to Develop Horizontal & Vertical IP

At present, Amelia is largely vanilla in flavor, and the bulk of implementations are being conducted by IPSoft itself. IPSoft estimates that Amelia has been used in 50 instances, covering ~60% of customer requests with ~90% accuracy. Overall, IPSoft estimates that it takes 6 months to assist an organization in building an Amelia competence in-house, 9 days to go live, and 6-9 months to scale up from an initial implementation.

Accordingly, it is key to the future of IPSoft that Amelia can develop a wide range of semi-productized horizontal and vertical use cases and that partners can be trained and leveraged to handle the bulk of implementations.

At present, IPSoft estimates that its revenues are 70:30 services:product, with product revenues growing faster than services revenues. While IPSoft is currently carrying out the majority (~60%) of Amelia implementations itself, it is increasingly looking to partner with the major consultancies such as Accenture, Deloitte, PwC, and KPMG to build baseline Amelia products around horizontal and industry-specific processes, for example, working with Deloitte in HR. In addition, IPSoft has partnered with NTT in Japan, with NTT offering a Japanese-language, cloud-based virtual assistant, COTOHA.

IPSoft’s pricing mechanisms consist of:

  • A fixed price per PoC development
  • For production environments, a charge for implementation followed by a price per transaction.

While Amelia is available in both cloud and onsite, IPSoft perceives that the major opportunities for its partners lie in highly integrated implementations behind the client firewall.

In conclusion, IPSoft is now making considerable investments in developing Amelia with the aim of becoming the leading cognitive agent for customer service and the high emphasis on “conversations and empathic responses” differentiates the software from more transactionally-focused cognitive software.

Nonetheless, it is early days for Amelia. The company is beginning to increase its emphasis on third-party partnerships, which will be key to scaling adoption of the software. However, these are currently focused on the major consultancies. This is fine while cognitive agents are in the first throes of adoption, but downstream IPSoft is likely to need the support of, and partnerships with, the major contact center outsourcers, who currently control around a third of customer service spend and who are influential in assisting organizations in their digital customer service transformations.

]]>
<![CDATA[RPA Operating Model Guidelines, Part 3: From Pilot to Production & Beyond – The Keys to Successful RPA Deployment]]>

As well as conducting extensive research into RPA and AI, NelsonHall is also chairing international conferences on the subject. In July, we chaired SSON’s second RPA in Shared Services Summit in Chicago, and we will also be chairing SSON’s third RPA in Shared Services Summit in Braselton, Georgia on 1st to 2nd December. In the build-up to the December event we thought we would share some of our insights into rolling out RPA. These topics were the subject of much discussion in Chicago earlier this year and are likely to be the subject of further in-depth discussion in Atlanta (Braselton).

This is the third and final blog in a series presenting key guidelines for organizations embarking on an RPA project, covering project preparation, implementation, support, and management. Here I take a look at the stages of deployment, from pilot development, through design & build, to production, maintenance, and support.

Piloting & deployment – it’s all about the business

When developing pilots, it’s important to recognize that the organization is addressing a business problem and not just applying a technology. Accordingly, organizations should consider how they can make a process better and achieve service delivery innovation, and not just service delivery automation, before they proceed. One framework that can be used in analyzing business processes is the ‘eliminate/simplify/standardize/automate’ approach.

While organizations will probably want to start with some simple and relatively modest RPA pilots to gain quick wins and acceptance of RPA within the organization (and we would recommend that they do so), it is important as the use of RPA matures to consider redesigning and standardizing processes to achieve maximum benefit. So begin with simple manual processes for quick wins, followed by more extensive mapping and reengineering of processes. Indeed, one approach often taken by organizations is to insert robotics and then use the metrics available from robotics to better understand how to reengineer processes downstream.

For early pilots, pick processes where the business unit is willing to take a ‘test & learn’ approach, and live with any need to refine the initial application of RPA. Some level of experimentation and calculated risk taking is OK – it helps the developers to improve their understanding of what can and cannot be achieved from the application of RPA. Also, quality increases over time, so in the medium term, organizations should increasingly consider batch automation rather than in-line automation, and think about tool suites and not just RPA.

Communication remains important throughout, and the organization should be extremely transparent about any pilots taking place. RPA does require a strong emphasis on, and appetite for, management of change. In terms of effectiveness of communication and clarifying the nature of RPA pilots and deployments, proof-of-concept videos generally work a lot better than the written or spoken word.

Bot testing is also important, and organizations have found that bot testing is different from waterfall UAT. Ideally, bots should be tested using a copy of the production environment.

Access to applications is potentially a major hurdle, with organizations needing to establish virtual employees as a new category of employee and give the appropriate virtual user ID access to all applications that require a user ID. The IT function must be extensively involved at this stage to agree access to applications and data. In particular, they may be concerned about the manner of storage of passwords. What’s more, IT personnel are likely to know about the vagaries of the IT landscape that are unknown to operations personnel!

Reporting, contingency & change management key to RPA production

At the production stage, it is important to implement an RPA reporting tool to:

  • Monitor how the bots are performing
  • Provide an executive dashboard with one version of the truth
  • Ensure high license utilization.

There is also a need for contingency planning to cover situations where something goes wrong and work is not allocated to bots. Contingency plans may include co-locating a bot support person or team with operations personnel.

The organization also needs to decide which part of the organization will be responsible for bot scheduling. This can either be overseen by the IT department or, more likely, the operations team can take responsibility for scheduling both personnel and bots. Overall bot monitoring, on the other hand, will probably be carried out centrally.

It remains common practice, though not universal, for RPA software vendors to charge on the basis of the number of bot licenses. Accordingly, since an individual bot license can be used in support of any of the processes automated by the organization, organizations may wish to centralize an element of their bot scheduling to optimize bot license utilization.
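To illustrate the utilization point (a minimal sketch under our own assumptions, not any vendor’s scheduler): with per-license charging, centralizing scheduling lets any queued process draw on a shared pool of licenses, rather than leaving licenses dedicated to single processes sitting idle between runs:

    from collections import deque

    licenses = 3                      # bot licenses purchased
    queue = deque(["invoice_entry", "claims_logging", "address_change",
                   "report_build", "invoice_entry"])

    # Greedy pooled scheduling: every free license takes the next task,
    # regardless of which business unit owns the process.
    running = []
    while queue and len(running) < licenses:
        running.append(queue.popleft())

    print("running now:", running)    # 3 tasks in flight, 2 still queued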

At the production stage, liaison with application owners is very important to proactively identify changes in functionality that may impact bot operation, so that these can be addressed in advance. Maintenance is often centralized as part of the automation CoE.

Find out more at the SSON RPA in Shared Services Summit, 1st to 2nd December

NelsonHall will be chairing the third SSON RPA in Shared Services Summit in Braselton, Georgia on 1st to 2nd December, and will share further insights into RPA, including hand-outs of our RPA Operating Model Guidelines. You can register for the summit here.

Also, if you would like to find out more about NelsonHall’s extensive program of RPA & AI research, and get involved, please contact Guy Saunders.

Plus, buy-side organizations can get involved with NelsonHall’s Buyer Intelligence Group (BIG), a buy-side only community which runs regular webinars on RPA, with your buy-side peers sharing their RPA experiences. To find out more, contact Matthaus Davies.  

This is the final blog in a three-part series. See also:

Part 1: How to Lay the Foundations for a Successful RPA Project

Part 2: How to Identify High-Impact RPA Opportunities

]]>
<![CDATA[HCL: Applying RPA to Reduce Customer Touch Points in Closed Book Life Insurance]]> This is the third in a series of blogs looking at how business process outsourcing vendors are applying RPA in the insurance sector.

HCL provides closed book life insurance outsourcing services, and is currently engaged in RPA initiatives with three insurance clients.

In order to capture customer data in a smarter, more concise way, HCL is using ‘enhancers’ at the front end, providing users with intuitive screens based on the selected administrative task. These input forms aim to request only the minimum necessary data, with RPA now being used to transfer the data to the insurance system, ALPS, via a set of business rules.

For example, one RPA implementation can recognize the product type, policy ownership, values, and payment methods, and can prepare and produce correspondence for the customer. If all rules are met, it is then able to move on to payment on the due date. This has been done with a view to reducing the number of touchpoints and engaging with the customer only when required. Indeed, HCL is working with its clients to devise a more exhaustive set of risk-based rules to further reduce the extent to which information needs to be gathered from customers.

Seeking a 25% cost take-out in high volume activities

On average, 11k customer enquiries are received by one HCL insurance contact center every month, and these were traditionally handed off to the back office to be resolved. However, HCL is now using RPA and business rules to enable more efficient handling of enquiries/claims with limited user input, with the aim of creating capacity for an additional 4.4k customer queries per month to be handled within the contact center.

Overall, within its insurance operations, HCL is applying RPA-based business rules to ~10 core process areas that together amount to around 60% of typical day-to-day activity. These process areas include:

  • Payments out, including maturities, surrenders, and transfers

  • Client information, including change of address or account information

  • Illustrations.

These processes are typically carried out by an offshore team and the aspiration is to reduce the effort taken to complete each of them by ~25%. In addition, HCL expects that capturing customer data in this new way will shorten the end-to-end journey by between 5% and 10%.

One lesson learned has been the need for robust and compatible infrastructure, both internally (ensuring that all systems and platforms are operating on the same network) and with respect to client infrastructure, e.g. ensuring that HCL is using the same versions of Microsoft software, such as Internet Explorer, as the client environment.

]]>
<![CDATA[RPA Operating Model Guidelines, Part 2: How to Identify High-Impact RPA Opportunities]]>

 

As well as conducting extensive research into RPA and AI, NelsonHall is also chairing international conferences on the subject. In July, we chaired SSON’s second RPA in Shared Services Summit in Chicago, and we will also be chairing SSON’s third RPA in Shared Services Summit in Braselton, Georgia on 1st to 2nd December. In the build-up to the December event we thought we would share some of our insights into rolling out RPA. These topics were the subject of much discussion in Chicago earlier this year and are likely to be the subject of further in-depth discussion in Atlanta (Braselton).

This is the second in a series of blogs presenting key guidelines for organizations embarking on an RPA project, covering project preparation, implementation, support, and management. Here I take a look at how to assess and prioritize RPA opportunities prior to project deployment.

Prioritize opportunities for quick wins

An enterprise level governance committee should be involved in the assessment and prioritization of RPA opportunities, and this committee needs to establish a formal framework for project/opportunity selection. For example, a simple but effective framework is to evaluate opportunities based on their:

  • Potential business impact, including RoI and FTE savings
  • Level of difficulty (preferably low)
  • Sponsorship level (preferably high).

The business units should be involved in the generation of ideas for the application of RPA, and these ideas can be compiled in a collaboration system such as SharePoint prior to their review by global process owners and subsequent evaluation by the assessment committee. The aim is to select projects that have a high business impact and high sponsorship level but are relatively easy to implement. As is usual when undertaking new initiatives or using new technologies, aim to get some quick wins and start at the easy end of the project spectrum.
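As a hedged illustration of the kind of scoring such a committee might apply (the three criteria come from the framework above; the weighting and 1-5 scale are invented for illustration, not a NelsonHall standard):

    # Score each candidate on the three criteria, 1 (low) to 5 (high).
    opportunities = [
        {"name": "invoice matching", "impact": 5, "difficulty": 2, "sponsorship": 4},
        {"name": "claims triage",    "impact": 4, "difficulty": 4, "sponsorship": 3},
        {"name": "address changes",  "impact": 2, "difficulty": 1, "sponsorship": 5},
    ]

    # High impact and sponsorship with low difficulty float to the top.
    def score(o):
        return o["impact"] * o["sponsorship"] / o["difficulty"]

    for o in sorted(opportunities, key=score, reverse=True):
        print(f'{o["name"]}: {score(o):.1f}')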

However, organizations also recognize that even those ideas and suggestions that have been rejected for RPA are useful in identifying process pain points, and one suggestion is to pass these ideas to the wider business improvement or reengineering group to investigate alternative approaches to process improvement.

Target stable processes

Other considerations that need to be taken into account include the level of stability of processes and their underlying applications. Clearly, basic RPA does not readily adapt to significant process change, and so, to avoid excessive levels of maintenance, organizations should only choose relatively stable processes based on a stable application infrastructure. Processes that are subject to high levels of change are not appropriate candidates for the application of RPA.

Equally, it is important that the RPA implementers have permission to access the required applications from the application owners, who can initially have major concerns about security, and that the RPA implementers understand any peculiarities of the applications and know about any upgrades or modifications planned.

The importance of IT involvement

It is important that the IT organization is involved, as their knowledge of the application operating infrastructure and any forthcoming changes to applications and infrastructure need to be taken into account at this stage. In particular, it is important to involve identity and access management teams in assessments.

Also, the IT department may well take the lead in establishing RPA security and infrastructure operations. Other key decisions that require strong involvement of the IT organization include:

  • Identity security
  • Ownership of bots
  • Ticketing & support
  • Selection of RPA reporting tool.

Find out more at the SSON RPA in Shared Services Summit, 1st to 2nd December

NelsonHall will be chairing the third SSON RPA in Shared Services Summit in Braselton, Georgia on 1st to 2nd December, and will share further insights into RPA, including hand-outs of our RPA Operating Model Guidelines. You can register for the summit here.

Also, if you would like to find out more about NelsonHall’s extensive program of RPA & AI research, and get involved, please contact Guy Saunders.

Plus, buy-side organizations can get involved with NelsonHall’s Buyer Intelligence Group (BIG), a buy-side only community which runs regular webinars on sourcing topics, including the impact of RPA. The next RPA webinar will be held later this month: to find out more, contact Guy Saunders.  

In the third blog in the series, I will look at deploying an RPA project, from developing pilots, through design & build, to production, maintenance, and support.

]]>
<![CDATA[RPA Operating Model Guidelines, Part 1: Laying the Foundations for Successful RPA]]>

 

As well as conducting extensive research into RPA and AI, NelsonHall is also chairing international conferences on the subject. In July, we chaired SSON’s second RPA in Shared Services Summit in Chicago, and we will also be chairing SSON’s third RPA in Shared Services Summit in Braselton, Georgia on 1st to 2nd December. In the build-up to the December event we thought we would share some of our insights into rolling out RPA. These topics were the subject of much discussion in Chicago earlier this year and are likely to be the subject of further in-depth discussion in Atlanta (Braselton).

This is the first in a series of blogs presenting key guidelines for organizations embarking on RPA, covering establishing the RPA framework, RPA implementation, support, and management. First up, I take a look at how to prepare for an RPA initiative, including establishing the plans and frameworks needed to lay the foundations for a successful project.

Getting started – communication is key

Essential action items for organizations prior to embarking on their first RPA project are:

  • Preparing a communication plan
  • Establishing a governance framework
  • Establishing an RPA center of excellence
  • Establishing a framework for allocation of IDs to bots.

Communication is key to ensuring that use of RPA is accepted by executives and staff alike, with stakeholder management critical. At the enterprise level, the RPA/automation steering committee may involve:

  • COOs of the businesses
  • Enterprise CIO.

Start with awareness training to get support from departments and C-level executives. Senior leader support is key to adoption. Videos demonstrating RPA are potentially much more effective than written papers at this stage. Important considerations to address with executives include:

  • How much control am I going to lose?
  • How will use of RPA impact my staff?
  • How/how much will my department be charged?

When communicating to staff, remember to:

  • Differentiate between value-added and non value-added activity
  • Communicate the intention to use RPA as a development opportunity for personnel. Stress that RPA will be used to facilitate growth, to do more with the same number of people, and give people developmental opportunities
  • Use the same group of people to prepare all communications, to ensure consistency of messaging.

Establish a central governance process

It is important to establish a strong central governance process to ensure standardization across the enterprise, and to ensure that the enterprise is prioritizing the right opportunities. It is also important that IT is informed of, and represented within, the governance process.

An example of a robotics and automation governance framework established by one organization was to form:

  • An enterprise robotics council, responsible for the scope and direction of the program, together with setting targets for efficiency and outcomes
  • A business unit governance council, responsible for prioritizing RPA projects across departments and business units
  • An RPA technical council, responsible for RPA design standards, best practice guidelines, and principles.

Avoid RPA silos – create a centre of excellence

RPA is a key strategic enabler, so use of RPA needs to be embedded in the organization rather than siloed. Accordingly, the organization should consider establishing an RPA center of excellence, encompassing:

  • A centralized RPA & tool technology evaluation group. It is important not to assume that a single RPA tool will be suitable for all purposes and also to recognize that ultimately a wider toolset will be required, encompassing not only RPA technology but also technologies in areas such as OCR, NLP, machine learning, etc.
  • A best-practice function for establishing standards, such as naming conventions, to be applied in RPA across processes and business units
  • An automation lead for each tower, to manage the RPA project pipeline and priorities for that tower
  • IT liaison personnel.

Establish a bot ID framework

While establishing a framework for the allocation of IDs to bots may seem trivial, it has proved not to be so for many organizations where, for example, including ‘virtual workers’ in the HR system has turned out to be insurmountable. In some instances, organizations have resorted to basing bot IDs on the IDs of the bot developer as a short-term fix, but this approach is far from ideal in the long term.
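As a purely illustrative sketch (hypothetical field names, not any HR or RPA product’s schema), a dedicated bot identity record might decouple the virtual worker from any human developer like this:

    from dataclasses import dataclass

    @dataclass
    class BotIdentity:
        bot_id: str            # e.g. "BOT-00042", allocated from its own range
        owner_team: str        # accountable business/IT owner, not the developer
        credential_ref: str    # pointer into a password vault, never the password
        applications: list     # systems this virtual worker may access
        status: str            # "active", "suspended", or "retired"

    bot = BotIdentity(
        bot_id="BOT-00042",
        owner_team="Finance Shared Services",
        credential_ref="vault://rpa/BOT-00042",
        applications=["ERP", "invoice portal"],
        status="active",
    )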

Organizations should also make centralized decisions about bot license procurement, and here the IT department, which has experience in software selection and purchasing, should be involved. In particular, the IT department may be able to play a substantial role in RPA software procurement and negotiation.

Find out more at the SSON RPA in Shared Services Summit, 1st to 2nd December

NelsonHall will be chairing the third SSON RPA in Shared Services Summit in Braselton, Georgia on 1st to 2nd December, and will share further insights into RPA, including hand-outs of our RPA Operating Model Guidelines. You can register for the summit here.

Also, if you would like to find out more about NelsonHall’s extensive program of RPA & AI research, and get involved, please contact Guy Saunders.

Plus, buy-side organizations can get involved with NelsonHall’s Buyer Intelligence Group (BIG), a buy-side only community which runs regular webinars on sourcing topics, including the impact of RPA. The next RPA webinar will be held in November: to find out more, contact Matthaus Davies.  

 

In the second blog in this series, I will look at RPA need assessment and opportunity identification prior to project deployment.

 

]]>
<![CDATA[WNS: Applying RPA in P&C Insurance with Focus on FNOL, Claims & Underwriting]]> This is the second in a series of blogs looking at how business process outsourcing vendors are applying RPA and AI in the insurance sector.

 

WNS’ RPA journey is moving quickly, with six pilots underway and five more ready to go. WNS has decided to wait on AI for the time being, in favour of developing its process automation capabilities, which has included the launch of eAdjudicator (a bolt-on RPA tool for claims adjudication) and InsurACE (a policy administration workflow tool) earlier this year.

RPA delivering 25% savings; 40% achievable with employee retraining

Echoing its peers, WNS started by applying RPA to defined, rules-based, and transaction-based insurance activities, specifically in payments and first notice of loss (FNOL), followed by subrogation, since these sub-processes are relatively standardized and do not require human judgement. Based on its pilot experience to date, cost savings in these areas have been around 25%, but in order to realise further cost savings, there is a ‘Phase 2’ that requires re-training of the labor force and process reengineering to take advantage of the automation, which could see a further 10-15% savings. Three of the pilots are in this second phase.

To take its journey forward, WNS required a technology partner who had an insurance focus, a cloud-based offering, and a particular strength in robotics for analytics – specifically with a capability to handle the vast number of compliance requirements imposed by the different U.S. states.  It found these in Blue Prism (although it continues to be open to additional partnerships with other technology vendors), who also happened to be looking for more traction in the insurance space – something that WNS brought to the table.

P&C FNOL, Claims & Underwriting the Focus for 2016

In 2016, WNS has three focus areas in which it will be applying RPA, based on client appetite: FNOL, claims processing, and underwriting (UW), with an overall aim of removing the unnecessary steps in each sub-process.

As yet, there does not seem to be huge traction on the life insurance side and, as such, WNS will be focusing on property & casualty (P&C) processes. An example of a recently on-boarded UW client is a U.S. P&C insurer that was seeking to reduce the number of UW assistants it would need to hire. The client expected to hire ~75 UW assistants, but since partnering with WNS, the expectation is now that it will be in a position to hire ~30% fewer than this, with ~20% additional capacity created. The client moved from pilot mode for this first line of business (personal auto) to full production in April 2016, and is set to add further lines of business to the scope, each one going through separate pilots.

An example of cost saving achieved through applying the Blue Prism framework to a set of UW processes was with a client whose workforce operated in a predominantly virtual environment. The ‘before’ state saw work passing through ~40 handoffs, which WNS was able to bring down to 7, using workflow mapping. This alone has yielded ~35% savings for the client and has proved ‘transformational’ for the business.

In most cases, the conversations appear to be led by WNS. One of the key concerns raised by clients, however, is around what happens to staff allocation once RPA is deployed. Typically, staff are still very much required, but need re-training to make the most of the new systems and to ensure they operate effectively.

For now, WNS believes that sufficient savings and efficiencies can be gained through applying RPA to an insurance sub-process such as claims logging, which will provide the claims adjuster with a better summation of the situation and enable the handler to carry out the insurance process more effectively and accurately. For example, the number of claims pages can be reduced from 50 to 10, and eventually to as few as 7 bullet points of actionable items.

Other similar areas in which WNS has successfully applied this type of RPA include medical review and transcription. However, WNS is of the view that there are some sub-processes that cannot be carried out by anything other than human effort, e.g. bodily injury; as it stands, WNS has not found a way to simulate the experience of the claims handler with RPA for this type of process.

Areas that have now progressed beyond pilot mode and are proving successful for WNS are:

  • Vendor payment
  • Subrogation (clients are almost all on transaction-based pricing)
  • Claims logging
  • FNOL (~60% of clients are on transaction-based pricing).
]]>
<![CDATA[Wipro: Applying RPA to Insurance Claims & New Business, Looking to Holmes to Support KYC]]> This is the first in a series of blog articles looking at how business process outsourcing vendors are applying RPA and AI in the insurance sector. First up: Wipro.

 

 

Wipro started its automation journey in the late noughties and has since gone on to set up a dedicated RPA practice, and also developed its own AI platform, Wipro Holmes. Currently, Wipro is principally partnering with Automation Anywhere for RPA software.

Clients showing early interest had questions around which insurance processes bots could most easily be deployed in, and where they should be applying RPA. The processes Wipro found to be most suitable for the application of RPA in the insurance sector are claims processing and new business, and hence these are the key focus areas for Wipro.

Efficiency improvements of ~40% in target insurance sub-processes

Today, over 50% of Wipro’s RPA clients are in the BFSI sector, with ~40% using bots for data entry processes and 60% for rules-based services. Wipro currently has four clients for RPA services in the insurance sector, split across life, annuities & pensions (LA&P), property & casualty (P&C), and healthcare insurance. Two of these companies are focused on a single geography and two are multi-geography, including the U.S., Europe, LATAM, and the Middle East.

One of the insurance clients is a Swiss provider of life and P&C services for whom Wipro provides RPA in support of new business data entry. Pre-bots, the filling in of a new business form required the use of multiple unsynchronized screens to collect the necessary information. To address this issue, Wipro developed an interface (a replica of the application form) to enable 100% automated data entry using bots, a typical ‘swivel chair’ use of RPA. This yielded a 30-40% efficiency improvement.
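Swivel-chair automation of this kind can be pictured as a field-mapping layer; a hedged, generic sketch (not Wipro’s actual interface):

    # One consolidated input form replaces several unsynchronized screens;
    # a bot then fans the captured values out to each target system field.
    form = {
        "policyholder": "A. Person",
        "product": "Term Life",
        "premium": 120.00,
    }

    # Mapping from the single form to fields on the legacy screens.
    screen_mappings = {
        "screen_1": {"policyholder": "CUST_NAME", "product": "PROD_CODE"},
        "screen_2": {"premium": "PREM_AMT"},
    }

    for screen, fields in screen_mappings.items():
        for form_field, target_field in fields.items():
            # A real bot would drive the UI here; this sketch just prints.
            print(f"{screen}: set {target_field} = {form[form_field]}")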

In the healthcare payer sector, Wipro has implemented RPA in support of provider contract data management, specifically in the area of contract validation. Here, Wipro designed four bots in 90 days, automating ~75% of the contract validation process and improving productivity by ~40%.

In 2016, Wipro has noticed a shift in customer attitude, with organizations now appreciating the enhanced accuracy and level of auditability that RPA brings.

Of course, the implementation of RPA is not without its objections. One frequent question from organizations just starting the RPA journey is ‘how do I stop bots going berserk if the process changes?’, since once programmed, the bots are unable to do anything other than what they have been programmed to do. Accordingly, Wipro ensures that any changes occurring in a given process are flagged in the command centre before a bot attempts to carry them out, with a signal given that the bot needs ‘re-training’ for that process.
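A minimal sketch of one way such a guard can work (our own assumption, not Wipro’s command centre design): fingerprint the process definition the bot was trained against and escalate on any mismatch:

    import hashlib

    def fingerprint(process_definition: str) -> str:
        # Hash of the process (screens, fields, steps) the bot was trained on.
        return hashlib.sha256(process_definition.encode()).hexdigest()

    trained_on = fingerprint("claims form v3: 12 fields, 4 screens")
    current = fingerprint("claims form v4: 13 fields, 4 screens")

    if current != trained_on:
        # Flag for 're-training' rather than letting the bot run blind.
        print("Process changed: escalate to command centre, suspend bot")
    else:
        print("Process unchanged: safe to run bot")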

Secondly, IT departments sometimes ask how long the bots are required to stay in the work environment and how they fit into an overall IT transformation strategy. Wipro’s response is to treat the bot like an FTE and to keep it for as long as it is achieving benefit, ‘re-training’ it as required. Wipro suggests that bots need not conflict with the aims of an IT transformation, and ought to be considered as complementary to it.

Complementing RPA with Cognitive using Holmes

So far, so good for Wipro regarding its application of RPA in the insurance sector. RPA is being used to address data entry processes (40% of activity) and rules-based transaction processing areas such as claims (60% of current activity). However, this still leaves the question of complementing the rigid process execution of RPA with machine learning and self-learning processes, and also the question of addressing knowledge-based processing requiring human judgment.

This is where Wipro Holmes comes into the picture – a proprietary AI platform with applications for cognitive process automation, knowledge visualization, and predictive services. The platform is not currently being used with insurance clients, but conversations are expected to start within the next 9 months. It is expected that, in contrast to the RPA conversations which were led by Wipro in more than 95% of cases, the AI discussion will be led by existing RPA clients and across a wider pool of services, including finance & accounting (F&A).

Accordingly, the focus now is on developing Wipro Holmes, to ensure it is ready for use with clients in 2017. Insurance activities that will benefit first from this platform could include the area of Know Your Customer (KYC) compliance, to enable more rapid client on-boarding. 

]]>
<![CDATA[TCS Leapfrogging RPA & as-a-Service with Neural Automation & Services-as-Software]]> Much of the current buzz in the industry continues to be centered on RPA, a term currently largely synonymous with automation, and this technology clearly has lots of life left in it, for a few years at least. Outside service providers, where its adoption is rapidly becoming mature, RPA is still at the early growth stage in the wider market: while a number of financial services firms have already achieved large-scale roll-outs of RPA, others have yet to put their first bot into operation.

RPA is a great new technology and one that is yet to be widely deployed by most organizations. Nonetheless, RPA fills one very specific niche and remains essentially a band-aid for legacy processes. It is tremendous for executing on processes where each step is clearly defined, and for implementing continuous improvement in relatively static legacy process environments. However, RPA, as TCS highlights, does have the disadvantages that it fails to incorporate learning and can really only effectively be applied to processes that undergo little change over time. TCS also argues that RPA fails to scale and fails to deliver sustainable value.

These latter criticisms seem unfair in that RPA can be applied on a large scale, though frequently scale is achieved via numerous small implementations rather than one major implementation. Similarly, provided processes remain largely unchanged, the value from RPA is sustained. The real distinction is not scalability but the nature of the process environment in which the technology is being applied.

Accordingly, while RPA is great for continuous improvement within a static legacy process environment where processes are largely rule-based, it is less applicable for new business models within dynamic process environments where processes are extensively judgment-based. New technologies with built-in learning and adaptation are more applicable here. And this is where TCS is positioning Ignio.

TCS refers to Ignio as a “neural automation platform” and as a “Services-as-Software” platform, the latter arguably a much more accurate description of the impact of digital on organizations than the much-copied Accenture “as-a-Service” expression.

TCS summarizes Ignio as having the following capabilities (sketched conceptually after the list):

  • “Sense”: ability to assimilate and mine diverse data sources, both internal and external, both structured and unstructured (via text mining techniques)
  • “Think”: ability to identify trends & patterns and make predictions and estimate risk
  • “Act”: execute context-aware autonomous actions. Here TCS could potentially have used one of the third-party RPA software products, but instead chose to go with its own software
  • “Learn”: improving its knowledge on a continuous basis and self-learning its context.
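A minimal, purely conceptual sketch of how these four capabilities chain into a closed loop (our own illustration, not TCS’s architecture):

    class NeuralAutomationLoop:
        """Toy sense-think-act-learn cycle mirroring the four capabilities above."""

        def __init__(self):
            self.history = []   # accumulated experience for self-learning

        def sense(self):
            # Assimilate internal/external, structured/unstructured data.
            return {"queue_depth": 120, "delivery_status": "delayed"}

        def think(self, observations):
            # Identify patterns, make predictions, estimate risk.
            return "reroute" if observations["delivery_status"] == "delayed" else "wait"

        def act(self, decision):
            # Execute a context-aware autonomous action.
            print(f"executing: {decision}")

        def learn(self, observations, decision):
            # Fold the outcome back into the model on a continuous basis.
            self.history.append((observations, decision))

        def run_once(self):
            obs = self.sense()
            decision = self.think(obs)
            self.act(decision)
            self.learn(obs, decision)

    NeuralAutomationLoop().run_once()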

TCS Ignio, like IPSoft Amelia, began life as a tool for supporting IT infrastructure management, specifically datacenter operations. TCS Ignio was launched in May 2015 and is currently used by ten organizations, including Nationwide Building Society in the U.K. All ten are using Ignio in support of their IT operations, though the scope of its usage remains limited at present, with Ignio being used within Nationwide in support of batch performance and capacity management. Eventually the software is expected to be deployed to learn more widely about the IT environment and predict and resolve IT issues, and Ignio is already being used for patch and upgrade management by one major financial services institution.

Nonetheless, despite its relatively low level of adoption so far within IT operations, TCS is experiencing considerable wider interest in Ignio and feels it should strike while the iron is hot and take Ignio out into the wider business process environment immediately.

The implications are that the Ignio roll-out will be rapid (expect to see the first public example in the next quarter) and will take place domain by domain, as for RPA, with initial targeted areas likely to include purchase-to-pay and order-to-cash within F&A and order management-related processes within supply chain. In order to target each specific domain, TCS is pre-building “skills” which will be downloadable from the “Ignio store”. One of the initial implementations seems likely to be supporting a major retailer in resolving the downstream implications of delivery failures due to causes such as traffic accidents or weather-related incidents. Other potential supply chain-related applications cited for Ignio include:

  • Customer journey abandonment
  • The profiling, detection, and correction of check-out errors
  • Profiling, detecting, and correcting anomalies in supplier behavior
  • Detection of customer feedback trends and triggering corrective action
  • Profiling and predicting customer behavior.

Machine learning technologies are receiving considerable interest right now and TCS, like other vendors, recognizes that rapid automation is being driven faster than ever before by the desire for competitive survival and differentiation, and in response is adopting an “if it can be automated, it must be automated” stance. And the timescales for implementation of Ignio, cited at 4-6 weeks, are comparable to those for RPA. So Ignio, like RPA, is a relatively quick and inexpensive route to process improvement. And, unlike many cognitive applications, it is targeted strongly at industry-specific and back-office processes, and not just customer-facing ones.

Accordingly, while RPA will remain a key technology in the short-term for fixing relatively static legacy rule-based processes, next generation machine learning-based “Services-as-Software” platforms such as Ignio will increasingly be used for judgment-based processes and in support of new business models. And TCS, which a year ago was promoting RPA, is now leading with its Ignio neural automation-based “Services-as-Software” platform.

]]>