NelsonHall: RPA & AI Technology Evaluation blog feed https://research.nelson-hall.com//sourcing-expertise/digital-transformation-technologies-services/rpa-ai-technology-evaluation/?avpage-views=blog Insightful Analysis to Drive Intelligent Automation Platform Evaluation. NelsonHall has developed the Intelligent Automation Platforms Program as a dedicated service for organizations evaluating technology for the use of RPA and AI. <![CDATA[AntWorks Targets Breadth & Depth in Client Engagements, Partners & Curation Capabilities]]>

 

Last week, NelsonHall attended ANTENNA2020, AntWorks’ yearly analyst retreat. AntWorks has made considerable progress since its last analyst retreat, with growth estimated at ~260% in the three quarters ending January 2020 and a headcount of 604 at the end of that period.

By geography, APAC remains AntWorks’ strongest region, closely followed by the Americas, and the company now has an increasingly balanced presence across APAC, the Americas, and EMEA. By sector, AntWorks’ client base remains largely centered on BFSI and healthcare, which together account for ~70% of revenues.

The company’s success continues to be based on its ability to curate unstructured data, with all its clients using its Cognitive Machine Reading (CMR) platform and only 20% using its wider “RPA” functionality. Accordingly, AntWorks is continuing to strengthen its document curation functionality while starting to build point solutions and adding depth to its partnerships and marketing.

Ongoing Strengthening of Document Curation Functionality

The company is aiming to “go deep” rather than “shallow and wide” with its customers, citing one client that started with a single unstructured document use case and, over the past year, has introduced ten more, resulting in revenues of $2.5m.

Accordingly, the company continues to strengthen its document curation capability; recent CMR enhancements include signature verification, cursive handwriting, language extension, sentiment analysis, and hybrid processing. Signature verification detects the presence of a signature in a document and verifies it against signatures held centrally or on other documents. It is particularly applicable to KYC and fraud avoidance, where, for example, a signature on a passport or driving license can be matched against those on submitted applications.
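AntWorks has not published how the verification step is implemented. Purely as an illustration of the kind of comparison involved, the sketch below scores a candidate signature crop against a stored reference using normalized cross-correlation; the file names and threshold are hypothetical.

```python
# Hypothetical sketch only -- not AntWorks' implementation. Compares a
# signature crop from a submitted document against a reference held centrally.
from PIL import Image
import numpy as np

def load_grayscale(path, size=(200, 80)):
    """Load an image, convert to grayscale, and resize to a common shape."""
    return np.asarray(Image.open(path).convert("L").resize(size), dtype=float)

def similarity(a, b):
    """Normalized cross-correlation between two equally sized images (-1 to 1)."""
    a, b = a - a.mean(), b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom else 0.0

reference = load_grayscale("passport_signature.png")     # signature on file
candidate = load_grayscale("application_signature.png")  # signature on the new application

if similarity(reference, candidate) < 0.7:  # arbitrary illustrative threshold
    print("Flag for manual KYC review")
else:
    print("Signature matches within tolerance")
```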

This emphasis on depth of document curation resonated strongly with the clients speaking at the event. In one case, it was the platform’s ability to analyze cursive handwriting and typed text together that led several competitors to drop out early when tasked with building a POC to extract cursive writing.

AntWorks also continues to extend the range of languages in which it can curate documents; 17 languages are currently supported. The company has reworked its learning process to allow quicker training on new languages, with support for Mandarin and Arabic available soon.

Hybrid processing enables multi-format documents containing, for example, text, cursive handwriting, and signatures to be processed in a single step.

Elsewhere, AntWorks has addressed a number of hygiene factors in QueenBOT, enhancing its business continuity management, auto-scaling, and security. Auto-scaling allows bots to switch between processes if one process requires extra assistance to meet SLAs, effectively allowing bots to be “carpenters in the morning and electricians in the evening” and increasing both SLA adherence and bot utilization.

Another key hygiene factor addressed in the past year has been training material. AntWorks began 2019 with a thin training architecture, with just two FTEs supporting the rapidly expanding company; over the past year, the training team has grown to 25 FTEs and has created thousands of hours of training material. AntWorks also launched an internship program, starting in India, which added 43 FTEs in 2019; the ambition this year is to take the program global.

Announcement of Process Discovery, Email Agent & APaaS Offerings

Process discovery is an increasingly important element in intelligent automation, helping to remove the up-front cost involved in scaling use cases by identifying and mapping potential use cases.

AntWorks’ process discovery module enables organizations either to record the keystrokes taken by one or more users across multiple transactions or to import keystroke data from third-party process discovery tools. From these recordings, it uses AI to identify the cycles of the process, i.e. the individual transactions, and presents the user with the details of the workflow, which can then be grouped into process steps for ease of use. The module can also be used to help identify the business rules of the process and to assist in semi-automatic creation of the identified automations (aka AutoBOT).
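AntWorks has not disclosed how its cycle identification works. As a generic illustration of the idea, a recorded event stream can be split into transactions by spotting the recurring action that opens each cycle; the event names below are invented.

```python
# Generic illustration only -- not AntWorks' algorithm. Splits a recorded
# keystroke/click log into transaction "cycles" by detecting the recurring
# action assumed to start each transaction.
events = [
    ("open_claim_form", "CRM"), ("copy_policy_no", "CRM"),
    ("paste_policy_no", "ERP"), ("submit", "ERP"),
    ("open_claim_form", "CRM"), ("copy_policy_no", "CRM"),
    ("paste_policy_no", "ERP"), ("submit", "ERP"),
]

START_ACTION = "open_claim_form"  # assumed marker of a new transaction

cycles, current = [], []
for action, app in events:
    if action == START_ACTION and current:
        cycles.append(current)
        current = []
    current.append((action, app))
if current:
    cycles.append(current)

print(f"Identified {len(cycles)} transactions of {len(cycles[0])} steps each")
```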

The process discovery module aims to offer ease of use compared to competitive products and can, besides identifying transaction steps, be used to assist organizations in calculating the RoI on business cases and in estimating the proportions of processes that can be automated, though AntWorks is understandably reluctant to underwrite these estimates.

One of the challenges for AntWorks over the coming year is to develop standardized use cases/point solutions based on its technology components, initially in horizontal form, and ultimately verticalized. Two of these just announced are Email Agent and Accounts Payable as-a-Service (APaaS).

Email Agent is a natural progression for AntWorks given its differentiation in curating unstructured documents; it is built on components from the ANTstein full stack and packaged for ease of consumption. It is a point solution designed solely to automate email traffic and encompasses ML-based email classification, sentiment analysis to support email prioritization, and extraction of actionable data. Email Agent can also respond contextually via templated static or dynamic content. AntWorks estimates that 40-50 emails are sufficient to train each use case, such as HR-related email.
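AntWorks has not described Email Agent’s internals. As a hedged sketch of what ML-based email classification on a small training set can look like (the 40-50 examples per use case cited above), the snippet below uses scikit-learn; the categories and sample emails are invented.

```python
# Illustrative sketch only -- not AntWorks' Email Agent. Classifies inbound
# email into routing categories from a small labelled training set.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_emails = [
    "Please update my bank details for payroll",
    "How many annual leave days do I have left?",
    "My payslip for March appears to be missing",
    "I want to enrol in the company pension scheme",
]
labels = ["payroll", "leave", "payroll", "benefits"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
classifier.fit(training_emails, labels)

print(classifier.predict(["Where can I see my remaining holiday allowance?"]))
```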

The next step in the development of Email Agent is the production of verticalized solutions, training the model on specific verticals to understand the front-office relationships that organizations (such as those in the travel industry) have with their clients.

APaaS is a point solution consisting of a pre-trained configuration of CMR that extracts relevant information from invoices, which can then be passed via API into accounting systems such as QuickBooks. Through these cloud-based point solutions, AntWorks hopes to open up the SME market.
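The final hop into the accounting system is essentially an HTTP integration. As a minimal sketch, assuming a placeholder endpoint and payload (this is not the actual QuickBooks API), extracted invoice fields could be posted as follows:

```python
# Hypothetical sketch of pushing extracted invoice fields into an accounting
# system over HTTP. Endpoint, auth token, and payload shape are placeholders.
import requests

extracted = {                      # output of the document-extraction step
    "vendor": "Acme Supplies Ltd",
    "invoice_number": "INV-10492",
    "amount": 1240.50,
    "currency": "USD",
    "due_date": "2020-03-31",
}

response = requests.post(
    "https://accounting.example.com/api/v1/bills",  # placeholder endpoint
    json=extracted,
    headers={"Authorization": "Bearer <token>"},
    timeout=30,
)
response.raise_for_status()
print("Created bill:", response.json().get("id"))
```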

Focusing on Quality of Partnerships, Not Quantity

Growth in AntWorks’ partner ecosystem (now ~66 partners) has been slower than expected, with only a handful of partners added since last year's ANTENNA event, despite expansion being a priority. Instead, AntWorks has been ensuring that the partnerships it holds and signs are deep and constructive. Examples include Bizhub and Accenture, two recently added partners that are helping train CMR in Korean and Thai respectively in exchange for a period of exclusivity in those countries.

AntWorks is also partnering with SBI Group to penetrate the South East Asia marketplace, with SBI assisting AntWorks in implementing the ability to carry out data extraction in Japanese. Elsewhere, AntWorks has partnered with the SEED Group based in Dubai and chaired by Sheikh Saeed Bin Ahmed Al Maktoum to access the MENA (Middle East & North Africa) region.

Hugo Walkinshaw was recently brought in to lead the partner ecosystem, and he has his work cut out for him: CEO Ash Mehra targets a ratio of direct sales to partner sales of between 60:40 and 50:50, an ambitious target from the current 90:10 ratio. The aim is to achieve this through the current strategy of working closely with partners, signing exclusive partnerships where appropriate, and targeting less mature geographies and emerging use cases, such as IoT, where AntWorks can establish a major presence.

In the coming year, expect AntWorks to add more deep partnerships focused on specific geographies in less mature markets and on targeted verticals, and possibly with technology players to support future plans for running bots on embedded devices, for example on ships.

Continuing to Ramp Up Marketing Investment

AntWorks was relatively unknown 18 months ago but has made a major investment in marketing since then. AntWorks attended ~50 major events in 2019, and possibly 90 events in total counting minor events. However, AntWorks’ approach to events is arguably even more important than the number attended, with the company keen to establish a major presence at each event it attends. AntWorks does not wish to be merely another small booth in the crowd, instead opting for larger spaces in which it can run demos to build interest among clients and partners.

This appears to have had the desired impact. Overall, AntWorks states that in the past year it has gone from being invited to RFIs/RFPs in 20% of cases to 80% and that it intends to continue to ramp up its marketing budget.

A Series B round of funding, currently underway, is targeted at expanding both its marketing investment and its platform capabilities. Should AntWorks deploy this second round as effectively as its first (with SBI Investments two years ago), we expect it to act as a springboard for further rapid growth and deeper relationships, and for AntWorks to continue to lead in middle- and back-office intelligent automation use cases with high volumes of complex or hybrid unstructured documents.

]]>
<![CDATA[UiPath: Forging Connections Between Business Users & Automation]]>

 

‘Reboot work’ was the slogan for UiPath’s recent Forward III partner event, a reference to rethinking the way we work. UiPath’s vision is to elevate employees above repetitive and tedious tasks to a world of creative, fulfilling work. The company’s vision is driven by an automation-first mindset, along with the concepts of a bot for everyone and human-automation collaboration.

During the event, which attracted ~3K attendees, UiPath referenced ~50 examples of clients at scale, and pointed to a sales pipeline of more than $100m.

Previously, UiPath’s automation process had three phases: Build, Manage, and Run, using Studio, Orchestrator, and attended and unattended bots respectively. Its new products extend this process to six phases: Plan, Build, Manage, Run, Engage, and Measure. In this blog, I look at the six phases of the UiPath automation process and at the key automation products at each stage, including new and enhanced products announced at the event.

 

 

Plan phase (with Explorer Enterprise, Explorer Expert, ProcessGold, and Connected Enterprise)

By introducing the product lines Explorer and Connected Enterprise, UiPath aims to allow RPA developers to have a greater understanding of the processes to be automated when planning RPA development.

Explorer consists of three components: Explorer Enterprise, Explorer Expert, and ProcessGold. Explorer provides new process mapping and mining functionality, building on two UiPath acquisitions: the previously announced SnapShot, which now comes under the Explorer Enterprise brand, and the newly announced ProcessGold, whose existing clients include Porsche and EY. Both products construct visual process maps in data-driven ways: Explorer Enterprise (SnapShot) does so by observing the steps performed by a user for the process, while ProcessGold does so by mining transaction logs from various systems.

Explorer Enterprise performs task mining, with an agent sitting in the background of a user machine (or set of users’ machines) for 1-2 weeks. Explorer then collects details of the user activities, the effort required, the frequency of the activity, etc.

ProcessGold, on the other hand, monitors transaction logs and, following batch updates and 2 to 3 hours of construction, builds a process flow diagram. These workflow diagrams show the major activities of the process and the time/effort required for each step, which can then be expanded to an individual task level. Additionally, at the activity level, the user has access to activity and edge sliders. The activity slider expands the detail of the activities, and the edge slider expands the number of paths that the logged users take, which can identify users possibly straying from a golden path.
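UiPath has not detailed ProcessGold’s algorithms. As a generic illustration of how process mining derives a flow diagram from transaction logs, the core step is typically building a “directly-follows” graph of activities per case, which sliders like those described above then filter; the event log below is invented.

```python
# Generic process-mining sketch -- not ProcessGold itself. Builds a
# "directly-follows" graph from an event log, counting how often one
# activity immediately follows another within the same case.
from collections import Counter, defaultdict

event_log = [  # (case_id, activity), assumed to be in timestamp order
    ("c1", "Receive invoice"), ("c1", "Check PO"), ("c1", "Approve"), ("c1", "Pay"),
    ("c2", "Receive invoice"), ("c2", "Approve"), ("c2", "Pay"),
    ("c3", "Receive invoice"), ("c3", "Check PO"), ("c3", "Reject"),
]

traces = defaultdict(list)
for case_id, activity in event_log:
    traces[case_id].append(activity)

edges = Counter()
for activities in traces.values():
    for a, b in zip(activities, activities[1:]):
        edges[(a, b)] += 1

for (a, b), count in edges.most_common():
    print(f"{a} -> {b}: {count}")
```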

Administrators can then use the data from Explorer Enterprise and/or ProcessGold in Explorer Expert, which allows them to add deeper organizational insight and either record a process to build or manually create a golden-path workflow. These workflows act as blueprints for building bots and can be exported to Word documents for use by bot creators.

Connected Enterprise enables an organization to crowdsource ideas for which processes to automate, and aims to simplify the automation and decision-making pipelines for CoEs.

Automation ideas submitted to Connected Enterprise are accompanied by process information from the submitter in the form of nine standard questions (how rule-based the process is, how likely it is to change, who owns it, etc.). This information is crunched to produce automation-potential and ease-of-implementation scores to help decide the priority of the automation idea. Ideas are then curated by admins, who can ask the end user for more information, including an upload of ProcessGold files.
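UiPath has not published the scoring formula. A simple weighted-sum sketch shows the kind of calculation such a triage step typically involves; the questions, weights, and scale below are invented for illustration.

```python
# Illustrative only -- not UiPath's actual formula. Turns answers to a set of
# screening questions into automation-potential and ease-of-implementation
# scores on the same 1-5 scale as the answers.
answers = {              # 1 (low) to 5 (high), provided by the idea submitter
    "rule_based": 5,
    "digital_inputs": 4,
    "stability": 3,      # how unlikely the process is to change
    "volume": 4,
    "few_systems": 2,    # 5 = touches very few systems
}

potential_weights = {"rule_based": 0.4, "digital_inputs": 0.3, "volume": 0.3}
ease_weights = {"stability": 0.5, "few_systems": 0.3, "rule_based": 0.2}

def score(weights):
    return sum(answers[question] * weight for question, weight in weights.items())

print("Automation potential:", round(score(potential_weights), 2))
print("Ease of implementation:", round(score(ease_weights), 2))
```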

The additions of Explorer and Connected Enterprise allow developers to gain deeper insights into the processes to be automated, and business users to connect with RPA development.

Build phase (with enhanced Studio, plus new StudioX & StudioT)

New components to the build phase include StudioX and StudioT along with a number of enhancements to the existing Studio bot builder.

StudioX is a simplified version of Studio targeted at citizen developers and regular business users (what UiPath referred to as ‘Excel power user level’), allowing them to create simpler bots as part of the push for citizen developers and a bot for every person.

StudioX simplifies bot development by removing the need for variables, and reduces the number of tasks that can be selected. Bots produced with StudioX can be opened with Studio; however, the reverse may not necessarily be the case depending on the components used in Studio.

The build-a-bot demo session for StudioX focused on using Excel to copy data in and out of HR and finance systems and extracting and renaming files from an Outlook inbox to a folder. Using StudioX in the build-a-bot session was definitely an improvement over Studio for the creation of these simple bots.

StudioT, which is in beta and set for release in Q1 2020, will act as a version of Studio focused entirely on testing automation. NelsonHall’s software testing research, including software testing automation, can be found here.

Further key characteristics of the existing build components include:

  • Long-running workflows, which can suspend a process, send a query to a human while freeing up the bot, and resume the bot once the human has provided input
  • Cloud, which offers a 1-minute signup for the Community version of the aaS platform and (as of September 2019) has 240k users, up from 167k in June 2019
  • Queue triggers which can automatically take action when items are added to the queue
  • More advanced debugging with breakpoint and watch panels
  • Taxonomy management
  • Validation stations.

With the introduction of StudioX, UiPath aims to democratize RPA development for business users, at least in simple cases; and with long-running workflows, human-bot collaboration no longer requires bots to sit idle, hogging resources while waiting for responses.

Manage phase (with AI Fabric)

The Manage phase now allows users to manage machine learning (ML) models using AI Fabric, an add-on to Studio. It allows users to more easily select ML models, including models created outside of UiPath, and integrate them into a bot. AI Fabric, which was announced in April 2019, has now entered private preview.

Run phase (with enhancements to bots with native integrations)

Improvements to the run components leverage changes across the portfolio of Plan, Build, Manage, Run, Engage, and Measure, in particular for attended bots with Apps (see below). Other new features include:

  • Expanding the number of native integrations, for which UiPath and its partners are building hundreds of connectors to business applications such as Salesforce and Google, providing functionality including launching bots from the business application. Newer native applications are available via the UiPath Go! Storefront
  • A new tray will feature in the next release.

Engage phase (allowing users direct connection to bots with Apps)

Apps act as a direct connection for users to interact with attended bots through the use of forms, tasks, and chatbots. In Studio, developers can add a form with the new form designer to ask for inputs directly from the user. For example, a bot could trigger a form to be completed by a human when the OCR confidence score is substandard due to a low-quality image.

Bots that encounter a need for human intervention through Apps will automatically suspend, add a task to the centralized inbox, and move on to running another job. When a human has completed the required interaction, the job is flagged to be resumed by a bot.
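UiPath delivers this through its own activities and Orchestrator; as a platform-neutral sketch of the pattern described above, the logic reduces to a confidence gate that either proceeds automatically or parks the item as a human task. The threshold and data structures below are assumptions.

```python
# Platform-neutral sketch of the human-in-the-loop pattern -- not UiPath's
# actual API. Proceed automatically when OCR confidence is high enough;
# otherwise park the item as a task for a human and move on to the next job.
from dataclasses import dataclass, field
from typing import Dict, List

CONFIDENCE_THRESHOLD = 0.85  # assumed cut-off

@dataclass
class TaskInbox:
    pending: List[Dict] = field(default_factory=list)

    def add(self, item: Dict) -> None:
        self.pending.append(item)

def process_document(doc_id: str, ocr_text: str, confidence: float, inbox: TaskInbox) -> str:
    if confidence < CONFIDENCE_THRESHOLD:
        inbox.add({"doc": doc_id, "text": ocr_text, "reason": "low OCR confidence"})
        return "suspended"   # the bot is now free to pick up another job
    return "processed"

inbox = TaskInbox()
print(process_document("invoice-001", "Total: 1,240.50", confidence=0.62, inbox=inbox))
print("Tasks awaiting human validation:", len(inbox.pending))
```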

With the addition of Apps, the development required to capture inputs from business users is minimized, allowing for a deeper human-bot connection, shorter development timelines, and progress toward the goal of ‘a bot for every person’.

Measure phase (now with Insights to measure bot performance)

Insights expands UiPath’s reporting capabilities. Specifically, Insights features customizable dashboarding facilities for process and bot metrics. Insights also features the ability to send pulses, i.e. notifications, to users on metrics, such as if an SLA falls below a threshold. Dashboards can be filtered on processes and bots and can be shared through a URL or as a manually sent or scheduled PDF update.

What does this mean for the future of UiPath?

While UiPath and its competitors have long-standing partnerships with the likes of Celonis for process mining, the addition of native process mining through the acquisitions of SnapShot and ProcessGold, in addition to the expanded reporting capabilities, position UiPath as more of an end-to-end RPA provider.

With ProcessGold, NelsonHall believes that UiPath will continue the development of Explorer, which could lead to a nirvana state in which a client deploys ProcessGold, ProcessGold maps the processes and identifies areas that are ideal for automation, and Explorer Expert helps the bot creator to design this process by linking directly with Studio. While NelsonHall has had conversations with niche process mining and automation providers that are focusing on developing bots through a combination of transaction logs and recording users, UiPath is currently the best positioned of the big 3 intelligent automation platform providers to invest in this space.

StudioX is a big step towards enabling citizen developers. During our build-a-bot session, it was clear that the simplified version of the platform is more user-friendly, with the NelsonHall team powering ahead of the instructor at points. However, we are somewhat concerned that, while StudioX opens up bot development to a larger set of personae, the slight disconnects between Studio and StudioX could frustrate users who learn StudioX and then want to leverage activities currently restricted to Studio (such as error handling). NelsonHall believes that the lines between Studio and StudioX will blur, with StudioX receiving simplified versions of functions currently restricted to Studio, enabling more bots to be passed between the two personae.

Conclusion

With the announcements at the Forward III event, it is clear that UiPath is enabling organizations to connect the business users directly with automation; be that through citizen developers with StudioX, the Connected Enterprise Hub to forge stronger connections between business users and automation CoEs, Explorer to allow the CoE to have greater understanding of the processes, or Apps to provide direct access to the bot.

This multi-pronged approach to connecting developers and automation with business users will certainly reduce frustrations around bot development, and reduce the feeling among business users that automation is something thrust upon them rather than part of their organization's journey to a more efficient way of working.

]]>
<![CDATA[Automation Anywhere’s Enterprise A2019, Simpler to Use, Quicker to Scale]]> ‘Anything Else is Legacy’ was the messaging presented at Automation Anywhere’s Enterprise A2019 launch, hosted in New York.

The event, the first under new CMO Riadh Dridi, showcased improvements in the new version of the Automation Anywhere platform around:

  • Experience – the most immediate change is in the UI. While prior versions utilized code, workflow, and mixed code/workflow views, the new version features a completely revamped workflow view that simplifies the UX and requires little coding
  • Cloud – delivery now utilizes a completely web-based interface, allowing users to sign in and create bots in minutes with zero installation required. This speed of development was demonstrated live on stage, with SVP of products Abhijit Kakhandiki successfully racing to create a simple bot against the arrival of an Uber ordered by CEO Mihir Shukla. The bot used in this example was part of Automation Anywhere’s RPA-aaS offering, hosted on Azure and leveraging its partnership with Microsoft. Automation Anywhere was also keen to point out the ability to use the platform on-premise or in a private cloud, as deployed at JP Morgan Chase, the client speaking at the event
  • Ecosystem – Automation Anywhere highlighted its strong and growing ecosystem. With Microsoft, for example, the partnership has been operating for over a year and has so far featured the ability to embed Microsoft’s AI tools into bots, plus the above-mentioned Azure partnership. The event featured a demonstration of the integration of Automation Anywhere into Office: a user was able to select and use bots from Excel as a single, joined-up experience
  • Intelligent Automation – in addition to leveraging the ecosystem for its ability to drag and drop third-party AI components, another improvement in A2019 is the integration of the capabilities gained through the Klevops acquisition earlier this year to improve assisted automation, providing greater bot-human collaboration across teams and workflows

The majority of these enhancements are already analyzed in NelsonHall’s profile of Automation Anywhere’s capabilities as part of the Intelligent Automation Platform NEAT assessment.

Using the above enhancements, Automation Anywhere estimates that whereas previously clients required 3 to 6 months to POC, and a further 6 to 24 months to scale, it now takes 1-4 months to POC and 4-12 months to scale.

Absent from the event were enhancements to bot governance procedures, vitally important as access to bot building increases, and to the bot store, for which curation could still be an issue.

While the messaging of the event was ‘Anything Else is Legacy’, there were some points at which the announcement looks unfinished: the Office integration currently extends only to Excel, with the rest of the suite to follow, and the Community version of Automation Anywhere, which is how a large proportion of users dip their toes into automation, is set to be updated to match A2019 later in Q4 2019. Likewise, while the revamped workflow view is much cleaner and easier to use than competitors’, leading to quicker bot development, competing platforms handle complex, branching operations more easily. Therefore, while A2019 can be ideal for organizations looking to have citizen developers build simple bots, organizations looking to automate more complex workflows should include the competing platforms in their shortlists.

NelsonHall's profile on the Automation Anywhere platform can be found here.

The recent NEAT evaluation of Intelligent Automation Platforms can be found here.

]]>
<![CDATA[NelsonHall Launches Industry-First Intelligent Automation Platform Evaluation]]>

 

NelsonHall has just launched an industry-first evaluation of Intelligent Automation (IA) platforms, including platforms from AntWorks, Automation Anywhere, Blue Prism, Datamatics, IPsoft, Jacada, Kofax, Kryon, Redwood, Softomotive, and UiPath.

As RPA and artificial intelligence converge to address more sophisticated use cases, we at NelsonHall feel it is now time for an evaluation of IA platforms on an end-to-end basis and based on the use cases to which IA platforms will typically be applied. Accordingly, NelsonHall has evaluated IA platforms against five use cases:

  • Ability for Business Process Owners to Develop Automations
  • Bot/Human Co-Working SSC Capability
  • Ease of IA Adoption & Scaling
  • End-to-End IA Capability
  • Overall.

Ability for Business Process Owners to Develop Automations – as organizations move to a ‘bot for every worker’, platforms must support business process owners in developing automations, rather than only select individuals within an automation CoE. Capabilities that support business process owners in developing an automation include a strong bot development canvas, a well-populated app/bot store, and process discovery functionality, all in support of speed of implementation.

Bot/Human Co-Working SSC Capability – in addition to traditional unassisted back-office automation and assisted individual automations, bots are increasingly required to provide end-to-end support for large-scale SSC and contact center automation. This increasingly requires bot/human rather than human/bot co-working, with the bot taking the lead in processing SSC transactions, queries and requests. The key capabilities here include conversational intelligence, ability to handle unidentified exceptions, and seamless integration of RPA and machine learning.

End-to-End IA Capability – the ability for a platform to support an automation spanning an end-to-end process, leveraging ML and artificial intelligence, either through native technologies or through partnerships. While many IA implementations remain highly RPA-centric, it is critical for organizations to begin to leverage a wider range of IA technologies if they are to address unstructured document processing and begin to incorporate self-learning in support of exception handling. Key capabilities here include computer vision/NLP, ability to handle unidentified exceptions, and seamless integration of RPA and machine learning in support of accurate document/data capture, reduced error rates, and improved transparency & auditability of operations.

Ease of IA Adoption & Scaling – the ability for organizations to roll out automations at scale. Key criteria here include the ability to leverage the cloud delivery of the IA platform and the strength of the bot orchestration/management platform.

Overall – a composite perspective of the strength of the IA platforms across capabilities, delivery options, and the benefits provided to clients.

No single platform is the most appropriate across all these use cases, and the pattern of capability varies considerably by use case. And this area is ill-understood, even by the vendors operating in this market, with companies that NelsonHall has identified as leaders unknown even to some of their peers. However, the NelsonHall Evaluation & Assessment Tool (NEAT) for IA platforms enables organizations to see the relative strengths and capabilities of platform vendors for all the use cases described above in a series of quadrant charts.

If you are a buy-side organization, you can view these charts, and even generate your own charts based on criteria that are important to you, FREE-OF-CHARGE at NelsonHall Intelligent Automation Platform evaluation.

The full project, including comprehensive profiles of each vendor and platform, is also available from NelsonHall by contacting either Guy Saunders or Simon Rodd.

]]>
<![CDATA[Democratizing RPA through the Connected Entrepreneur Enterprise]]>

 

Following on from the Blue Prism World Conference in London (see separate blog), NelsonHall recently attended the Blue Prism World conference in Orlando. Building on the significant theme around positioning the ‘Connected Entrepreneur Enterprise’, the vendor provided further details on how this links to the ‘democratization’ of RPA through organizations.

In the past, Blue Prism has seen automation projects stall when led from the bottom up (due to an inability to scale and to apply strong governance or best practices from IT) or from the top down (which has issues with buy-in and speed of deployment). Its Connected Entrepreneur Enterprise story aims to overcome these issues by decentralizing automation. So how is Blue Prism enabling this?

Connected Entrepreneur Enterprise

The Connected-RPA components, namely Blue Prism’s connected-RPA platform, Blue Prism Digital Exchange, Blue Prism Skills, and Blue Prism Communities, all aim to facilitate this. In particular, Blue Prism Communities acts as a knowledge-sharing platform through which Blue Prism envisions clients accessing forums for help in building digital workers (software robots), sharing best practices, and (with its new connection into Stack Overflow) collaborating on digital worker development.

Blue Prism Skills helps lighten the knowledge requirements for users beginning digital worker development, with the ability to drag and drop AI components, such as any number of computer vision solutions, into processes.

Decipher, a document processing capability developed by Blue Prism’s R&D lab, features ML that can be integrated into digital workers and can, in turn, have skills such as language detection from Google dropped into the process. The ability to drag and drop these skills continues the work of allowing business users, who know the process best, to quickly and easily build AI into digital workers. Additionally, Decipher introduces human-in-the-loop capability into Blue Prism to assist in cases where the OCR lacks confidence in its result. The beta version of Decipher is set to launch this summer with a focus on invoice processing.

Decipher will also feature in the new cloud-based, mobile-enabled dashboard capabilities: the new dashboard notification area, in addition to providing SLA alerts, provides alerts when queues for Decipher’s human-in-the-loop feature are backing up.

Client example

An example of Blue Prism being used to democratize RPA is marquee client EY. EY, Blue Prism’s fifth largest client, spoke during the conference about its automation journey. During the 4.5-year engagement, EY has deployed 2k digital workers, with 1.3k performing client work and 700 working internally on 500 processes. Through the deployment of the digital workforce, EY has saved 2 million man-hours.

In democratizing RPA, EY federated the automation to the business, while using a centralized governance model and IT pipeline. A benefit of having an IT pipeline was that the automation of processes was not a stop-start development.

When surveying its employees, EY found that the employees who had been involved in the development of RPA had the highest engagement.

Likewise, market surveys performed by Blue Prism with a partner found that, in 87% of cases in the U.S., employees are willing to reskill to work alongside a digital workforce.

Summary

There is further work to be done in democratizing RPA as part of this Connected Entrepreneur Enterprise. Blue Prism is currently looking into upgrading the underlying architecture and is surveying its partners with regard to UI changes; in addition, it is moving aspects of the platform to the cloud, starting with the dashboarding capability. Also, while Blue Prism has university partnerships, these are not heavily marketed and compete with other RPA vendors offering the likes of community editions to encourage learning.

]]>
<![CDATA[AntWorks Positioning BOT Productivity and Verticalization as Key to Intelligent Automation 2.0]]> Last week, AntWorks provided analysts with a first preview of its new product ANTstein SQUARE, to be officially launched on May 3.

AntWorks’ strategy is based on developing full-stack intelligent automation, built for modular consumption, and the company’s focus in 2019 is on:

  • BOT productivity, defined as data harvesting plus intelligent RPA
  • Verticalization.

In particular, AntWorks is trying to dispel the idea that intelligent automation needs to consist of three separate products from three separate vendors covering machine vision/OCR, RPA, and AI in the form of ML/NLP, and to show that AntWorks can offer a single, though modular, automation platform across these areas end-to-end.

Overall, AntWorks positions Intelligent Automation 2.0 as consisting of:

  • Multi-format data ingestion, incorporating both image and text-based object detection and pattern recognition
  • Intelligent data association and contextualization, incorporating data reinforcement, natural language modelling using tokenization, and data classification. One advantage claimed for fractal analysis is that it facilitates the development of context from images such as company logos and not just from textual analysis and enables automatic recognition of differing document types within a single batch of input sheets
  • Smarter RPA, incorporating low code/no code, self-healing, intelligent exception handling, and dynamic digital workforce management.

Cognitive Machine Reading (CMR) Remains Key to Major Deals

AntWorks’ latest release, ANTstein SQUARE, is aimed at delivering BOT productivity by combining intelligent data harvesting with cognitive responsiveness and intelligent real-time digital workforce management.

ANTstein data harvesting covers:

  • Machine vision, including, to name a modest sub-set, fractal machine learning, fractal image classifier, format converter, knowledge mapper, document classifier, business rules engine, workflow
  • Pre-processing image inspector, where AntWorks demonstrated the ability of its pre-processor to sharpen text and images, invert white text on a black background, remove grey shapes, and adjust skewed and rotated inputs, typically giving an 8-12% uplift (see the sketch after this list)
  • Natural language modelling.
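AntWorks has not described the pre-processor’s implementation. The operations named above can be illustrated generically with Pillow; the file names and the skew angle are assumptions, and in practice the angle would be estimated rather than hard-coded.

```python
# Generic illustration of pre-processing steps of the kind described above
# (sharpen, invert white-on-black text, deskew) -- not AntWorks' pipeline.
from PIL import Image, ImageFilter, ImageOps

page = Image.open("scanned_page.png").convert("L")    # grayscale scan

# Invert if the page is predominantly dark (i.e. white text on black background)
histogram = page.histogram()
if sum(histogram[:128]) > sum(histogram[128:]):
    page = ImageOps.invert(page)

page = page.filter(ImageFilter.SHARPEN)               # sharpen text edges
page = page.rotate(-2.0, expand=True, fillcolor=255)  # correct an assumed 2-degree skew

page.save("preprocessed_page.png")
```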

Clearly one of the major issues in the industry over the last few years has been the difficulty organizations have experienced in introducing OCR to supplement their initial RPA implementations in support of handling unstructured data.

Here, AntWorks has for some time been positioning its “cognitive machine reading” technology strongly against traditional OCR (and traditional OCR plus neural network-based machine learning), stressing its “superior” capabilities using pattern-based Content-based Object Retrieval (CBOR) to “lift and associate all the content” and achieve high accuracy of captured content, higher processing speeds, and the ability to train in production. AntWorks also takes a wide definition of unstructured data, covering not just typed text but also, for example, handwritten documents, signatures, and notary stamps.

AntWorks' Cognitive Machine Reading encompasses multi-format data ingestion, fractal network driven learning for natural language understanding using combinations of supervised learning, deep learning, and adaptive learning, and accelerators e.g. for input of data into SAP.

Accuracy has so far been found to be typically around 75% for enterprise “back-office” processes, but the level of accuracy depends on the nature of the data, with fractal technology most appropriate where past data strongly correlates with future data and data variances are relatively modest. Fractal techniques are regarded by AntWorks as being totally inappropriate in use cases where the data has a high variance, e.g. crack detection on an aircraft or analysis of mining data. In such cases, where access to neural networks is required, AntWorks plans to open up APIs to third-party providers, for example AWS.

Several examples of the use of AntWorks’ CMR were provided. In one of these, AntWorks’ CMR is used in support of sanction screening within trade finance for an Australian bank to identify the names of the parties involved and look for banned entities. The bank estimates that 89% of entities could be identified with a high degree of confidence using CMR with 11% having to be handled manually. This activity was previously handled by 50 FTEs.

Fractal analysis also makes its own contribution to one of ANTstein’s USPs: ease of use. The business user uses “document designer” to train ANTstein on a batch of documents for each document type; fractal analysis requires fewer cases than neural networks, and its datasets also inherently have lower memory requirements since the system uses data localization and does not extract unnecessary material.

RPA 2.0 “QueenBOTs” Offer “Bot Productivity” through Cognitive Responsiveness, Intelligent Digital Automation, and Multi-Tenancy

AntWorks is positioning to compete against the established RPA vendors with a combination of intelligent data harvesting, cognitive bots, and intelligent real-time digital workforce management. In particular, AntWorks is looking to differentiate at each stage of the RPA lifecycle, encompassing:

  • Design, process listener and discoverer
  • Development, aiming to move towards low code business user empowerment
  • Operation, including self-learning and self-healing in terms of exception handling to become more adaptive to the environment
  • Maintenance, incorporating code standardization into pre-built components
  • Management, based on “central intelligent digital workforce management”.

Beyond CMR, much of this functionality is delivered by QueenBOTs. Once the data has been harvested it is orchestrated by the QueenBOT, with each QueenBOT able to orchestrate up to 50 individual RPA bots referred to as AntBOTs.

The QueenBOT incorporates:

  • Cognitive responsiveness
  • Intelligent digital automation
  • Multi-tenancy.

“Cognitive responsiveness” is the ability of the software to adjust automatically to unknown exceptions in the bot environment, and AntWorks demonstrated the ability of ANTstein SQUARE to adjust in real-time to situations where non-critical data is missing or the portal layout has changed. In addition, where a bot does fail, ANTstein aims to support diagnosis on a more granular basis by logging each intermittent step in a process and providing a screenshot to show where the process failed.

AntWorks is aiming to put use case development into the hands of the business user rather than data scientists. For example, ANTstein doesn’t require the data science expertise for model selection typically needed when using neural network-based technologies; it does its own model selection.

AntWorks also stressed ANTstein’s ease of use through pre-built components and its ability to generate its own code via the recorder facility; one client speaking at the event is aiming to handle simple use cases in-house and to outsource only the building of complex use cases.

AntWorks also makes a major play on reducing the cost of infrastructure compared to traditional RPA implementations. In particular, ANTstein addresses the issue of servers or desktops being allocated to, or controlled by, an individual bot by incorporating dynamic scheduling of bots based on SLAs rather than timeslots, and by enabling multi-tenant occupancy so that a user can work on a desktop while it is simultaneously running an AntBOT, or several AntBOTs can run simultaneously on the same desktop or server.
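AntWorks has not published how QueenBOT’s scheduler works. As a generic sketch, SLA-based dispatching reduces to assigning the next available bot to whichever work queue is closest to breaching its SLA; the queues and figures below are invented.

```python
# Generic sketch of SLA-driven dispatching -- not QueenBOT's implementation.
# The next available bot is assigned to whichever queue is closest to
# breaching its SLA, rather than to a fixed timeslot.
import heapq

queues = [
    # (minutes remaining until SLA breach, queue name, items waiting)
    (45, "invoice processing", 120),
    (10, "KYC checks", 30),
    (90, "address changes", 15),
]
heapq.heapify(queues)  # min-heap keyed on time-to-breach

def assign_next_bot():
    minutes_left, name, backlog = heapq.heappop(queues)
    print(f"Bot assigned to '{name}' ({backlog} items, SLA breach in {minutes_left} min)")
    heapq.heappush(queues, (minutes_left, name, backlog - 1))  # return queue with updated backlog

assign_next_bot()  # -> 'KYC checks', the most urgent queue
assign_next_bot()
```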

Building Out Vertical Point Solutions

A number of the AntWorks founders came from a BPO background, which gave them a focus on automating middle- and back-office processes and the recognition that bringing domain and technology together is critical to process transformation and to building a significant business case.

Accordingly, verticalization is a major theme for AntWorks in 2019. In addition to support for a number of horizontal solutions, AntWorks will be focusing on building point solutions in nine verticals in 2019, namely:

  • Banking: trade finance, retail banking account maintenance, and anti-money laundering
  • Mortgage (likely to be the first area targeted): new application processing, title search, and legal description
  • Insurance: new account set up, policy maintenance, claims handling, and KYC
  • Healthcare & life sciences: BOB reader, PRM chat, payment posting, and eligibility
  • Transportation & logistics: examination evaluation
  • Retail & CPG: no currently defined point solutions
  • Telecom: customer account maintenance
  • Media & entertainment: no currently defined point solutions
  • Technology & consulting: no currently defined point solutions.

The aim is to build point solutions (initially in conjunction with clients and partners) that will be 80% ready for consumption with a further 20% of effort required to train the bot/point solution on the individual company’s data.

Building a Partner Ecosystem for RPA 2.0

The company claims to have missed the RPA 1.0 bus by design (it commenced development of “full-stack” ANTstein in 2017) and is now trying to get out the message that the next generation of intelligent automation requires more than OCR combined with RPA to automate unstructured data-heavy, industry-specific processes.

The company is not targeting companies with small numbers of bot implementations but is ideally seeking dozens of clients, each with the potential to build into $10m relationships. Accordingly, the bulk of the company’s revenues currently comes from, and is likely to continue to come from, CMR-centric sales with major enterprises, either direct or through relationships with major consultancies.

Nonetheless, AntWorks is essentially targeting three market segments:

  • Major enterprises with CMR-centric deals
  • RPA 2.0, through channels
  • Point solutions.

In the case of major enterprises, CMR is typically pulling AntWorks’ RPA products through to support the same use cases.

AntWorks is trying to dissociate itself from RPA 1.0, strongly positioning against the competition on the basis of “full stack”, and is slightly schizophrenic about whether to utilize a partner ecosystem which is already tied to the mainstream RPA products. Nonetheless, the company is in the early stages of building a partner ecosystem for its RPA product based on:

  • Referral partners
  • Authorized resellers
  • Managed Services Program, where partners such as EXL build their own solutions incorporating AntWorks
  • Technology Alliance partners
  • Authorized training partners
  • University partners, to develop a critical mass of entry-level automation personnel with experience in AntWorks and Intelligent Automation in general.

Great Unstructured Data Accuracy but Needs to Continue to Enhance Ease of Use

A number of AntWorks’ clients presented at the event and it is clear that they perceive ANTstein to deliver superior capture and classification of unstructured data. In particular, clients liked the product’s:

  • Superior natural language-based classification using limited datasets
  • Ability to use codeless recorders
  • Ability to deliver greater than 70% accuracy at PoC stage

However, despite some of the product’s advantages in terms of ease of use, clients would like further fine-tuning of the product in areas such as:

  • The CMR UI/UX, which is not particularly user-friendly; the very long list of options is hard for business users to understand, and they require a shorter, more structured UI
  • Improved ease of workflow management, including the ability to connect to popular workflows.

So, overall, while users should not yet consider mass replacement of their existing RPAs, particularly where these are being used for simple rule-based process joins and data movement, ANTstein SQUARE is well worth evaluation by major organizations that have high-volume industry-specific or back-office processes involving multiple types of unstructured documents in text or handwritten form and where achieving accuracy of 75%+ will have a major impact on business outcomes. Here, and in the industry solutions being developed by AntWorks, it probably makes sense to use the full-stack of ANTstein utilizing both CMR and RPA functionality. In addition, CMR could be used in standalone form to facilitate extending an existing RPA-enabled process to handle large volumes of unstructured text.

Secondly, major organizations that have an outstanding major RPA roll-out to conduct at scale, are becoming frustrated at their level of bot productivity, and are prepared to introduce a new RPA technology should consider evaluating AntWorks' QueenBOT functionality.

The Challenge of Differentiating from RPA 1.0

If it is to take advantage of its current functionality, AntWorks urgently needs to differentiate its offerings from those of the established RPA software vendors and its founders are clearly unhappy with the company’s past positioning on the majority of analyst quadrants. The company aimed to achieve a turnaround of the analyst mindset by holding a relatively intimate event with a high level of interaction in the setting of the Maldives. No complaints there!

The company is also using “shapes” rather than numbers to designate succeeding versions of its software. This is quirky and could become incomprehensible downstream.

However, these marketing actions are probably insufficient in themselves. To complement the merits of its software, the company needs to improve its messaging to its prospects and channel partners in a number of ways:

  • Firstly, the company’s tagline “reimagining, rethink, recreate” shows the founders’ backgrounds and is arguably more suitable for a services company than for a product company
  • Secondly, establishing an association with Intelligent Automation 2.0 and RPA 2.0 is probably too incremental to attract serious attention.

Here the company needs to think big and establish a new paradigm to signal a significant move beyond, and differentiation from, traditional RPA.

]]>
<![CDATA[Get Ready for Quantum Computing: 5 Steps to Take in 2019]]>

 

IBM recently announced the first ‘commercial-ready’ quantum computer, the 20-qubit Q System One. The date is certainly worth recording in the annals of computing history. But, in much the same way that mainframes, micros, and PCs all began with an ‘iron launch’ and then required a long pragmatic use case maturity curve, so too will this initial offering from IBM be the first step on a long evolution path. With so much conjecture and contemplation happening in the industry surrounding this announcement, let’s unpack what IBM’s announcement means – and how organizations should be reacting.

First, although Q System One is being billed as commercial-ready, that designation means that the product is ready for usage on a traditional cloud computing basis, not necessarily that it is ready to contribute meaningfully to solving business problems (although the device will certainly mature quickly in both capability and speed). What Q System One does offer is a keystone for the industry to begin working with quantum technology in much the same way that any other cloud utility supercomputing devices are available, and a testbed for beginning to explore and develop quantum code and quantum computing strategies. As such, while Q System One may not outperform traditional cloud computing resources today, its successors will likely do so in short order – perhaps as soon as 2020.

As I noted in my blockchain predictions blog for 2019, quantum computing has long been the shadow over blockchain adoption, owing to the concern that quantum computing will make blockchain’s security aspect obsolete. That watershed lies years in our future, if indeed at all, and it is important to note that quantum computing can as easily be tasked to enhance cryptographic strength as it can to break it down. As a result, expect that the impact of quantum computing on blockchain will net to a zero-sum game, with quantum capabilities powering ever-more evolved cryptographic standards in much the same way that the cybersecurity arms race has proceeded to date.

With this in mind, what should organizations have on their quantum readiness roadmaps? The short answer is that quantum readiness is more the beginning of many long-term projects rather than the consummation of any short-term ones, so quantum is more a component of IT strategy than near-term tactical change. Here are five recommendations I’m making for beginning to ready your organization for quantum computing during 2019.

Migrate to SHA-3 – and build an agile cybersecurity faculty

There is no finish line for cybersecurity, especially with quantum capabilities on the horizon, but when I speak with enterprise organizations on the subject, I recommend that a combination of NIST and RSA/ECC technologies approximates to something that will be quantum-proof for the foreseeable future. Migration off SHA-2 is a strong prescription regardless, given the flaws that platform shared with its predecessor. But perhaps more important than the construction of a cryptographic standard to meet quantum’s capabilities is the design of an agile cybersecurity faculty that can shorten the time to transition from one standard to the next. Quantum computing will produce overnight gains in both security and exposure as the technology evolves; being ready to take swift counteraction will be key in the next decade of information technology.
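As a small illustration of both points, the sketch below (in Python, whose standard hashlib has included SHA-3 since version 3.6) hashes a payload with SHA-3 while resolving the algorithm from a single configuration value, so a later transition is a configuration change rather than a code rewrite. The constant name and payload are placeholders.

```python
# Crypto-agility sketch: hash with SHA-3 today, but resolve the algorithm from
# one configuration point so it can be swapped when standards move on.
import hashlib

HASH_ALGORITHM = "sha3_256"  # was "sha256"; change here when migrating again

def digest(payload: bytes, algorithm: str = HASH_ALGORITHM) -> str:
    """Return the hex digest of payload using the configured algorithm."""
    return hashlib.new(algorithm, payload).hexdigest()

document = b"example payload to be integrity-checked"
print("SHA-2 (legacy):", hashlib.sha256(document).hexdigest())
print("SHA-3 (current):", digest(document))
```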

Begin asking entirely new questions in a Quantum CoE

Traditional computing technology has taught us clear phase lines of the possible and impossible with respect to solving business problems. Quantum, over the course of the next decade, will completely redraw those lines, with more capability coming online with each passing year (and, eventually, quarter). Tasks like modeling new supply chain algorithms, new modes of product delivery, even new projections of complex M&A activity in a sector over a long forecast span will become normal requests by 2030.

Make sure data hygiene and MDM protocols are quantum-ready

Already, there have been multiple technologies – Big Data, automation, and blockchain are just three – that have strongly suggested the need to ensure that organizations are running on clean, reliable data.

As business task flow accelerates, and more cognitive automation and smart contracts touch and interact with information as the first actor in the process chain, it is increasingly vital to ensure that these technologies are handling quality data. Quantum may be the last such opportunity to bring the car into the pit for adjustments before racing at full speed commences in sectors like retail, telecom, technology, and logistics. This is a to-do that benefits a broad array of technological deployment projects, so while it may not be relevant for quantum computing until the next decade begins, the benefits will begin to accrue from these efforts today.

Aim at a converged point involving data, analytics, automation & AI

Quantum computing is often discussed in the context of moonshot computing problems – and, indeed, the technology is currently best deployed against problems outside the realm of capability for legacy iron. But quantum will also power the move from offline or nearline processing to ‘now’ processing, so tasks that involve putting insights from Big Data environments to work in real-time will also fall within reach over the course of the next decade. What you may find from a combination of this action and the two prior is that some of the questions and projects you had slated for a quantum computing environment may actually be addressable today through a combination of cognitive technologies.

Reach out to partners, suppliers & customers to build a holistic quantum perspective

Legacy enterprise computing grew up as a ‘four-walls’ concept in part because of the complexity of tackling large, complex business optimization problems that involved moving parts outside the organization. Quantum does not automatically erase those boundary lines from an integration perspective, but the next decade will see more than enough computing power come online to optimize long, global supply chain performance challenges and cross-border regulatory and financing networks. Again, efforts in this area can also benefit organizational initiatives today; projects in IoT and blockchain, in particular, can achieve greater benefits when solutions are designed with partners, suppliers, regulators, and financiers involved up front.

Conclusion

Quantum computing is not going to change the landscape of enterprise IT tomorrow, or next month, or even next year. But when it does effect that change, organizations should expect its new capabilities to be game-changers – especially for those firms that planned well in advance to take advantage of quantum computing’s immense power.

This short checklist of quantum-readiness tasks can provide a framework for pre-quantum technology projects, too – making them an ideal roster of 2019 ‘to-dos’ for enterprise organizations.

]]>
<![CDATA[Blue Prism Launches "Blue Prism Digital Exchange" to Accelerate RPA Value Realization]]>

Automation marketplaces are swiftly becoming a standard offering for leading RPA and intelligent automation providers, and with good cause. Automation is coming of age in the era of the ‘appified’ technology space, one in which tech buyers increasingly expect best practices and new capabilities to be offered on an as-needed, storefront basis. Within the space of a single year, such marketplaces are well on their way to expected-offering status for top providers. Blue Prism’s entry into this space, the Blue Prism Digital Exchange (Blue Prism DX), is a compelling environment built on the principles of “value delivery and organization and curation of automation content by skill”.

The first of these – value delivery – is aligned with Blue Prism’s overall message of ‘democratization of technology,’ a strategy that prioritizes access to automation technology at the operational level. Placing technologies like machine learning, OCR, and analytics directly into the hands of the business can accelerate the speed of automation projects to deployment and fiscal return. Blue Prism is also providing Blue Prism DX contributors with multiple informational content slots on their content upload pages to offer prospective content users video and text guidance on usage and deployment. The Forum area of Blue Prism DX also offers users peer-to-peer content sharing with respect to best practices in deploying Exchange capabilities and automations.

The second – organization by skill – categorizes the Blue Prism DX automation marketplace in line with the company’s existing training and automation thinking, along the lines of six discrete skills: knowledge and insight, visual perception, learning, planning and sequencing, problem-solving, and collaboration. By organizing Blue Prism DX along these same lines, Blue Prism is encouraging users to begin thinking of automation problems as collections of tasks in these areas and – Blue Prism hopes – to curate their own capabilities using these same categories.

Blue Prism DX looks similar to competing automation app stores, and perhaps that’s a sound decision given that many organizations have begun utilizing two or more automation vendors to address increasingly diverse challenges. The store does offer considerable detail on component authorship, supported Blue Prism releases (as not all will immediately be 6.4-capable), and dependencies.

Where Blue Prism has worked to differentiate its offering is very much ‘under the hood.’ The company has done considerable work in ‘wrapping’ third-party capabilities – like Microsoft Vision – in interface components that ease the process of integrating those capabilities directly into automation script. In the demonstration provided, the Blue Prism-customized Microsoft Vision block made the process of image analysis straightforward, with traditional UI components used to make the process of managing a complex task simple for the user.

At launch, Blue Prism DX contains a broad variety of assets, ranging from comprehensive task automation code (Avanade offers an entire intelligent help desk application that includes Blue Prism and Microsoft Cognitive Services software in a single package) to particulate technology components, such as connectors for ABBYY’s FlexiCapture and Cogito’s text analytics offering. Blue Prism has cast a wide net for Blue Prism DX content and intends to curate solutions of both types – and points between – as the content library expands. In doing so, Blue Prism offers filtering options for asset type (VBO/WebAPI/Solution) and industry relevance to assist users in finding the type of content best suited to their needs. 

Blue Prism has no near-term plans to monetize Blue Prism DX, but the environment is built on a platform capable of monetization, and contributors are free to charge for their content. At launch, Blue Prism expected the content on Blue Prism DX to be split approximately 60/40 between internally developed content and partner content, but sees that mix evolving as more partners become involved in contributing automation capabilities and code.

On the roadmap for Blue Prism’s Digital Exchange offering are a number of improvements, including new tools for simplifying the collaboration process between human and digital workers, interactive user communities, and expansion of the ecosystem with a broader array of technological capability contributors. Blue Prism DX aims to be the first stop for clients seeking to complete existing automation projects, launch new ones, and keep pace with best practices in the automation technology sector.

]]>
<![CDATA[UiPath’s Go! Automation Marketplace Aims to Accelerate RPA Adoption in Enterprise Clients]]>

 

UiPath held its 2018 UiPathForward event October 3-4, 2018, in Miami, Florida. The focus of proceedings was the October release of the company’s software and a related trio of major announcements: a new automation marketplace, new investment in partner technology and marketing, and a new academic alliance program.

The analyst session included a visit from CEO Daniel Dines and an update on the company’s performance and roadmap. UiPath has grown from $1m ARR to $100m ARR in just 21 months, is trending toward $140m ARR for 2018, and is on course for Dines’ forecast of $200m ARR in early 2019. UiPath is adding nearly six enterprise clients a day and has begun staking a public claim – not without defensible merit – to being the fastest-growing enterprise software company in history.

During the event, UiPath announced a new academic alliance program, consisting of three sub-programs – one aimed at training higher education students for careers in automation, another providing educators with resources and examples to utilize in the classroom setting, and the third focused on educating youth in elementary and secondary educational settings. UiPath has a stated goal of partnering with ~1k schools and training ~1m students on its RPA platform.

The centerpiece of the event, however, was Release 2018.3 (Dragonfly), which was built around the launch of UiPath Go!, the company’s new online automation marketplace. It would be easy to characterize Go! as a direct response to Automation Anywhere’s Bot Store, but that would be overly simplistic. Where the Bot Store currently skews more toward apps as automation task solutions, Go! is an app store for particulate task components – so while the former might offer a complete end-to-end document processing bot, Go! would instead offer a set of smaller, more atomic components like signature verification, invoice number identification, address lookup and correction, etc.

The specific goal of Go! is to accelerate adoption of RPA in enterprise-scale clients, and the component focus of the offering is intended to fill in gaps in processes to allow them to be more fully automated. The example presented was the aforementioned signature verification: given that a human might take two seconds to verify a signature, is it really worth automating this phase of the process? Not in and of itself, but failing to do so creates an attended automation out of an unattended one, requiring human input to complete. With Go!, companies can automate the large, obvious task phases from their existing automation component libraries, and then either build new components or download Go! components to complete the task automation in toto.

Dragonfly is designed to integrate Go! components into the traditional UiPath development environment, providing a means for automation architects to combine self-designed automation components with downloaded third-party components. Given the increased complexity of managing project automation software dependencies for automations built from both self-designed and downloaded components, UiPath has also improved the dependency and library management tools in 2018.3. For example, automation tasks that reuse components already developed can include libraries of such components stored centrally, reducing the amount of rework necessary for new projects.

In addition, the new dependencies management toolset allows automation designers to point projects at specific versions of automations and task components, instead of defaulting to the most recent, for advanced debugging purposes. Dragonfly also moves UiPath along the Citrix certification roadmap, as this release is designated Ready for Citrix, another step toward becoming Certified for Citrix. Finally, Dragonfly also adds new capabilities in VDI management, new localization capabilities in multiple languages, and UI improvements in the Studio environment.

In the interest of spurring development of Go! components, UiPath has designated $20m for investment in its partners during 2019. The investment is split between two funds, the UiPath Venture Innovation Fund and the UiPath Partner Acceleration Fund. The first of these is aimed directly at the Go! marketplace by providing incentives for developers to build UiPath Go! components. In at least one instance, UiPath has lent developers directly to an ISV along with funding to support such development. UiPath expects that these investment dollars will enable the Go! initiative to populate the store faster than a more passive approach of waiting for developers to share their automation code.

The second fund is a more traditional channel support fund, aimed at encouraging partners to develop on the UiPath platform and support joint marketing and sales efforts. The timing of this latter fund’s rollout, on the heels of UiPath’s deal registration/marketing and technical content portal announcement, demonstrates the company’s commitment to improving channel performance. Partners are key to UiPath’s ability to sustain its ongoing growth rate, and the strength of its partner sales channels will be vital in securing the company’s next round of financing. (UiPath's split of partner/direct deployments is approaching 50/50, with an organizational goal of reaching 100% partner deployments by 2020.) Accordingly, it is clear that the company’s leadership team is now placing a strong and increasing emphasis on channel management as a driver of continued growth.

]]>
<![CDATA[IPsoft’s Challenging Vision for Cognitive Automation]]>

 

I recently attended IPsoft’s Digital Workforce Summit in New York City, an intriguing event that in some ways represented a microcosm of the challenges clients are experiencing in moving from RPA to cognitive automation.

The AI challenge

Chetan Dube loomed large over proceedings. IPsoft’s president and CEO was onstage more than is common at events of this type, chairing several fireside chats himself in addition to his own technology keynote, and participating (with sleeves rolled up) at the analyst day that followed. He brought a clear challenge to the stage, while at the same time conveying the complexity and capability of IPsoft’s flagship cognitive products, Amelia and 1DESK, and making them understandable to the audience, in part by framing them in terms of commercial value and ROI.

RPA vendors have a simpler form of this challenge, but both robotic process automation and cognitive automation vendors have a hill to climb in gaining clients’ trust in the underlying technology and reassuring service buyers that automation will be both a net reducer of cost and a net creator of jobs (rather than a net displacer of them).

From a technological perspective, RPA sounds from the stage (and sells) much more like enterprise software than neuroscience or linguistics, so the overall pitch can be situated squarely in the wheelhouse of IT buyers. The product does what it says on the tin, and the cavalcade of success stories that appear on event stages is designed to put clients’ concerns to rest. To be sure, RPA is by no means easy to implement, nor is it yet a mature offering in toto, but the bulk of the technological work to achieve a basic business result has been done. And overall, most vendors are working on incremental and iterative improvements to their core technology at this time.

AI differs in that it is still at the start of the journey towards robust, reliable customer-facing solutions. While Amelia is compelling technology (and is performing competently in a variety of settings across multiple industries), the version that IPsoft fields in 2025 will likely make today’s version seem almost like ELIZA by comparison, if Dube’s roadmap comes to fruition. He was keen to stress that Amelia is about much more than just software development, and he spent a lot of time explaining aspects of the core technology and how it was derived from cognitive theory. The underlying message, broadly supported by the other presenters at the event, was clearly one of power through simplicity.

IPsoft’s vision

The messaging statements coming from the stage during the event portrayed a diverse and wide-ranging vision for the future of Amelia. Dube sees Amelia as an end-to-end automation framework, while Chief Cognitive Officer Edwin van Bommel sees Amelia as a UI component able to escape the bounds of the chatbox and guide users through web and mobile content and actions. Chief Marketing Officer Anurag Harsh focused on AI through the lens of the business, and van Bommel presented a mature model for measuring the business ROI of AI.

Digging deeper, some of what Dube had to say was best read metaphorically. At one point he announced that by 2025 we will be unable to pass an employee in the hallway and know if he or she is human or digital. That comment elicited some degree of social media protest. But consider that what he was really saying is that most interaction in an enterprise today is performed electronically – in that case, ‘the hallways’ can be read as a metaphor for ‘day-to-day interaction’.

The question discussed by clients, prospects, and analysts was whether Dube was conveying a visionary roadmap or fueling hype in an often overhyped sector. Listening to his words and their context carefully, I tend towards the former. Any enterprise technology purchase demands three forms of reassurance from the vendor community:

  • That the product is commercially ready today and can take up the load it is promising to address
  • That the company has a long-term roadmap to ensure that a client’s investment stays relevant, and the product is not overtaken by the competition in terms of capacity and innovation
  • And perhaps most importantly, that the roadmap is portrayed realistically and not in an overstated fashion that might cause clients to leave in favor of competitors’ offerings.

I took away from Digital Workforce Summit that Dube was underscoring the first and second of these points, and doing so through transparency of operation and vision.

There are only two means of conveying the idea that you sell a complex product which works simply from the user perspective – you either portray it as a black box and ask that clients trust your brand promise, or you open the box and let clients see how complex the work really is. IPsoft opted for the latter, showing the product’s operation at multiple levels in live demonstrations. Time and again, Dube reminded the audience that it is unnecessary to grasp evolved scientific principles in order to take advantage of technologies that use those principles – so light switches work, in Dube’s example, without the user needing to grasp Faraday’s principles of induction. It still benefits all parties involved to see the complexity and grasp the degree to which IPsoft has worked to make that complexity accessible and actionable.

Conclusion

The challenge, of course, is that clients attend events of this kind to assess solutions. The majority of attendees at Digital Workforce Summit were there to learn whether IPsoft’s Amelia, in its latest form, is up-to-speed to manage customer interactions, and will continue to evolve apace to become a more complete conversational technology solution and fulfill the company’s ROI promises.

I came away with the sense that both are true. Now it is up to the firm’s technology group to translate Dube’s sweeping vision into fiscally rewarding operational reality for clients.

]]>
<![CDATA[Infosys Announces Blockchain-Powered Nia Provenance to Manage Complex Supply Chains]]>

 

EdgeVerve, an Infosys Product subsidiary, this week announced a new blockchain-powered application for supply chain management as part of its product line. Nia Provenance is designed to address the challenges faced by organizations managing complex supply chain networks with multiple IT stacks engaged across multiple stakeholders. Here I take a quick look at the new application and its potential impact.

Supply chain traceability, transparency & trust

Nia Provenance is designed to provide traceability of products from source of origin to point of purchase with full transparency at every point along the supply chain. The product establishes trust through the utilization of a version of Bitcore, the blockchain architecture used by Bitcoin. While this can be a relatively simple task in agribusiness and other supply environments in which a product involves only processing as it moves through the supply chain, environments such as consumer electronics or medical devices are much more complex, involving integration and assembly of multiple components along the way. The ability to isolate a specific component and trace it to its source of origin, through phases of value addition timestamped on a blockchain ledger, is invaluable in case of recall or consumer danger.
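
To make the component-tracing idea concrete, here is a minimal Python sketch of walking timestamped ledger entries back from an assembled product to a component’s source of origin. The entry fields, stage names, and trace_to_origin helper are illustrative assumptions for this post, not Nia Provenance’s actual data model or API.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class LedgerEntry:
        component_id: str
        parent_id: Optional[str]   # upstream component this one was built from, if any
        stage: str                 # e.g. "sourced", "inspected", "assembled"
        actor: str                 # supplier, roaster, inspector, manufacturer...
        timestamp: str             # in practice, taken from the block timestamp

    def trace_to_origin(entries: List[LedgerEntry], component_id: str) -> List[LedgerEntry]:
        """Walk a component's ledger history back to its source of origin."""
        by_component = {}
        for e in entries:
            by_component.setdefault(e.component_id, []).append(e)
        chain: List[LedgerEntry] = []
        current: Optional[str] = component_id
        while current is not None:
            history = sorted(by_component.get(current, []), key=lambda e: e.timestamp)
            chain = history + chain                      # prepend this component's history
            current = history[0].parent_id if history else None
        return chain

    # Example: trace an assembled board back through one of its components to the original supplier
    ledger = [
        LedgerEntry("cap-123", None, "sourced", "SupplierA", "2018-01-05T09:00Z"),
        LedgerEntry("cap-123", None, "inspected", "QALab", "2018-01-07T14:00Z"),
        LedgerEntry("board-77", "cap-123", "assembled", "PlantB", "2018-02-01T10:00Z"),
    ]
    for step in trace_to_origin(ledger, "board-77"):
        print(step.timestamp, step.stage, step.component_id, step.actor)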

Transparency in Nia Provenance is provided through proof of process as the product or commodity moves through the system – so attributes that must be agreed on at specific phases of the supply chain, such as conflict-free or locally-sourced, can be seen in the system as they are accumulated. Similarly, regulatory inspections and certifications are more easily tracked and audited through a blockchain solution like Nia Provenance.

Finally, trust is gained in a system with a combination of data immutability, equality in network participation as a result of decentralization of the overall SCM ledger, and cryptographic information security. Over time, the benefits of a blockchain SCM environment accrue both to the organizational bottom line, in the form of cost savings, and to the organization’s brand as a function of increased consumer trust in the brand promise.

Agribusiness client case

As one example of how Nia Provenance is being leveraged in the real world, a global agribusiness firm undertook a proof of concept for its coffee sourcing division in Indonesia to track the journey of coffee from the growing site, through the roasting plant, the blend manufacturer, the quality control operation, the logistics providers, and on to the importer. This enabled the trader to provide trusted accreditation and certification information to the importer for properties such as organic or fair trade status, or that the coffee was grown using sustainable agriculture standards.

Providing strategic blockchain reach

Nia Provenance provides Infosys with three important sources of strategic blockchain ‘reach’ in an increasingly competitive market, because:

  • It is platform-agnostic and purpose-built to dock with multiple blockchain architectures. A supply chain solution that relies too heavily on the specific capabilities of one common blockchain architecture or another – for example, Ethereum or HyperLedger – would encounter difficulty working with other upstream or downstream architectures. By keeping the DLT technology in an abstraction layer, Nia Provenance eases the process of incorporating different blockchain architectures in a complex SCM task environment
  • It is designed to benefit multiple supply chain stakeholders, not just the client. Blockchain adoption becomes more appealing to upstream and downstream stakeholders, as well as horizontal entities like banks, insurers and regulators, when the ecosystem is built with clear benefits for them as well as the organizing entity. Nia Provenance is designed from the ground up with a mindset inclusive of suppliers, inspectors, insurers, shippers, traders, manufacturers, banks, distributors, and end customers
  • It is designed to span multiple industries. Although the platform has its origins in agribusiness, Nia Provenance looks to be up to the task of SCM applications in manufacturing, consumer goods/FMCG, food and beverage, and specialized applications such as cold-chain pharmaceuticals.

Summary

Supply chain provenance is a core application for blockchain, and one that we expect to be a clear value delivery vehicle for blockchain technology through 2025. The combination of – as Infosys puts it – traceability, transparency, and trust that blockchain provides is a compelling proposition. Nia Provenance offers a solution across a broad variety of industry applications for organizations seeking lower cost and greater security in their supply chain operations.

]]>
<![CDATA[The Advantages of Building a Bespoke Blockchain Platform]]>

 

For all the discussion in the blockchain solution industry around platform selection (are they choosing Fabric or Sawtooth? Quorum or Corda?), you’d be forgiven for thinking that every provider’s first stop is the open-source infrastructure shelf. But the reality is that blockchain is more a concept than a fixed architecture, and the platforms mentioned do not encompass the totality of use case needs for solution developers. As a result, some solution developers have elected to start with a blank sheet of paper and build blockchain solutions from the ground up.

One such company is Symbiont, which started down this road much earlier than most. Faced with the task of building a smart contracts platform for the BFSI industry, the company examined what was available in prebuilt blockchain platform infrastructure and did not see its solution requirements represented in those offerings – so it built its own. Symbiont’s concerns centered on the two areas of scalability and security, and for the firm’s target accounts in capital markets and mortgages, those were critical issues.

The company addressed these concerns with Symbiont Assembly, the company’s proprietary distributed ledger technology. Assembly was designed to address three specific demands of high-volume transactional processes in the financial services sector: fault tolerance, volume management, and security.

Supporting fault tolerance

Assembly addresses the first of these through the application of a design called Byzantine Fault Tolerance (BFT). Where some blockchain platforms allow only for node failure within a distributed ledger environment, platforms using BFT broaden that definition to include the possibility of a node acting maliciously, and can control for actions taken by these nodes as well. The Symbiont implementation of BFT is based on the BFT-SMaRt protocol.
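
For context, BFT protocols in the BFT-SMaRt family tolerate up to f faulty or malicious replicas out of n = 3f + 1, with 2f + 1 replicas required to agree at each protocol phase. The sketch below illustrates only that standard arithmetic; it is not Symbiont’s or BFT-SMaRt’s code.

    def replicas_required(max_faulty: int) -> int:
        """Minimum replica count needed to tolerate `max_faulty` Byzantine nodes (n = 3f + 1)."""
        return 3 * max_faulty + 1

    def quorum_size(n_replicas: int) -> int:
        """Replicas that must agree at each protocol phase (2f + 1 when n = 3f + 1)."""
        max_faulty = (n_replicas - 1) // 3
        return 2 * max_faulty + 1

    for f in range(1, 4):
        n = replicas_required(f)
        print(f"tolerate {f} faulty replica(s): {n} replicas, agreement quorum of {quorum_size(n)}")
    # tolerate 1 faulty replica(s): 4 replicas, agreement quorum of 3
    # tolerate 2 faulty replica(s): 7 replicas, agreement quorum of 5
    # tolerate 3 faulty replica(s): 10 replicas, agreement quorum of 7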

Volume management

In addressing the volume demands of financial services processing, the choice of the BFT-SMaRt protocol was again important, as it enables Assembly to consistently reach throughput in the range of ~80,000 transactions per second.

This has two specific benefits, one obvious and one less so. First, it means that Assembly can manage the very high-volume transaction pace of applications in specialized financial trading markets without scale concerns. Secondly, it means that in lower-volume environments, the extra ‘headroom’ that BFT-SMaRt affords Assembly can be used to store related data on the ledger without the need to resort to a centralized data store to hold, for example, scanned legal documents that support smart contracts.

Addressing security concerns

The same BFT architecture that supports Assembly’s fault tolerance also provides an additional layer of security, in that malicious node activity is actively identified and quarantined, while ‘honest’ nodes can continue to communicate and transact via consensus. Add in encryption of data, whereby Assembly creates a private security ledger within the larger ledger, and the result is a robust level of security for applications with significant risk of malicious activity in high-value trading and exchange.

Advantages of building a bespoke blockchain platform

Building its own blockchain platform cost Symbiont many hours and R&D dollars that competitors did not have to spend, but ultimately this decision provides Symbiont with three strategic advantages over competitors:

  • Assembly is purpose-built for BFT-relevant, high-volume environments. As a result, the platform has performance and throughput benefits for applications in these environments compared with broader-use blockchain platforms that are intended to be used across a variety of business DLT needs. To some degree this limits the flexibility of the platform in other use cases, but just as a Formula One engine is a bespoke tool for a specific job, so too is Assembly specifically designed to excel in its native use case environment. That provides real benefits to users electing to build their banking DLT applications on the Assembly architecture
  • Symbiont can provide for third-party smart contract writing, should it elect to do so. While this is not in the roadmap for the moment, and Symbiont appears content to build client solutions on proprietary deliverables from the contract-writing layer through the complete infrastructure of the solution, the company could elect to allow clients to write their own smart contracts ‘at the top of the stack’. Symbiont does intend to keep the core Assembly platform proprietary to the company for the foreseeable future
  • Assembly may attract less malicious activity interest than traditional platforms. The rising number of blockchain projects based on HyperLedger and Ethereum is certain to attract more malicious activity based on the commonality of the architecture across a broader common base of technology. In much the same way that Windows historically attracted more virus incursions than less widely-used OS platforms, Assembly will tend to attract less attention than platforms with broader user bases. Moreover, Assembly’s BFT foundations will enable it to deal more effectively with those events that do occur.

Summary

Symbiont isn’t alone in developing its own proprietary blockchain technology architecture rather than choosing from the broadly available offerings in the space, and as blockchain enters the mainstream of enterprise business, other provider organizations will surely go the same route.

What Symbiont has established is an exemplar for developing a purpose-built blockchain platform, beginning with the specific needs of the task environment at scale, and proceeding to address those needs carefully in the development process. 

]]>
<![CDATA[6 Ways to Prepare for Cognitive Automation During RPA Implementation]]>

 

2017 brought a surge of RPA deployments across industries, and in 2018 that trend has accelerated as more and more firms begin exploring the many benefits of a digital workforce. But even as some firms are just getting their RPA projects started, others are beginning to explore the next phase: cognitive automation. And a common challenge for firms is the desire to begin planning for a more intelligent digital workforce while automating simpler rule-based processes today.

From conversations with organizations at different stages of their journeys from BI to RPA and on to cognitive automation, it is clear that there are tasks companies can begin during RPA implementation to ensure that they are well positioned for the machine learning-intensive demands of cognitive automation:

Design insight points into the process for machine learning

Too often, the concept of STP gets conflated with the idea of measuring task automation only on completion. But for learning platforms, it is vital to understand exactly where variance and exceptions arise in the process – so allow your RPA platform to document its progress in detail from task inception to task completion.

At each stage, provide a data outlet to track the task’s variance on a stage-by-stage basis. A cognitive platform can then learn where, within each task, variance is most likely to arise – and it may be the case that the work can be redesigned to give straightforward subtasks to a lower-cost RPA platform while cognitive automation handles the more complex subtasks.
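
As a rough illustration of such an insight point, the sketch below emits one record per task stage rather than one per completed task, so a learning platform can later see where variance clusters. The stage names, fields, and log destination are hypothetical rather than tied to any particular RPA product.

    import json
    import time
    from pathlib import Path
    from typing import Optional

    LOG_FILE = Path("automation_stage_log.jsonl")   # hypothetical sink; could equally be a queue or database

    def log_stage(task_id: str, stage: str, outcome: str, detail: Optional[dict] = None) -> None:
        """Record one event per stage so variance can be analyzed stage-by-stage, not only at completion."""
        record = {
            "task_id": task_id,
            "stage": stage,            # e.g. "intake", "validation", "matching", "posting"
            "outcome": outcome,        # e.g. "ok", "exception", "manual_referral"
            "detail": detail or {},
            "timestamp": time.time(),
        }
        with LOG_FILE.open("a") as f:
            f.write(json.dumps(record) + "\n")

    # Hypothetical usage inside an invoice-processing automation:
    log_stage("INV-0042", "intake", "ok")
    log_stage("INV-0042", "validation", "exception", {"reason": "missing PO number"})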

Build a robot with pen & paper first

One of the basic measures for determining whether a process can be managed by BPM, by RPA, or by cognitive automation is the degree to which it can be expressed as a function of rigorous rules. So, begin by building a pen-and-paper robot – a list of the rules by which a worker, human or digital, is expected to execute against the task.

Consider ‘borrowing’ an employee with no familiarity with the involved task to see if the task is genuinely as straightforward and rule-bounded as it seems – or whether, perhaps, it involves a higher order of decision-making that could require cognitive automation or AI.
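
One lightweight way to test how rule-bounded the task really is: encode the pen-and-paper rules as explicit condition/action pairs and run sample cases through them. Any case that falls through to a ‘no rule matched’ outcome is a hint that judgement is involved, pointing toward cognitive automation rather than plain RPA. The claim-handling rules below are purely illustrative.

    # Each rule is (name, condition, action); a condition takes the case record and returns True/False.
    RULES = [
        ("auto-approve small complete claims", lambda c: c["amount"] < 500 and c["documents_complete"], "approve"),
        ("return incomplete claims",           lambda c: not c["documents_complete"], "return_to_sender"),
        ("refer large claims",                 lambda c: c["amount"] >= 500, "manual_review"),
    ]

    def apply_rules(case: dict) -> str:
        for name, condition, action in RULES:
            if condition(case):
                return action
        return "no_rule_matched"   # cases landing here suggest the task is not purely rule-bounded

    sample_cases = [
        {"amount": 120, "documents_complete": True},
        {"amount": 9000, "documents_complete": True},
    ]
    for case in sample_cases:
        print(case, "->", apply_rules(case))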

Use the process to revisit the existing work design

In many organizations, tasks have ‘grown up’ inorganically around inputs from multiple stakeholders and have been amended and revised on the fly as the pace of business has demanded. But the migration first to RPA and then on to cognitive automation is a gift-wrapped opportunity to revisit how, where, and when work is done within an organization.

Can key task components be time-shifted to less expensive computing cycles overnight or on weekends? Can whole tasks be re-divided into simpler and more complex components and allocated to the lowest-cost tool for the job?

Dock the initiative with in-house ML & data initiatives

Cognitive automation does not have to remain isolated to individual task areas or divisions within an organization. Often, ML initiatives produce better results when given access to other business areas to learn from. What can cognitive automation learn about customer service tasks from paying a ‘virtual visit’ to the manufacturing floor via IoT? Much, potentially: products or parts that are difficult to machine to tolerance within an allowed margin of error, for example, may be more common sources of customer complaints and RMAs.

Similarly, a credit risk-scoring ML platform can learn from patterns of exception management in credit applications being managed in a cognitive automation environment. For ML initiatives, enabling one implementation to learn from others is a key success factor in producing ‘brilliant’ organizational AI.

Revisit the organizational data hygiene & governance models

Data scientists will be the first to underscore the importance of introducing clean data into any environment in which decision-making will be a task stage. Data with poor hygiene, and with low levels of governance surrounding the data cleaning and taxonomy management function, will create equally poor results from cognitive automation technology that utilizes it to make decisions.

Cognitive software is no different than humans in this respect: garbage in, garbage out, as the old saying goes. As a result, a comprehensive review of organizational data hygiene and governance models will pay dividends down the road in cognitive work.

Discuss your vendor’s existing technology & roadmap in cognitive & AI

Across the RPA sector, cognitive is a central concept for most vendors’ 2018-2020 roadmaps. Scheduling a working session now on migrating the organization from RPA to cognitive automation provides clients with insight on their vendor’s strengths and capability set. It also enables vendors to get a close look at ‘on the ground’ cognitive automation needs in different organizational task areas.

That’s win/win – and it helps ensure that an existing investment in vendor technology is well-positioned to take the organization forward into cognitive based on a sound understanding of client needs.

 

NelsonHall conducts continuous research into all aspects of RPA and AI technologies and services as part of its RPA & Cognitive Services research program. A major report on RPA & AI Technology Evaluation by Dave Mayer has just been published, and coming soon is a major report on Business Process Transformation through RPA & AI by John Willmott. To find out more, contact Guy Saunders.

]]>
<![CDATA[Kryon’s Rebranding Focuses on the Business Benefits of RPA]]>

 

Kryon has today launched a new brand presence, along with a new strategic perspective on RPA focused on delivering business benefits. The former Kryon Systems (now simply Kryon) will now be organized around a three-pronged approach the company refers to as ‘Discover, Automate, Optimize’.

As part of this brand migration, several aspects of Kryon’s go-to-market approach will change, as described below.

Focusing on the human side of the RPA equation

Kryon’s former branding package included limited personification of the RPA offering under the Leo name, and also featured an anthropomorphized robot ‘mascot’ in much of the company’s promotional and industry relations materials. That component of the company’s branding has been eliminated from its new visual identity, which now focuses much more on the human side of the RPA equation and the concept of integrating RPA into a hybrid human-digital workforce.

A new focus on business benefits rather than technological innovation

As more RPA features begin to become ‘table stakes’ within the sector, NelsonHall has expected vendors to begin the shift from focusing on product features to business outcomes. Kryon joins that trend with its rebranding, which will include more case studies and success stories represented as a function of business KPIs, while keeping the technological conversation within the context of real-world improvements in cost, efficiency, and quality.

A new framework for the brand

The ‘Discover, Automate, Optimize’ theme speaks to Kryon’s three primary offering areas:

  • Process discovery (already soft-launched, but due for a more formal product rollout in early summer of 2018)
  • Traditional RPA
  • Analytics/AI.

To date, these have been marketed as components, but under the new branding they become part of a larger solution intended to reposition Kryon as an end-to-end provider of business process optimization solutions.

A clear effort to differentiate its offerings

Kryon has sometimes suffered in terms of its ability to break out from the pack of RPA providers and carve out a differentiated and sustainable niche for itself. Under the new brand positioning, the company is making a clear effort to differentiate its offerings based on the ability to do more than automate simple, repetitive tasks.

The company talks about enabling human workers to be mindful and focused on creative tasks by eliminating background work entirely through the application of RPA combined with AI and machine learning. While other firms offer similar messaging, Kryon’s new branding package treats repetitive work as ‘background noise’ to be removed from the typical employee’s workday.

A new name, logo & tagline

While these are often secondary in importance from a technology and business analyst’s perspective, it is worth mentioning what is and what, importantly, is not included in Kryon’s visual rebrand. Gone is the word ‘Systems’ from the old Kryon logo, in a clear effort to migrate the firm towards a broader service mandate.

The tagline ‘Be Your Future’ is added in place of ‘Systems’, again suggesting a broadening of the brand. Finally, the letter ‘O’ in the logo is given a half-gold, half-blue treatment to emphasize the hybrid human/digital nature of its offering.

Summary

2018 and 2019 are expected to be watershed years in the RPA sector, as competitive positioning begins to come into focus and leadership niches become occupied as the sector matures. Kryon is taking clear steps to include itself in the ‘tier one’ vendor conversation through a set of brand migration moves that position the company to compete well into the next decade.

]]>
<![CDATA[Redwood Introduces Disruptive New RPA Pricing Model]]>

 

Today, Redwood announced a new pricing model for its RPA software in which users pay only for units of work completed, and on a cost basis equivalent to efficient human work on the same task. As a result, if a Redwood robot sends an email, or retrieves specific data, or performs reconciliation work, the organization is charged on completion for specific amounts relevant to the parallel human cost of execution in a ‘perfect work efficiency’ environment.

This is a fundamental change from the prevalent model in the industry of paying for licenses for RPA software and estimating how many licenses will be necessary to perform specific tasks. While other pricing models exist – ranging from paying for the process rather than the robot, to buying robots outright as owned software properties – this is the first time that pricing is available both on completion and on a granular, task-centric basis. In essence, Redwood is enabling organizations implementing RPA to pay on a piecework basis, and only after the work is performed.

The new pricing model will mark the second major transition in the company’s client contracting approach in the last five years. Historically, Redwood sold its software on a perpetual licensing basis, which changed over time to a more traditional annual licensed offering (although some clients are still on perpetual licenses). Redwood will need to manage a transition period in which clients can switch to the utility pricing model on the anniversary of their licenses, which may introduce some unevenness to the company’s financial performance during 2018-2019.

There are more implications for Redwood, and for the RPA industry, as a result of deploying this new pricing model:

The new model changes the revenue & profit mix for Redwood…

The company expects to see some flattening of topline revenue as a result of this change, but improved margins, with an overall increase in transaction volume. Redwood believes that by reducing barriers to entry in RPA through enabling payment by the task, and after the fact, more prospective clients will adopt the Redwood solution. This is a logical evolution of the Redwood business model in that it promotes Redwood’s library of prebuilt robots to a larger prospective audience and smooths the on-ramp to Redwood adoption for more organizations.

…and demands that Redwood’s pricing model be appealing

The company has researched levels of productivity and cost in both Western and offshore economies and modeled a function that prices Redwood tasks at roughly 20 Euro cents per moderate-duty task (retrieving a report, reconciling data, sending an email, etc.) based on a perfectly-efficient Western worker performing 156 such tasks per hour for a fully-loaded employment cost of €50k. (A low-cost economy worker performs half as many such tasks per hour for half the cost in Redwood’s model.)
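
For readers checking the arithmetic: the ~20-cent figure follows from the stated inputs if one also assumes roughly 1,600 productive hours per worker per year; that hours figure is our assumption made to reproduce the number, not one Redwood has published.

    fully_loaded_cost_eur = 50_000       # stated: fully-loaded annual cost of a Western worker
    tasks_per_hour = 156                 # stated: perfectly-efficient worker, moderate-duty tasks
    productive_hours_per_year = 1_600    # assumed, to reproduce the ~EUR 0.20 per-task figure

    tasks_per_year = tasks_per_hour * productive_hours_per_year   # 249,600 tasks
    price_per_task = fully_loaded_cost_eur / tasks_per_year       # ~0.20 EUR

    # In Redwood's model, a low-cost-economy worker does half the tasks per hour for half the cost,
    # which yields the same per-task price.
    print(f"~EUR {price_per_task:.2f} per moderate-duty task")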

In order for Redwood to unlock the full potential value of this new pricing model, these assumptions and metrics need to be appealing to buyers.

Redwood creates more pressure on the traditional licensing model

This is still a relatively young industry in terms of establishing pricing and contracting norms, so disruptive acts (and Redwood’s new pricing model will certainly be disruptive at some level) create pressure on ‘safer’, more traditional modes of client engagement. Redwood holds a degree of advantage in that the company has an extensive library of ~35,000 prebuilt robots that it can price and sell on this model, as opposed to RPA providers whose software is customized and deployed within the client organization. It will be more difficult for traditional RPA providers to cost-effectively match the Redwood model in the market.

Reporting & invoicing challenges are addressed through Redwood Robotics itself

Transitioning from a license-based contracting structure to a high-resolution, granular use-based contracting structure would normally be a steep challenge for a software organization accustomed to annual licensing, given the degree of reporting and invoicing complexity involved. Fortunately for Redwood, these processes are being handled in their entirety by additional automations, deployed to the client organization at no charge, which monitor and document Redwood automation usage and generate regularly-scheduled invoices for the client.

Summary

Redwood has put forth a compelling new framework for equating robotic and human labor costs, and for enabling organizations to pay only for work done rather than paying for the abstraction layer inherent to a robot license.

In effect, Redwood offers piecework rates in a market predominated by ‘salaried-FTE’ model robots. While this is unlikely to become the norm for RPA pricing, it provides Redwood with a new, and potentially sustainable, source of competitive differentiation.

]]>
<![CDATA[UiPath Gains Unicorn Status with Series B Funding; To Expand into AI]]>

 

This morning, UiPath announced that the company will be receiving $153m in Series B funding from a consortium including the company’s existing investors, with two new names involved – Kleiner Perkins and CapitalG, the late-stage growth venture capital fund financed by Alphabet Inc.

The latter is of note as this arm of Google focuses on profit-centric investment rather than acquiring to serve Google’s overall strategic goals. Its notable investments to date have included Gusto (then ZenPayroll) in 2015, Airbnb and Snap in 2016, and Lyft in 2017. As a result of these investments, Laela Sturdy of CapitalG and John Doerr of Kleiner Perkins will be joining UiPath’s strategic advisory board.

This latest round of financing is meaningful on several fronts:

It places UiPath into unicorn territory

This round of funding places UiPath’s market valuation in the vicinity of $1.1bn, implying that the company has grown from seed funding to unicorn status in just 36 months. By contrast, fellow RPA unicorn Blue Prism was founded in 2001 and only recently crossed into unicorn status with a market value of $1.02bn.

…which requires more resources to support rapid growth

This is impressive supernormal growth in its own right, and a rate that suggests UiPath has taken considerable share in the past twelve months, but it carries its own slate of challenges, as referenced in the profile of UiPath that NelsonHall published earlier this year. The company’s level of growth needs infrastructural backfill in multiple areas, from R&D to sales and marketing. This is a company that is adding 2.5 customers a day on its existing funding levels and operating cashflow. What might UiPath’s organic growth trajectory look like with significantly deeper sales, marketing, deployment, and R&D capabilities? We are about to find out.

It positions the company to acquire in the AI space

The company now boasts a combined war chest of ~$200m in cash, more than enough for a tactical bolt-on or two in the areas of cognitive automation and AI. UiPath already has evolved partnerships with Celonis and Enate, so the company is likely to look outside of those firms’ service footprints for acquisitions. Specifically, UiPath is looking for capabilities in the areas of natural language processing, machine learning, and identity recognition. There will be no shortage of good candidates for UiPath to choose from in these areas, but betting correctly and acquiring for maximum value will be critical in positioning UiPath for success.

It ties the company closer to Google

The CapitalG investment certainly suggests a closer relationship between UiPath and Google, which might have already manifested in UiPath’s decision to utilize Google Cloud for its cloud machine learning initiative. Given Blue Prism’s alignment with IBM, the major RPA providers are beginning to find their technology partners for long-term competition in the segment.

Google will be able to provide UiPath with a host of competitive advantages in terms of technology licensure, partner ecosystem development, and market presence. It would be interesting to see where UiPath might be in a year’s time with a closer relationship with Google’s TensorFlow team, for example, or with its Generative Adversarial Networks working groups.

It likely launches the next wave of innovation in the segment

Armed with a substantive war chest of cash with which to build and acquire new capabilities, UiPath’s actions during 2018 are not likely to go unanswered by other segment leaders. As a result, UiPath’s next moves will likely signal the beginning of the next stage of evolution in the RPA sector – one we expect to bring out the best in technological innovation among those leaders. We see UiPath as a leader in that evolutionary process.

]]>
<![CDATA[7 Essential Tasks Prior to Any RPA Implementation]]>

 

With every new software release from RPA sector leaders, there is always much to be excited about as vendors continue to push the technological boundaries of workplace automation. Whether those new capabilities focus on cognition, or security, or scalability, the technology available to us continues to be a source of inspiration and innovative thinking in how those new capabilities can be applied.

But success in an RPA deployment does not depend solely on the technology involved. In fact, the implementation design framework for RPA is often just as important – if not more so – in determining whether a deployment is successful. Install the most cutting-edge platform available into a subpar implementation design framework, and no amount of technological innovation can overcome that hindrance.

With this in mind, here are seven tasks that should be part of any RPA implementation plan before organizations put pen to paper to sign up with an RPA platform vendor.

Create a cohesive vision of what automation will achieve

Automation is the ultimate strict interpreter of instructions: it does precisely as it’s told, at speed, and in volume. But it must be pointed at the right corporate challenges, with a long-term vision for what it is (and is not) expected to do in order to be successful in that mission. That process involves asking some broad-ranging questions up-front:

  • What stakeholders are involved – internally and externally – in the automation initiative?
  • What are our organization’s expectations of the initiative?
  • How will we know if we have succeeded or failed?
  • What metrics will drive those assessments?
  • Where will this initiative go next within our organization?
  • Will we involve our supply chain partners or technology allies in this process?

Ensure a staff model that can scale at the speed of enterprise automation

We tend to spend so much time talking about FTE reduction in the automation sector that we overlook the very real issue of FTE sourcing (in volume!) in relation to the implementation of automation at enterprise scale. Automation needs designers, coders, project managers, and support personnel, all familiar with the platform and able to contribute new code and thoughtware assets at speed.

Some vendors are addressing this issue head-on with initiatives like Automation Anywhere University, UiPath Academy, and Blue Prism Learning and Accreditation, and others have similar initiatives in the works. It is also important that organizational HR professionals be briefed on the specific skillsets necessary for automation-related hires; this is a relatively new field, and partnering up-front on talent acquisition can yield meaningful benefits down the road.

Plan in detail for a labor outage

The RPA sector is rife with reassurances about digital workers: they never go on strike; they don’t sleep or require breaks; they don’t call in sick. But things do go wrong. And while the RPA vendors offer impressive SLAs with respect to getting clients back online quickly, sometimes it’s necessary to handle hours, or even days, of automated work manually. Having mature high-availability and disaster recovery capability built into the platform – as Automation Anywhere included in Enterprise Release 11 – mitigates these concerns to a degree, but planning for the worst means just that.

Connect with the press and the labor community

Don’t skip this section because it sounds like organized labor management only, although that’s a factor too. Automation stories get out, and local and national press alike are eager to cover RPA initiatives at large organizations. It’s a hot-button topic and an easily accessible story.

Unfortunately, it’s also all too easy to take an automation story and run with the sensationalist aspects of FTE displacement and cost reduction. By interacting with journalists and labor leaders in advance of launching an automation initiative, you’re owning the story before it can be owned elsewhere in the content chain.

Have a retraining and upskilling initiative parallel to your automation COE

Automation can quickly reduce the number of humans necessary in a work area by half or even more. What is your organization’s plan for redeployment of that human capital to other, higher-value tasks? Who occupies those task chairs now – and what will they be doing?

Once the task of automation deployment is complete, there is still process work to be done in finding value-added work for humans who have a reduced workload due to automation. Some organizations are finding and unlocking new sources of enterprise value in doing so – for example, front-line workers who have their workloads reduced through automation can often ‘see the forest’ better and can advise their superiors on ways to streamline and improve processes.

Similarly, automation can bring together working groups on tasks that have connected automations between departments, allowing for new conversations, strategies, and processes to take shape.

Have an articulation plan for RPA and other advanced technologies

RPA and cognitive automation do more than improve the quality and consistency of work – they also improve the quality and consistency of task-related data. That is an invaluable characteristic of RPA from the organizational data and analytics perspective, and one that is often overlooked in the planning process.

While it might take days for a service center to spot a trend in common product complaints, RPA platforms could see the same trend in hours, combine that data in an organizational data discovery environment with IoT data from the production line, and identify a product fault faster and more efficiently than a traditional workforce might. When designing an automation initiative, it is vital to take these opportunities into account and plan for them.

Create a roadmap to cognitive automation and beyond

RPA is no more a destination than business rules engines were, or CRM, or ERP. These were all enabling technologies that oriented and guided organizations towards greater levels of agility, awareness and capability. Similarly, deploying RPA provides organizations with insight into the complexity, structure and dependencies of specific tasks. Working towards task automation yields real clarity, on a workflow-by-workflow basis, of what level of cognition will be necessary to achieve meaningful automation levels.

While many tasks can be achieved by current levels of vendor RPA capability, others will require more evolved cognitive automation, and some will be reserved for the future, when new AI capabilities become available. By designating relevant work processes to their automation ‘containers’, an enterprise roadmap to cognitive automation and AI begins to take shape.

]]>
<![CDATA[7 Predictions for RPA in 2018]]>

 

The RPA sector is defined by rapid technological evolution, and every year it seems like what we thought to be bleeding-edge capability in January turns out to be proven and deployed technology long before year’s end. With this rapid pace of growth and maturation in mind, where might the RPA sector be by the end of 2018? Here are seven predictions.

The first wave of automation-inclusive UI design

To date, RPA has been adaptive in nature – automation software has done the interpretive labor to ‘see’ the application screen as humans do. But as more and more repetitive-task work becomes automated, software designers will begin taking the strengths and weaknesses of computer vision into account in designing applications that will be shared between human and digital workers. This will show up in small ways at first, particularly in interface areas that are challenging for RPA software to learn quickly, but over the course of 2018, ‘hybrid workforce UI design’ will become a new standard for enterprise software vendors.

Process mining makes RPA more accessible for midmarket & emerging large market segments

Early adopters of RPA have already established that detailed process mapping is key to successful task automation across the extended enterprise. For Fortune 1000 firms, that can be fairly straightforward, with retained consulting and systems integration partners on hand to assist in the process of mapping task flows for RPA implementation. Smaller firms, however, don’t always have the luxury of engaging large consulting firms to assist in this process – so vendors developing their own automated process mapping technology, or partnering with third-party providers like Celonis, will find demand booming in the midmarket.

Human skill bottleneck hits providers without education/certification plans

It’s ironic that human skill capital will end up as the limiting factor in the growth rate of successful RPA implementations, but 2018 will close with a clear shortage of qualified automation designers and deployment management professionals. Those organizations (like UiPath, Blue Prism, and Automation Anywhere) that saw this coming early on and established academic settings for the education and certification of on-platform skilled practitioners will thrive. But those lacking these programs may find themselves in a skill bottleneck in the market – one that will begin to materially inhibit growth.

RPA becomes a designed-in factor for disruptors

In conversations I had with organizations implementing RPA during 2H17, one common factor came to the fore: that their initial FTE rationalization gains had already been realized, and going forward, they were looking to RPA as a means to manage significant growth in their operations.

For organizations coming to market as disruptors, this trend is even more pronounced, and organizations with designs on being disruptive forces are increasingly building automation capabilities into their growth plans from the ground up. Building an organization on a foundation of a hybrid human-digital workforce is a different endeavor entirely from retrofitting an existing company with automation – and as a result, we should begin seeing some real innovation in organizational design beginning this year.

Japan becomes the adoption template geo for big bets

To date, Japan has produced some of the largest implementations of RPA, with UiPath’s late 2017 deployment at SMBC pushing the envelope still further. Japan is betting big on RPA to become a sustainable source of competitive differentiation, and as more large organizations there implement large-scale RPA projects, the best practices library for RPA deployment at scale will expand in kind.

Companies worldwide looked to Japan for guidance in implementing robotics once before, during the rise of robotic manufacturing in the automotive sector. 2018 will see a second such wave.

RPA proves its case as a source of compliance gains

RPA has been marketed with a number of different value creation characteristics already, with the obvious cost reduction and quality improvement factors taking center stage. But RPA has significant benefits to offer organizations in regulated industries, most notably in the ability to secure access to sensitive information, systematize the process of accessing and modifying that information, and standardize the documentation process and audit logging work associated with it.

2018 will be the year that organizations begin to see meaningful returns from adopting RPA as a solution to compliance task challenges.

Demand for specialist implementation navigators grows significantly

RPA implementation has been a partnered endeavor since the technology first arrived on the scene, with software vendors allying themselves closely with large consulting firms and systems integrators to optimize their client deployments. But demand is emerging for focused, automation-centric services, and right on time, the industry is seeing a surge of new RPA specialist service providers like Symphony and Agilify.

As buying organizations begin to ask more of their new – or revamped – RPA implementations, demand for these providers’ services will grow swiftly during 2018.

]]>
<![CDATA[CSS Corp’s Contelli Automation Platform Driving Improvements in Enterprise Network Management]]>

 

As 2018 begins, the RPA sector is starting to produce more segment specialists from within its vendor base. Whereas just two years ago the sector was still finding its footing in addressing common back- and front-office application automation, enterprise customers today have the luxury of building best-of-breed solutions that often incorporate two or more vendors working in concert to automate a broader spectrum of tasks.

CSS Corp’s Contelli is a relatively new automation platform, but one that is gaining attention for its capability set in a complex and high-value enterprise support area – namely, automated network management. Contelli received an elevated role at CSS in the wake of the company’s late 2016 reorganization, which saw CSS' board elect to change the direction of the firm. As part of this strategic direction change (one that saw an influx of new management talent into the executive suite), the company transitioned from a corporate focus heavy on legacy IT services to one centered on customer engagement and digital transformation. That transition also included an elevated role for CSS' automation platform, which was rebranded from AIMS (Automated Infrastructure Management Solution) to Contelli.

The product continuously analyzes client IT operations and uses network traffic data, paired with algorithmic analysis of historical data, to predict downtime, reconfigure traffic for improved efficiency, dynamically provision and de-provision IT assets, and resolve repetitive support tasks. CSS estimates ~30-40% improvements in operational efficiency in IT operations, and ~45-65% reductions in FTEs, in typical deployments of the Contelli IT Management Engine.

Although Contelli’s brand name may be a new one in the market, the platform has already achieved success. For a leading managed network services provider with 450k network devices under management, Contelli software provided the client with a 25% improvement in average handle time for open ticket calls, a 22% improvement in case closure rate, and, perhaps most importantly, a 100% success rate in case audits performed on work Contelli automated.

Three factors make Contelli an appealing offering for organizations seeking to reduce their network management costs:

  • It touches a broad range of KPIs. Network optimization isn’t always realized by identifying a few significant sources of cost savings and quality improvement potential; often, the task involves incremental improvement of multiple KPIs, from throughput and traffic efficiency to asset provisioning speed, to support ticket resolution turnaround cycle. Contelli’s position within the network management stack enables the product to offer a broad array of improvements in KPIs across multiple task areas
  • It learns continuously from network data. Automating a fluid process is among the steepest challenges in intelligent automation today. As variables change within the task area to be automated, the RPA platform of choice must not only be able to adapt on the fly, but learn entirely new sets of events and exceptions as topologies and assets evolve. Contelli’s development team has invested considerable time and resources in the product’s machine learning layer to enable dynamic network management automation
  • It is a focus area for CSS’ Innovation Labs. Contelli is a mature offering today, but CSS has significant plans to improve and upgrade the product’s machine learning capabilities in the company’s Innovation Labs, an R&D environment for continuous improvement of the platform. CEO Manish Tandon has circled Innovation Labs in red as a key strategic plank for the company’s evolution, and Contelli is slated for considerable time ‘up on the lift.’

Contelli isn’t a ‘one stop shop’ for front- and back-office enterprise automation, but for organizations seeking to self-fund a larger-scale RPA initiative with a broad slate of KPI improvements in a critical business task area, it’s an appealing choice for network management administrators. 

]]>
<![CDATA[Intelligent Automation Summit Takeaways: Four Alternative Gain Frameworks for RPA]]>

 

At the Intelligent Automation (IA) event in New Orleans, December 6-8, snow in the Big Easy air was not the only surprise. As expected, there was plenty of technological innovation on show in the exhibition hall, but the event also played host to some energized discussions on human-centric gains to be realized from RPA implementation – suggesting that we are indeed moving into the next phase of considering automation holistically in the enterprise.

Specifically, many presentations and conversations shared a theme of human enablement within the enterprise – positioning the organization for greater long-term success, rather than focusing on the short-term fiscal gains of reductions in force and reduced cost to serve specific processes. Here are four automation gain frameworks I took away from the event that are focused on areas other than raw FTE reduction.

Automation as a disruption buffer

‘Disrupt or be disrupted’ has become a mantra for many change management executives across industries, and it was invoked numerous times during the IA event in relation to automation’s role as a buffer to disruptive change – in both directions. An automated workforce can quickly scale up (or down) as needed without costly and time-consuming facility management and workforce rationalization tasks. While there was some discussion regarding the downside containment role of RPA, far more participants at the event were looking to RPA as a tool to effectively manage explosive growth in their sectors.

Automation as a ‘hazmat bot’

The idea of using bots to handle sensitive processes and data emerged as a strong theme for the near-term RPA sector roadmap. Whereas bots were previously trusted less than humans with sensitive, ‘low-touch’ data in highly-regulated industries like BFSI and healthcare, the dialog is beginning to turn in favor of sending bots, rather than humans, to touch and manipulate that data.

The rationale is sound: bots can be coded with very narrowly-defined rights and credentials, self-document their own work without exception, and produce their own audit trails. Expect to see this trend gain steam in 2018 and beyond. ‘We send bots into nuclear reactors and onto other planets,’ one attendee told me. ‘We treat the data core in card issuance with no less of a hazmat perspective – where we can minimize human contact, we will, for everyone's benefit.'

Automation as a workflow stress diagnostic

The very process of automating workflows within the organization produces a wealth of usable data, and nowhere is that more evident than in analyzing those workflows for exception management stress points. In a given workflow, there are usually task components that are clearly defined and straightforward, and others that produce a higher-than-average volume of exceptions. By mapping these workflows and using them to understand similar tasks in other areas of the organization, companies can leverage automation data to identify the phases of a workflow that are creating exception management stress for employees, and add support via process redesign, digitization, or assisted automation.
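
As a concrete illustration of this diagnostic, the sketch below ranks workflow steps by their exception rate from automation run logs. It is a minimal, hypothetical example – the log format and field names are assumptions, not any specific RPA vendor’s schema.

```python
from collections import Counter, defaultdict

# Hypothetical automation run log: (workflow, step, outcome)
run_log = [
    ("claims_intake", "validate_policy", "ok"),
    ("claims_intake", "validate_policy", "exception"),
    ("claims_intake", "extract_documents", "ok"),
    ("claims_intake", "extract_documents", "exception"),
    ("claims_intake", "extract_documents", "exception"),
    ("claims_intake", "issue_payment", "ok"),
]

outcomes = defaultdict(Counter)
for workflow, step, outcome in run_log:
    outcomes[(workflow, step)][outcome] += 1

# Rank steps by exception rate to find where humans absorb the most rework
stress = sorted(
    ((key, counts["exception"] / sum(counts.values())) for key, counts in outcomes.items()),
    key=lambda item: item[1],
    reverse=True,
)
for (workflow, step), rate in stress:
    print(f"{workflow}/{step}: {rate:.0%} of runs raise an exception")
```

Steps at the top of that ranking are the natural candidates for the process redesign, digitization, or assisted automation mentioned above.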

Automation as human capital churn ‘coolant’

Related to the previous point is the idea that RPA is beginning to serve as a very real source of ‘coolant’ for burnout-prone repetitive task areas in the organization by continuously separating work into automation-relevant and human-relevant streams. Eliminating the most burnout-causing task stages from the human workday reduces the proclivity for turnover and the total cost to the organization of managing the human side of the workforce.

Summary

Productivity, quality, and fiscal gains are often the first three topics of conversation when organizations discuss launching an RPA initiative. But automation has much more to offer, not only to the organizational bottom line, but to the human employees in the enterprise as well. As this sector’s technology offerings evolve and mature, so too do the use cases and benefit frameworks within customer organizations.

]]>
<![CDATA[Nvidia Draws on Gaming Culture to Compete for AI Chip Leadership]]>

 

Nvidia faces stiff new competition for the leadership position in the AI processing chip market. But the firm has a significant competitive advantage: a culture of innovation and production efficiency that was developed to address the demanding needs of a wholly different market.

Intel and Google have been making waves in the AI processing chip market, the former with the acquisitions of Nervana Systems and Mobileye, the latter with the new Tensor Processing Unit (TPU) announcement. Both are moves intended to compete more directly with Nvidia in the burgeoning market for AI processing chips.

James Wang of investment firm ARK recently set forth his long-term bet on the industry – and it favors Nvidia. Wang posits that products like TPU will be less efficient than Nvidia GPUs for the foreseeable future, arguing that “…until TPUs demonstrate an unambiguous lead over GPUs in independent tests, Nvidia should continue to dominate the deep-learning data center.”

Wang is right, but his opinion may not actually go far enough in explaining why Nvidia should enjoy a sustainable advantage over other relative newcomers, despite their resources and experience in chipmaking. That advantage, by the way, doesn’t have a thing to do with Google’s chip fabrication expertise, or Intel’s understanding of the needs of the AI market. It’s a deeper factor that’s seated firmly in Nvidia’s culture.

Cutting-edge engineering & savvy pricing: key strengths forged in the gaming cauldron

By the time 2017 dawned, Nvidia owned just over three-quarters of the graphics card segment (76.7%), compared with main competitor AMD’s one-quarter (23.2%). But that wasn’t always the case. In fact, for much of the past decade, Nvidia held an uncomfortable leadership position in the marketplace against AMD, sometimes leading by as few as ten points of market share (2Q10).

During that time, Nvidia understood that a misstep against AMD in bringing new products forth could yield the market leader position, and even send the company into an unrecoverable decline if gamers – a tough audience to say the least – lost confidence in Nvidia’s vision.

As such, Nvidia learned many of the principles of design thinking the hard way. They learned to fail fast, to find new segments in the market and exploit them – as they did with the GTX 970, a product that stunned the marketplace by being priced underneath its predecessor at launch – and to take and hold ground with innovation and rapid-cycle development. More importantly, they learned how to demonstrate value to a gamer community that wanted to buy long-term performance security when it was time for a hardware refresh. In short, they learned to understand the wants and needs of an extraordinarily demanding consumer public, in the form of gamers, and relentlessly squeezed their competition out with a combination of cutting-edge engineering and savvy segment pricing.

Much of the real-world output from that cultural core of relentless engineering improvement is the remarkable pace at which Nvidia has improved platform efficiency in its GPU chips. The company maintained close ties with leading game publishing houses, and as a result kept clearly in mind what sort of processing speed – as well as heat output and energy draw – cutting-edge games were going to require. At multiple points in time, the standards for supporting new games have meaningfully advanced inside eighteen months. This often mandated that Nvidia turn over a new top-end GPU processing platform on a blistering production timeline.

In response, Nvidia turned to parallel computing, an ideal fit for GPUs, which already offered significantly more cores than their CPU cousins. As it turned out, Nvidia had put itself on the fast track to dominating the AI hardware market, since GPUs are far better suited for applications, like AI, that demand computing tasks work in parallel. In serving one market, Nvidia built a long-term engineering and fabrication roadmap nearly perfectly suited for another.

The competition is hot, but is Nvidia poised to win?

Fast forward to 2017, and some are questioning whether Nvidia is in the fight of its life now with new, aggressive competitors seeking to take away part – or all – of its AI GPU business. While Wang has pushed his chips into the center of the table on Nvidia, others are unconvinced that Nvidia can hold its lead – especially with fifteen other firms actively developing Deep Learning chips. That roster includes such notable brands as Bitmain, a leading manufacturer of Bitcoin mining chips; Cambricon, a startup backed by the Chinese government; and Graphcore, a UK startup that hired a veritable ‘who’s who’ of AI talent. 

There’s no shortage of innovation and talent at these organizations, but hardware is a business that rewards sustained performance improvement over time at a steadily reducing cost per incremental GFLOPS (one billion floating-point operations per second). The first of these components is certainly an innovation-centric factor, but the second rewards organizations that have kept pace not only with the march of performance demands, but with the need to justify hardware refresh with lower operating costs. Given that this is an area where Nvidia shines, as a function of its cultural evolution under identical circumstances in gaming, the sector’s long-term bet on Nvidia is the correct call.
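
To make the cost-per-incremental-GFLOPS point concrete, here is a small illustrative calculation. The prices and throughput figures are invented for the example and are not actual product data.

```python
# Hypothetical successive GPU generations (figures are illustrative only)
old_gen = {"price_usd": 700.0, "gflops": 9_000.0}
new_gen = {"price_usd": 750.0, "gflops": 14_000.0}

cost_per_gflops_old = old_gen["price_usd"] / old_gen["gflops"]
cost_per_gflops_new = new_gen["price_usd"] / new_gen["gflops"]

# Cost of each *additional* GFLOPS delivered by the refresh
incremental_cost = (new_gen["price_usd"] - old_gen["price_usd"]) / (
    new_gen["gflops"] - old_gen["gflops"]
)

print(f"Old: ${cost_per_gflops_old:.4f}/GFLOPS, new: ${cost_per_gflops_new:.4f}/GFLOPS")
print(f"Incremental cost: ${incremental_cost:.4f} per additional GFLOPS")
```

A vendor that keeps driving the incremental figure down, generation after generation, makes each hardware refresh easy to justify – which is precisely the discipline the gaming market forced on Nvidia.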

 

Dave Mayer is currently working on a major global project evaluating RPA & AI technology. To find out more, contact Guy Saunders.

]]>
<![CDATA[Fast Data: The Smart Will Get Faster... and the Fast Will Get Smarter]]>

 

Fast Data is the emerging hot topic of discussion for business leaders seeking to get ahead of the next wave of data utilization. But Fast Data isn't just an evolution of Big Data; it's a market force unto itself that's asking more of traditional and start-up vendors in both traditional DBMS and AI.

I spent a (surprisingly snowy) morning this week talking with AI and Big Data thought leaders at the Global Data Summit 2017 in Colorado. While there’s no shortage of topics to hold their current interest, none was a higher business priority than solving the challenge of managing Fast Data through the application of AI. The consensus is certainly that the organizations that can best address this challenge will also be those best positioned to compete and win overall. But how best to get their arms around this opportunity and move forward effectively?

First, it's important to distinguish between the challenges of leveraging Big Data and Fast Data. Big Data is generally data at rest; it's explored at (relative) leisure, and doesn’t change so quickly or accumulate so rapidly that offline analytics become impossible. AI has no shortage of applications in Big Data, but in that environment, it's more the ability of an AI platform to manage complexity and work at scale that offers value.

Fast Data, by contrast, accumulates quickly and can change substantively within the course of a day or even an hour. Think adtech here, or online gaming, or vendor pricing with commodity costs as an input; vast amounts of data need to be ingested, analyzed, and understood by the second in order to secure the right ad placement at peak value, or to manage complex MMO games, or to ensure that pricing continuously secures competitive advantage at acceptable margin.

Fast Data becomes Big Data quickly, just by nature of its accumulation rate, and while it's often valuable to query the Big Data that Fast Data becomes to understand trends and cyclicality, Fast Data will always yield its peak value at the millisecond level. It’s the freshest layer that offers the most insight. The Big Data value proposition to retailers, for instance, is looking for cyclicality of demand and regional demand preferences over time; the Fast Data value proposition is understanding the products a shopper is looking at right now and making real-time recommendations for, say, footwear and accessories to match. AI can accomplish both tasks, but often needs to be set about different tasks – with different priorities and ground truths – to succeed. The implications for every phase of the organizational data analysis and workflow management platform – from MDM and data hygiene to machine learning and AI application – are immense.
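
A minimal sketch of the ‘freshest layer’ idea follows: Fast Data workloads keep only a short sliding window of events in memory and act on it immediately, while the evicted history accumulates as Big Data for later trend analysis. The windowing logic below is illustrative and not tied to any particular vendor’s platform.

```python
import time
from collections import deque

class SlidingWindow:
    """Keep only the most recent few seconds of events in memory, so
    recommendations act on the freshest layer of Fast Data (illustrative)."""

    def __init__(self, horizon_seconds: float = 5.0):
        self.horizon = horizon_seconds
        self.events = deque()  # (timestamp, product_viewed)

    def add(self, product: str, ts: float | None = None) -> None:
        now = time.time() if ts is None else ts
        self.events.append((now, product))
        self._evict(now)

    def recent_products(self, ts: float | None = None) -> list[str]:
        now = time.time() if ts is None else ts
        self._evict(now)
        return [product for _, product in self.events]

    def _evict(self, now: float) -> None:
        while self.events and now - self.events[0][0] > self.horizon:
            self.events.popleft()

# A shopper's last few clicks drive the real-time recommendation;
# anything older is only useful later, as Big Data, for cyclicality analysis.
window = SlidingWindow(horizon_seconds=5.0)
window.add("running_shoes", ts=100.0)
window.add("running_socks", ts=102.0)
print(window.recent_products(ts=103.0))   # ['running_shoes', 'running_socks']
print(window.recent_products(ts=120.0))   # [] -- too stale to act on
```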

In response, expect to start seeing considerably more focus from major AI platform vendors not just on depth of understanding by their products, but speed of reaction as well. Organizations big and small in the traditional data sector, from Oracle to VoltDB, are developing and marketing smarter Fast Data solutions, while AI leaders – like IBM and Wipro – are building capabilities for faster data management within their AI platforms.

Servicing this rapidly-growing need for Fast Data management will be a convergent effort: the smart will get faster… and the fast will get smarter.

 

Dave Mayer is a Senior Analyst responsible for NelsonHall's RPA & Cognitive Services research program, covering the areas of robotic process automation (RPA), artificial intelligence, cognitive business, and machine learning. He is currently working on a major global project evaluating RPA & AI technology. To find out more about the project, contact Dave Mayer or Guy Saunders.

]]>
<![CDATA[RPA Operating Model Guidelines, Part 3: From Pilot to Production & Beyond – The Keys to Successful RPA Deployment]]>

As well as conducting extensive research into RPA and AI, NelsonHall is also chairing international conferences on the subject. In July, we chaired SSON’s second RPA in Shared Services Summit in Chicago, and we will also be chairing SSON’s third RPA in Shared Services Summit in Braselton, Georgia on 1st to 2nd December. In the build-up to the December event we thought we would share some of our insights into rolling out RPA. These topics were the subject of much discussion in Chicago earlier this year and are likely to be the subject of further in-depth discussion in Atlanta (Braselton).

This is the third and final blog in a series presenting key guidelines for organizations embarking on an RPA project, covering project preparation, implementation, support, and management. Here I take a look at the stages of deployment, from pilot development, through design & build, to production, maintenance, and support.

Piloting & deployment – it’s all about the business

When developing pilots, it’s important to recognize that the organization is addressing a business problem and not just applying a technology. Accordingly, organizations should consider how they can make a process better and achieve service delivery innovation, and not just service delivery automation, before they proceed. One framework that can be used in analyzing business processes is the ‘eliminate/simplify/standardize/automate’ approach.

While organizations will probably want to start with some simple and relatively modest RPA pilots to gain quick wins and acceptance of RPA within the organization (and we would recommend that they do so), it is important as the use of RPA matures to consider redesigning and standardizing processes to achieve maximum benefit. So begin with simple manual processes for quick wins, followed by more extensive mapping and reengineering of processes. Indeed, one approach often taken by organizations is to insert robotics and then use the metrics available from robotics to better understand how to reengineer processes downstream.

For early pilots, pick processes where the business unit is willing to take a ‘test & learn’ approach, and live with any need to refine the initial application of RPA. Some level of experimentation and calculated risk taking is OK – it helps the developers to improve their understanding of what can and cannot be achieved from the application of RPA. Also, quality increases over time, so in the medium term, organizations should increasingly consider batch automation rather than in-line automation, and think about tool suites and not just RPA.

Communication remains important throughout, and the organization should be extremely transparent about any pilots taking place. RPA does require a strong emphasis on, and appetite for, management of change. In terms of effectiveness of communication and clarifying the nature of RPA pilots and deployments, proof-of-concept videos generally work a lot better than the written or spoken word.

Bot testing is also important, and organizations have found that bot testing is different from waterfall UAT. Ideally, bots should be tested using a copy of the production environment.

Access to applications is potentially a major hurdle, with organizations needing to establish virtual employees as a new category of employee and give the appropriate virtual user ID access to all applications that require a user ID. The IT function must be extensively involved at this stage to agree access to applications and data. In particular, they may be concerned about the manner of storage of passwords. What’s more, IT personnel are likely to know about the vagaries of the IT landscape that are unknown to operations personnel!

Reporting, contingency & change management key to RPA production

At the production stage, it is important to implement an RPA reporting tool to:

  • Monitor how the bots are performing
  • Provide an executive dashboard with one version of the truth
  • Ensure high license utilization (see the utilization sketch below).
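
As a simple, hypothetical illustration of the license utilization point, the sketch below derives utilization per bot license from a run log. The log format and reporting window are assumptions for illustration; commercial RPA reporting tools expose this information in vendor-specific ways.

```python
from datetime import datetime, timedelta

# Hypothetical bot run log: (license_id, start, end)
runs = [
    ("bot-license-01", datetime(2016, 11, 1, 8, 0),  datetime(2016, 11, 1, 17, 0)),
    ("bot-license-01", datetime(2016, 11, 1, 18, 0), datetime(2016, 11, 1, 22, 0)),
    ("bot-license-02", datetime(2016, 11, 1, 9, 0),  datetime(2016, 11, 1, 11, 0)),
]

reporting_window = timedelta(hours=24)  # one-day reporting period

busy_time: dict[str, timedelta] = {}
for license_id, start, end in runs:
    busy_time[license_id] = busy_time.get(license_id, timedelta()) + (end - start)

for license_id, total in sorted(busy_time.items()):
    utilization = total / reporting_window
    print(f"{license_id}: {utilization:.0%} utilized")  # flag under-used licenses
```

Under-utilized licenses identified this way are the ones that centralized scheduling, discussed below, can put to work across other automated processes.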

There is also a need for contingency planning to cover situations where something goes wrong and work is not allocated to bots. Contingency plans may include co-locating a bot support person or team with operations personnel.

The organization also needs to decide which part of the organization will be responsible for bot scheduling. This can either be overseen by the IT department or, more likely, the operations team can take responsibility for scheduling both personnel and bots. Overall bot monitoring, on the other hand, will probably be carried out centrally.

It remains common practice, though not universal, for RPA software vendors to charge on the basis of the number of bot licenses. Accordingly, since an individual bot license can be used in support of any of the processes automated by the organization, organizations may wish to centralize an element of their bot scheduling to optimize bot license utilization.

At the production stage, liaison with application owners is very important to proactively identify changes in functionality that may impact bot operation, so that these can be addressed in advance. Maintenance is often centralized as part of the automation CoE.

Find out more at the SSON RPA in Shared Services Summit, 1st to 2nd December

NelsonHall will be chairing the third SSON RPA in Shared Services Summit in Braselton, Georgia on 1st to 2nd December, and will share further insights into RPA, including hand-outs of our RPA Operating Model Guidelines. You can register for the summit here.

Also, if you would like to find out more about NelsonHall’s extensive program of RPA & AI research, and get involved, please contact Guy Saunders.

Plus, buy-side organizations can get involved with NelsonHall’s Buyer Intelligence Group (BIG), a buy-side only community which runs regular webinars on RPA, with your buy-side peers sharing their RPA experiences. To find out more, contact Matthaus Davies.  

This is the final blog in a three-part series. See also:

Part 1: How to Lay the Foundations for a Successful RPA Project

Part 2: How to Identify High-Impact RPA Opportunities

]]>
<![CDATA[HCL: Applying RPA to Reduce Customer Touch Points in Closed Book Life Insurance]]> This is the third in a series of blogs looking at how business process outsourcing vendors are applying RPA in the insurance sector.

HCL provides closed book life insurance outsourcing services, and is currently engaged in RPA initiatives with three insurance clients.

In order to capture customer data in a smarter, more concise way, HCL is using ‘enhancers’ at the front end, providing users with intuitive screens based on the selected administrative task. These input forms aim to request only the minimum necessary data, with RPA then used to transfer the data to the insurance system, ALPS, via a set of business rules.

For example, one RPA implementation undertaken can recognize the product type, policy ownership, values, and payment methods, and it can prepare and produce correspondence for the customer. If all rules are met, it is then able to move on to payment on the due date. This has been done with a view to reducing the number of touchpoints and engaging with the customer only when required. Indeed, HCL is working with its clients to devise a more exhaustive set of risk-based rules to further reduce the extent to which information needs to be gathered from customers.
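
As a purely illustrative sketch of the pattern described – validate a set of business rules, then proceed to payment only if every rule passes – the code below shows the general shape. The rules, field names, and outcomes are invented for illustration and are not HCL’s actual rule set or the ALPS interface.

```python
from datetime import date

def rules_pass(case: dict) -> bool:
    """Hypothetical rule set; real rules are defined per product and client."""
    checks = [
        case.get("product_type") in {"endowment", "whole_of_life"},
        case.get("ownership_verified") is True,
        case.get("payment_method") in {"bacs", "cheque"},
        case.get("value", 0) > 0,
    ]
    return all(checks)

def process_case(case: dict, today: date) -> str:
    if not rules_pass(case):
        return "refer to handler"                  # exception goes to a human
    if today >= case["due_date"]:
        return "release payment"                   # straight-through processing
    return "issue correspondence; schedule payment for due date"

example = {
    "product_type": "endowment",
    "ownership_verified": True,
    "payment_method": "bacs",
    "value": 12_500,
    "due_date": date(2016, 12, 1),
}
print(process_case(example, today=date(2016, 12, 1)))   # release payment
```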

Seeking a 25% cost take-out in high volume activities

On average, 11k customer enquiries are received by one HCL insurance contact center every month, and these were traditionally handed off to the back office to be resolved. However, HCL is now using RPA and business rules to enable more efficient handling of enquiries/claims with limited user input, with the aim of creating capacity for an additional 4.4k customer queries per month to be handled within the contact center.

Overall, within its insurance operations, HCL is applying RPA-based business rules to ~10 core process areas that together amount to around 60% of typical day-to-day activity. These process areas include:

  • Payments out, including maturities, surrenders, and transfers

  • Client information, including change of address or account information

  • Illustrations.

These processes are typically carried out by an offshore team and the aspiration is to reduce the effort taken to complete each of them by ~25%. In addition, HCL expects that capturing customer data in this new way will shorten the end-to-end journey by between 5% and 10%.

One lesson learned has been the need for robust and compatible infrastructure, both internally (ensuring that all systems and platforms are operating on the same network), and with respect to client infrastructure; e.g. ensuring that HCL is using the same versions of Microsoft software, such as Internet Explorer, as the client environment.

]]>
<![CDATA[RPA Operating Model Guidelines, Part 2: How to Identify High-Impact RPA Opportunities]]>

 

As well as conducting extensive research into RPA and AI, NelsonHall is also chairing international conferences on the subject. In July, we chaired SSON’s second RPA in Shared Services Summit in Chicago, and we will also be chairing SSON’s third RPA in Shared Services Summit in Braselton, Georgia on 1st to 2nd December. In the build-up to the December event we thought we would share some of our insights into rolling out RPA. These topics were the subject of much discussion in Chicago earlier this year and are likely to be the subject of further in-depth discussion in Atlanta (Braselton).

This is the second in a series of blogs presenting key guidelines for organizations embarking on an RPA project, covering project preparation, implementation, support, and management. Here I take a look at how to assess and prioritize RPA opportunities prior to project deployment.

Prioritize opportunities for quick wins

An enterprise level governance committee should be involved in the assessment and prioritization of RPA opportunities, and this committee needs to establish a formal framework for project/opportunity selection. For example, a simple but effective framework is to evaluate opportunities based on their:

  • Potential business impact, including RoI and FTE savings
  • Level of difficulty (preferably low)
  • Sponsorship level (preferably high).

The business units should be involved in the generation of ideas for the application of RPA, and these ideas can be compiled in a collaboration system such as SharePoint prior to their review by global process owners and subsequent evaluation by the assessment committee. The aim is to select projects that have a high business impact and high sponsorship level but are relatively easy to implement. As is usual when undertaking new initiatives or using new technologies, aim to get some quick wins and start at the easy end of the project spectrum.
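
One way to operationalize such a framework is a simple weighted score, as in the sketch below. It is purely illustrative: the weights and 1-5 scales are assumptions that a governance committee would set for itself rather than values prescribed here.

```python
# Each candidate scored 1-5 by the governance committee (illustrative values)
candidates = [
    {"name": "Invoice matching",    "impact": 4, "difficulty": 1, "sponsorship": 5},
    {"name": "Employee onboarding", "impact": 3, "difficulty": 4, "sponsorship": 2},
    {"name": "Claims triage",       "impact": 5, "difficulty": 3, "sponsorship": 4},
]

def priority_score(candidate: dict) -> float:
    # High impact and sponsorship raise the score; difficulty lowers it
    return (0.5 * candidate["impact"]
            + 0.3 * candidate["sponsorship"]
            - 0.2 * candidate["difficulty"])

for candidate in sorted(candidates, key=priority_score, reverse=True):
    print(f"{candidate['name']}: {priority_score(candidate):.1f}")
```

The highest-scoring, easiest opportunities become the quick wins; the low scorers can still be passed to the wider business improvement group, as noted below.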

However, organizations also recognize that even those ideas and suggestions that have been rejected for RPA are useful in identifying process pain points, and one suggestion is to pass these ideas to the wider business improvement or reengineering group to investigate alternative approaches to process improvement.

Target stable processes

Other considerations that need to be taken into account include the level of stability of processes and their underlying applications. Clearly, basic RPA does not readily adapt to significant process change, and so, to avoid excessive levels of maintenance, organizations should only choose relatively stable processes based on a stable application infrastructure. Processes that are subject to high levels of change are not appropriate candidates for the application of RPA.

Equally, it is important that the RPA implementers have permission to access the required applications from the application owners, who can initially have major concerns about security, and that the RPA implementers understand any peculiarities of the applications and know about any upgrades or modifications planned.

The importance of IT involvement

It is important that the IT organization is involved, as their knowledge of the application operating infrastructure and any forthcoming changes to applications and infrastructure need to be taken into account at this stage. In particular, it is important to involve identity and access management teams in assessments.

Also, the IT department may well take the lead in establishing RPA security and infrastructure operations. Other key decisions that require strong involvement of the IT organization include:

  • Identity security
  • Ownership of bots
  • Ticketing & support
  • Selection of RPA reporting tool.

Find out more at the SSON RPA in Shared Services Summit, 1st to 2nd December

NelsonHall will be chairing the third SSON RPA in Shared Services Summit in Braselton, Georgia on 1st to 2nd December, and will share further insights into RPA, including hand-outs of our RPA Operating Model Guidelines. You can register for the summit here.

Also, if you would like to find out more about NelsonHall’s extensive program of RPA & AI research, and get involved, please contact Guy Saunders.

Plus, buy-side organizations can get involved with NelsonHall’s Buyer Intelligence Group (BIG), a buy-side only community which runs regular webinars on sourcing topics, including the impact of RPA. The next RPA webinar will be held later this month: to find out more, contact Guy Saunders.  

In the third blog in the series, I will look at deploying an RPA project, from developing pilots, through design & build, to production, maintenance, and support.

]]>
<![CDATA[RPA Operating Model Guidelines, Part 1: Laying the Foundations for Successful RPA]]>

 

As well as conducting extensive research into RPA and AI, NelsonHall is also chairing international conferences on the subject. In July, we chaired SSON’s second RPA in Shared Services Summit in Chicago, and we will also be chairing SSON’s third RPA in Shared Services Summit in Braselton, Georgia on 1st to 2nd December. In the build-up to the December event we thought we would share some of our insights into rolling out RPA. These topics were the subject of much discussion in Chicago earlier this year and are likely to be the subject of further in-depth discussion in Atlanta (Braselton).

This is the first in a series of blogs presenting key guidelines for organizations embarking on RPA, covering establishing the RPA framework, RPA implementation, support, and management. First up, I take a look at how to prepare for an RPA initiative, including establishing the plans and frameworks needed to lay the foundations for a successful project.

Getting started – communication is key

Essential action items for organizations prior to embarking on their first RPA project are:

  • Preparing a communication plan
  • Establishing a governance framework
  • Establishing an RPA center-of-excellence
  • Establishing a framework for allocation of IDs to bots.

Communication is key to ensuring that use of RPA is accepted by both executives and staff alike, with stakeholder management critical. At the enterprise level, the RPA/automation steering committee may involve:

  • COOs of the businesses
  • Enterprise CIO.

Start with awareness training to get support from departments and C-level executives. Senior leader support is key to adoption. Videos demonstrating RPA are potentially much more effective than written papers at this stage. Important considerations to address with executives include:

  • How much control am I going to lose?
  • How will use of RPA impact my staff?
  • How/how much will my department be charged?

When communicating to staff, remember to:

  • Differentiate between value-added and non value-added activity
  • Communicate the intention to use RPA as a development opportunity for personnel. Stress that RPA will be used to facilitate growth, to do more with the same number of people, and give people developmental opportunities
  • Use the same group of people to prepare all communications, to ensure consistency of messaging.

Establish a central governance process

It is important to establish a strong central governance process to ensure standardization across the enterprise, and to ensure that the enterprise is prioritizing the right opportunities. It is also important that IT is informed of, and represented within, the governance process.

An example of a robotics and automation governance framework established by one organization was to form:

  • An enterprise robotics council, responsible for the scope and direction of the program, together with setting targets for efficiency and outcomes
  • A business unit governance council, responsible for prioritizing RPA projects across departments and business units
  • An RPA technical council, responsible for RPA design standards, best practice guidelines, and principles.

Avoid RPA silos – create a centre of excellence

RPA is a key strategic enabler, so use of RPA needs to be embedded in the organization rather than siloed. Accordingly, the organization should consider establishing an RPA center of excellence, encompassing:

  • A centralized RPA & tool technology evaluation group. It is important not to assume that a single RPA tool will be suitable for all purposes and also to recognize that ultimately a wider toolset will be required, encompassing not only RPA technology but also technologies in areas such as OCR, NLP, machine learning, etc.
  • A best practice function to establish standards, such as naming conventions, to be applied in RPA across processes and business units
  • An automation lead for each tower, to manage the RPA project pipeline and priorities for that tower
  • IT liaison personnel.

Establish a bot ID framework

While establishing a framework for allocation of IDs to bots may seem trivial, it has proven not to be so for many organizations where, for example, including ‘virtual workers’ in the HR system has proved insurmountable. In some instances, organizations have resorted to basing bot IDs on the IDs of the bot developer as a short-term fix, but this approach is far from ideal in the long-term.

Organizations should also make centralized decisions about bot license procurement, and here the IT department, which has experience in software selection and purchasing, should be involved. In particular, the IT department may be able to play a substantial role in RPA software procurement/negotiation.

Find out more at the SSON RPA in Shared Services Summit, 1st to 2nd December

NelsonHall will be chairing the third SSON RPA in Shared Services Summit in Braselton, Georgia on 1st to 2nd December, and will share further insights into RPA, including hand-outs of our RPA Operating Model Guidelines. You can register for the summit here.

Also, if you would like to find out more about NelsonHall’s extensive program of RPA & AI research, and get involved, please contact Guy Saunders.

Plus, buy-side organizations can get involved with NelsonHall’s Buyer Intelligence Group (BIG), a buy-side only community which runs regular webinars on sourcing topics, including the impact of RPA. The next RPA webinar will be held in November: to find out more, contact Matthaus Davies.  

 

In the second blog in this series, I will look at RPA need assessment and opportunity identification prior to project deployment.

 

]]>
<![CDATA[WNS: Applying RPA in P&C Insurance with Focus on FNOL, Claims & Underwriting]]> This is the second in a series of blogs looking at how business process outsourcing vendors are applying RPA and AI in the insurance sector.

 

WNS’ RPA journey is moving quickly, with six pilots underway and five more ready to go. WNS has decided to wait on AI for the time being, in favour of developing its process automation capabilities, which has included the launch of eAdjudicator (a bolt-on RPA tool for claims adjudication) and InsurACE (a policy administration workflow tool) earlier this year.

RPA delivering 25% savings; 40% achievable with employee retraining

Echoing its peers, WNS started by applying RPA to defined, rules-based, and transaction-based insurance activities, specifically in payments and first notice of loss (FNOL), followed by subrogation, since these sub-processes are relatively standardized and do not require human judgement. Based on its pilot experience to date, cost savings in these areas have been around 25%, but in order to realise further cost savings, there is a ‘Phase 2’ that requires re-training of the labor force and process reengineering to take advantage of the automation, which could see a further 10-15% savings. Three of the pilots are in this second phase.

To take its journey forward, WNS required a technology partner who had an insurance focus, a cloud-based offering, and a particular strength in robotics for analytics – specifically with a capability to handle the vast number of compliance requirements imposed by the different U.S. states. It found these in Blue Prism (although it continues to be open to additional partnerships with other technology vendors), which also happened to be looking for more traction in the insurance space – something that WNS brought to the table.

P&C FNOL, Claims & Underwriting the Focus for 2016

In 2016, WNS has three focus areas in which it will be applying RPA, based on client appetite: FNOL, claims processing, and underwriting (UW), with an overall aim of removing the unnecessary steps in each sub-process.

As yet, there does not seem to be huge traction on the life insurance side and, as such, WNS will be focusing on property & casualty (P&C) processes. An example of a recently on-boarded UW client is a U.S. P&C insurer that was seeking to reduce the number of UW assistants it would need to hire. The client expected to hire ~75 UW assistants, but since partnering with WNS, the expectation is now that it will be able to hire ~30% fewer than this, and a further ~20% additional capacity will be created. The client moved from pilot mode for this first line of business (personal auto) to full production in April 2016, and is set to add further lines of business to the scope, each one going through separate pilots.

An example of cost saving achieved through applying the Blue Prism framework to a set of UW processes was with a client whose workforce operated in a predominantly virtual environment. The ‘before’ state saw work passing through ~40 handoffs, which WNS was able to bring down to 7, using workflow mapping. This alone has yielded ~35% savings for the client and has proved ‘transformational’ for the business.

In most cases, the conversations appear to be led by WNS. One of the key concerns raised by clients, however, is around what happens to staff allocation once RPA is deployed. Typically, staff are still very much required, but need re-training to make the most of the new systems and to ensure they operate effectively.

For now, WNS believes that sufficient savings and efficiencies can be gained through applying RPA to an insurance sub-process such as claims logging, which will provide the claims adjuster with a better summation of the situation and enable the handler to carry out the insurance process more effectively and accurately. For example, reducing the number of claims pages from 50 to 10, and eventually to as few as 7 bullet points of actionable items.

Other similar areas in which WNS has successfully applied this type of RPA include medical review and transcription. However, WNS is of the view that there are some sub-processes that cannot be carried out by anything other than human effort, e.g. bodily injury; as it stands, WNS has not found a way to simulate the experience of the claims handler with RPA for this type of process.

Areas that have now progressed beyond pilot mode and are proving successful for WNS are:

  • Vendor payment
  • Subrogation (clients are almost all on transaction-based pricing)
  • Claims logging
  • FNOL (~60% of clients are on transaction-based pricing).
]]>
<![CDATA[Wipro: Applying RPA to Insurance Claims & New Business, Looking to Holmes to Support KYC]]> This is the first in a series of blog articles looking at how business process outsourcing vendors are applying RPA and AI in the insurance sector. First up: Wipro.

 

 

Wipro started its automation journey in the late noughties and has since gone on to set up a dedicated RPA practice, and also developed its own AI platform, Wipro Holmes. Currently, Wipro is principally partnering with Automation Anywhere for RPA software.

Clients showing early interest had questions around which insurance processes bots could most easily be deployed in, and where they should be applying RPA. The processes Wipro found to be most suitable for application of RPA in the insurance sector are claims processing and new business, and hence these are the key focus areas for Wipro.

Efficiency improvements of ~40% in target insurance sub-processes

Today, over 50% of Wipro’s RPA clients are in the BFSI sector, with ~40% using bots for data entry processes and 60% for rules-based services. Wipro currently has four clients for RPA services in the insurance sector split across life, annuities & pensions (LA&P), property & casualty (P&C), and healthcare insurance. Two of these companies are focused on a single geography and two are multi-geography, including U.S., Europe, LATAM and the Middle East.  

One of the insurance clients is a Swiss provider of life and P&C services for whom Wipro provides RPA in support of new business data entry. Pre-bots, the filling in of a new business form required the use of multiple unsynchronized screens to collect the necessary information. To address this issue, Wipro developed an interface (a replica of the application form) to enable 100% automated data entry using bots, a typical ‘swivel chair’ use of RPA. This yielded a 30% - 40% efficiency improvement.

In the healthcare payer sector, Wipro has implemented RPA in support of provider contract data management, specifically in the area of contract validation. Here, Wipro designed four bots in 90 days, automating ~75% of the contract validation process and improving productivity by ~40%.

In 2016, Wipro has noticed a shift in customer attitude, with organizations now appreciating the enhanced accuracy and level of auditability that RPA brings.

Of course, the implementation of RPA is not without its objections. One frequent question from organizations just starting the RPA journey is ‘how do I stop bots going berserk if the process changes?’, since once programmed, the bots are unable to do anything other than what they have been programmed to do. Accordingly, Wipro ensures that any changes to a given process are flagged in the command centre before a bot attempts to carry out the process, with a signal given that the bot needs ‘re-training’ in order to handle it.

Secondly, IT departments sometimes ask how long the bots are required to stay in the work environment and how they fit into an overall IT transformation strategy. Wipro’s response is to treat the bot like an FTE and to keep it for as long as it is achieving benefit, ‘re-training’ it as required. Wipro suggests that bots wouldn’t conflict with the aims of an IT transformation, and ought to be considered as complementary to it.

Complementing RPA with Cognitive using Holmes

So far, so good for Wipro regarding its application of RPA in the insurance sector. RPA is being used to address data entry processes (40% of activity) and rules-based transaction processing areas such as claims (60% of current activity). However, this still leaves the question of complementing the rigid process execution of RPA with machine learning and self-learning processes, and also the question of addressing knowledge-based processing requiring human judgment.

This is where Wipro Holmes comes into the picture – a proprietary AI platform with applications for cognitive process automation, knowledge visualization, and predictive services. The platform is not currently being used with insurance clients, but conversations are expected to start within the next 9 months. It is expected that, in contrast to the RPA conversations which were led by Wipro in more than 95% of cases, the AI discussion will be led by existing RPA clients and across a wider pool of services, including finance & accounting (F&A).

Accordingly, the focus now is on developing Wipro Holmes, to ensure it is ready for use with clients in 2017. Insurance activities that will benefit first from this platform could include the area of Know Your Customer (KYC) compliance, to enable more rapid client on-boarding. 

]]>
<![CDATA[TCS Leapfrogging RPA & as-a-Service with Neural Automation & Services-as-Software]]> Much of the current buzz in the industry continues to be centered on RPA, a term currently largely synonymous with automation, and this technology clearly has lots of life left in it, for a few years at least. Outside service providers, where its adoption is rapidly becoming mature, RPA is still at the early growth stage in the wider market: while a number of financial services firms have already achieved large-scale roll-outs of RPA, others have yet to put their first bot into operation.

RPA is a great new technology and one that is yet to be widely deployed by most organizations. Nonetheless, RPA fills one very specific niche and remains essentially a band-aid for legacy processes. It is tremendous for executing on processes where each step is clearly defined, and for implementing continuous improvement in relatively static legacy process environments. However, RPA, as TCS highlights, does have the disadvantages that it fails to incorporate learning and can really only effectively be applied to processes that undergo little change over time. TCS also argues that RPA fails to scale and fails to deliver sustainable value.

These latter criticisms seem unfair in that RPA can be applied on a large scale, though frequently scale is achieved via numerous small implementations rather than one major implementation. Similarly, provided processes remain largely unchanged, the value from RPA is sustained. The real distinction is not scalability but the nature of the process environment in which the technology is being applied.

Accordingly, while RPA is great for continuous improvement within a static legacy process environment where processes are largely rule-based, it is less applicable for new business models within dynamic process environments where processes are extensively judgment-based. New technologies with built-in learning and adaptation are more applicable here. And this is where TCS is positioning Ignio.

TCS refers to Ignio as a “neural automation platform” and as a “Services-as-Software” platform, the latter arguably a much more accurate description of the impact of digital on organizations than the much-copied Accenture “as-a-Service” expression.

TCS summarizes Ignio as having the following capabilities:

  • “Sense”: ability to assimilate and mine diverse data sources, both internal and external, both structured and unstructured (via text mining techniques)
  • “Think”: ability to identify trends & patterns and make predictions and estimate risk
  • “Act”: execute context-aware autonomous actions. Here TCS could potentially have used one of the third-party RPA software products, but chose to go with its own software instead
  • “Learn”: improving its knowledge on a continuous basis and self-learning its context.

TCS Ignio, like IPsoft Amelia, began life as a tool for supporting IT infrastructure management, specifically datacenter operations. TCS Ignio was launched in May 2015 and is currently used by ten organizations, including Nationwide Building Society in the U.K. All ten are using Ignio in support of their IT operations, though the scope of its usage remains limited at present, with Ignio being used within Nationwide in support of batch performance and capacity management. Eventually the software is expected to be deployed to learn more widely about the IT environment and predict and resolve IT issues, and Ignio is already being used for patch and upgrade management by one major financial services institution.

Nonetheless, despite its relatively low level of adoption so far within IT operations, TCS is experiencing considerable wider interest in Ignio and feels it should strike while the iron is hot and take Ignio out into the wider business process environment immediately.

The implications are that the Ignio roll-out will be rapid (expect to see the first public example in the next quarter) and will take place domain by domain, as for RPA, with initial targeted areas likely to include purchase-to-pay and order-to-cash within F&A and order management-related processes within supply chain. In order to target each specific domain, TCS is pre-building “skills” which will be downloadable from the “Ignio store”. One of the initial implementations seems likely to be supporting a major retailer in resolving the downstream implications of delivery failures due to causes such as traffic accidents or weather-related incidents. Other potential supply chain-related applications cited for Ignio include:

  • Customer journey abandonment
  • The profiling, detection, and correction of check-out errors
  • Profiling, detecting, and correcting anomalies in supplier behavior
  • Detection of customer feedback trends and triggering corrective action
  • Profiling and predicting customer behavior.

Machine learning technologies are receiving considerable interest right now and TCS, like other vendors, recognizes that rapid automation is being driven faster than ever before by the desire for competitive survival and differentiation, and in response is adopting an “if it can be automated, it must be automated” stance. And the timescales for implementation of Ignio, cited at 4-6 weeks, are comparable to those for RPA. So Ignio, like RPA, is a relatively quick and inexpensive route to process improvement. And, unlike many cognitive applications, it is targeted strongly at industry-specific and back office processes and not just customer-facing ones.

Accordingly, while RPA will remain a key technology in the short-term for fixing relatively static legacy rule-based processes, next generation machine learning-based “Services-as-Software” platforms such as Ignio will increasingly be used for judgment-based processes and in support of new business models. And TCS, which a year ago was promoting RPA, is now leading with its Ignio neural automation-based “Services-as-Software” platform.

]]>