NelsonHall: Digital Transformation Technologies & Services blog feed https://research.nelson-hall.com//sourcing-expertise/digital-transformation-technologies-services/?avpage-views=blog NelsonHall's Digital Transformation Technologies & Services program is designed for organizations considering, or actively engaged in, the application of robotic process automation (RPA) and cognitive services such as AI to their business processes. <![CDATA[AntWorks Targets Breadth & Depth in Client Engagements, Partners & Curation Capabilities]]>

 

Last week, NelsonHall attended ANTENNA2020, AntWorks’ yearly analyst retreat. AntWorks has made considerable progress since its last analyst retreat, with estimated revenue growth of ~260% in the three quarters ending January 2020 and a headcount of 604 at the end of that period.

By geography, APAC remains AntWorks’ most successful region, closely followed by the Americas, with AntWorks having an increasingly balanced presence across APAC, the Americas, and EMEA. By sector, AntWorks’ client base remains largely centered on BFSI and healthcare, which together account for ~70% of revenues.

The company’s success continues to be based on its ability to curate unstructured data, with all its clients using its Cognitive Machine Reading (CMR) platform and only 20% using its wider “RPA” functionality. Accordingly, AntWorks is continuing to strengthen its document curation functionality while starting to build point solutions and deepen its partnerships and marketing.

Ongoing Strengthening of Document Curation Functionality

The company is aiming to “go deep” rather than “shallow and wide” with its customers, and cites the example of one client that started with a single unstructured document use case and, over the past year, has introduced ten more, resulting in revenues of $2.5m.

Accordingly, the company continues to strengthen its document curation capability, with recent CMR enhancements including signature verification, cursive handwriting, language extension, sentiment analysis, and hybrid processing. The signature verification functionality detects the presence of a signature in a document and verifies it against signatures held centrally or on other documents. It is particularly applicable to KYC and fraud avoidance where, for example, a signature on a passport or driving license can be matched against those on submitted applications.

This depth-first approach to document curation resonated strongly with the clients speaking at the event. In one case, it was the depth of the platform, allowing cursive handwriting and typed text to be analyzed together, that led to the early drop-out of a number of competitors tasked with building a POC that could extract cursive writing.

AntWorks also continues to extend the range of languages in which it can curate documents; 17 languages are currently supported. The company has reworked its learning process to allow quicker training on new languages, with support for Mandarin and Arabic available soon.

Hybrid processing enables multi-format documents containing, for example, text, cursive handwriting, and signatures to be processed in a single step.

Elsewhere, AntWorks has addressed a number of hygiene factors with QueenBOT, enhancing its business continuity management, auto-scaling, and security. Auto-scaling in QueenBOT allows bots to switch between processes if one process requires extra assistance to meet SLAs, effectively allowing bots to be “carpenters in the morning and electricians in the evening,” increasing both SLA adherence and bot utilization.
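A minimal sketch of how such SLA-driven rebalancing might work in principle; the process and bot objects below are illustrative, not QueenBOT’s actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Process:
    name: str
    queue_depth: int          # items waiting to be processed
    throughput_per_bot: int   # items one bot clears per hour
    sla_hours: float          # time allowed to clear the queue
    bots: list = field(default_factory=list)

    def hours_to_clear(self) -> float:
        return self.queue_depth / (max(len(self.bots), 1) * self.throughput_per_bot)

    def at_risk(self) -> bool:
        return self.hours_to_clear() > self.sla_hours

def rebalance(processes):
    """Move bots from comfortably-within-SLA processes to at-risk ones."""
    donors = [p for p in processes if not p.at_risk() and len(p.bots) > 1]
    for needy in (p for p in processes if p.at_risk()):
        while needy.at_risk() and donors:
            donor = max(donors, key=lambda p: p.sla_hours - p.hours_to_clear())
            needy.bots.append(donor.bots.pop())   # a bot switches trades
            if donor.at_risk() or len(donor.bots) <= 1:
                donors.remove(donor)

ap = Process("accounts_payable", 900, 50, 4, ["bot1"])
hr = Process("hr_onboarding", 40, 20, 8, ["bot2", "bot3"])
rebalance([ap, hr])
print(len(ap.bots), len(hr.bots))   # -> 2 1: a bot moved to the at-risk process
```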

Another key hygiene factor addressed in the past year has been training material. AntWorks began 2019 with a thin training architecture, with just two FTEs supporting the rapidly expanding company; over the past year, the number of FTEs supporting training has grown to 25, supporting the creation of thousands of hours of training material. AntWorks also launched its internship program, starting in India, which added 43 FTEs in 2019; the ambition this year is to take the program global.

Announcement of Process Discovery, Email Agent & APaaS Offerings

Process discovery is an increasingly important element in intelligent automation, helping to remove the up-front cost involved in scaling use cases by identifying and mapping potential use cases.

AntWorks’ process discovery module enables organizations either to record the keystrokes taken by one or more users across multiple transactions, or to import keystroke data from third-party process discovery tools. From these recordings, it uses AI to identify the cycles of the process, i.e. the individual transactions, and presents the user with the details of the workflow, which can then be grouped into process steps for ease of use. The process discovery module can also be used to help identify the business rules of the process and assist in semi-automatic creation of the identified automations (aka AutoBOT).
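As a toy illustration of the cycle-identification step (real tools apply ML over many more signals than this), a recorded event log can be segmented into transactions by detecting a recurring anchor event:

```python
def split_into_cycles(events):
    """Split a recorded keystroke/click log into per-transaction cycles.

    Simple heuristic: the action that opens the log is treated as the
    anchor that starts each new transaction. Illustrative only.
    """
    anchor = events[0]["action"]
    cycles, current = [], []
    for ev in events:
        if ev["action"] == anchor and current:
            cycles.append(current)   # previous transaction is complete
            current = []
        current.append(ev)
    if current:
        cycles.append(current)
    return cycles

log = [{"action": "open_crm"}, {"action": "copy_id"}, {"action": "paste_erp"},
       {"action": "open_crm"}, {"action": "copy_id"}, {"action": "paste_erp"}]
print(len(split_into_cycles(log)))   # -> 2 individual transactions
```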

The process discovery module aims to offer ease of use compared to competitive products and can, besides identifying transaction steps, be used to assist organizations in calculating the RoI on business cases and in estimating the proportions of processes that can be automated, though AntWorks is understandably reluctant to underwrite these estimates.

One of the challenges for AntWorks over the coming year is to develop standardized use cases/point solutions based on its technology components, initially in horizontal form, and ultimately verticalized. Two of these just announced are Email Agent and Accounts Payable as-a-Service (APaaS).

Email Agent is a natural progression for AntWorks given its differentiation in curating unstructured documents, built on components from the ANTstein full stack and packaged for ease of consumption. It is a point solution designed solely to automate email traffic and encompasses ML-based email classification, sentiment analysis to support email prioritization, and extraction of actionable data. Email Agent can also respond contextually via templated static or dynamic content. AntWorks estimates that 40-50 emails are sufficient to train each use case, such as HR-related email.
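As a generic illustration of why a few dozen labeled emails can suffice for a narrow use case (this is plain scikit-learn, not AntWorks’ Email Agent):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A few dozen labeled emails per use case (here: HR-related intents).
emails = [
    "How many days of annual leave do I have left?",
    "Please send me a copy of my payslip for March.",
    "I want to update my bank details for payroll.",
    # ... 40-50 examples in practice ...
]
labels = ["leave_balance", "payslip_request", "payroll_update"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(emails, labels)
print(clf.predict(["Can you resend my February payslip?"]))
# likely -> ['payslip_request']
```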

The next step in the development of Email Agent is the production of verticalized solutions, training the model on specific verticals to understand the front-office relationships that organizations (such as those in the travel industry) have with their clients.

APaaS is a point solution consisting of a pre-trained configuration of CMR that extracts relevant information from invoices, which can then be passed via API into accounting systems such as QuickBooks. Through these point solutions offered on the cloud, AntWorks hopes to open up the SME market.
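A sketch of that final hand-off step; the endpoint, payload shape, and field names here are hypothetical placeholders, not the actual QuickBooks API:

```python
import requests

def post_invoice(extracted: dict, api_base: str, token: str) -> dict:
    """Push CMR-extracted invoice fields into an accounting system.

    A real integration would follow the target system's documented API
    (e.g. QuickBooks Online) and its OAuth flow; this is illustrative.
    """
    payload = {
        "vendor": extracted["vendor_name"],
        "invoice_number": extracted["invoice_no"],
        "date": extracted["invoice_date"],
        "lines": extracted["line_items"],
        "total": extracted["total_amount"],
    }
    resp = requests.post(
        f"{api_base}/invoices",
        json=payload,
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```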

Focusing on Quality of Partnerships, Not Quantity

Movement on AntWorks’ partner ecosystem (now ~66 partners) has been slower than expected, with only a handful of partners added since last year's ANTENNA event, despite its expansion being a priority. Instead, AntWorks has been ensuring that the partnerships it does have and signs are deep and constructive. Examples include Bizhub and Accenture, two recently added partners that are helping train CMR in Korean and Thai respectively, in exchange for a period of exclusivity in those countries.

AntWorks is also partnering with SBI Group to penetrate the South East Asia marketplace, with SBI assisting AntWorks in implementing the ability to carry out data extraction in Japanese. Elsewhere, AntWorks has partnered with the SEED Group based in Dubai and chaired by Sheikh Saeed Bin Ahmed Al Maktoum to access the MENA (Middle East & North Africa) region.

Hugo Walkinshaw was recently hired to lead the partner ecosystem, and he has his work cut out for him: CEO Ash Mehra targets a ratio of direct sales to partner sales of between 60:40 and 50:50, an ambitious target given the current 90:10 ratio. The aim is to achieve this through the current strategy of working very closely with partners, signing exclusive partnerships where appropriate, and targeting less mature geographies and emerging use cases, such as IoT, where AntWorks can establish a major presence.

In the coming year, expect AntWorks to add more deep partnerships focused on specific geographic presence in less mature markets and targeted verticals, and possibly with technology players to support future plans for running bots on embedded devices, for example on ships.

Continuing to Ramp Up Marketing Investment

AntWorks was relatively unknown 18 months ago but has made a major investment in marketing since then. AntWorks attended ~50 major events in 2019, and possibly 90 events in total counting all minor events. However, AntWorks’ approach to events is arguably even more important than the number attended, with the company keen to establish a major presence at each event it attends. AntWorks does not wish to be merely another small booth in the crowd, instead opting for larger spaces in which it can run demos to support interest from clients and partners.

This appears to have had the desired impact. Overall, AntWorks states that in the past year it has gone from being invited to RFIs/RFPs in 20% of cases to 80% and that it intends to continue to ramp up its marketing budget.

A series B round of funding, currently underway, is targeted at expanding its marketing investments as well as its platform capabilities. Should AntWorks utilize this second round of funding as effectively as its first with SBI Investments two years ago, we expect it to act as a springboard for exponential growth and for these deep relationships, and for AntWorks to continue to lead in middle- and back-office intelligent automation use cases with high volumes of complex or hybrid unstructured documents.

]]>
<![CDATA[UiPath: Forging Connections Between Business Users & Automation]]>

 

“Reboot work” was the slogan for UiPath’s recent Forward III partner event, a reference to rethinking the way we work. UiPath’s vision is to elevate employees above repetitive and tedious tasks to a world of creative, fulfilling work, driven by an “automation first” mindset, along with the concept of a bot for everyone and human-automation collaboration.

During the event, which attracted ~3K attendees, UiPath referenced ~50 examples of clients at scale, and pointed to a sales pipeline of more than $100m.

Previously, UiPath’s automation process had three phases: Build, Manage, and Run, using Studio, Orchestrator, and Attended and Unattended bots respectively. Its new products extend this process to six phases: Plan, Build, Manage, Run, Engage, and Measure. In this blog, I look at the six phases of the UiPath automation process and at the key automation products at each stage, including new and enhanced products announced at the event.

 

 

Plan phase (with Explorer Enterprise, Explorer Expert, ProcessGold, and Connected Enterprise)

By introducing the product lines Explorer and Connected Enterprise, UiPath aims to allow RPA developers to have a greater understanding of the processes to be automated when planning RPA development.

Explorer consists of three components: Explorer Enterprise, Explorer Expert, and ProcessGold. Explorer provides new process mapping and mining functionality building on two UiPath acquisitions: the previously announced SnapShot, which now comes under the Explorer Enterprise brand, and the newly announced ProcessGold, whose existing clients include Porsche and EY. Both products construct visual process maps in data-driven ways; Explorer Enterprise (SnapShot) does this by observing the steps performed by a user for the process, and ProcessGold does this by mining transaction logs from various systems.

Explorer Enterprise performs task mining, with an agent sitting in the background of a user machine (or set of users’ machines) for 1-2 weeks. Explorer then collects details of the user activities, the effort required, the frequency of the activity, etc.

ProcessGold, on the other hand, monitors transaction logs and, following batch updates and 2 to 3 hours of construction, builds a process flow diagram. These workflow diagrams show the major activities of the process and the time/effort required for each step, which can then be expanded to an individual task level. Additionally, at the activity level, the user has access to activity and edge sliders. The activity slider expands the detail of the activities, and the edge slider expands the number of paths that the logged users take, which can identify users possibly straying from a golden path.
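To illustrate the underlying process-mining primitive (not ProcessGold’s actual implementation), a directly-follows graph can be built from per-case activity sequences mined from transaction logs; low-count edges are exactly what an “edge slider” surfaces as users straying from the main path:

```python
from collections import Counter
from itertools import pairwise  # Python 3.10+

def directly_follows(event_log):
    """event_log: {case_id: [activity, activity, ...]} from system logs."""
    edges = Counter()
    for activities in event_log.values():
        for a, b in pairwise(activities):
            edges[(a, b)] += 1
    return edges

log = {
    "case1": ["receive", "validate", "approve", "pay"],
    "case2": ["receive", "validate", "approve", "pay"],
    "case3": ["receive", "approve", "pay"],   # skipped validation
}
for (a, b), n in directly_follows(log).most_common():
    print(f"{a} -> {b}: {n}")
```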

Administrators can then use the data from Explorer Enterprise and/or ProcessGold in Explorer Expert. Explorer Expert allows admin users to enter deeper organizational insights, and either record a process to build or manually create a golden-path workflow. These workflows act as a blueprint for building bots and can be exported to Word documents for use by bot creators.

Connected Enterprise enables an organization to crowdsource ideas for which processes to automate, and aims to simplify the automation and decision-making pipelines for CoEs.

Automation ideas submitted to Connected Enterprise are accompanied by process information from the submitter in the form of nine standard questions (how rule-based the process is, how likely it is to change, who owns it, etc.). This information is crunched to produce automation potential and ease of implementation scores to help decide the priority of the automation idea. These ideas are then curated by admins, who can ask the end user for more information, including an upload of ProcessGold files.
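A sketch of how such scoring might be computed; the question names and weights below are illustrative stand-ins for UiPath’s nine standard questions:

```python
def score_idea(answers: dict) -> dict:
    """Turn questionnaire answers (each rated 1-5) into the two scores
    used to prioritize an idea. Weights are purely illustrative."""
    potential = (0.4 * answers["rule_based"]
                 + 0.3 * answers["volume"]
                 + 0.3 * answers["manual_effort"])
    ease = (0.5 * answers["process_stability"]   # how unlikely it is to change
            + 0.3 * answers["input_structure"]
            + 0.2 * answers["system_accessibility"])
    return {"automation_potential": round(potential, 2),
            "ease_of_implementation": round(ease, 2)}

print(score_idea({"rule_based": 5, "volume": 4, "manual_effort": 4,
                  "process_stability": 5, "input_structure": 3,
                  "system_accessibility": 4}))
# -> {'automation_potential': 4.4, 'ease_of_implementation': 4.2}
```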

The additions of Explorer and Connected Enterprise allow developers to gain deeper insights into the processes to be automated, and business users to connect with RPA development.

Build phase (with enhanced Studio, plus new StudioX & StudioT)

New components to the build phase include StudioX and StudioT along with a number of enhancements to the existing Studio bot builder.

StudioX is a simplified version of the Studio component targeted at citizen developers and regular business users (what UiPath calls the ‘Excel power user’ level), enabling them to create simpler bots as part of the push for citizen developers and a bot for every person.

StudioX simplifies bot development by removing the need for variables, and reduces the number of tasks that can be selected. Bots produced with StudioX can be opened with Studio; however, the reverse may not necessarily be the case depending on the components used in Studio.

The build-a-bot demo session for StudioX focused on using Excel to copy data in and out of HR and finance systems and extracting and renaming files from an Outlook inbox to a folder. Using StudioX in the build-a-bot session was definitely an improvement over Studio for the creation of these simple bots.

StudioT, which is in beta and set for release in Q1 2020, will act as a version of Studio focused entirely on test automation. NelsonHall’s software testing research, including software testing automation, can be found here.

Further key characteristics of the existing build components include:

  • Long-running workflows, which can suspend a process, send a query to a human while freeing up the bot, and resume the bot once the human has provided input (see the sketch after this list)
  • Cloud, where a 1-minute signup for the Community version of the aaS platform has attracted 240k users as of September 2019, up from 167k in June 2019
  • Queue triggers, which can automatically take action when items are added to a queue
  • More advanced debugging with breakpoint and watch panels
  • Taxonomy management
  • Validation stations.
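To illustrate the long-running workflow pattern from the first bullet above, here is a conceptual sketch using a Python generator; UiPath actually persists workflow state via Orchestrator, so all names here are illustrative:

```python
def extract_fields(invoice):   # placeholder bot activity
    return {"total": invoice["total"], "vendor": invoice["vendor"]}

def post_to_erp(data):         # placeholder bot activity
    pass

def invoice_approval(invoice):
    """Yields when a human is needed, freeing the bot for other jobs,
    and resumes at the same point once the human input arrives."""
    data = extract_fields(invoice)                     # bot work
    if data["total"] > 10_000:
        decision = yield {"task": "approve_high_value", "data": data}
        if decision != "approved":
            return "rejected"
    post_to_erp(data)                                  # more bot work
    return "completed"

job = invoice_approval({"total": 25_000, "vendor": "Acme"})
human_task = next(job)      # workflow suspends; the bot is now free
# ... hours later, a human completes the task in their inbox ...
try:
    job.send("approved")    # workflow resumes at the yield point
except StopIteration as done:
    print(done.value)       # -> completed
```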

With the introduction of StudioX, UiPath aims to democratize RPA development for business users, at least in simple cases; and with long-running workflows, human-bot collaboration no longer requires bots to sit idle, hogging resources while waiting for responses.

Manage phase (with AI Fabric)

The Manage phase now allows users to manage machine learning (ML) models using AI Fabric, an add-on to Studio. It allows users to more easily select ML models, including models created outside of UiPath, and integrate them into a bot. AI Fabric, which was announced in April 2019, has now entered private preview.

Run phase (with enhancements to bots with native integrations)

Improvements to the run components leverage changes across the portfolio of Plan, Build, Manage, Run, Engage, and Measure, in particular for attended bots with Apps (see below). Other new features include:

  • Expanding the number of native integrations, for which UiPath and its partners are building 100s of connectors to business applications such as Salesforce and Google to provide functionality including launching bots from the business application. Newer native applications are available via the UiPath Go! Storefront
  • A new tray will feature in the next release.

Engage phase (allowing users direct connection to bots with Apps)

Apps act as a direct connection for users to interact with attended bots through forms, tasks, and chatbots. In Studio, developers can add a form with the new form designer to ask for inputs directly from the user. For example, a bot could trigger a form to be filled in when the OCR confidence score is substandard due to a low-quality image.

Bots that encounter a need for human intervention through Apps will automatically suspend, add a task to the centralized inbox, and move on to running another job. When a human has completed the required interaction, the job is flagged to be resumed by a bot.
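A minimal sketch of this confidence-gated routing; the function names, threshold, and task-inbox structure are illustrative, not UiPath’s actual API:

```python
TASK_INBOX = []   # stands in for the centralized task inbox

def run_ocr(image):   # placeholder: returns (extracted_text, confidence)
    return "INV-0042  total: 118.40", 0.62

def process_document(doc_image, threshold=0.85):
    """High-confidence OCR flows straight through; low confidence
    suspends the job and raises a validation form for a human."""
    text, confidence = run_ocr(doc_image)
    if confidence >= threshold:
        return {"status": "completed", "text": text}
    TASK_INBOX.append({
        "form": "validate_extraction",
        "prefill": text,
        "reason": f"OCR confidence {confidence:.2f} below {threshold}",
    })
    return {"status": "suspended"}   # the bot moves on to the next job

print(process_document("scan_0042.png"))   # -> {'status': 'suspended'}
print(TASK_INBOX[0]["reason"])
```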

With the addition of Apps, the development required to capture inputs from business users is minimized, allowing a deeper human-bot connection, shorter development timelines, and progress toward the goal of ‘a bot for every person’.

Measure phase (now with Insights to measure bot performance)

Insights expands UiPath’s reporting capabilities. Specifically, Insights features customizable dashboarding for process and bot metrics. Insights can also send pulses, i.e. notifications, to users on metrics, such as when an SLA falls below a threshold. Dashboards can be filtered on processes and bots and can be shared through a URL or as a manually sent or scheduled PDF update.

What does this mean for the future of UiPath?

While UiPath and its competitors have long-standing partnerships with the likes of Celonis for process mining, the addition of native process mining through the acquisitions of SnapShot and ProcessGold, in addition to the expanded reporting capabilities, position UiPath as more of an end-to-end RPA provider.

With ProcessGold, NelsonHall believes that UiPath will continue the development of Explorer, which could lead to a nirvana state in which a client deploys ProcessGold, ProcessGold maps the processes and identifies areas that are ideal for automation, and Explorer Expert helps the bot creator to design this process by linking directly with Studio. While NelsonHall has had conversations with niche process mining and automation providers that are focusing on developing bots through a combination of transaction logs and recording users, UiPath is currently the best positioned of the big 3 intelligent automation platform providers to invest in this space.

StudioX is a big step towards enabling citizen developers. During our build-a-bot session, it was clear that the simplified version of the platform is more user-friendly, resulting in the NelsonHall team powering ahead of the instructor at points. However, we are somewhat concerned that, while StudioX opens up bot development to a wider range of personae, the slight disconnects between Studio and StudioX could frustrate users who learn StudioX and then want to leverage activities currently restricted to Studio (such as error handling). NelsonHall believes that the lines between Studio and StudioX will blur, with StudioX receiving simplified versions of functions currently restricted to Studio, enabling more bots to be passed between the two personae.

Conclusion

With the announcements at the Forward III event, it is clear that UiPath is enabling organizations to connect the business users directly with automation; be that through citizen developers with StudioX, the Connected Enterprise Hub to forge stronger connections between business users and automation CoEs, Explorer to allow the CoE to have greater understanding of the processes, or Apps to provide direct access to the bot.

This multi-pronged approach to connecting the developer and automation to the business user will certainly reduce frustrations around bot development, and reduce the feeling among business users that automation is something thrust upon them rather than part of their organization's journey to a more efficient way of working.

]]>
<![CDATA[Genpact Acquires Rightpoint to Strengthen 'Experience' Capability]]>

 

Enterprise operations transformation requires three critically important capabilities:

  • Domain process expertise and the ability to identify new “digital” target operating models
  • Transformational technology capability, leveraging technologies such as cloud platforms and intelligent automation to elevate straight-through processing and self-service principles ahead of agent-based processing
  • Experience design and implementation, now highly important to optimize the experience across entire customer, employee, and partner populations.

Genpact has strong domain process expertise and, in recent years, has developed strong transformational technology capability but, despite its acquisition of TandemSeven, has historically possessed lower levels of capability in “experience” design and development.

However, TandemSeven’s experience capability was becoming highly important to Genpact even in core activities such as order management and collections, and Genpact recognized that “experience” was potentially a key differentiating factor for the company. Accordingly, having seen the benefits of integrating TandemSeven, Genpact increasingly looked to go up the value chain in experience capability by both enhancing and scaling its existing capabilities.

Rightpoint Judged to be Highly Complementary to TandemSeven

Rightpoint was then identified as a possible acquisition target by the Genpact M&A team, with Genpact judging that Rightpoint’s assets and capabilities were highly complementary to those of TandemSeven.

Rightpoint currently employs ~450 personnel and positions as a full-service digital agency offering multidisciplinary teams across strategy, design, content, engineering, and insights. The company was founded on the thesis that employee experience is paramount, initially focusing on employee experience (a key area for Genpact) before placing increasing emphasis on consumer experience in recent years.

Genpact perceives that Rightpoint can make a significant contribution to helping organizations “define the creative, define the interactive, and hence define a higher experience.” The company’s clients include Aon, Sanofi, M Health, Grant Thornton, Flywheel, and Walgreens. For example, Rightpoint has defined and designed the entire employee experience for Grant Thornton, where the company developed an employee information sharing and knowledge management platform. In addition, Rightpoint has assisted a large pharmaceutical company in creating a patient engagement application to encourage patients to monitor their insulin and sugar levels.

In addition to a complementary skillset, Rightpoint is also complementary to TandemSeven in industry presence. TandemSeven has a strong focus on financial services, with Rightpoint having a significant presence in healthcare and clients in consumer goods, auto, and insurance.

Maximizing the Synergies Between Genpact & Rightpoint

Genpact expects to grow both Rightpoint’s and its own revenues by exploiting the synergies between the two organizations.

One initial synergy being targeted by Genpact is providing end-to-end and “closed loop” services to its clients. Rightpoint employs both creative and technology personnel, with its creative personnel typically having a blend of technology capability allowing them to go from MVP to first product to roll-out. Rightpoint is a Microsoft Customer Engagement Alliance National Solution Provider, a Sitecore Platinum Partner, a certified Google Developer Agency, and also has partnerships with Episerver and Salesforce.

However, the company lacks the process and domain expertise that Genpact can bring to improve process target models and process controls & management. For example, in the pharmaceutical example above, Rightpoint could develop the app, while Genpact could run the app and provide the analytics to improve patient engagement, with Rightpoint then modifying the app accordingly.

Secondly, Genpact will support Rightpoint’s growth by bringing financial muscle to Rightpoint, facilitating:

  • An ability to invest in new technology capability in platforms such as Shopify and Adobe
  • The financial means to be able to spend a significant amount of time doing discovery work with clients and prospects, and hence targeting larger-scale assignments.

However, Genpact is being careful not to overstretch Rightpoint. The company intends to be highly disciplined in introducing Rightpoint to its accounts, initially targeting just those champion accounts where Rightpoint will enable Genpact to create a significant level of differentiation.

Genpact also perceives that it can learn from Rightpoint’s delivery methodologies. Rightpoint has a strong methodology for driving agile delivery and makes extensive use of gig workers (~10-15% of its workforce), and these are both areas where Genpact perceives it can apply Rightpoint practice to its wider business.

Rightpoint Will Retain its Identity, Culture & Management

The plan is to integrate Rightpoint and TandemSeven, with a porting of expertise and resources between the two companies, and with Ross Freedman heading an expanded Rightpoint capability and reporting to Genpact’s transformation services lead.

In terms of the current organization, Rightpoint has an experience practice and a digital operations practice. This includes an offshore delivery center in Jaipur and technology practice groups. However, while the practices are national, most of Rightpoint’s client delivery work is carried out in regional centers to give strong client proximity. The company’s HQ is in Chicago, with regional centers in Atlanta, Boston, Dallas, Denver, Detroit, Los Angeles, New York, and Oakland.

In due course, Genpact will likely further restructure some of the delivery, with a greater proportion of non-client-facing activity being moved into offshore CoEs.

]]>
<![CDATA[Automation Anywhere’s Enterprise A2019, Simpler to Use, Quicker to Scale]]> ‘Anything Else is Legacy’ was the messaging presented at Automation Anywhere’s Enterprise A2019 launch, hosted in New York.

The event, the first under new CMO Riadh Dridi, showcased improvements in the new version of the Automation Anywhere platform around:

  • Experience – the most immediate change is in the UI. While prior versions utilized code, workflow, and mixed code/workflow views, the new version features a completely revamped workflow view that simplifies the UX into a low-code environment
  • Cloud – delivery now utilizes a completely web-based interface, allowing users to sign in and create bots in minutes with zero installation required. This speed of development was demonstrated live on stage, with SVP of products Abhijit Kakhandiki successfully racing to create a simple bot against the arrival of an Uber ordered by CEO Mihir Shukla. The bot used in this example was part of Automation Anywhere’s RPA-aaS offering, hosted on Azure leveraging its partnership with Microsoft. Automation Anywhere was also keen to point out the ability to use the platform on-premise or in a private cloud, as deployed at JP Morgan Chase, the client speaking at the event
  • Ecosystem – Automation Anywhere highlighted its strong and growing ecosystem. With Microsoft, for example, the partnership has been operating for over a year and has so far featured the ability to embed Microsoft’s AI tools into bots, plus the above-mentioned Azure partnership. The event featured a demonstration of the integration of Automation Anywhere into Office: a user was able to select and use bots from Excel, as a single joined experience
  • Intelligent Automation – in addition to leveraging the ecosystem for its drag-and-drop third-party AI components, another improvement in A2019 is the integration of capabilities gained through the Klevops acquisition earlier this year to improve assisted automation, providing greater bot-human collaboration across teams and workflows

The majority of these enhancements are already analyzed in NelsonHall’s profile of Automation Anywhere’s capabilities as part of the Intelligent Automation Platform NEAT assessment.

Using the above enhancements, Automation Anywhere estimates that whereas previously clients required 3 to 6 months to POC, and a further 6 to 24 months to scale, it now takes 1-4 months to POC and 4-12 months to scale.

Absent from the event were enhancements to bot governance procedures, vitally important as access to bot building widens, and to the Bot Store, for which curation could still be an issue.

While the messaging of the event was ‘Anything Else is Legacy’, there were some points at which the announcement looks unfinished: the Office integration currently extends only to Excel, with the rest of the suite to follow, and the Community version of Automation Anywhere, which is how a large proportion of users dip their toes into automation, is set to be updated to match A2019 later in Q4 2019. Likewise, while the improved workflow view is much cleaner and easier to use than competitors’, leading to quicker bot development, competitor platforms more easily handle complex, branching operations. Therefore, while A2019 can be ideal for organizations looking to have citizen developers build simple bots, organizations looking to automate more complex workflows should include the competing platforms in shortlisting.

NelsonHall's profile on the Automation Anywhere platform can be found here.

The recent NEAT evaluation of Intelligent Automation Platforms can be found here.

]]>
<![CDATA[NelsonHall Launches Industry-First Intelligent Automation Platform Evaluation]]>

 

NelsonHall has just launched an industry-first evaluation of Intelligent Automation (IA) platforms, including platforms from AntWorks, Automation Anywhere, Blue Prism, Datamatics, IPsoft, Jacada, Kofax, Kryon, Redwood, Softomotive, and UiPath.

As RPA and artificial intelligence converge to address more sophisticated use cases, we at NelsonHall feel it is now time for an evaluation of IA platforms on an end-to-end basis and based on the use cases to which IA platforms will typically be applied. Accordingly, NelsonHall has evaluated IA platforms against five use cases:

  • Ability for Business Process Owners to Develop Automations
  • Bot/Human Co-Working SSC Capability
  • Ease of IA Adoption & Scaling
  • End-to-End IA Capability
  • Overall.

Ability for Business Process Owners to Develop Automations – as organizations move to a ‘bot for every worker’, platforms must support business process owners in developing automations, rather than just select individuals within an automation CoE. Capabilities that support business process owners in developing an automation include a strong bot development canvas, a well-populated app/bot store, and process discovery functionality, all in support of speed of implementation.

Bot/Human Co-Working SSC Capability – in addition to traditional unassisted back-office automation and assisted individual automations, bots are increasingly required to provide end-to-end support for large-scale SSC and contact center automation. This increasingly requires bot/human rather than human/bot co-working, with the bot taking the lead in processing SSC transactions, queries and requests. The key capabilities here include conversational intelligence, ability to handle unidentified exceptions, and seamless integration of RPA and machine learning.

End-to-End IA Capability – the ability for a platform to support an automation spanning an end-to-end process, leveraging ML and artificial intelligence, either through native technologies or through partnerships. While many IA implementations remain highly RPA-centric, it is critical for organizations to begin to leverage a wider range of IA technologies if they are to address unstructured document processing and begin to incorporate self-learning in support of exception handling. Key capabilities here include computer vision/NLP, ability to handle unidentified exceptions, and seamless integration of RPA and machine learning in support of accurate document/data capture, reduced error rates, and improved transparency & auditability of operations.

Ease of IA Adoption & Scaling – the ability for organizations to roll out automations at scale. Key criteria here include the ability to leverage the cloud delivery of the IA platform and the strength of the bot orchestration/management platform.

Overall – a composite perspective of the strength of the IA platforms across capabilities, delivery options, and the benefits provided to clients.

No single platform is the most appropriate across all these use cases, and the pattern of capability varies considerably by use case. And this area is ill-understood, even by the vendors operating in this market, with companies that NelsonHall has identified as leaders unknown even to some of their peers. However, the NelsonHall Evaluation & Assessment Tool (NEAT) for IA platforms enables organizations to see the relative strengths and capabilities of platform vendors for all the use cases described above in a series of quadrant charts.

If you are a buy-side organization, you can view these charts, and even generate your own charts based on criteria that are important to you, FREE-OF-CHARGE at NelsonHall Intelligent Automation Platform evaluation.

The full project, including comprehensive profiles of each vendor and platform, is also available from NelsonHall by contacting either Guy Saunders or Simon Rodd.

]]>
<![CDATA[Democratizing RPA through the Connected Entrepreneur Enterprise]]>

 

Following on from the Blue Prism World Conference in London (see separate blog), NelsonHall recently attended the Blue Prism World conference in Orlando. Building on the significant theme around positioning the ‘Connected Entrepreneur Enterprise’, the vendor provided further details on how this links to the ‘democratization’ of RPA through organizations.

In the past, Blue Prism has seen automation projects stall when led from the bottom up (due to the inability to scale and to apply strong governance or best practices from IT), or from the top down (which has issues with buy-in and speed of deployment). However, its Connected Entrepreneur Enterprise story aims to overcome these issues by decentralizing automation. So how is Blue Prism enabling this?

Connected Entrepreneur Enterprise

The Connected RPA components, namely Blue Prism’s connected-RPA platform, Blue Prism Digital Exchange, Blue Prism Skills, and Blue Prism Communities, all aim to facilitate this. In particular, Blue Prism Communities acts as a knowledge-sharing platform through which Blue Prism envisions clients will access forums for help in building digital workers (software robots), share best practices, and (with its new connection into Stack Overflow) collaborate on digital worker development.

Blue Prism Skills lightens the knowledge requirements for users beginning digital worker development, with the ability to drag and drop AI components, such as any number of computer vision AI solutions, into processes.

Decipher, Blue Prism’s document processing capability developed by its R&D lab, features ML that can be integrated into digital workers, which in turn can have skills such as language detection from Google dropped into the process. The ability to drag and drop these skills continues the work of allowing the business users who know the process best to quickly and easily build AI into digital workers. Additionally, Decipher introduces human-in-the-loop capability into Blue Prism to assist in cases where the OCR lacks confidence in its result. The beta version of Decipher is set to launch this summer with a focus on invoice processing.

Decipher will also feature in the new cloud-based, mobile-enabled dashboard’s notification area which, in addition to providing SLA alerts, provides alerts when queues for Decipher’s human-in-the-loop feature are backing up.

Client example

An example of Blue Prism being used to democratize RPA comes from marquee client EY. EY, Blue Prism’s fifth largest client, spoke during the conference about its automation journey. During the 4.5-year engagement, EY has deployed 2k digital workers, with 1.3k performing client work and 700 working internally on 500 processes. Through the deployment of the digital workforce, EY has saved 2 million man-hours.

In democratizing RPA, EY federated the automation to the business, while using a centralized governance model and IT pipeline. A benefit of having an IT pipeline was that the automation of processes was not a stop-start development.

When surveying its employees, EY found that the employees who had been involved in the development of RPA had the highest engagement.

Likewise, a market survey that Blue Prism commissioned from a partner found that in 87% of cases in the U.S., employees are willing to reskill to work alongside a digital workforce.

Summary

There is further work to be done in democratizing RPA as part of this Connected Entrepreneur Enterprise. Blue Prism is currently looking into upgrading the underlying architecture and is surveying its partners with regard to UI changes; in addition, it is moving aspects of the platform to the cloud, starting with the dashboarding capability. Also, while Blue Prism has its university partnerships, these are often not heavily marketed and compete with other RPA vendors in the space offering the likes of community editions to encourage learning.

]]>
<![CDATA[IPsoft Looks to Reduce Time to Value While Increasing Return on AI]]>

 

NelsonHall recently attended the IPsoft Digital Workforce Summit in New York and its analyst events in NY and London. For organizations unfamiliar with IPsoft, the company has around 2,300 employees, approximately 70% of them based in the U.S. and 20% in Europe. Europe is responsible for approximately 30% of the IPsoft client base, with clients relatively evenly distributed across six regions: U.K., Spain & Iberia, France, Benelux, Nordics, and Central Europe.

The company began life with the development of autonomics for ITSM in the form of IPcenter, and in 2014 launched the first version of its Amelia conversational agent. In 2018, the company launched 1Desk, effectively combining its cognitive and autonomic capabilities.

The events outlined IPsoft’s positioning and plans for the future, with the company:

  • Investing strongly in Amelia to enhance its contextual understanding and maintain its differentiation from “chatbots”
  • Launching “Co-pilot” to remove the currently strong demarcation between automated and agent interactions
  • Building use cases and a partner program to boost adoption and sales
  • Positioning 1Desk and its associated industry solutions as end-to-end intelligent automation solutions, and the key to the industry and the future of IPsoft.

Enhancing Contextual Understanding to Maintain Amelia’s Differentiation from Chatbots

Amelia has often suffered from being seen at first glance as "just another chatbot". Nonetheless, IPsoft continues to position Amelia as “your digital companion for a better customer service” and to invest heavily to maintain Amelia’s lead in functionality as a cognitive agent. Here, IPsoft is looking to differentiate by stressing Amelia’s contextual awareness and ability to switch contexts within a conversation, thereby “offering the capability to have a natural conversation with an AI platform that really understands you.”

Within a conversation, Amelia runs each utterance through six pathways in sequence, and the pathway with the highest probability wins (see the sketch after the list below). The pathways are:

  • Intent model
  • Semantic FAQ
  • AIML
  • Social talk
  • Acknowledge
  • Don’t know.
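A minimal sketch of this winner-takes-all evaluation, with toy scoring functions standing in for Amelia’s actual models:

```python
def understand(utterance, pathways):
    """Score the utterance under each pathway in turn; highest wins."""
    scored = [(name, scorer(utterance)) for name, scorer in pathways]
    return max(scored, key=lambda pair: pair[1])

pathways = [
    ("intent_model", lambda u: 0.92 if "password" in u else 0.10),
    ("semantic_faq", lambda u: 0.40),
    ("aiml",         lambda u: 0.20),
    ("social_talk",  lambda u: 0.70 if "thanks" in u else 0.05),
    ("acknowledge",  lambda u: 0.15),
    ("dont_know",    lambda u: 0.30),   # fallback floor
]
print(understand("please reset my password", pathways))
# -> ('intent_model', 0.92)
```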

The platform also separates “entities” from “intents”, capturing both using Natural Language Understanding. Both intent and entity recognition are specific to the language used, though IPsoft is now simplifying implementation further by making processes language-independent and removing the need for the client to implement channel-specific syntax.

A key element in supporting more natural conversations is the use of stochastic business process networks, which means that Amelia can identify the required information as it is provided by the user, rather than having to ask for and accept items of information in a particular sequence as would be the case in a traditional chatbot implementation.
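To illustrate the contrast with sequential prompting, here is a toy slot-filling sketch that captures whichever required entities appear, in any order (regexes stand in for real NLU models):

```python
import re

REQUIRED_SLOTS = {
    "amount":  re.compile(r"\$\d+(?:\.\d{2})?"),
    "account": re.compile(r"\b(checking|savings)\b"),
    "payee":   re.compile(r"\bto (\w+)"),
}

def fill_slots(utterance, slots):
    """Capture any required entities present in the utterance, rather
    than asking for them one by one in a fixed sequence."""
    for name, pattern in REQUIRED_SLOTS.items():
        if name not in slots:
            match = pattern.search(utterance)
            if match:
                slots[name] = match.group()
    missing = [n for n in REQUIRED_SLOTS if n not in slots]
    return slots, missing

slots, missing = fill_slots("pay $40.00 to Alice from checking", {})
print(missing)   # -> []: nothing left to prompt the user for
```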

Context switching is also supported within a single conversation, with users able to switch between domains, e.g. from IT support to HR support and back again in a single conversation, subject to the rules on context switching defined by the organization.

Indeed, IPsoft has always had a strong academic and R&D focus and is currently further enhancing and differentiating Amelia through:

  • Leveraging ELMo with the aim of achieving intent accuracy of >95% while using only half of the data required in other Deep Neural Net models
  • Using NLG to support Elaborate Question Asking (EQA) and Clarifying Question & Answer (CQA) to enable Amelia to follow up dynamically without the need to build business rules.

The company is also looking to incorporate sentiment analysis within voice. While IPsoft regards basic speech-to-text and text-to-speech as commodity technologies, the company is looking to capture sentiment analysis from voice, differentiate through use of SLM/SRGS technology, and improve Amelia’s emotional intelligence by capturing aspects of mood and personality.

Launching Co-pilot to Remove the Demarcation Between Automated Handling and Agent Handling

Traditionally, interactions have either been handled by Amelia or by an agent if Amelia failed to identify the intent or detected issues in the conversation. However, IPsoft is now looking to remove this strong demarcation between chats handled solely by Amelia and chats handled solely by (or handed off in their entirety) to agents. The company has just launched “Co-pilot”, positioned as a platform to allow hybrid levels of automation and collaboration between Amelia, agents, supervisors, and coaches. The platform is currently in beta mode with a major telco and a bank.

The idea is to train Amelia on everything that an agent does, to make hand-offs warmer and to increase Amelia’s ability to partially automate, and ultimately handle, edge cases rather than just pass them through to an agent in their original form. Amelia will learn by observing agent interactions when escalations occur and through reinforcement learning via annotations during chat.

When Amelia escalates to an agent using Co-pilot, it will no longer just pass conversation details but will now also offer suggested responses for the agent to select. These responses are automatically generated by crowdsourcing every utterance that every agent has created and then picking those that apply to the particular context, with digital coaches editing the language and content of the preferred responses as necessary.

In the short term, this assists the agent by providing context and potential responses to queries and, in the longer term as this process repeats over queries of the same type, Amelia then learns the correct answers, and ultimately this becomes a new Amelia skill.

Co-pilot is still at an early stage with lots of developments to come and, during 2019, the Co-pilot functionality will be enhanced to recommend responses based on natural language similarity, enable modification of responses by the agent prior to sending, and enable agents to trigger partial automated conversations.

This increased co-working between humans and digital chat agents is key to the future of Amelia since it starts to position Amelia as an integral part of the future contact center journey rather than as a standalone automation tool.

Building Use Cases & Partner Program to Reduce Time to Value

Traditionally, Amelia has been a great cognitive chat technology, but a relatively heavy-duty technology seeking a use case, rather than an easily implemented general-purpose tool like the majority of RPA products.

In response, IPsoft is treading the same path as the majority of automation vendors and is looking to encourage organizations (well, at least mid-sized organizations) to hire a “digital worker” rather than build their own. The company states that its digital marketplace “1Store” already contains 672 digital workers, which incorporate back-office automation in addition to the Amelia conversational AI interface. For example, for HR, 1Store offers “digital workers” with the following “skills”: absence manager, benefits manager, development manager, onboarding specialist, performance record manager, recruiting specialist, talent management specialist, time & attendance manager, travel & expense manager, and workforce manager.

At the same time, IPsoft is looking to increase the proportion of sales and service through channel partners. Product sales currently make up 56% of IPsoft revenue, with 44% from services. However, the company is looking to steer this ratio further toward product by targeting 60% per annum growth in product sales and by increasing the proportion of personnel in product-related positions (currently approx. two-thirds), in part by reskilling existing services personnel.

IPsoft has been late to implement its partner strategy relative to other automation software vendors, attributing this early caution in part to the complexity of early implementations of Amelia. Early partners for IPcenter included IBM and NTT DATA, who embedded IPsoft products directly within their own outsourcing services and were supported with “special release overlays” by IPsoft to ensure lack of disruption during product and service upgrades. This type of embedded solution partnership is now increasingly likely to expand to the major CX services vendors as these contact center outsourcers look to assist their clients in their automation strategies.

So, while direct sales still dominate partner sales, IPsoft is now recruiting a partner/channel sales team with a view to reversing this pattern over the next few years. IPsoft has now established a partner program targeting alliance and advisory (where early partners included major consultancies such as Deloitte and PwC), implementation, solution, OEM, and education partners.

1Desk-based End-to-End Automation is the Future for IPsoft

IPsoft has about 600 clients, including approx. 160 standalone Amelia clients, and about a dozen deployments of 1Desk. However, 1Desk is the fastest-growing part of the IPsoft business with 176 enterprises in the pipeline for 1Desk implementations, and IPsoft increasingly regards the various 1Desk solutions as its future.

IPsoft is positioning 1Desk by increasingly talking about ROAI (the return on AI) and suggesting that organizations can achieve 35% ROAI (rather than the current 6%) if they adopt integrated end-to-end automation and bypass intermediary systems such as ticketing systems.

Accordingly, IPsoft is now offering end-to-end intelligent automation capability by combining the Amelia cognitive agent with “an autonomic backbone” courtesy of IPsoft’s IPcenter heritage and with its own RPA technology (1RPA) to form 1Desk.

1Desk, in its initial form, is largely aimed at internal SSC functions including ITSM, HR, and F&A. However, over the next year, it will increasingly be tailored to provide solutions for specific industries. The intent is to enable about 70% of the solution to be implemented “out of the box”, with vanilla implementations taking weeks rather than many months, and with completely new skills taking approx. three months to deploy.

The initial industry solution from IPsoft is 1Bank. As the name implies, 1Bank has been developed as a conversational banking agent for retail banking and contains preformed solutions/skills covering the account representative, e.g. for support with payments & bills; the mortgage processor; the credit card processor; and the personal banker, to answer questions about products, services, and accounts.

1Bank will be followed during 2019 by solutions for healthcare, telecoms, and travel.

]]>
<![CDATA[Blue Prism Offers A Lever for Culture Change to Mature Enterprises]]> Blue Prism adopted the theme “Connected RPA – Powering the Connected Entrepreneur Enterprise” at its recent Blue Prism World conferences, the key components of connected-RPA being the Blue Prism connected-RPA platform, Blue Prism Digital Exchange, Blue Prism Skills, and Blue Prism Communities:

 

Components of Blue Prism's connected-RPA

 

Blue Prism is positioning itself by offering mature companies the promise of closing the gap with digital disruptors, both technically and culturally. The cultural aspect is important, with Blue Prism technology positioned as a lever to help organizations attract and inspire their workforce, give digitally-savvy entrepreneurial employees the technology to close the “digital entrepreneur gap”, and also close the gap between senior executives and the workforce.

Within this vision, the Blue Prism roadmap is based around helping organizations to:

  • Automate more – here, Blue Prism is introducing intelligent automation skills, ML-based process discovery, and DX
  • Automate better – with more expansive and scalable automations
  • Automate together – by learning from the mistakes and achievements of others.

Introducing intelligent document processing capability

When analyzing the interactions on its Digital Exchange (DX), Blue Prism unsurprisingly found that the single biggest use, accounting for 60% of the items downloaded from DX, related to unstructured document processing.

Accordingly, Blue Prism has just announced a beta intelligent document processing program, Decipher. Decipher is positioned as an easy on-ramp to document processing: a document processing workflow that can be used to ingest and classify unstructured documents. It can be used “out of the box” without the need to purchase additional licenses or products, and organizations can also incorporate their own document capture technologies, such as ABBYY, or document capture services companies within the Decipher framework.

Decipher will clean documents to ensure that they are ready for processing, apply machine learning to classify the documents, and then extract the data. Finally, it will apply a confidence score to the validity of the extracted data and pass it to a business user where necessary, incorporating human-in-the-loop assisted learning.
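A sketch of the staged flow just described; every function here is a hypothetical stand-in, not Blue Prism code:

```python
retrain_queue = []   # human corrections feed assisted learning

def clean(doc): return doc.strip()                     # deskew, denoise, normalize
def classify(doc): return "invoice"                    # ML document classification
def extract(doc, doc_type): return {"total": "118.40"}, 0.72
def human_validate(fields): return fields              # business user confirms/corrects

def decipher_pipeline(raw_doc, threshold=0.9):
    doc = clean(raw_doc)
    doc_type = classify(doc)
    fields, confidence = extract(doc, doc_type)
    if confidence < threshold:                         # human-in-the-loop step
        fields = human_validate(fields)
        retrain_queue.append((doc, fields))
    return doc_type, fields

print(decipher_pipeline("  raw OCR text of a scanned invoice  "))
# -> ('invoice', {'total': '118.40'})
```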

Accordingly, Decipher is viewed by Blue Prism as a first step in the increasingly important move beyond rule-based RPA to introduce machine learning-based human-in-the-loop capability. Not surprisingly, Blue Prism recognizes that, as machine learning becomes more important, people will need to be brought into the loop much more than at present to validate “low-confidence” decisions and to provide assisted learning to the machine learning.

Decipher is starting with invoice processing and will then expand to handle other document types.

Improving control of assets within Digital Exchange (DX)

The Digital Exchange (DX) is another vital component in Blue Prism’s vision of connected-RPA.

Enhancements planned for DX include making it easier for organizations to collaborate and share knowledge, and facilitating greater security and control of assets by enabling an organization to control the assets available to itself. Assets will be able to be marked as private, effectively providing an enterprise-specific version of the Blue Prism Digital Exchange. Within DX, there will also be a “skills” drag-and-drop toolbar so that users, and not just partners, will be able to publish skills.

Blue Prism, like Automation Anywhere, is also looking to bring an e-commerce flavor to its DX: developers will be able to create skills and then sell them. Initially, Blue Prism will build some artifacts itself. Others will be offered free of charge by partners in the short term, with a view to enabling partners to monetize their assets in the near term.

Re-aligning architecture & introducing AI-related skills

Blue Prism has been working closely with cloud vendors to re-align its architecture, and in particular to rework its UI to appeal to a broader range of users and make Blue Prism more accessible to business users.

Blue Prism is also improving its underlying architecture to make it more scalable as well as more cloud-friendly. There will be a new, more native and automated means of controlling bots via a browser interface available on mobiles and tablets that will show the health of the environment in terms of meeting SLAs, and provide notifications showing where interventions are required. Blue Prism views this as a key step in moving towards provision of a fully autonomous digital workforce that manages itself.

Data gateways (available on April 30, 2019 in v6.5) are also being introduced to make Blue Prism more flexible in its use of generated data. Organizations will be able to take data from the Blue Prism platform and send it on for machine learning, reporting, etc.

However, Blue Prism will continue to use commodity AI and is looking to expand the universe of technologies available to organizations and bring them into the Blue Prism platform without the necessity for lots of coding. This is being done via continuing to expand the number of Blue Prism partners and by introducing the concept of Blue Prism skills.

At Blue Prism World, the company announced five new partners:

  • Bizagi, for process documentation and modeling, connecting with both on-premise and cloud-based RPA
  • Hitachi ID Systems, for enhanced identity and access management
  • RPA Supervisor, an added layer of monitoring & control
  • Systran, providing digital workers with translation into 50 languages
  • Winshuttle, for facilitating transfer of data with SAP.

At the same time, the company announced six AI-related skills:

  • Knowledge & insight
  • Learning
  • Visual perception: OCR technologies and computer vision
  • Problem-solving
  • Collaboration: human interaction and human-in-the-loop
  • Planning & sequencing.

Going forward

Blue Prism recognizes that while the majority of users presenting at its conferences may still be focused on introducing rule-based processes (and on a show of hands, a surprisingly high proportion of attendees were only just starting their RPA journeys), the company now needs to take major strides in making automation scalable, and in more directly embracing machine learning and analytics.

The company has been slightly slow to move in this direction, but launched Blue Prism labs last year to look at the future of the digital worker, and the labs are working on addressing the need for:

  • More advanced process analytics and process discovery
  • More inventive and comprehensive use of machine learning (though the company will principally continue to partner for specialized use cases)
  • Introduction of real-time analytics directly into business processes.
]]>
<![CDATA[Automation Anywhere Monetizes Bot Store to Provide ‘Value as a 2-Way Street’]]> Automation Anywhere’s current Bot Store contains ~500 bots and has received ~40K downloads. In January 2019, these bots were complemented by Digital Workers, with bots being task-centric and Digital Workers being persona- and skill-centric.

So far, downloads from the Bot Store have been free-of-charge, but Automation Anywhere perceives that this approach potentially limits the value achievable from the Bot Store. Accordingly, the company is now introducing monetization to provide value back to developers contributing bots and Digital Workers to the Bot Store, and to increase the value that clients can receive. In effect, Automation Anywhere is looking to provide value as a two-way street.

The timing for introducing monetization to the Bot Store will be as follows:

  • April 16, 2019: announcement and start of sales process validation with a small number of bots and bot bundles priced within the Bot Store. Examples of “bot bundles” include a number of bots for handling common email operations in Outlook, or bots for handling common Excel operations
  • May 2019: Availability of best practice guides for developers containing guidelines on how to write bots that are modular and easy to onboard. Start of developer sign-up
  • Early summer 2019: customer launch through the direct sales channel. At this stage, bots and Digital Workers will only be available through the formal direct sales quotation process rather than via credit card purchases
  • Late summer 2019: launch of “consumer model” and Bot Store credit card payments.

Pricing, initially in US$ only, will be per bot or Digital Worker, with a 70:30 revenue split between the developer and Automation Anywhere; Automation Anywhere will handle the billing and pay the developer monthly. Buyers will have a limited free-trial period (initially 30 days, though this is under review), and IP protection is being introduced so that buyers will not have access to the source code. The original developer will retain responsibility for building, supporting, maintaining, and updating their bots and Digital Workers. Automation Anywhere is developing some Digital Workers itself in order to seed the Bot Store with examples, but it has no desire to develop Digital Workers itself in the medium term and may, once the concept is well proven, hand over or license the Digital Workers it has developed to third-party developers.
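
As a simple illustration of the commercial mechanics (all prices below are hypothetical), the monthly developer payout under a 70:30 split can be computed as:

    # Illustrative only: monthly payout under a 70:30 developer/vendor split.
    def developer_payout(monthly_transactions, developer_share=0.70):
        gross = sum(monthly_transactions)
        return round(gross * developer_share, 2)

    # Three hypothetical bot sales in one month, at list prices of $499 and $1,200:
    print(developer_payout([499.00, 499.00, 1200.00]))  # -> 1538.6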

Automation Anywhere clearly expects that a number of smaller systems integrators will switch their primary business model from professional services to a product model, building bots for the Bot Store, and is offering developers the promise of a recurring revenue stream and global distribution, ultimately not only through the Bot Store but also through Automation Anywhere and its partners. Although payment will be monthly, developers will receive real-time transaction reporting to assist them in their financial management. For professional services firms retaining a strong professional services focus, but used to operating on a project basis, Automation Anywhere perceives that licensing and updating Digital Workers within this model could provide both a supplementary revenue stream and, perhaps more importantly, a means to maintain an ongoing relationship with the client organization.

In addition to systems integrators, Automation Anywhere is targeting ISVs who, like Workday, can use the Bot Store and Automation Anywhere to facilitate deployment and operation of their software by introducing Digital Workers that go way beyond simple connectors. Although the primary motivation of these firms is likely to be to reduce the time to value for their own products, Automation Anywhere expects ISVs to be cognizant of the cost of adoption and to price their Digital Workers at levels that will provide both a reduced cost of adoption to the client and a worthwhile revenue stream to the ISV. Pricing of Digital Workers in the range of $800 up to $12K-$15K per annum has been mentioned.

So far, inter-enterprise bot libraries have largely been about providing basic building blocks that are commonly used across a wide range of processes. The individual bots have typically required little or no maintenance and have been disposable in nature. Automation Anywhere is now looking to transform the concept of bot libraries into that of bot marketplaces, offering much higher and longer-lived value add and putting bots on a similar footing to temporary staff with updateable skills.

The company is also aiming to steal a lead in the development of such bots and, preferably, Digital Workers by providing third parties with the financial incentive to develop for its own, rather than a rival, platform.

]]>
<![CDATA[Automation Anywhere Looking to 'Deliver the Digital Workforce for Everyone']]> Automation Anywhere, as with the RPA market in general, continues to grow rapidly. The company estimates that it now has 1,600 enterprise clients, encompassing 3,800 unique business entities across 90 countries with ~10,000 processes deployed. At end 2018, the company had 1,400 employees, and it expects to have 3,000 employees by end 2019.

The company was initially slow to go to market in Europe relative to Blue Prism and UiPath, but estimates it has more than tripled its number of customers in Europe in the past 12 months.

NelsonHall attended the recent Automation Anywhere conference in Europe, where the theme of the event was “Delivering Digital Workforce for Everyone” with the following sub-themes:

  • Automate Everything
  • Adopted by Everyone
  • Available Everywhere.

Automate Everything

Automation Anywhere is positioning itself as “the only multi-product vendor”, though it is debatable whether this is entirely true, and also whether it is desirable to position the various components of intelligent automation as separate products.

Nonetheless, Automation Anywhere is clearly correct in stating that, “work begins with data (structured and unstructured) – then comes analysis to get insight – then decisions are made (rule-based or cognitive) – which leads to actions – and then the cycle repeats”.

Accordingly, “an Intelligent RPA platform is a requirement. AI cannot be an afterthought. It has to be part of current processes” and so Automation Anywhere comes to the following conclusion:

Intelligent digital workforce = RPA (attended + unattended) + AI + Analytics

This equation is translated directly into the Automation Anywhere product range.

Adopted by Everyone

Automation Anywhere clearly sees the current RPA market as a land grab and is working hard to scale adoption fast, both within existing clients and with new clients, and for each role within the organization.

The company has traditionally focused on the enterprise market with organizations such as AT&T, ANZ, and Bank of Columbia using 1,000s of bots. For these companies, transformation is just beginning as they now look to move beyond traditional RPA, and Automation Anywhere is working to include AI and analytics to meet their needs. However, Automation Anywhere is now targeting all sizes of organization and sees much of its future growth coming from the mid-market (“automation has to work for all sizes of organization”) and so is looking to facilitate adoption here by introducing a cloud version and a Bot Store.

The company sees reduced “time to value” as key to scaling adoption. In addition to a Bot Store of preconfigured bots, the company has now introduced the concept of downloadable “Digital Workers” designed around personas, e.g. Digital SAP Accounts Payable Clerk. Automation Anywhere had 14 Digital Workers available from its Bot Store as at mid-March 2019. These go beyond traditional preconfigured bots and include pretrained cognitive capability that can process unstructured data relevant to the specific process, e.g. accounts payable.

In addition, Automation Anywhere believes that to automate at the enterprise-wide level you have to onboard your workforce very fast, so that you can involve more of the workforce sooner. Accordingly, the company is providing role-based in-product learning and interfaces.

To enable the various types of user to ramp up quickly, the coming version of Automation Anywhere will provide a customizable user interface to support the differing requirements of the business, IT, and developers, providing unique views for each. For example:

  • The business user interface can be set up with a customized tutorial on how to build a simple bot using a Visio-like graphical interface, with advanced functionality hidden when business users first start using the tool. Alternatively, the business user can use the recorder to create a visual representation of what needs to be done, including documenting cycle times and savings information, and then pass this requirement to a developer
  • Advanced developers, on the other hand, can be set up with advanced functionality including, for example, the ability to embed their own code in, say, Python
  • An IT user can learn about and manage user administration, including roles and privileges, and license management.

The Automation Anywhere University remains key to adoption for all types of user. Overall, Automation Anywhere estimates that it has trained ~100K personnel. The Automation Anywhere University has:

  • An association with 200 educational institutions
  • 26 training partners
  • 9 role-based learning tracks
  • 120 certified trainers
  • Availability in 4 course languages.

An increased emphasis on channel sales is also an important element in increasing adoption, with Automation Anywhere looking to increase the proportion of sales through partners from 50% to 70%. The direct sales organization consists of 13 field operating units broken down into pods, and this sales force will be encouraged to leverage partners with a “customer first/partner preferred” approach.

Partner categories include:

  • BPOs with embedded use of Automation Anywhere; Automation Anywhere is now introducing tools to facilitate support for managed service offerings
  • Global alliance partners (major consultancies and systems integrators)
  • The broader integrator community/local SIs
  • A distributor channel. Automation Anywhere is currently opening up a volume channel and has appointed distributors including TechData and ECS
  • Private Equity. Automation Anywhere has set up a PE practice to go after the more deterministic PEs who are very prescriptive with their portfolio companies.

In addition, Automation Anywhere is now starting to target ISVs. The company has a significant partnership with Workday to help the ISV automate implementation and reduce implementation times by, for example, assisting in data migration, and the company is hoping that this model can be implemented widely across ISVs.

Automation Anywhere is also working on a partner enablement platform, again seen as a requisite for achieving scale, incorporating training, community+, etc. together with a demand generation platform.

Customer success is also key to scaling. Here, Automation Anywhere claims a current NPS of 67 and a goal to exceed the NPS of 72 achieved by Apple. With that in mind, Automation Anywhere has created a customer success team of 250 personnel, expected to grow to 600+ as the team tries to stay ahead of customer acquisition in its hiring. All functions within Automation Anywhere get their customer feedback solely through this channel, and all feedback to clients goes through this channel. The sole aim of this organization is to increase the adoptability of the product and the organization’s NPS; the customer success team does not get involved in up-selling, cross-selling, or deal closure.

Available Everywhere

“Available Everywhere” encompasses both a technological and a geographic perspective. From a hosting perspective, Automation Anywhere is now available on cloud or on-premise, with the company clearly favoring cloud where its clients are willing to adopt this technology. In particular, the company sees cloud hosting as key to facilitating its move from the enterprise to increasingly address mid-market organizations.

At the same time, Automation Anywhere has “taken installation away” with the platform, whether on-premise or on cloud, now able to be accessed via a browser. The complete cloud version “Intelligent Automation Cloud” is aimed at allowing organizations to start their RPA journey in ~4 minutes, while considerably reducing TCO.

In terms of languages, the user interface is now available in eight languages (including French, German, Japanese, Spanish, Chinese, and Korean) and will adjust automatically to the location selected by the user. At the same time, the platform can process documents in 190 languages.

Automation Anywhere also provides a mobile application for bot management.

Summary

In summary, Automation Anywhere regards the keys to winning a dominant market share in the growth phase of the RPA market as being about simultaneously facilitating rapid adoption in its traditional large enterprise market and moving to the mid-market and SMEs at speed.

The company is facilitating ongoing RPA scaling in large enterprises by recognizing the differing requirements of business users, IT, and developers, and establishing separate UIs to increase their acceptance of the platform while increasingly supporting their need to incorporate machine learning and analytics as their use cases become more sophisticated. For the smaller organization, Automation Anywhere has facilitated adoption by introducing free trials, a cloud version to minimize any infrastructure hurdles, and a Bot Store to reduce development time and time to value.

]]>
<![CDATA[D-GEM: Capgemini’s Answer to the Problem of Scaling Automation]]> Finance & accounting is at the forefront of the application of RPA, with organizations attracted by its high volumes of transactional activity. Consequently, activities such as the movement and matching of data within purchase-to-pay have been a frequent start-point for organizational automation initiatives.

Organizations starting on RPA are initially faced with the challenges of understanding RPA tools and approaches and typically lack the internal skills necessary to undertake automation initiatives. Once these skills have been acquired, RPA is then often applied in a piecemeal fashion, with each use case considered by a governance committee on its own merits. However, once a number of deployments have been achieved, organizations then look to scale their automation initiatives across the finance function and are confronted by the sheer complexity, and impossibility, of managing the scaling of automation while maintaining a ‘piecemeal’ approach. At this point, organizations realize they need to modify their approach to automation and adopt a guiding framework and target operating model if they are to scale automation successfully across their finance & accounting processes.

In response to these needs, Capgemini has introduced its Digital Global Enterprise Model (D-GEM) to assist organizations in scaling automation across processes such as finance & accounting more rapidly and effectively.

Introducing D-GEM

The basic premise behind D-GEM is that organizations need both a vision and a detailed roadmap if they are to scale their application of automation successfully. Capgemini is taking an automation-first approach to solutioning, with the client vision initially developed in “Five Senses of Intelligent Automation” workshops. Here, Capgemini runs workshops for clients to demo the various technologies and the possibilities offered by automation, and to establish their new target operating model, taking into account:

  • The key outcomes sought within finance & accounting under the new target operating model. For example, key outcomes sought could be reduced DSO, increased working capital, and reduced close days
  • How the existing processes could be configured and connected better using “five senses”:
    • Act (RPA)
    • Think (analytics)
    • Remember (knowledge base)
    • Watch (machine vision & machine learning)
    • Talk (chatbot technology).

However, while the vision, goals, and technology are important, implementing this target operating model at scale requires an understanding of the underlying blueprint, and here Capgemini has developed D-GEM as the “practitioners’ guidebook”: a repository showing (e.g., for finance & accounting) what can be achieved and how to achieve it at a granular level (process level 4).

D-GEM essentially aims to provide the blueprint to support the use of automation and deliver the transformation. It is now being widely used within Capgemini and is being made available not just to the company’s BPO clients but for wider application by non-BPO clients within their SSCs and GBS organizations.

From GEM to D-GEM

Capgemini’s original GEM (Global Enterprise Model) was used for solutioning and driving transformation within BPO clients prior to the advent of intelligent automation technologies. Its transformation focus was on improving the end-to-end process and eliminating exceptions. It aimed to introduce best-in-class processes while optimizing the location mix and improving domain competencies and reflected the need to drive standardization and lean processes to deliver efficiency.

While the focus of D-GEM remains the introduction of “best-in-class” processes, best-in-class has now been updated to take into account Intelligent Automation technologies, and the transformation focus has changed to the application of automation to facilitate best-in-class. For example, industrialization of the inputs needs to be taken into account at an early stage if downstream processes are to be automated at scale. Alongside the efficiency focus on eliminating waste, D-GEM also looks to use technology to improve the user experience. For instance, rather than eliminating non-standard reporting, as has often been the focus in the past, deploying reporting tools and services on top of standardized inputs and data can enhance the user experience by allowing users to produce their own one-off reports based on consistent and accurate information.

D-GEM provides a portal for practitioners using the same seven levers as GEM, namely:

  • Grade Mix
  • Location Mix
  • Competencies
  • Digital Global Process Model
  • Technology
  • Pricing and Cost Allocations
  • Governance.

However, the emphasis within each of these levers has now changed, as explained in the following sections.

Role of the Manager Changes from Managing Throughput to Eliminating Exceptions

Within Grade Mix, Capgemini evaluates the impact of automation on the grade mix, including how to increase the manager’s span of control by adding bots as well as people, how to use knowledge to increase the capability at different grades, and how to optimize the team structure.

Under D-GEM, the role of the manager fundamentally changes. With the emphasis on automation-first, the primary role of the manager is now to assist the team in eliminating exceptions rather than managing the throughput of team members. Essentially, managers now need to focus on changing the way invoices are processed rather than managing the processing of invoices.

The needs of the agents also change as the profile of work changes with increased levels of task automation. Typically, agents now need a level of knowledge that enables them to act as problem-solvers and trainers of bots. Millennials typically have strong problem-solving skills, and Capgemini is using Transversal and the process knowledge base within D-GEM to skill people up faster and to ensure that Process Champions grow within each delivery team. Knowledge management tools therefore have a key role to play in ensuring that knowledge is effectively dispersed and that able junior team members can expand their responsibilities more quickly.

The required changes in competency are key considerations within digital transformations, and it is important to understand how the competencies of particular roles or grades change in response to automation and how to ensure that the workforce knows how automation can enrich and automate their capabilities.

The resulting team structure is often portrayed as a diamond. However, Capgemini believes it is important not to end up with a top-heavy organization as a result of process automation. The basic pyramid structure doesn’t necessarily change, but the team now includes an army of robots, so while managers’ spans of control will typically be largely unchanged in terms of personnel, they are now additionally managing bots. In addition, tools such as Capgemini’s “prompt” facilitate the management of teams across multiple locations.

Within Location Mix, as well as evaluating whether the right processes are in the right locations and how the increased role of automation impacts the location mix, it is now important to consider how much work can be transitioned to a Virtual Delivery Center.

Process & Technology Roadmaps Remain Important

Within Digital Global Process Model, D-GEM provides a roadmap for best-practice processes powered by automation with integrated control and performance measures. Capgemini firmly believes that if an organization is looking to transform and automate at scale, then it is important to apply the full ESOAR sequence (eliminate, standardize, optimize, automate, robotize) first, applying RPA and other intelligent automation technologies only after the earlier steps, rather than jumping straight to RPA.

Finance & accounting processes haven’t massively changed in terms of the key steps, but D-GEM now includes a repository for each process, based on ESOAR, which shows which steps can be eliminated, what can be standardized, how to optimize, how to automate, how to robotize, and how to add value.

Within the Technology lever, D-GEM then provides a framework for identifying suitable technologies and future-proofing technology. It also indicates what technologies could potentially be applied to each process tower, showing a “five senses” perspective. For example, Capgemini is now undertaking some pilots applying blockchain to intercompany accounting to create an internal network. Elsewhere, for one German organization, Capgemini has applied Tradeshift and RPA on top of the organization’s ERP to achieve straight-through processing.

In addition, as would be expected, D-GEM includes an RPA catalog, listing the available artifacts by process, together with the expected benefits from each artifact, which greatly facilitates the integration of RPA into best practices.
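A rough illustration of what such a catalog entry might capture follows (all names and figures below are invented for illustration, not drawn from D-GEM itself):

    # Hypothetical shape of an RPA catalog entry within a D-GEM-style repository.
    catalog_entry = {
        "process_tower": "Purchase-to-Pay",
        "process_level_4": "Invoice data capture",
        "artifact": "invoice_header_extraction_bot",
        "senses_applied": ["Act (RPA)", "Watch (machine vision)"],
        "expected_benefits": {"effort_reduction_pct": 30, "cycle_time_days_saved": 2},
    }

    # A practitioner could filter the catalog by process tower to find candidates:
    def artifacts_for(catalog, tower):
        return [e["artifact"] for e in catalog if e["process_tower"] == tower]

    print(artifacts_for([catalog_entry], "Purchase-to-Pay"))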

Governance is also a critical part of transformation, and the Governance lever within D-GEM suggests appropriate structures to drive transformation, what KPIs should be used to drive performance, and how roles in the governance model change in the new digital environment.

Summary

Overall, D-GEM has taken Capgemini’s Global Enterprise Model and updated it to address the world of digital transformation, applying automation-first principles. While process best practice remains key, best practice is now driven by a “five senses” perspective and how AI can be applied in an interconnected fashion across processes such as finance and accounting.

]]>
<![CDATA[AntWorks Positioning BOT Productivity and Verticalization as Key to Intelligent Automation 2.0]]> Last week, AntWorks provided analysts with a first preview of its new product ANTstein SQUARE, to be officially launched on May 3.

AntWorks’ strategy is based on developing full-stack intelligent automation, built for modular consumption, and the company’s focus in 2019 is on:

  • BOT productivity, defined as data harvesting plus intelligent RPA
  • Verticalization.

In particular, AntWorks is trying to dispel the idea that Intelligent Automation needs to consist of three separate products from three separate vendors across machine vision/OCR, RPA, and AI in the form of ML/NLP, and to show that AntWorks can offer a single, though modular, automation platform across these areas end-to-end.

Overall, AntWorks positions Intelligent Automation 2.0 as consisting of:

  • Multi-format data ingestion, incorporating both image and text-based object detection and pattern recognition
  • Intelligent data association and contextualization, incorporating data reinforcement, natural language modelling using tokenization, and data classification. One advantage claimed for fractal analysis is that it facilitates the development of context from images such as company logos, not just from textual analysis, and enables automatic recognition of differing document types within a single batch of input sheets
  • Smarter RPA, incorporating low code/no code, self-healing, intelligent exception handling, and dynamic digital workforce management.

Cognitive Machine Reading (CMR) Remains Key to Major Deals

AntWorks’ latest release, ANTstein SQUARE, is aimed at delivering BOT productivity by combining intelligent data harvesting with cognitive responsiveness and intelligent real-time digital workforce management.

ANTstein data harvesting covers:

  • Machine vision, including, to name a modest sub-set, fractal machine learning, fractal image classifier, format converter, knowledge mapper, document classifier, business rules engine, workflow
  • Pre-processing image inspector, where AntWorks demonstrated the ability of its pre-processor to sharpen text and images, invert white text on a black background, remove grey shapes, and adjust skewed and rotated inputs, typically giving an 8%-12% uplift
  • Natural language modelling.

Clearly one of the major issues in the industry over the last few years has been the difficulty organizations have experienced in introducing OCR to supplement their initial RPA implementations in support of handling unstructured data.

Here, AntWorks has for some time been positioning its “cognitive machine reading” technology strongly against traditional OCR (and traditional OCR plus neural network-based machine learning), stressing its “superior” capabilities using pattern-based Content-based Object Retrieval (CBOR) to “lift and associate all the content” and achieve high accuracy of captured content, higher processing speeds, and the ability to train in production. AntWorks also takes a wide definition of unstructured data, covering not just typed text but also, for example, handwritten documents, signatures, and notary stamps.

AntWorks' Cognitive Machine Reading encompasses multi-format data ingestion; fractal network-driven learning for natural language understanding using combinations of supervised learning, deep learning, and adaptive learning; and accelerators, e.g. for input of data into SAP.

Accuracy has so far been found to be typically around 75% for enterprise “back-office” processes, but the level of accuracy depends on the nature of the data, with fractal technology most appropriate where past data strongly correlates with future data and data variances are relatively modest. Fractal techniques are regarded by AntWorks as totally inappropriate in use cases where the data has a high variance, e.g. crack detection on aircraft or analysis of mining data. In such cases, where access to neural networks is required, AntWorks plans to open up APIs to, for example, Amazon AWS.

Several examples of the use of AntWorks’ CMR were provided. In one of these, AntWorks’ CMR is used in support of sanction screening within trade finance for an Australian bank to identify the names of the parties involved and look for banned entities. The bank estimates that 89% of entities could be identified with a high degree of confidence using CMR with 11% having to be handled manually. This activity was previously handled by 50 FTEs.

Fractal analysis also makes its own contribution to one of ANTstein’s USPs: ease of use. The business user uses “document designer” to train ANTstein on a batch of documents for each document type, but fractal analysis requires fewer cases than neural networks, and its datasets inherently have lower memory requirements since the system uses data localization and does not extract unnecessary material.

RPA 2.0 “QueenBOTs” Offer “Bot Productivity” through Cognitive Responsiveness, Intelligent Digital Automation, and Multi-Tenancy

AntWorks is positioning to compete against the established RPA vendors with a combination of intelligent data harvesting, cognitive bots, and intelligent real-time digital workforce management. In particular, AntWorks is looking to differentiate at each stage of the RPA lifecycle, encompassing:

  • Design, process listener and discoverer
  • Development, aiming to move towards low code business user empowerment
  • Operation, including self-learning and self-healing in terms of exception handling to become more adaptive to the environment
  • Maintenance, incorporating code standardization into pre-built components
  • Management, based on “central intelligent digital workforce management”.

Beyond CMR, much of this functionality is delivered by QueenBOTs. Once the data has been harvested, it is orchestrated by the QueenBOT, with each QueenBOT able to orchestrate up to 50 individual RPA bots, referred to as AntBOTs.

The QueenBOT incorporates:

  • Cognitive responsiveness
  • Intelligent digital automation
  • Multi-tenancy.

“Cognitive responsiveness” is the ability of the software to adjust automatically to unknown exceptions in the bot environment, and AntWorks demonstrated the ability of ANTstein SQUARE to adjust in real time to situations where non-critical data is missing or the portal layout has changed. In addition, where a bot does fail, ANTstein aims to support diagnosis on a more granular basis by logging each intermediate step in a process and providing a screenshot to show where the process failed.
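
The behavior described, tolerating gaps in non-critical data while logging every step and capturing diagnostics on failure, might look something like the following sketch (Python; the helper names and the screenshot call are hypothetical illustrations, not AntWorks' API):

    import logging

    logging.basicConfig(level=logging.INFO)

    def capture_screenshot(step_name):
        # Placeholder: a real bot would save a screen image here for diagnosis.
        logging.info("screenshot saved for step '%s'", step_name)

    def run_step(name, action, critical=True):
        """Run one bot step, logging it; tolerate failure of non-critical steps."""
        try:
            result = action()
            logging.info("step '%s' completed", name)
            return result
        except Exception as exc:
            logging.error("step '%s' failed: %s", name, exc)
            capture_screenshot(name)
            if critical:
                raise          # hard stop only when critical data is missing
            return None        # otherwise continue past the gap

    # A non-critical step that fails is logged and skipped, not fatal:
    run_step("read_optional_memo_field", lambda: 1 / 0, critical=False)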

AntWorks is aiming to put use case development into the hands of the business user rather than data scientists. For example, ANTstein does its own model selection and so doesn’t require the data science expertise typically needed for model selection when using neural network-based technologies.

AntWorks also stressed ANTstein’s ease of use through its pre-built components and its ability to develop code via the recorder facility; one client speaking at the event is aiming to handle simple use cases in-house and outsource only the building of complex use cases.

AntWorks also makes a major play on reducing the cost of infrastructure compared to traditional RPA implementations. In particular, ANTstein addresses the issue of servers or desktops being allocated to, or controlled by, an individual bot by incorporating dynamic scheduling of bots based on SLAs rather than timeslots, and by enabling multi-tenancy so that a user can use a desktop while it is simultaneously running an AntBOT, or several AntBOTs can run simultaneously on the same desktop or server.
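
A minimal sketch of SLA-driven scheduling (all queue names and figures invented for illustration, not AntWorks' actual scheduler) shows the underlying idea: spare bots are repeatedly assigned to whichever work queue is most at risk of missing its SLA, rather than being bound to fixed timeslots or machines.

    # Illustrative SLA-based bot allocation.
    queues = {
        "claims":   {"backlog": 120, "items_per_bot_hour": 20, "hours_to_sla": 2},
        "payments": {"backlog": 30,  "items_per_bot_hour": 15, "hours_to_sla": 4},
    }

    def sla_risk(q, bots):
        hours_needed = q["backlog"] / (q["items_per_bot_hour"] * max(bots, 1))
        return hours_needed / q["hours_to_sla"]   # > 1.0 means the SLA is at risk

    assigned = {name: 1 for name in queues}
    for _ in range(3):                            # allocate three spare bots
        target = max(queues, key=lambda n: sla_risk(queues[n], assigned[n]))
        assigned[target] += 1

    print(assigned)   # the at-risk "claims" queue attracts the spare bots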

Building Out Vertical Point Solutions

A number of the AntWorks founders came from a BPO background, which gave them a focus on automating the process middle- and back-office and the recognition that bringing domain and technology together is critical to process transformation and building a significant business case.

Accordingly, verticalization is a major theme for AntWorks in 2019. In addition to support for a number of horizontal solutions, AntWorks will be focusing on building point solutions in nine verticals in 2019, namely:

  • Banking: trade finance, retail banking account maintenance, and anti-money laundering
  • Mortgage (likely to be the first area targeted): new application processing, title search, and legal description
  • Insurance: new account set up, policy maintenance, claims handling, and KYC
  • Healthcare & life sciences: BOB reader, PRM chat, payment posting, and eligibility
  • Transportation & logistics: examination evaluation
  • Retail & CPG: no currently defined point solutions
  • Telecom: customer account maintenance
  • Media & entertainment: no currently defined point solutions
  • Technology & consulting: no currently defined point solutions.

The aim is to build point solutions (initially in conjunction with clients and partners) that will be 80% ready for consumption with a further 20% of effort required to train the bot/point solution on the individual company’s data.

Building a Partner Ecosystem for RPA 2.0

The company claims to have missed the RPA 1.0 bus by design (it commenced development of “full-stack” ANTstein in 2017) and is now trying to get out the message that the next generation of Intelligent Automation requires more than OCR combined with RPA to automate unstructured data-heavy, industry-specific processes.

The company is not targeting companies with small numbers of bot implementations but is ideally seeking dozens of clients, each with the potential to build into a $10m relationship. Accordingly, the bulk of the company’s revenues currently comes from, and is likely to continue to come from, CMR-centric sales to major enterprises, either direct or through relationships with major consultancies.

Nonetheless, AntWorks is essentially targeting three market segments:

  • Major enterprises with CMR-centric deals
  • RPA 2.0, through channels
  • Point solutions.

In the case of major enterprises, CMR is typically pulling AntWorks’ RPA products through to support the same use cases.

AntWorks is trying to dissociate itself from RPA 1.0, positioning strongly against the competition on the basis of “full stack”, and is somewhat ambivalent about utilizing a partner ecosystem that is already tied to the mainstream RPA products. Nonetheless, the company is in the early stages of building a partner ecosystem for its RPA product based on:

  • Referral partners
  • Authorized resellers
  • Managed Services Program, where partners such as EXL build their own solutions incorporating AntWorks
  • Technology Alliance partners
  • Authorized training partners
  • University partners, to develop a critical mass of entry-level automation personnel with experience in AntWorks and Intelligent Automation in general.

Great Unstructured Data Accuracy but Needs to Continue to Enhance Ease of Use

A number of AntWorks’ clients presented at the event and it is clear that they perceive ANTstein to deliver superior capture and classification of unstructured data. In particular, clients liked the product’s:

  • Superior natural language-based classification using limited datasets
  • Ability to use codeless recorders
  • Ability to deliver greater than 70% accuracy at PoC stage.

However, despite some of the product’s advantages in terms of ease of use, clients would like further fine-tuning of the product in areas such as:

  • The CMR UI/UX, which is not particularly user-friendly: the very long list of options is hard for business users to understand, and they require a shorter, more structured UI
  • Improved ease of workflow management, including the ability to connect to popular workflow tools.

So, overall, while users should not yet consider mass replacement of their existing RPA deployments, particularly where these are being used for simple rule-based process joins and data movement, ANTstein SQUARE is well worth evaluation by major organizations that have high-volume, industry-specific, or back-office processes involving multiple types of unstructured documents in text or handwritten form, and where achieving accuracy of 75%+ will have a major impact on business outcomes. Here, and in the industry solutions being developed by AntWorks, it probably makes sense to use the full stack of ANTstein, utilizing both CMR and RPA functionality. In addition, CMR could be used in standalone form to facilitate extending an existing RPA-enabled process to handle large volumes of unstructured text.

In addition, major organizations that have a major RPA roll-out still to conduct at scale, are becoming frustrated at their level of bot productivity, and are prepared to introduce a new RPA technology should consider evaluating AntWorks’ QueenBOT functionality.

The Challenge of Differentiating from RPA 1.0

If it is to take advantage of its current functionality, AntWorks urgently needs to differentiate its offerings from those of the established RPA software vendors, and its founders are clearly unhappy with the company’s past positioning on the majority of analyst quadrants. The company aimed to achieve a turnaround of the analyst mindset by holding a relatively intimate event with a high level of interaction in the setting of the Maldives. No complaints there!

The company is also using “shapes” rather than numbers to designate succeeding versions of its software. This is quirky, and could become incomprehensible downstream.

However, these marketing actions are probably insufficient in themselves. To complement the merits of its software, the company needs to improve its messaging to its prospects and channel partners in a number of ways:

  • Firstly, the company’s tagline “reimagining, rethink, recreate” shows the founders’ backgrounds and is arguably more suitable for a services company than for a product company
  • Secondly, establishing an association with Intelligent Automation 2.0 and RPA 2.0 is probably too incremental to attract serious attention.

Here the company needs to think big and establish a new paradigm to signal a significant move beyond, and differentiation from, traditional RPA.

]]>
<![CDATA[A First Look at Blue Prism’s New RPA Initiatives]]>

Today’s announcement from Blue Prism covers new product capabilities, new service and design support offerings, and a new go-to-market framework that underscores the importance of automation as a means of enabling legacy organizations to compete with 'born-digital' startups. Blue Prism’s announcement is equal parts perspective, product, and process. Let’s examine each in turn.

Perspective

The perspective Blue Prism is bringing to the table today is the notion of empowering digital entrepreneurs within an organization (under the flag ‘connected RPA’) with the intent of either disruption-proofing that organization or at least enabling self-disruption as part of a deliberate strategy.

In Blue Prism’s view, this is best accomplished through a package of three organizational automation design concepts. The first is the federation of the center of excellence concept – which is not to say that existing CoEs are obsolete, but rather that they now serve as a lighthouse for other disciplinary CoEs within, for example, finance, production, and customer care. Pushing more organizational automation authority and responsibility outward into the organization, in Blue Prism’s view, enables legacy organizations to begin acting more like ‘born-digital’ disruptors.

The second such principle, enabled by the first, is the concept of significantly accelerating the process of moving from proof of concept to at-scale testing to enterprise deployment. Again, the company positions this as a means to emulate born-digital firms and build both proactive and reactive organizational change speed through rapid automation technology deployment.

And third, Blue Prism is emphasizing the value of peer-to-peer interaction among organizational automation executives, a plank of its strategy that is being served through the rollout of Blue Prism Community – an area in Blue Prism Digital Exchange for sharing best practices and collaborating on automation challenges.

Product

The product announcements supporting this new go-to-market perspective include a process discovery capability, which will be available on the Blue Prism website. For those readers who recall seeing Blue Prism announce a partner relationship with Celonis in September of 2018, this may come as a surprise, but the firm has every intention of maintaining that relationship; this new software offering is intended as a lighter process exploration tool with the ability to visualize and contextualize process opportunities.

Blue Prism is careful to distinguish here between process discovery – the identification of processes representing a good fit for automation – and process mining, a deeper capability offered by Celonis that includes analysis of the specific stepwise work done within those processes.

Blue Prism also announced today the availability of its London-based Blue Prism AI Research Lab and accompanying AI roadmap strategy, which focuses on three areas: understanding and ingesting data in a broader variety of formats, simplifying automation design, and improving the relationship between humans and digital workers in assisted automations.

In addition, in an effort to put its expanded product set in the hands of more organizations, Blue Prism is also going to open up access to the company’s RPA software, making it easy for people to get started, learn more, and explore what’s possible with an intelligent digital workforce.

Process

Finally, the process of engaging Blue Prism is changing as well. The company has established, through its experience in deployments, that the early stages of organizational automation initiatives are critical to the long-term success of such efforts, and has staged more support services and personnel into this period in response. Far from being a rebuke of channel partner efforts, this packaged service offering will actually increase the need for delivery partner resources ‘on the ground’ to service customers’ automation capabilities.

Blue Prism’s own customer success and services organization will offer to provide Blue Prism expertise into customer programs through a series of pre-defined interventions that complement and augment the customers’ and partners’ efforts. The offering, entitled Success Accelerator, is designed around Blue Prism’s Robotic Operating Model (ROM), the company’s design and deployment framework. The intent of this new offering is to accelerate and accentuate client ROI by establishing sound automation delivery principles based on lessons Blue Prism has learned in its deployment history to date.

Summary

Blue Prism’s suite of product, process, and perspective announcements today underscores an emerging trend in the sector – namely, the awareness that automation offers real improvements in organizational speed and agility, two characteristics that will be important for legacy organizations to develop if they are to compete with fast, reactive, born-digital disruptive startups.

The connected RPA vision that Blue Prism has outlined highlights the evolving power of automation. It extends beyond the limits of traditional RPA, giving users a compelling automation platform which includes AI and cognitive features. Furthermore, the new roadmap, capabilities, and features being introduced today enable Blue Prism’s growing community of developers, customers, channel partners, and technology alliances.

]]>
<![CDATA[Get Ready for Quantum Computing: 5 Steps to Take in 2019]]>

IBM recently announced the first ‘commercial-ready’ quantum computer, the 20-qubit Q System One. The date is certainly worth recording in the annals of computing history. But, in much the same way that mainframes, micros, and PCs all began with an ‘iron launch’ and then required a long pragmatic use case maturity curve, so too will this initial offering from IBM be the first step on a long evolution path. With so much conjecture and contemplation happening in the industry surrounding this announcement, let’s unpack what IBM’s announcement means – and how organizations should be reacting.

First, although Q System One is being billed as commercial-ready, that designation means that the product is ready for usage on a traditional cloud computing basis, not necessarily that it is ready to contribute meaningfully to solving business problems (although the device will certainly mature quickly in both capability and speed). What Q System One does offer is a keystone for the industry to begin working with quantum technology in much the same way that any other cloud utility supercomputing devices are available, and a testbed for beginning to explore and develop quantum code and quantum computing strategies. As such, while Q System One may not outperform traditional cloud computing resources today, its successors will likely do so in short order – perhaps as soon as 2020.

As I noted in my blockchain predictions blog for 2019, quantum computing has long been the shadow over blockchain adoption, owing to the concern that quantum computing will make blockchain’s security aspect obsolete. That watershed lies years in our future, if indeed at all, and it is important to note that quantum computing can as easily be tasked to enhance cryptographic strength as it can to break it down. As a result, expect that the impact of quantum computing on blockchain will net to a zero-sum game, with quantum capabilities powering ever-more evolved cryptographic standards in much the same way that the cybersecurity arms race has proceeded to date.

With this in mind, what should organizations have on their quantum readiness roadmaps? The short answer is that quantum readiness is more the beginning of many long-term projects rather than the consummation of any short-term ones, so quantum is more a component of IT strategy than near-term tactical change. Here are five recommendations I’m making for beginning to ready your organization for quantum computing during 2019.

Migrate to SHA-3 – and build an agile cybersecurity faculty

There is no finish line for cybersecurity, especially with quantum capabilities on the horizon, but when I speak with enterprise organizations on the subject, I recommend that a combination of NIST-recommended hashing and RSA/ECC technologies approximates to something that will be quantum-proof for the foreseeable future. Migration off SHA-2 is a strong prescription regardless, given the flaws that platform shares with its predecessor. But perhaps more important than the construction of a cryptographic standard to meet quantum’s capabilities is the design of an agile cybersecurity faculty that can shorten the time to transition from one standard to the next. Quantum computing will produce overnight gains in both security and exposure as the technology evolves; being ready to take swift counteraction will be key in the next decade of information technology.
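
As a starting point, SHA-3 hashing is already available in standard libraries; the sketch below (Python's hashlib) shows the one-line difference between the two families, plus a tiny indirection layer of the kind that makes a later standard swap a configuration change rather than a code rewrite. The routing dictionary is an illustrative pattern, not a prescribed design.

    import hashlib

    msg = b"quantum readiness begins with crypto agility"
    print(hashlib.sha256(msg).hexdigest())    # SHA-2 family
    print(hashlib.sha3_256(msg).hexdigest())  # SHA-3 (Keccak) family

    # Route all hashing through one configurable entry point so that
    # migrating standards later touches config, not every call site.
    HASHES = {"sha2-256": hashlib.sha256, "sha3-256": hashlib.sha3_256}
    CURRENT_STANDARD = "sha3-256"

    def digest(data: bytes) -> str:
        return HASHES[CURRENT_STANDARD](data).hexdigest()

    print(digest(msg))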

Begin asking entirely new questions in a Quantum CoE

Traditional computing technology has taught us clear phase lines of the possible and impossible with respect to solving business problems. Quantum, over the course of the next decade, will completely redraw those lines, with more capability coming online with each passing year (and, eventually, quarter). Tasks like modeling new supply chain algorithms, new modes of product delivery, even new projections of complex M&A activity in a sector over a long forecast span will become normal requests by 2030.

Make sure data hygiene and MDM protocols are quantum-ready

Already, there have been multiple technologies – Big Data, automation, and blockchain are just three – that have strongly suggested the need to ensure that organizations are running on clean, reliable data.

As business task flow accelerates, and more cognitive automation and smart contracts touch and interact with information as the first actor in the process chain, it is increasingly vital to ensure that these technologies are handling quality data. Quantum may be the last such opportunity to bring the car into the pit for adjustments before racing at full speed commences in sectors like retail, telecom, technology, and logistics. This is a to-do that benefits a broad array of technological deployment projects, so while it may not be relevant for quantum computing until the next decade begins, the benefits will begin to accrue from these efforts today.

Aim at a converged point involving data, analytics, automation & AI

Quantum computing is often discussed in the context of moonshot computing problems – and, indeed, the technology is currently best deployed against problems outside the realm of capability for legacy iron. But quantum will also power the move from offline or nearline processing to ‘now’ processing, so tasks that involve putting insights from Big Data environments to work in real-time will also fall within reach over the course of the next decade. What you may find from a combination of this action and the two prior is that some of the questions and projects you had slated for a quantum computing environment may actually be addressable today through a combination of cognitive technologies.

Reach out to partners, suppliers & customers to build a holistic quantum perspective

Legacy enterprise computing grew up as a ‘four-walls’ concept in part because of the complexity of tackling large, complex business optimization problems that involved moving parts outside the organization. Quantum does not automatically erase those boundary lines from an integration perspective, but the next decade will see more than enough computing power come online to optimize long, global supply chain performance challenges and cross-border regulatory and financing networks. Again, efforts in this area can also benefit organizational initiatives today; projects in IoT and blockchain, in particular, can achieve greater benefits when solutions are designed with partners, suppliers, regulators, and financiers involved up front.

Conclusion

Quantum computing is not going to change the landscape of enterprise IT tomorrow, or next month, or even next year. But when it does effect that change, organizations should expect its new capabilities to be game-changers – especially for those firms that planned well in advance to take advantage of quantum computing’s immense power.

This short checklist of quantum-readiness tasks can provide a framework for pre-quantum technology projects, too – making them an ideal roster of 2019 ‘to-dos’ for enterprise organizations.

]]>
<![CDATA[7 Blockchain Predictions for 2019]]>

Blockchain has progressed considerably as an emerging technology during 2018. Many of 2017’s PoCs have become deployed commercial solutions as standards have begun to solidify, and more organizations have begun to explore the potential of distributed ledger architecture and smart contracting.

We are still at the very beginning of the lifecycle of this particular technology, and nowhere close to seeing its full potential yet. But as the year comes to an end, what might 2019 bring in terms of distributed ledger maturity and trends? Here are my seven predictions for blockchain in 2019.

The use case landscape shakes out

Blockchain has a clear goodness-of-fit spectrum, as I have written about in a previous blog, and to date, that spectrum has often been tested at the low end with mixed results. Blockchain has clear strengths in a number of well-defined use cases, most notably supply chain and parts management, multiparty shipping and logistics tasks, remittances, securities clearance, and more. As 2019 dawns, we will begin to see providers focus less on exploring the use case spectrum and more on building more capability into those use cases that have been proven to be blockchain-relevant.

Use cases become playbooks

The secondary benefit of a more focused approach on the part of blockchain service providers is the swift emergence of proven playbooks for specific blockchain applications. Already, providers are beginning to slash the number of discussed use cases as it becomes obvious that cold-chain pharma, farm-to-fork agricultural provenance, and airplane parts sourcing and documentation, for example, are functionally multiple iterations of the same basic design with domain knowledge added per the specific deployment.

Interoperability fades as a limiting factor

Blockchain has presented something of a Betamax/VHS (or Blu-Ray/HD-DVD, for younger readers) quandary to date, with multiple standards each offering a unique source of value but on a mutually exclusive basis. But more providers are beginning to focus on hybrid blockchain solutions and platform interoperability, and the announcement in late October that Hyperledger Fabric will be able to execute smart contracts written for Ethereum certainly signals that we are entering the next phase of the market, in which multiple market leaders will need to play responsibly in the sandbox for this technology to take deep root.

Throughput speeds improve – but DB-like operation at-speed/at-scale is more likely in 2020

Blockchain’s primary drawback up to this point is that it can operate at speed, or at scale, but not both. That is slowly changing, with more blockchain accelerators emerging in the marketplace (Microsoft’s CoCo being just one example), and greater attention being paid to purpose-built platforms (like Symbiont Assembly) that are architected for at-speed/at-scale operation. Sharding and layer-2 protocols, both under exploration by Ethereum, show promise for keeping the core value of a distributed ledger system and adding the ability to accelerate transaction throughput to near-database speeds.

Quantum computing comes in from the cold

QC has been the hobgoblin looming over blockchain in the media for years, almost always framed as a technology that sits in opposition to blockchain – either as a security threat or a technology that will make the distributed ledger concept obsolete. But, like most technologies, it will emerge from the threatening media gloom to take its place at the solution table, in the form of a blockchain acceleration and security-improving offering. Quantum computing is still some way off from making a material difference in the IT landscape, but 2019 will bring a dose of sanity in removing the oppositional rhetoric from its emerging presence.

Automation, AI & IoT combine with blockchain for next-gen digital transformation

Blockchain is often discussed as if immutability and transaction security are its primary value proposition. But smart contracting and autonomous action within a DLT environment are at least as important in terms of overall value to the enterprise – and these are capabilities enriched and informed by other emerging technologies, including IoT, artificial intelligence, and cognitive automation. Increasingly, these four technologies are combining to form the basis of next-generation digital transformation for organizations seeking results beyond the limited promise of the initial wave of early transformational work (circa 2014-2017).

Convergence sets the stage for a viable long-term replacement for ERP

What these combined technologies are capable of reaches beyond the ‘four walls’ of the transformational enterprise; they enable whole supply chains to work together as extended ERP fabrics, and to incorporate financial, regulatory, and technology entities surrounding the production and distribution cycle. The discussions around these possibilities are just beginning as 2018 draws to a close, but we expect 2019 to bring more blueprinting and ecosystem construction conversations.

One final, overarching perspective for blockchain and DLT in general: we are progressing past the point of questioning whether these technologies have a role to play in the broader business IT ecosystem. When deployed against the right business challenges, on the right architecture for the task, with the right partner, blockchain is capable of remarkable improvements – and becomes a more strategic technology when considered as a transformational component alongside IoT, AI and automation. The future isn’t built exclusively on blockchain, but it is increasingly a part of the future of business transaction management.

]]>
<![CDATA[UiPath’s Go! Automation Marketplace Aims to Accelerate RPA Adoption in Enterprise Clients]]>

UiPath held its 2018 UiPathForward event October 3-4, 2018, in Miami, Florida. The focus of proceedings was the October release of the company’s software and a related trio of major announcements: a new automation marketplace, new investment in partner technology and marketing, and a new academic alliance program.

The analyst session included a visit from CEO Daniel Dines and an update on the company’s performance and roadmap. UiPath has grown from $1m ARR to $100m ARR in just 21 months, and the company is trending toward $140m ARR for 2018, en route to Dines’ forecasted $200m ARR in early 2019. UiPath is adding nearly six enterprise clients a day and has begun staking a public claim – not without defensible merit – to being the fastest-growing enterprise software company in history.

During the event, UiPath announced a new academic alliance program, consisting of three sub-programs – one aimed at training higher education students for careers in automation, another providing educators with resources and examples to utilize in the classroom setting, and the third focused on educating youth in elementary and secondary educational settings. UiPath has a stated goal of partnering with ~1k schools and training ~1m students on its RPA platform.

The centerpiece of the event, however, was Release 2018.3 (Dragonfly), which was built around the launch of UiPath Go!, the company’s new online automation marketplace. It would be easy to characterize Go! as a direct response to Automation Anywhere’s Bot Store, but that would be overly simplistic. Where the Bot Store currently skews more toward apps as complete automation task solutions, Go! is an app store for granular task components – so while the former might offer a complete end-to-end document processing bot, Go! would instead offer a set of smaller, more atomic components such as signature verification, invoice number identification, and address lookup and correction.

The specific goal of Go! is to accelerate adoption of RPA in enterprise-scale clients, and the component focus of the offering is intended to fill in gaps in processes so they can be automated more completely. The example presented was the aforementioned signature verification: given that a human might take two seconds to verify a signature, is it really worth automating this phase of the process? Not in and of itself, but failing to do so turns an unattended automation into an attended one, requiring human input to complete. With Go!, companies can automate the large, obvious task phases from their existing automation component libraries, and then either build new components or download Go! components to complete the task automation in toto.

Dragonfly is designed to integrate Go! components into the traditional UiPath development environment, providing a means for automation architects to combine self-designed automation components with downloaded third-party components. Given the increased complexity of managing project automation software dependencies for automations built from both self-designed and downloaded components, UiPath has also improved the dependency and library management tools in 2018.3. For example, automation tasks that reuse components already developed can include libraries of such components stored centrally, reducing the amount of rework necessary for new projects.

In addition, the new dependencies management toolset allows automation designers to point projects at specific versions of automations and task components, instead of defaulting to the most recent, for advanced debugging purposes. Dragonfly also moves UiPath along the Citrix certification roadmap, as this release is designated Ready for Citrix, another step toward becoming Certified for Citrix. Finally, Dragonfly also adds new capabilities in VDI management, new localization capabilities in multiple languages, and UI improvements in the Studio environment.

In the interest of spurring development of Go! components, UiPath has designated $20m for investment in its partners during 2019. The investment is split between two funds, the UiPath Venture Innovation Fund and the UiPath Partner Acceleration Fund. The first of these is aimed directly at the Go! marketplace by providing incentives for developers to build UiPath Go! components. In at least one instance, UiPath has lent developers directly to an ISV along with funding to support such development. UiPath expects that these investment dollars will enable the Go! initiative to populate the store faster than a more passive approach of waiting for developers to share their automation code.

The second fund is a more traditional channel support fund, aimed at encouraging partners to develop on the UiPath platform and at supporting joint marketing and sales efforts. The timing of this latter fund’s rollout, on the heels of UiPath’s deal registration/marketing and technical content portal announcement, demonstrates the company’s commitment to improving channel performance. Partners are key to UiPath’s ability to sustain its ongoing growth rate, and the strength of its partner sales channels will be vital in securing the company’s next round of financing. (UiPath's split of partner/direct deployments is approaching 50/50, with an organizational goal of reaching 100% partner deployments by 2020.) Accordingly, it is clear that the company’s leadership team is now placing a strong and increasing emphasis on channel management as a driver of continued growth.

]]>
<![CDATA[7 Process Characteristics That Are Key to Blockchain Adoption]]>

 

With the rise of every new technology, there is a parallel rush among enterprises to incorporate it into the near-term IT roadmap, and for good reason: new technologies offer cost savings, CX improvement, better risk management, and a host of other benefits that look terrific in annual reports and quarterly earnings documentation.

Nowhere is that trend more prevalent than in blockchain, a technology that has created significant adoption pressure for enterprise clients amid a flurry of questions regarding platform selection, process redesign, and partner engagement. Blockchain’s benefits are much vaunted (security, immutability, fault tolerance, decentralization), but there is also no shortage of drawbacks (throughput speed, a fragmented platform landscape, interoperability). Moreover, there is already talk of technologies that could replace or bypass the benefits of blockchain, from hashgraph to quantum computing, adding further to the murk surrounding the process of evaluating blockchain for organizational use.

In an environment that spurs organizations to adopt blockchain, there is actually good cause to slow the overall technological rush and ensure that a blockchain solution is the right choice for a specific business challenge and commercial ecosystem. Blockchain is a transformative technology; it changes the fundamental way that transactions are encoded, stored, and tracked. In the right setting, it can be a material lever for unlocking value within the organization, in the supply chain, and among banking and regulatory interactors. In the wrong setting, it can be an expensive dead-end that diverts resources and time from a broader slate of digital transformation activities.

So how can organizations sort out the real blockchain opportunities? In the course of my research in this area, I’ve identified seven key characteristics of business processes that form the basis of an organizational blockchain ‘goodness of fit’ checklist:

1. Transactional processes

Note that this is not the same as being financially transactional; any process where information changes hands between parties, even if compensation is not a part of the exchange, can be a candidate for blockchain deployment. Blockchain excels at documenting the transfer of value or information, and fiscal gains tend to accumulate with greater volumes of handshakes. As a result, the higher the transaction count in one cycle of a given process task, the more relevant blockchain becomes.

2. Frictional processes

Process friction can take many forms – from time delays in passing information from one party to the next, to per-message costs (such as SWIFT messaging expenses in financial services), to partner fatigue in disputing invoices or claims. The more time and expense accumulate within a process, the better a fit it is for blockchain technology.

3. Non-real-time, low-volume processes

Speed is not currently a significant blockchain platform strength, so processes that need to happen in real-time at scale may be a poor fit for the technology in its current form. While some specialized platforms – most notably Digital Asset, Symbiont, and Waves – offer compelling speed at scale, most of the big names in the platform space are not yet performing at speeds comparable to a relational database, so real-time, high-volume processes are better revisited when blockchain catches up over the next two years.

4. Simpler processes

The term ‘smart contract’ tends to be a confusing one in blockchain, as it suggests more intelligence than is currently present in the technology. Smart contracts are smart in their management of the tasks surrounding a transaction, like document processing, notarization, and approval; a smart contract is self-executing in these areas and does not require additional input.

But for all that transactional intelligence, smart contracts remain relatively ‘dumb’ in terms of overall contract complexity. So, while most can follow relatively simple ‘if-then’ logic, complicated transactions with multiple forks and ‘fuzzy’ interpretation are beyond the current reach of most smart contract platforms. Again, this is a development priority for many platform providers, so expect to see this evolve swiftly in parallel with developments in AI – but at the time of writing, simpler processes are a better fit for blockchain implementation.
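
To make that ceiling concrete, here is a minimal sketch – in illustrative Python rather than any actual smart contract language, with an invented escrow scenario – of the simple self-executing ‘if-then’ logic described above:

```python
# Hypothetical escrow contract: it 'self-executes' once simple boolean
# conditions are met. Note there is no branch for fuzzy, multi-fork
# judgments -- precisely the current limitation described above.
from dataclasses import dataclass

@dataclass
class EscrowContract:
    amount_usd: float
    goods_delivered: bool = False
    documents_approved: bool = False
    released: bool = False

    def record_event(self, delivered: bool = False, approved: bool = False) -> None:
        self.goods_delivered = self.goods_delivered or delivered
        self.documents_approved = self.documents_approved or approved
        # The self-executing 'if-then' rule:
        if self.goods_delivered and self.documents_approved:
            self.released = True

contract = EscrowContract(amount_usd=10_000)
contract.record_event(delivered=True)
contract.record_event(approved=True)
print(contract.released)  # True
```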

5. Oppositional processes

Transparency and trust are cornerstone components of an effective blockchain implementation, particularly so when there is an element of opposed goals in a process environment (payor versus payee being the most common such example). When all parties can monitor and oversee the documented process of content or payment through a process transparently from end to end, trust is improved and disputes tend to decline both in volume and in time required for resolution.

6. Fragmented processes

Intra-organizational applications of blockchain can produce meaningful benefits, but the real value is unlocked when a blockchain connects multiple parties operating in different domains – for example, in an ocean cargo management setting, exporters, banks, insurers, regulators, shipping providers, importers, and distributors. In such an environment, where responsibility and input are being passed among many organizations, the relevance of a blockchain solution increases considerably.           

7. Risk-accumulative processes

Corporate risk management is an accumulative function to begin with, as the audit task normally demands a large volume of signed and documented data – so the ability to produce that supporting documentation without significant organizational effort or data reconstruction is vital. Blockchain offers historically unparalleled data immutability and signed witness status, making it an exceptionally good fit for processes that accumulate large volumes of risk-relevant exchanges over time.

In conclusion

What can operations and IT executives take from this in planning for blockchain deployment? Currently, the most compelling fiscal and performance returns are coming from highly transactional processes with considerable process friction, where end-to-end transparency matters but transaction volumes are low, and which combine minimal complexity, high levels of fragmentation, and considerable risk exposure.

However, it is critical to maintain a perspective on the role of implementing blockchain for these processes within the scope of a broader digital transformation initiative; blockchain demands many of the same transformation readiness checkpoints (big data capability, master data management and hygiene, and automation readiness) that other transformational initiatives do.

Finally, in assessing blockchain’s weaknesses, keep a weather eye on the horizon: blockchain’s two principal shortcomings to date (managing real-time transaction volume at scale and handling complex smart contracts) will increasingly become priorities for the major platform providers over the next two years.

]]>
<![CDATA[IPsoft’s Challenging Vision for Cognitive Automation]]>

 

I recently attended IPsoft’s Digital Workforce Summit in New York City, an intriguing event that in some ways represented a microcosm of the challenges clients are experiencing in moving from RPA to cognitive automation.

The AI challenge

Chetan Dube loomed large over proceedings. IPsoft’s president and CEO was onstage more than is common at events of this type, chairing several fireside chats himself in addition to his own technology keynote, and participating (with sleeves rolled up) at the analyst day that followed. He brought a clear challenge to the stage, while at the same time conveying the complexity and capability of IPsoft’s flagship cognitive products, Amelia and 1DESK, and making them understandable to the audience, in part by framing them in terms of commercial value and ROI.

RPA vendors have a simpler form of this challenge, but both robotic process automation and cognitive automation vendors have a hill to climb in gaining clients’ trust in the underlying technology and reassuring service buyers that automation will be both a net reducer of cost and a net creator of jobs (rather than a net displacer of them).

From a technological perspective, RPA sounds from the stage (and sells) much more like enterprise software than neuroscience or linguistics, so the overall pitch sits squarely in the wheelhouse of IT buyers. The product does what it says on the tin, and the cavalcade of success stories that appears on event stages is designed to put clients’ concerns to rest. To be sure, RPA is by no means easy to implement, nor is it yet a mature offering in toto, but the bulk of the technological work needed to achieve a basic business result has been done. And overall, most vendors are working on incremental and iterative improvements to their core technology at this time.

AI differs in that it is still at the start of the journey towards robust, reliable customer-facing solutions. While Amelia is compelling technology (and is performing competently in a variety of settings across multiple industries), the version that IPsoft fields in 2025 will likely make today’s version seem almost like ELIZA by comparison, if Dube’s roadmap comes to fruition. He was keen to stress that Amelia is about much more than just software development, and he spent a lot of time explaining aspects of the core technology and how it was derived from cognitive theory. The underlying message, broadly supported by the other presenters at the event, was clearly one of power through simplicity.

IPsoft’s vision

The messaging statements coming from the stage during the event portrayed a diverse and wide-ranging vision for the future of Amelia. Dube sees Amelia as an end-to-end automation framework, while Chief Cognitive Officer Edwin van Bommel sees Amelia as a UI component able to escape the bounds of the chatbox and guide users through web and mobile content and actions. Chief Marketing Officer Anurag Harsh focused on AI through the lens of the business, and van Bommel presented a mature model for measuring the business ROI of AI.

Digging deeper, some of what Dube had to say was best read metaphorically. At one point he announced that by 2025 we will be unable to pass an employee in the hallway and know if he or she is human or digital. That comment elicited some degree of social media protest. But consider that what he was really saying is that most interaction in an enterprise today is performed electronically – in that case, ‘the hallways’ can be read as a metaphor for ‘day-to-day interaction’.

The question discussed by clients, prospects, and analysts was whether Dube was conveying a visionary roadmap or fueling hype in an often overhyped sector. Listening to his words and their context carefully, I tend towards the former. Any enterprise technology purchase demands three forms of reassurance from the vendor community:

  • That the product is commercially ready today and can take up the load it is promising to address
  • That the company has a long-term roadmap to ensure that a client’s investment stays relevant, and the product is not overtaken by the competition in terms of capacity and innovation
  • And perhaps most importantly, that the roadmap is portrayed realistically and not in an overstated fashion that might cause clients to leave in favor of competitors’ offerings.

I took away from Digital Workforce Summit that Dube was underscoring the first and second of these points, and doing so through transparency of operation and vision.

There are only two means of conveying the idea that you sell a complex product which works simply from the user perspective – you either portray it as a black box and ask that clients trust your brand promise, or you open the box and let clients see how complex the work really is. IPsoft opted for the latter, showing the product’s operation at multiple levels in live demonstrations. Time and again, Dube reminded the audience that it is unnecessary to grasp evolved scientific principles in order to take advantage of technologies that use those principles – so light switches work, in Dube’s example, without the user needing to grasp Faraday’s principles of induction. It still benefits all parties involved to see the complexity and grasp the degree to which IPsoft has worked to make that complexity accessible and actionable.

Conclusion

The challenge, of course, is that clients attend events of this kind to assess solutions. The majority of attendees at Digital Workforce Summit were there to learn whether IPsoft’s Amelia, in its latest form, is up to speed to manage customer interactions, and whether it will continue to evolve apace to become a more complete conversational technology solution and fulfill the company’s ROI promises.

I came away with the sense that both are true. Now it is up to the firm’s technology group to translate Dube’s sweeping vision into fiscally rewarding operational reality for clients.

]]>
<![CDATA[Infosys Announces Blockchain-Powered Nia Provenance to Manage Complex Supply Chains]]>

 

EdgeVerve, an Infosys Product subsidiary, this week announced a new blockchain-powered application for supply chain management as part of its product line. Nia Provenance is designed to address the challenges faced by organizations managing complex supply chain networks with multiple IT stacks engaged across multiple stakeholders. Here I take a quick look at the new application and its potential impact.

Supply chain traceability, transparency & trust

Nia Provenance is designed to provide traceability of products from source of origin to point of purchase with full transparency at every point along the supply chain. The product establishes trust through the utilization of a version of Bitcore, the blockchain architecture used by Bitcoin. While traceability can be a relatively simple task in agribusiness and other supply environments in which a product undergoes only processing as it moves through the supply chain, environments such as consumer electronics or medical devices are much more complex, involving integration and assembly of multiple components along the way. The ability to isolate a specific component and trace it to its source of origin, through phases of value addition timestamped on a blockchain ledger, is invaluable in case of recall or consumer danger.

Transparency in Nia Provenance is provided through proof of process as the product or commodity moves through the system – so attributes that must be agreed on at specific phases of the supply chain, such as conflict-free or locally-sourced, can be seen in the system as they are accumulated. Similarly, regulatory inspections and certifications are more easily tracked and audited through a blockchain solution like Nia Provenance.

Finally, trust is gained in a system with a combination of data immutability, equality in network participation as a result of decentralization of the overall SCM ledger, and cryptographic information security. Over time, the benefits of a blockchain SCM environment accrue both to the organizational bottom line, in the form of cost savings, and to the organization’s brand as a function of increased consumer trust in the brand promise.

Agribusiness client case

As one example of how Nia Provenance is being leveraged in the real world, a global agribusiness firm undertook a proof of concept for its coffee sourcing division in Indonesia to track the journey of coffee from the growing site, through the roasting plant, the blend manufacturer, the quality control operation, the logistics providers, and on to the importer. This enabled the trader to provide trusted accreditation and certification information to the importer for properties such as organic or fair trade status, or that the coffee was grown using sustainable agriculture standards.

Providing strategic blockchain reach

Nia Provenance provides Infosys with three important sources of strategic blockchain ‘reach’ in an increasingly competitive market, because:

  • It is platform-agnostic and purpose-built to dock with multiple blockchain architectures. A supply chain solution that relies too heavily on the specific capabilities of one common blockchain architecture or another – for example, Ethereum or Hyperledger – would encounter difficulty working with other upstream or downstream architectures. By keeping the DLT technology in an abstraction layer, Nia Provenance eases the process of incorporating different blockchain architectures in a complex SCM task environment
  • It is designed to benefit multiple supply chain stakeholders, not just the client. Blockchain adoption becomes more appealing to upstream and downstream stakeholders, as well as horizontal entities like banks, insurers and regulators, when the ecosystem is built with clear benefits for them as well as the organizing entity. Nia Provenance is designed from the ground up with a mindset inclusive of suppliers, inspectors, insurers, shippers, traders, manufacturers, banks, distributors, and end customers
  • It is designed to span multiple industries. Although the platform has its origins in agribusiness, Nia Provenance looks to be up to the task of SCM applications in manufacturing, consumer goods/FMCG, food and beverage, and specialized applications such as cold-chain pharmaceuticals.

Summary

Supply chain provenance is a core application for blockchain, and one that we expect to be a clear value delivery vehicle for blockchain technology through 2025. The combination of – as Infosys puts it – traceability, transparency, and trust that blockchain provides is a compelling proposition. Nia Provenance offers a solution across a broad variety of industry applications for organizations seeking lower cost and greater security in their supply chain operations.

]]>
<![CDATA[The Advantages of Building a Bespoke Blockchain Platform]]>

 

For all the discussion in the blockchain solution industry around platform selection (are they choosing Fabric or Sawtooth? Quorum or Corda?), you’d be forgiven for thinking that every provider’s first stop is the open-source infrastructure shelf. But the reality is that blockchain is more a concept than a fixed architecture, and the platforms mentioned do not encompass the totality of use case needs for solution developers. As a result, some solution developers have elected to start with a blank sheet of paper and build blockchain solutions from the ground up.

One such company is Symbiont, which started down this road much earlier than most. Faced with the task of building a smart contracts platform for the BFSI industry, the company examined what was available in prebuilt blockchain platform infrastructure and did not see its solution requirements represented in those offerings – so it built its own. Symbiont’s concerns centered on the two areas of scalability and security, and for the firm’s target accounts in capital markets and mortgages, those were red-letter issues.

The company addressed these concerns with Symbiont Assembly, the company’s proprietary distributed ledger technology. Assembly was designed to address three specific demands of high-volume transactional processes in the financial services sector: fault tolerance, volume management, and security.

Supporting fault tolerance

Assembly addresses the first of these through the application of a design called Byzantine Fault Tolerance (BFT). Where some blockchain platforms allow only for node failure within a distributed ledger environment, platforms using BFT broaden that definition to include the possibility of a node acting maliciously, and can control for actions taken by these nodes as well. The Symbiont implementation of BFT is based on the BFT-SMaRt protocol.
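
As a back-of-the-envelope illustration (the arithmetic below is the classic BFT sizing result that protocols such as BFT-SMaRt build on, not Symbiont-specific code), a network needs at least 3f + 1 replicas to tolerate f faulty or malicious nodes:

```python
# Classic BFT sizing rule: n replicas tolerate f faults when n >= 3f + 1,
# and a commit requires agreement from a quorum of 2f + 1 replicas.

def max_tolerated_faults(n: int) -> int:
    """Faulty or malicious replicas a BFT network of n nodes can absorb."""
    return (n - 1) // 3

def quorum_size(n: int) -> int:
    """Replicas that must agree before a request is committed."""
    return 2 * max_tolerated_faults(n) + 1

for n in (4, 7, 10):
    print(f"n={n}: tolerates f={max_tolerated_faults(n)}, quorum={quorum_size(n)}")
# n=4: tolerates f=1, quorum=3
# n=7: tolerates f=2, quorum=5
# n=10: tolerates f=3, quorum=7
```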

Volume management

In addressing the volume demands of financial services processing, the decision to build on the BFT-SMaRt protocol was again important, as it enables Assembly to reach performance levels in the ~80k transactions/second range consistently.

This has two specific benefits, one obvious and one less so. First, it means that Assembly can manage the very high-volume transaction pace of applications in specialized financial trading markets without scale concerns. Secondly, it means that in lower-volume environments, the extra ‘headroom’ that BFT-SMaRt affords Assembly can be used to store related data on the ledger without the need to resort to a centralized data store to hold, for example, scanned legal documents that support smart contracts.

Addressing security concerns

The same BFT architecture that supports Assembly’s fault tolerance also provides an additional layer of security, in that malicious node activity is actively identified and quarantined, while ‘honest’ nodes can continue to communicate and transact via consensus. Add in encryption of data, whereby Assembly creates a private security ledger within the larger ledger, and the result is a robust level of security for applications with significant risk of malicious activity in high-value trading and exchange.

Advantages of building a bespoke blockchain platform

Building its own blockchain platform cost Symbiont many hours and R&D dollars that competitors did not have to spend, but ultimately this decision provides Symbiont with three strategic advantages over competitors:

  • Assembly is purpose-built for BFT-relevant, high-volume environments. As a result, the platform has performance and throughput benefits for applications in these environments compared with broader-use blockchain platforms that are intended to be used across a variety of business DLT needs. To some degree this limits the flexibility of the platform in other use cases, but just as a Formula One engine is a bespoke tool for a specific job, so too is Assembly specifically designed to excel in its native use case environment. That provides real benefits to users electing to build their banking DLT applications on the Assembly architecture
  • Symbiont can provide for third-party smart contract writing, should it elect to do so. While this is not in the roadmap for the moment, and Symbiont appears content to build client solutions on proprietary deliverables from the contract-writing layer through the complete infrastructure of the solution, the company could elect to allow clients to write their own smart contracts ‘at the top of the stack’. Symbiont does intend to keep the core Assembly platform proprietary to the company for the foreseeable future
  • Assembly may attract less malicious activity interest than traditional platforms. The rising number of blockchain projects based on Hyperledger and Ethereum is certain to attract more malicious activity based on the commonality of the architecture across a broader common base of technology. In much the same way that Windows historically attracted more virus incursions than less widely-deployed OS platforms, Assembly will tend to attract less attention than platforms with broader user bases. Moreover, Assembly’s BFT foundations will enable it to deal more effectively with those events that do occur.

Summary

Symbiont isn’t alone in developing its own proprietary blockchain technology architecture rather than choosing from the broadly available offerings in the space, and as blockchain enters the mainstream of enterprise business, other provider organizations will surely go the same route.

What Symbiont has established is an exemplar for developing a purpose-built blockchain platform, beginning with the specific needs of the task environment at scale, and proceeding to address those needs carefully in the development process. 

]]>
<![CDATA[6 Ways to Prepare for Cognitive Automation During RPA Implementation]]>

 

2017 brought a surge of RPA deployments across industries, and in 2018 that trend has accelerated as more and more firms begin exploring the many benefits of a digital workforce. But even as some firms are just getting their RPA projects started, others are beginning to explore the next phase: cognitive automation. And a common challenge for firms is the desire to begin planning for a more intelligent digital workforce while automating simpler rule-based processes today.

Having spoken with organizations at different stages of their journeys from BPM to RPA and on to cognitive, I have identified several tasks that companies can begin during RPA implementation to ensure that they are well positioned for the machine learning-intensive demands of cognitive automation:

Design insight points into the process for machine learning

Too often, the concept of straight-through processing (STP) gets conflated with the idea of measuring task automation only on completion. But for learning platforms, it is vital to understand exactly where variance and exceptions arise in the process – so allow your RPA platform to document its progress in detail from task inception to task completion.

At each stage, provide a data outlet to track the task’s variance on a stage-by-stage basis. A cognitive platform can then learn where, within each task, variance is most likely to arise – and it may be the case that the work can be redesigned to give straightforward subtasks to a lower-cost RPA platform while cognitive automation handles the more complex subtasks.
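
As a minimal sketch of what such an insight point might look like in practice (the record fields, stage names, and outcomes below are hypothetical, not drawn from any particular RPA platform):

```python
# Emit one telemetry record per process stage, not just on completion,
# so a learning platform can later see where variance actually arises.
import json
from datetime import datetime, timezone

def log_stage(task_id: str, stage: str, outcome: str, detail: str = "") -> None:
    record = {
        "task_id": task_id,
        "stage": stage,
        "outcome": outcome,  # "ok" | "variance" | "exception"
        "detail": detail,
        "ts": datetime.now(timezone.utc).isoformat(),
    }
    print(json.dumps(record))  # in practice: write to a central data store

# A three-stage invoice task reporting stage-level outcomes:
log_stage("inv-0001", "extract_header", "ok")
log_stage("inv-0001", "match_po", "variance", "PO number format unrecognized")
log_stage("inv-0001", "post_to_erp", "exception", "routed to human review")
```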

Build a robot with pen & paper first

One of the basic measures for determining whether a process can be managed by BPM, by RPA, or by cognitive automation is the degree to which it can be expressed as a function of rigorous rules. So, begin by building a pen-and-paper robot – a list of the rules by which a worker, human or digital, is expected to execute against the task.

Consider ‘borrowing’ an employee with no familiarity with the involved task to see if the task is genuinely as straightforward and rule-bounded as it seems – or whether, perhaps, it involves a higher order of decision-making that could require cognitive automation or AI.
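
Taken literally, the pen-and-paper robot is just the task written out as an explicit rule list. In the illustrative sketch below (the approval rules are invented), any case that falls through every rule is evidence of hidden judgment – a signal that the task may need cognitive automation rather than RPA:

```python
# A 'pen-and-paper robot' for a hypothetical invoice approval task.
RULES = [
    (lambda amount, po_matched: amount <= 1000 and po_matched,
     "auto-approve"),
    (lambda amount, po_matched: amount <= 1000 and not po_matched,
     "request PO from supplier"),
    (lambda amount, po_matched: amount > 1000 and po_matched,
     "route to manager approval"),
]

def apply_rules(amount: float, po_matched: bool) -> str:
    for condition, action in RULES:
        if condition(amount, po_matched):
            return action
    # No written rule fired: a judgment call a human was quietly making.
    return "ESCALATE: not covered by the written rules"

print(apply_rules(750.0, True))    # auto-approve
print(apply_rules(5000.0, False))  # ESCALATE: not covered by the written rules
```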

Use the process to revisit the existing work design

In many organizations, tasks have ‘grown up’ inorganically around inputs from multiple stakeholders and have been amended and revised on the fly as the pace of business has demanded. But the migration first to RPA and then on to cognitive automation is a gift-wrapped opportunity to revisit how, where, and when work is done within an organization.

Can key task components be time-shifted to less expensive computing cycles overnight or on weekends? Can whole tasks be re-divided into simpler and more complex components and allocated to the lowest-cost tool for the job?

Dock the initiative with in-house ML & data initiatives

Cognitive automation does not have to remain isolated to individual task areas or divisions within an organization. Often, ML initiatives produce better results when given access to other business areas to learn from. What can cognitive automation learn about customer service tasks from paying a ‘virtual visit’ to the manufacturing floor via IoT? Much, potentially: if specific products or parts are difficult to machine to tolerance within an allowed margin of error, for example, they may be more common sources of customer complaints and RMAs.

Similarly, a credit risk-scoring ML platform can learn from patterns of exception management in credit applications being managed in a cognitive automation environment. For ML initiatives, enabling one implementation to learn from others is a key success factor in producing ‘brilliant’ organizational AI.

Revisit the organizational data hygiene & governance models

Data scientists will be the first to underscore the importance of introducing clean data into any environment in which decision-making will be a task stage. Data with poor hygiene, and with low levels of governance surrounding the data cleaning and taxonomy management function, will create equally poor results from cognitive automation technology that utilizes it to make decisions.

Cognitive software is no different from humans in this respect: garbage in, garbage out, as the old saying goes. As a result, a comprehensive review of organizational data hygiene and governance models will pay dividends down the road in cognitive work.

Discuss your vendor’s existing technology & roadmap in cognitive & AI

Across the RPA sector, cognitive is a central concept for most vendors’ 2018-2020 roadmaps. Scheduling a working session now on migrating the organization from RPA to cognitive automation gives clients insight into their vendor’s strengths and capability set. It also enables vendors to get a close look at ‘on the ground’ cognitive automation needs in different organizational task areas.

That’s win/win – and it helps ensure that an existing investment in vendor technology is well-positioned to take the organization forward into cognitive based on a sound understanding of client needs.

 

NelsonHall conducts continuous research into all aspects of RPA and AI technologies and services as part of its RPA & Cognitive Services research program. A major report on RPA & AI Technology Evaluation by Dave Mayer has just been published, and coming soon is a major report on Business Process Transformation through RPA & AI by John Willmott. To find out more, contact Guy Saunders.

]]>
<![CDATA[Application of RPA & AI to Unstructured Data Processing: The Next Big Milestone for Shared Services]]>

 

Shared Services Centers (SSCs) have made progress in the initial application of RPA, gained some experience in its application, and are typically now looking to scale their use of RPA widely across their operations. However, although organizations have often undertaken some level of standardization and simplification of their processes to facilitate RPA adoption, one stumbling block that still frequently inhibits greater levels of automation and straight-through processing is an inability to process unstructured data. And this is limiting the value organizations are currently able to realize from automation initiatives.

NelsonHall recently interviewed 127 SSC executives across industries in the U.S. and Europe to understand the progress made in adopting RPA & AI, along with their satisfaction levels and future expectations. To quote from one executive interviewed, “I think the main strategy in the past has been to avoid unstructured data or pre-process it to make it structured. Now we are beginning to embrace the challenge of unstructured data and are growing an internal understanding of how to piece together automation.”

Low Satisfaction in Handling Unstructured Data Widespread in SSCs

This is an important next step. Unstructured data remains rife in organizations, within customer and supplier emails and documents: supplier invoices, for example, arrive in a myriad of supplier-dependent formats, and handwritten material is far from extinct within customer applications.

This need to process unstructured data impacts not just mailroom document management, but a wide range of shared services processes. By industry sector, the processes that have a combination of high levels of unstructured data and a significant level of dissatisfaction with its capture and processing are:

  • Retail & Commercial Banking: new account set-up and customer service
  • P&C Insurance: fraud detection, claims processing, mailroom document management, policy maintenance, and customer service
  • Telecoms: customer service.

Within finance & accounting shared services, the same issues are found within supplier & catalog management, purchase invoice processing, and 3-way matching.

So, it is highly important that SSCs get to grips with handling unstructured documents and data within these process areas. However, this is unknown territory for many SSCs: they are typically in the early stages of automating the handling of unstructured data and lack expertise in identifying and implementing suitable technologies. In addition, SSCs often lack the necessary experience in process change management and speed of process change when handling RPA & AI projects. Indeed, SSCs have often struggled in the early stages of automation with the challenge of realizing the expected cost savings from this technology. Applying automation is one thing; realizing its benefits through effective process change management, and ensuring that unexpected exceptions don’t derail the process and the associated cost realization, has sometimes been a significant issue.

Combining OCR & Machine Learning is Critical to Processing Unstructured Data

Accordingly, it is critical that SSCs now automate data classification and extraction from their unstructured documents. At present, 80% of SSCs across sectors are still manually classifying documents, with OCR used only modestly and not to its full potential. However, there are strong levels of intention to adopt OCR and RPA & AI technologies in support of processing unstructured data within SSCs during 2018 and 2019, as shown below:

[Chart: SSC adoption intentions for OCR and RPA & AI technologies in support of unstructured data processing, 2018-2019]

SSCs are considering a broad range of technologies for processing unstructured data, with OCR clearly a key technology, but further supported by machine learning in its various forms for effective text classification and extraction. To quote from one executive interviewed, “We want to speed up deployment of automation within the mailroom, we want more OCR and natural language processing in place.”
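
To illustrate the machine learning half of that pairing, text classification over OCR output can start as simply as the sketch below (using scikit-learn, with invented training snippets and labels; a production system would train on thousands of real documents per class):

```python
# Classify OCR'd document text into downstream process queues.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = [
    "invoice number 4471 total due within 30 days net",
    "please update the policy holder mailing address",
    "claim submitted for water damage to insured property",
    "purchase order 9902 quantity 40 units requested",
]
labels = ["invoice", "policy_maintenance", "claim", "purchase_order"]

classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word and word-pair features
    LogisticRegression(max_iter=1000),
)
classifier.fit(docs, labels)

print(classifier.predict(["total amount due on invoice 5513"]))
# expected (with realistic training data): ['invoice']
```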

Need for Improved Turnaround Times Now the Main Driving Force

However, in terms of benefits achievement, there is currently a significant gap between organizations’ automation aspirations and what they have already achieved. While organizations placed a high initial emphasis within their automation initiatives on cost savings, and the achievement of cost savings remains very important to SSCs, the focus of executives within SSCs has now increasingly turned to improving process turnaround times.

Within the telecoms sector, this leads to a high expectation of improved customer satisfaction. However, executives within property & casualty and finance & accounting SSCs tend to attach an equal or higher importance to the impact of these technologies on employee satisfaction – by automating some of the least satisfying types of work within the organization, thus allowing personnel to focus on more value-added aspects of the process (i.e. other than finding and entering data from customer documents and invoices).

The principal benefits sought by SSCs from implementing RPA & AI in support of processing of unstructured data are shown below:

[Chart: principal benefits sought by SSCs from RPA & AI in support of unstructured data processing]

70% of SSCs Highly Likely to Purchase Operational Services Covering Unstructured Data Processing

While automation is often depicted as having an adverse impact on the outsourcing industry, the reality is often quite the opposite, and organizations seek help in effectively deploying new digital technologies. Indeed, this is certainly the case with unstructured data processing.

SSCs will tend to implement unstructured data handling in-house where the information being handled is highly sensitive, where security is critically important, and where regulation or the set-up of internal systems inhibits use of a third-party service. However, elsewhere, where these constraints do not apply, SSC executives express a high level of intent to purchase external services in support of document classification and extraction of unstructured data. ~70% of SSCs are highly likely to purchase operational services for document processing, including document classification and extraction of unstructured data, while only a minority express a high intent to implement in-house or via a systems integrator.

 

NelsonHall conducts continuous research into all aspects of RPA and AI technologies and services as part of its RPA & Cognitive Services research program. A major report on RPA & AI Technology Evaluation by Dave Mayer has just been published, and coming soon is a major report on Business Process Transformation through RPA & AI by John Willmott. To find out more, contact Guy Saunders.

]]>
<![CDATA[Kryon’s Rebranding Focuses on the Business Benefits of RPA]]>

 

Kryon has today launched a new brand presence, along with a new strategic perspective on RPA focused on delivering business benefits. The former Kryon Systems (now simply Kryon) will now be organized around a three-pronged approach the company refers to as ‘Discover, Automate, Optimize’.

As part of this brand migration, several aspects of Kryon’s go-to-market approach will change, as described below.

Focusing on the human side of the RPA equation

Kryon’s former branding package included limited personification of the RPA offering under the Leo name, and also featured an anthropomorphized robot ‘mascot’ in much of the company’s promotional and industry relations materials. That component of the company’s branding has been eliminated from its new visual identity, which now focuses much more on the human side of the RPA equation and the concept of integrating RPA into a hybrid human-digital workforce.

A new focus on business benefits rather than technological innovation

As more RPA features become ‘table stakes’ within the sector, NelsonHall has expected vendors to begin the shift from focusing on product features to business outcomes. Kryon joins that trend with its rebranding, which will include more case studies and success stories represented as a function of business KPIs, while keeping the technological conversation within the context of real-world improvements in cost, efficiency, and quality.

A new framework for the brand

The ‘Discover, Automate, Optimize’ theme speaks to Kryon’s three primary offering areas:

  • Process discovery (already soft-launched, but due for a more formal product rollout in early summer of 2018)
  • Traditional RPA
  • Analytics/AI.

To date, these have been marketed as components, but under the new branding they become part of a larger solution intended to reposition Kryon as an end-to-end provider of business process optimization solutions.

A clear effort to differentiate its offerings

Kryon has sometimes suffered in terms of its ability to break out from the pack of RPA providers and carve out a differentiated and sustainable niche for itself. Under the new brand positioning, the company is making a clear effort to differentiate its offerings based on the ability to do more than automate simple, repetitive tasks.

The company talks about enabling human workers to be mindful and focused on creative tasks by eliminating background work entirely through the application of RPA combined with AI and machine learning. While other firms offer similar messaging, Kryon’s new branding package treats repetitive work as ‘background noise’ to be removed from the typical employee’s workday.

A new name, logo & tagline

While these are often secondary in importance from a technology and business analyst’s perspective, it is worth mentioning what is and what, importantly, is not included in Kryon’s visual rebrand. Gone is the word ‘Systems’ from the old Kryon logo, in a clear effort to migrate the firm towards a broader service mandate.

The tagline ‘Be Your Future’ is added in place of ‘Systems’, again suggesting a broadening of the brand. Finally, the letter ‘O’ in the logo is given a half-gold, half-blue treatment to emphasize the hybrid human/digital nature of its offering.

Summary

2018 and 2019 are expected to be watershed years in the RPA sector, as competitive positioning begins to come into focus and leadership niches become occupied as the sector matures. Kryon is taking clear steps to include itself in the ‘tier one’ vendor conversation through a set of brand migration moves that position the company to compete well into the next decade.

]]>
<![CDATA[Redwood Introduces Disruptive New RPA Pricing Model]]>

 

Today, Redwood announced a new pricing model for its RPA software in which users pay only for units of work completed, and on a cost basis equivalent to efficient human work on the same task. As a result, if a Redwood robot sends an email, or retrieves specific data, or performs reconciliation work, the organization is charged on completion for specific amounts relevant to the parallel human cost of execution in a ‘perfect work efficiency’ environment.

This is a fundamental change from the prevalent model in the industry of paying for licenses for RPA software and estimating how many licenses will be necessary to perform specific tasks. While other pricing models exist – ranging from paying for the process rather than the robot, to buying robots outright as owned software properties – this is the first time that pricing is available both on completion and on a granular, task-centric basis. In essence, Redwood is enabling organizations implementing RPA to pay on a piecework basis, and only after the work is performed.

The new pricing model will mark the second major transition in the company’s client contracting model in the last five years. Historically, Redwood sold its software on a perpetual licensing basis, which changed over time to a more traditional annual licensed offering (although some clients are still on perpetual licenses). Redwood will need to manage a transition period in which clients can switch to the utility pricing model on the anniversary of their licenses, which may introduce some unevenness to the company’s financial performance during 2018-2019.

There are more implications for Redwood, and for the RPA industry, as a result of deploying this new pricing model:

The new model changes the revenue & profit mix for Redwood…

The company expects to see some flattening of topline revenue as a result of this change, but improved margins, with an overall increase in transaction volume. Redwood believes that by reducing barriers to entry in RPA through enabling payment by the task, and after the fact, more prospective clients will adopt the Redwood solution. This is a logical evolution of the Redwood business model in that it promotes Redwood’s library of prebuilt robots to a larger prospective audience and smooths the on-ramp to Redwood adoption for more organizations.

…and demands that Redwood’s pricing model is appealing

The company has researched levels of productivity and cost in both Western and offshore economies and modeled a function that prices Redwood tasks at roughly 20 Euro cents per moderate-duty task (retrieving a report, reconciling data, sending an email, etc.) based on a perfectly-efficient Western worker performing 156 such tasks per hour for a fully-loaded employment cost of €50k. (A low-cost economy worker performs half as many such tasks per hour for half the cost in Redwood’s model.)
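
Working backwards from those figures, the model implies roughly 1,600 working hours per year – an assumption inferred here to make the stated numbers reconcile, not a figure Redwood has published:

```python
# Reconstructing the per-task price from the figures quoted above.
FULLY_LOADED_COST_EUR = 50_000    # Western worker, per year (stated)
TASKS_PER_HOUR = 156              # perfectly efficient worker (stated)
WORKING_HOURS_PER_YEAR = 1_600    # inferred assumption

tasks_per_year = TASKS_PER_HOUR * WORKING_HOURS_PER_YEAR   # 249,600
print(f"~EUR {FULLY_LOADED_COST_EUR / tasks_per_year:.2f} per task")  # ~0.20

# Low-cost economy: half the tasks per hour at half the cost --
# the per-task price works out the same.
print(f"~EUR {25_000 / (78 * WORKING_HOURS_PER_YEAR):.2f} per task")  # ~0.20
```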

In order for Redwood to unlock the full potential value of this new pricing model, these assumptions and metrics need to be appealing to buyers.

Redwood creates more pressure on the traditional licensing model

This is still a relatively young industry in terms of establishing pricing and contracting norms, so disruptive acts (and Redwood’s new pricing model will certainly be disruptive at some level) create pressure on ‘safer’, more traditional modes of client engagement. Redwood holds a degree of advantage in that the company has an extensive library of ~35,000 prebuilt robots that it can price and sell on this model, as opposed to RPA providers whose software is customized and deployed within the client organization. It will be more difficult for traditional RPA providers to cost-effectively match the Redwood model in the market.

Reporting & invoicing challenges are addressed through Redwood Robotics itself

Transitioning from a license-based contracting structure to a high-resolution, granular use-based contracting structure would normally be a steep challenge for a software organization accustomed to annual licensing, given the degree of reporting and invoicing complexity involved. Fortunately for Redwood, these processes are being handled in their entirety by additional automations, deployed to the client organization at no charge, which monitor and document Redwood automation usage and generate regularly-scheduled invoices for the client.

Summary

Redwood has put forth a compelling new framework for equating robotic and human labor costs, and for enabling organizations to pay only for work done rather than paying for the abstraction layer inherent to a robot license.

In effect, Redwood offers piecework rates in a market dominated by ‘salaried-FTE’ model robots. While this is unlikely to become the norm for RPA pricing, it provides Redwood with a new, and potentially sustainable, source of competitive differentiation.

]]>
<![CDATA[UiPath Gains Unicorn Status with Series B Funding; To Expand into AI]]>

 

This morning, UiPath announced that the company will be receiving $153m in Series B funding from a consortium including the company’s existing investors, with two new names involved – Kleiner Perkins and CapitalG, the late-stage growth venture capital fund financed by Alphabet Inc.

The latter is of note as this arm of Google focuses on profit-centric investment rather than acquiring to serve Google’s overall strategic goals. Its notable investments to date have included Gusto (then ZenPayroll) in 2015, Airbnb and Snap in 2016, and Lyft in 2017. As a result of these investments, Laela Sturdy of CapitalG and John Doerr of Kleiner Perkins will be joining UiPath’s strategic advisory board.

This latest round of financing is meaningful on several fronts:

It places UiPath into unicorn territory

This round of funding places UiPath’s market valuation in the vicinity of $1.1bn, implying that the company has grown from seed funding to unicorn status in just 36 months. By contrast, fellow RPA unicorn Blue Prism was founded in 2001 and only recently crossed into unicorn status with a market value of $1.02bn.

…which requires more resources to support rapid growth

While this is impressive supernormal growth on its own, and a rate that suggests UiPath has taken considerable share in the past twelve months, it carries its own slate of challenges, as referenced in the profile of UiPath that NelsonHall published earlier this year. The company’s level of growth needs infrastructural backfill in multiple areas, from R&D to sales and marketing. This is a company that is adding 2.5 customers a day on its existing funding levels and operating cashflow. What might UiPath’s organic growth trajectory look like with significantly deeper sales, marketing, deployment, and R&D capabilities? We are about to find out.

It positions the company to acquire in the AI space

The company now boasts a combined war chest of ~$200m in cash, more than enough for a tactical bolt-on or two in the areas of cognitive automation and AI. UiPath already has evolved partnerships with Celonis and Enate, so the company is likely to look outside of those firms’ service footprints for acquisitions. Specifically, UiPath is looking for capabilities in the areas of natural language processing, machine learning, and identity recognition. There will be no shortage of good candidates for UiPath to choose from in these areas, but betting correctly and acquiring for maximum value will be critical in positioning UiPath for success.

It ties the company closer to Google

The CapitalG investment certainly suggests a closer relationship between UiPath and Google, which might have already manifested in UiPath’s decision to utilize Google Cloud for its cloud machine learning initiative. Given Blue Prism’s alignment with IBM, the major RPA providers are beginning to find their technology partners for long-term competition in the segment.

Google will be able to provide UiPath with a host of competitive advantages in terms of technology licensure, partner ecosystem development, and market presence. It would be interesting to see where UiPath might be in a year’s time with a closer relationship with Google’s TensorFlow team, for example, or with its Generative Adversarial Networks working groups.

It likely launches the next wave of innovation in the segment

Armed with a substantive war chest of cash with which to build and acquire new capabilities, UiPath’s actions during 2018 are not likely to go unanswered by other segment leaders. As a result, UiPath’s next moves will likely signal the beginning of the next stage of evolution in the RPA sector – one we expect to bring out the best in technological innovation among those leaders. We see UiPath as a leader in that evolutionary process.

]]>
<![CDATA[7 Essential Tasks Prior to Any RPA Implementation]]>

 

With every new software release from RPA sector leaders, there is always much to be excited about as vendors continue to push the technological boundaries of workplace automation. Whether those new capabilities focus on cognition, or security, or scalability, the technology available to us continues to be a source of inspiration and innovative thinking in how those new capabilities can be applied.

But success in an RPA deployment does not depend solely on the technology involved. In fact, the implementation design framework for RPA is often just as important – if not more so – in determining whether a deployment is successful. Install the most cutting-edge platform available into a subpar implementation design framework, and no amount of technological innovation can overcome that hindrance.

With this in mind, here are seven tasks that should be part of any RPA implementation plan before organizations put pen to paper to sign up with an RPA platform vendor.

Create a cohesive vision of what automation will achieve

Automation is the ultimate strict interpreter of instructions: it does precisely as it’s told, at speed, and in volume. But it must be pointed at the right corporate challenges, with a long-term vision for what it is (and is not) expected to do in order to be successful in that mission. That process involves asking some broad-ranging questions up-front:

  • What stakeholders are involved – internally and externally – in the automation initiative?
  • What are our organization’s expectations of the initiative?
  • How will we know if we succeed or fail?
  • What metrics will drive those assessments?
  • Where will this initiative go next within our organization?
  • Will we involve our supply chain partners or technology allies in this process?

Ensure a staff model that can scale at the speed of enterprise automation

We tend to spend so much time talking about FTE reduction in the automation sector that we overlook the very real issue of FTE sourcing (in volume!) in relation to the implementation of automation at enterprise scale. Automation needs designers, coders, project managers, and support personnel, all familiar with the platform and able to contribute new code and thoughtware assets at speed.

Some vendors are addressing this issue head-on with initiatives like Automation Anywhere University, UiPath Academy, and Blue Prism Learning and Accreditation, and others have similar initiatives in the works. It is also important that organizational HR professionals be briefed on the specific skillsets necessary for automation-related hires; this is a relatively new field, and partnering up-front on talent acquisition can yield meaningful benefits down the road.

Plan in detail for a labor outage

The RPA sector is rife with reassurances about digital workers: they never go on strike; they don’t sleep or require breaks; they don’t call in sick. But things do go wrong. And while the RPA vendors offer impressive SLAs with respect to getting clients back online quickly, sometimes it’s necessary to handle hours, or even days, of automated work manually. Having mature high-availability and disaster recovery capability built into the platform – as Automation Anywhere included in Enterprise Release 11 – mitigates these concerns to a degree, but planning for the worst means just that.

Connect with the press and the labor community

Don’t skip this section because it sounds like organized labor management only, although that’s a factor too. Automation stories get out, and local and national press alike are eager to cover RPA initiatives at large organizations. It’s a hot-button topic and an easily accessible story.

Unfortunately, it’s also all too easy to take an automation story and run with the sensationalist aspects of FTE displacement and cost reduction. By engaging with journalists and labor leaders in advance of launching an automation initiative, you’re owning the story before it can be owned elsewhere in the content chain.

Have a retraining and upskilling initiative parallel to your automation COE

Automation can quickly reduce the number of humans necessary in a work area by half or even more. What is your organization’s plan for redeployment of that human capital to other, higher-value tasks? Who occupies those task chairs now – and what will they be doing?

Once the task of automation deployment is complete, there is still process work to be done in finding value-added work for humans who have a reduced workload due to automation. Some organizations are finding and unlocking new sources of enterprise value in doing so – for example, front-line workers who have their workloads reduced through automation can often ‘see the forest’ better and can advise their superiors on ways to streamline and improve processes.

Similarly, automation can bring together working groups on tasks that have connected automations between departments, allowing for new conversations, strategies, and processes to take shape.

Have an articulation plan for RPA and other advanced technologies

RPA and cognitive automation do more than improve the quality and consistency of work – they also improve the quality and consistency of task-related data. That is an invaluable characteristic of RPA from the organizational data and analytics perspective, and one that is often overlooked in the planning process.

While it might take days for a service center to spot a trend in common product complaints, RPA platforms could see the same trend in hours, combine that data in an organizational data discovery environment with IoT data from the production line, and identify a product fault faster and more efficiently than a traditional workforce might. When designing an automation initiative, it is vital to take these opportunities into account and plan for them.
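
To make that planning point concrete, here is a minimal Python sketch, with hypothetical field names and thresholds, of how complaint records captured by bots might be scanned for an emerging product-fault trend before being correlated with production-line IoT data:

    # Illustrative sketch (not tied to any specific RPA product): scanning
    # complaint records captured by bots for an emerging product-fault trend.
    # Field names and the threshold are hypothetical.
    from collections import Counter
    from datetime import datetime, timedelta

    def flag_fault_trends(complaints, window_hours=4, threshold=25):
        """complaints: iterable of (timestamp, product_id, complaint_code)
        tuples logged by bots as they process service-center tickets."""
        cutoff = datetime.utcnow() - timedelta(hours=window_hours)
        recent = Counter(
            (product, code)
            for ts, product, code in complaints
            if ts >= cutoff
        )
        # Pairs spiking past the threshold inside the window are surfaced
        # for correlation with production-line IoT data downstream.
        return [pair for pair, count in recent.items() if count >= threshold]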

Create a roadmap to cognitive automation and beyond

RPA is no more a destination than business rules engines were, or CRM, or ERP. These were all enabling technologies that oriented and guided organizations towards greater levels of agility, awareness and capability. Similarly, deploying RPA provides organizations with insight into the complexity, structure and dependencies of specific tasks. Working towards task automation yields real clarity, on a workflow-by-workflow basis, of what level of cognition will be necessary to achieve meaningful automation levels.

While many tasks can be achieved by current levels of vendor RPA capability, others will require more evolved cognitive automation, and some will be reserved for the future, when new AI capabilities become available. By designating relevant work processes to their automation ‘containers’, an enterprise roadmap to cognitive automation and AI begins to take shape.

]]>
<![CDATA[7 Predictions for RPA in 2018]]>

 

The RPA sector is defined by rapid technological evolution, and every year it seems like what we thought to be bleeding-edge capability in January turns out to be proven and deployed technology long before year’s end. With this rapid pace of growth and maturation in mind, where might the RPA sector be by the end of 2018? Here are seven predictions.

The first wave of automation-inclusive UI design

To date, RPA has been adaptive in nature – automation software has done the interpretive labor to ‘see’ the application screen as humans do. But as more and more repetitive-task work becomes automated, software designers will begin taking the strengths and weaknesses of computer vision into account in designing applications that will be shared between human and digital workers. This will show up in small ways at first, particularly in interface areas that are challenging for RPA software to learn quickly, but over the course of 2018, ‘hybrid workforce UI design’ will become a new standard for enterprise software vendors.
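
As a rough illustration of the idea, the Python sketch below assumes a hypothetical application that ships stable automation identifiers (the URL and attribute name are invented, and the Selenium library is assumed to be available), letting a bot locate elements by contract rather than by coordinates or computer vision:

    # Hypothetical sketch of a UI designed for a hybrid human/digital
    # workforce: the application exposes stable automation IDs, so the bot
    # locates elements by contract rather than by coordinates or vision.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://example.internal/claims")  # hypothetical internal app

    # A designed-in, versioned attribute survives visual redesigns that
    # would break coordinate- or vision-based element location:
    submit = driver.find_element(By.CSS_SELECTOR, "[data-automation-id='submit-claim']")
    submit.click()
    driver.quit()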

Process mining makes RPA more accessible for midmarket & emerging large market segments

Early adopters of RPA have already established that detailed process mapping is key to successful task automation across the extended enterprise. For Fortune 1000 firms, that can be fairly straightforward, with retained consulting and systems integration partners on hand to assist in the process of mapping task flows for RPA implementation. Smaller firms, however, don’t always have the luxury of engaging large consulting firms to assist in this process – so vendors developing their own automated process mapping technology, or partnering with third-party providers like Celonis, will find demand booming in the midmarket.

Human skill bottleneck hits providers without education/certification plans

It’s ironic that human skill capital will end up as the limiting factor in the growth rate of successful RPA implementations, but 2018 will close with a clear shortage of qualified automation designers and deployment management professionals. Those organizations (like UiPath, Blue Prism, and Automation Anywhere) that saw this coming early on and established academic settings for the education and certification of on-platform skilled practitioners will thrive. But those lacking these programs may find themselves in a skill bottleneck in the market – one that will begin to materially inhibit growth.

RPA becomes a designed-in factor for disruptors

In conversations I had with organizations implementing RPA during 2H17, one common factor came to the fore: that their initial FTE rationalization gains had already been realized, and going forward, they were looking to RPA as a means to manage significant growth in their operations.

For organizations coming to market as disruptors, this trend is even more pronounced, and organizations with designs on being disruptive forces are increasingly building automation capabilities into their growth plans from the ground up. Building an organization on a foundation of a hybrid human-digital workforce is a different endeavor entirely from retrofitting an existing company with automation – and as a result, we should begin seeing some real innovation in organizational design beginning this year.

Japan becomes the adoption template geo for big bets

To date, Japan has produced some of the largest implementations of RPA, with UiPath’s late 2017 deployment at SMBC pushing the envelope still further. Japan is betting big on RPA to become a sustainable source of competitive differentiation, and as more large organizations there implement large-scale RPA projects, the best practices library for RPA deployment at scale will expand in kind.

Companies worldwide looked to Japan for guidance in implementing robotics once before, during the rise of robotic manufacturing in the automotive sector. 2018 will see a second such wave.

RPA proves its case as a source of compliance gains

RPA has been marketed with a number of different value creation characteristics already, with the obvious cost reduction and quality improvement factors taking center stage. But RPA has significant benefits to offer organizations in regulated industries, most notably in the ability to secure access to sensitive information, systematize the process of accessing and modifying that information, and standardize the documentation and audit logging work associated with it.

2018 will be the year that organizations begin to see meaningful returns from adopting RPA as a solution to compliance task challenges.

Demand for specialist implementation navigators grows significantly

RPA implementation has been a partnered endeavor since the technology first arrived on the scene, with software vendors allying themselves closely with large consulting firms and systems integrators to optimize their client deployments. But demand is emerging for focused, automation-centric services, and right on time, the industry is seeing a surge of new RPA specialist service providers like Symphony and Agilify.

As buying organizations begin to ask more of their new – or revamped – RPA implementations, demand for these providers’ services will grow swiftly during 2018.

]]>
<![CDATA[CSS Corp’s Contelli Automation Platform Driving Improvements in Enterprise Network Management]]>

 

As 2018 begins, the RPA sector is starting to produce more segment specialists from within its vendor base. Whereas just two years ago the sector was still finding its footing in addressing common back- and front-office application automation, enterprise customers today have the luxury of building best-of-breed solutions that often incorporate two or more vendors working in concert to automate a broader spectrum of tasks.

CSS Corp’s Contelli is a relatively new automation platform, but one that is gaining attention for its capability set in a complex and high-value enterprise support area – namely, automated network management. Contelli received an elevated role at CSS in the wake of the company’s late 2016 reorganization, which saw CSS' board elect to change the direction of the firm. As part of this strategic direction change (one that saw an influx of new management talent in the executive suite), the company transitioned from a corporate focus heavy on legacy IT services to one centered on customer engagement and digital transformation. That transition also included an elevated role for CSS' automation platform, which was rebranded from AIMS (Automated Infrastructure Management Solution) to Contelli.

The product continuously analyzes client IT operations, using network traffic data paired with algorithmic analysis of historical data to predict downtime, reconfigure traffic for improved efficiency, dynamically provision and de-provision IT assets, and resolve repetitive support tasks. In typical deployments of the Contelli IT Management Engine, CSS estimates ~30-40% improvements in operational efficiency and ~45% to ~65% reductions in FTEs.
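
Contelli’s algorithms are proprietary, so the following is only a generic sketch of the kind of history-based downtime signal described above, using a simple z-score heuristic:

    # Generic sketch of a history-based downtime signal; Contelli's actual
    # algorithms are proprietary and certainly richer than this check.
    import statistics

    def downtime_risk(traffic_history, current_load, z_threshold=3.0):
        """traffic_history: past utilization samples for a device (percent).
        Returns True when the current load deviates sharply from the norm."""
        mean = statistics.mean(traffic_history)
        stdev = statistics.stdev(traffic_history)  # needs >= 2 samples
        if stdev == 0:
            return current_load != mean
        return abs((current_load - mean) / stdev) >= z_threshold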

Although Contelli’s brand name may be a new one in the market, the platform has already achieved success. For a leading managed network services provider with 450k network devices under management, Contelli software provided the client with a 25% improvement in average handle time for open ticket calls, a 22% improvement in case closure rate, and, perhaps most importantly, a 100% success rate in case audits performed on work Contelli automated.

Three factors make Contelli an appealing offering for organizations seeking to reduce their network management costs:

  • It touches a broad range of KPIs. Network optimization isn’t always realized by identifying a few significant sources of cost savings and quality improvement potential; often, the task involves incremental improvement of multiple KPIs, from throughput and traffic efficiency to asset provisioning speed, to support ticket resolution turnaround cycle. Contelli’s position within the network management stack enables the product to offer a broad array of improvements in KPIs across multiple task areas
  • It learns continuously from network data. Automating a fluid process is among the steepest challenges in intelligent automation today. As variables change within the task area to be automated, the RPA platform of choice must not only be able to adapt on the fly, but learn entirely new sets of events and exceptions as topologies and assets evolve. Contelli’s development team has invested considerable time and resources in the product’s machine learning layer to enable dynamic network management automation
  • It is a focus area for CSS’ Innovation Labs. Contelli is a mature offering today, but CSS has significant plans to improve and upgrade the product’s machine learning capabilities in the company’s Innovation Labs, an R&D environment for continuous improvement of the platform. CEO Manish Tandon has circled Innovation Labs in red as a key strategic plank for the company’s evolution, and Contelli is slated for considerable time ‘up on the lift.’

Contelli isn’t a ‘one stop shop’ for front- and back-office enterprise automation, but for organizations seeking to self-fund a larger-scale RPA initiative with a broad slate of KPI improvements in a critical business task area, it’s an appealing choice for network management administrators. 

]]>
<![CDATA[Intelligent Automation Summit Takeaways: Four Alternative Gain Frameworks for RPA]]>

 

At the Intelligent Automation (IA) event in New Orleans, December 6-8, snow in the Big Easy air was not the only surprise. As expected, there was plenty of technological innovation on show in the exhibition hall, but the event also played host to some energized discussions on human-centric gains to be realized from RPA implementation – suggesting that we are indeed moving into the next phase of considering automation holistically in the enterprise.

Specifically, many presentations and conversations shared a theme of human enablement within the enterprise – positioning the organization for greater long-term success, rather than focusing on the short-term fiscal gains of reductions in force and reduced cost to serve specific processes. Here are four automation gain frameworks I took away from the event that are focused on areas other than raw FTE reduction.

Automation as a disruption buffer

‘Disrupt or be disrupted’ has become a mantra for many change management executives across industries, and it was invoked numerous times during the IA event in relation to automation’s role as a buffer to disruptive change – in both directions. An automated workforce can quickly scale up (or down) as needed without costly and time-consuming facility management and workforce rationalization tasks. While there was some discussion regarding the downside containment role of RPA, far more participants at the event were looking to RPA as a tool to effectively manage explosive growth in their sectors.

Automation as a ‘hazmat bot’

The idea of using bots to handle sensitive processes and data emerged as a strong theme for the near-term RPA sector roadmap. Whereas bots were previously trusted less than humans with ‘low-touch’ environment data in highly-regulated industries like BFSI and healthcare, the dialog is beginning to turn in favor of sending bots, rather than humans, to touch and manipulate that data.

The rationale is sound: bots can be coded with very narrowly-defined rights and credentials, self-document their own work without exception, and produce their own audit trails. Expect to see this trend gain steam in 2018 and beyond. ‘We send bots into nuclear reactors and onto other planets,’ one attendee told me. ‘We treat the data core in card issuance with no less of a hazmat perspective – where we can minimize human contact, we will, for everyone's benefit.'
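
A minimal sketch of the ‘hazmat bot’ pattern might look like the following, where the action names are hypothetical: the bot operates under narrowly-defined rights, and every action is appended to a tamper-evident, self-documenting audit trail:

    # Minimal sketch of the 'hazmat bot' pattern; action names are hypothetical.
    import hashlib
    import json
    from datetime import datetime, timezone

    ALLOWED_ACTIONS = {"read_card_record", "mask_pan"}  # narrowly-defined rights
    audit_log = []

    def bot_execute(action, record_id, credential_scope):
        if action not in ALLOWED_ACTIONS or action not in credential_scope:
            raise PermissionError(f"bot not authorized for {action}")
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "record": record_id,
        }
        # Chain each entry to the previous one so the trail is tamper-evident.
        prev = audit_log[-1]["hash"] if audit_log else ""
        payload = prev + json.dumps(entry, sort_keys=True)
        entry["hash"] = hashlib.sha256(payload.encode()).hexdigest()
        audit_log.append(entry)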

Automation as a workflow stress diagnostic

The very process of automating workflows within the organization produces a wealth of usable data, and nowhere is that more evident than in analyzing those workflows for exception management stress points. In a given workflow, there are usually clearly defined and straightforward task components, and those that produce more than an average volume of exceptions. By mapping these workflows and using them to understand similar tasks in other areas of the organization, companies can leverage automation data to identify those phases of a workflow that are creating exception management stress for employees, and add support via process redesign, digitization, or assisted automation.
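
The diagnostic itself can be simple. Below is a minimal Python sketch, assuming a hypothetical log format of (step, is_exception) pairs, that surfaces the workflow steps generating disproportionate exception volume:

    # Sketch of an exception stress-point diagnostic over a hypothetical log
    # format of (step_name, is_exception) pairs drawn from automation runs.
    from collections import defaultdict

    def exception_hotspots(log_entries, min_rate=0.15):
        totals = defaultdict(int)
        exceptions = defaultdict(int)
        for step, is_exception in log_entries:
            totals[step] += 1
            if is_exception:
                exceptions[step] += 1
        # Steps whose exception rate exceeds the floor are candidates for
        # process redesign, digitization, or assisted automation.
        return {
            step: exceptions[step] / totals[step]
            for step in totals
            if exceptions[step] / totals[step] >= min_rate
        }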

Automation as human capital churn ‘coolant’

Related to the previous point is the idea that RPA is beginning to serve as a very real source of ‘coolant’ for burnout-prone repetitive task areas in the organization by continuously separating work into automation-relevant and human-relevant. Eliminating the most burnout-causing task stages from the human workday reduces the proclivity for turnover and the total cost to the organization of managing the human side of the workforce.

Summary

Productivity, quality, and fiscal gains are often the first three topics of conversation when organizations discuss launching an RPA initiative. But automation has much more to offer, not only to the organizational bottom line, but to the human employees in the enterprise as well. As this sector’s technology offerings evolve and mature, so too do the use cases and benefit frameworks within customer organizations.

]]>
<![CDATA[Adventures in Blockchain: Mphasis Focuses on Client Revenue Growth, Supporting Compelling Use Cases]]>

 

In this article, I look at Mphasis’ Blockchain initiatives and at the segments they are focusing on for further development with their financial services clients. Mphasis began its Blockchain initiatives in 2016, initiating internal experiments and POCs to understand the technology and how it can be applied to business challenges.

Mphasis is working with a global financial services company on POCs and an approach to bringing a customer identity solution to the financial services market, in order to address consumer data challenges in a global environment. The customer and Mphasis are working to address multiple issues including:

  • Solution construct, design approach, and related technology considerations: selecting the right Blockchain technology from options such as BigchainDB, Hyperledger, Ethereum, and Multichain; network transaction currency and conversion to fiat; and engagement layer and access point technologies
  • Industry ecosystem participation considerations – incentives, privacy protections, regulatory compliance considerations, trust and risk, and access point technologies to join the network
  • POC prototype and demo – for an initial MVP.

The POC took 7 weeks to demonstrate that the technology works and compliance is achievable. The solution was set up as a multi-node environment that enables the industry participants to transact, by enabling functions such as set-up and administration, search, crypto-payments, transaction administration, analytics, regulatory oversight and access.

Since then, Mphasis has built an ecosystem of Blockchain tools and best practices, and conducted multiple POCs. Clients are narrowing the range of use cases they wish to pursue further and are driving some of those into production.

Mphasis’ Blockchain services & use cases

Mphasis has a core group of 10+ engineers working on Blockchain initiatives who are based in Bangalore. Key attributes of Mphasis’ Blockchain ecosystem include:

  • POCs completed to date: 12, of which 50% were client requested and 50% internally undertaken
  • Clients engaging on Blockchain: 7 across banking, insurance, and airlines
  • COE founded: 2016
  • Platforms employed: Ethereum, Hyperledger, Multichain, and Bigchain.

Mphasis focuses on the Ethereum and Hyperledger platforms in its Blockchain work, and expects to add a capability in Quorum soon. Key POCs to date include:

  • Trade finance for banks: enabling a decentralized network between importer, exporter, port authorities, and banks. Key issues addressed include document verification, fraudulent activity incidence, and document losses
  • Mortgage document management: the goal is to store documents on the DLT as a customer goes through the loan application process, allowing vendors (e.g. insurance companies) to access the documents and speed up TAT, which will reduce the cost of origination and improve customer experience (see the sketch after this list)
  • Record keeping: enabling a single version of the truth, with additional components including IOT and smart contracts
  • Patient health records: enabling confidential sharing of patient records with intended participants only
  • Baggage-as-a-service: a distributed, decentralized system that lets passengers track their bags during travel from a mobile device
  • Group insurance claims: stakeholders including hospitals, insureds, insurer, and third-parties transact and exchange documents to enable fast settlement of claims
  • Contract management: digital signing of documents on a Blockchain network to ensure transparency
  • KYC registry: enabling a KYC market utility using Blockchain.
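
For the mortgage document management use case above, a generic sketch of the underlying pattern (not Mphasis’ actual implementation) is to anchor document fingerprints on the ledger so that a vendor can verify any copy it receives without a central intermediary:

    # Generic pattern sketch, not Mphasis' implementation: anchor a document's
    # fingerprint on a ledger so a vendor can verify a copy it receives.
    import hashlib

    ledger = []  # stand-in for a DLT such as Hyperledger or Ethereum

    def record_document(doc_bytes, applicant_id, doc_type):
        digest = hashlib.sha256(doc_bytes).hexdigest()
        ledger.append({"applicant": applicant_id, "type": doc_type, "sha256": digest})
        return digest

    def verify_document(doc_bytes):
        # An insurer holding a copy can confirm it matches what was recorded.
        digest = hashlib.sha256(doc_bytes).hexdigest()
        return any(entry["sha256"] == digest for entry in ledger)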

Going forward, Mphasis will focus on:

  • Consulting for clients considering Blockchain initiatives
  • Delivering Blockchain implementations (POC or operational) with integrated application suites to reduce time to market and increase platform efficiency
  • Delivering operational support for Blockchain environments based on its solution experience
  • Continuing to create use cases around KYC registry, mortgage document management, trade finance, baggage-as-a-service, and group insurance claims.

Conclusions

To date, most Blockchain services vendors have been focused on enabling small groups of direct stakeholders to use Blockchain to eliminate the need for third-party support. Mphasis has focused instead on enabling stakeholders to bring in third-parties as customers, and use Blockchain as a highly secure, reliable self-service tool. This should allow data holders, the sponsors of these initiatives, to monetize their investments in customer data and documents. This will allow Mphasis eventually to transition its Blockchain services towards operations support and cybersecurity. By supporting its clients’ efforts to drive revenue growth, Mphasis is able to support compelling use cases for employing this technology.

]]>
<![CDATA[In RPA Deployment, Slow Down... To Go Faster]]>

 

RPA software offers users the tantalizing possibility of being able to simply 'hit record and go' at the beginning of an enterprise automation initiative. But organizations that are seeing the greatest returns are slowing the initial process down, and framing their initiatives as they would treat any major technology migration.

At UiPath’s recent User Summit in New York City, one of the hottest topics was the right pace of RPA implementation, with UiPath’s customer and partner panels devoting a considerable amount of time to the topic. And the message was clear: RPA is a technology that encourages an implementation rate faster than the customer might want to sign up for.

That very idea is a strange one for most veteran IT and business executives, who are used to IT project implementations going slower than expected, with fiscal returns further in the future than they might have hoped. So when a technology like RPA does come along that promises to enable users to ‘hit record and go’, why shouldn’t beleaguered line of business heads take those promises at face value and get moving with automation today? After all, automation is often part of a larger digital transformation initiative, with expectations that projects will be self-funding through savings. Shouldn’t technologies, like RPA, that generate material cost reductions be implemented as quickly as possible?

It’s a fair question. But there are four simple reasons why RPA projects should still be managed in a stepwise fashion, like any other IT or business project:

  • Technical debt mounts quickly in too-quick RPA implementations. The ‘hit record and go’ philosophy might offer some minimal return in a short period of time, but federating the automation creation process means that multiple users often create similar automations for similar tasks, wasting the time and resources consumed later in consolidating different versions of the same robot down to a single bot. In addition, individual users often create related-task bots based on their original automation scripts, multiplying the task of bot consolidation later. Often, organizations find that they have to start over completely, and only then do they undertake a more formal approach
  • Installing RPA through a traditional project framework brings stakeholders together. Automation is a technology that has the potential to bring IT and business stakeholders together in an enterprise service delivery partnership – or drive them apart with turf battles and finger-pointing. Establishing rules up front for which business units should be involved in automation design, which in automation coding, which in automation governance, and which in automation innovation establishes ground rules that all parties involved can respect and buy into for the long term, avoiding larger-scale conflict that can emerge when the process is entered into too quickly up front
  • Designing for scale demands both innovation and centralization. As automation demand scales both in terms of breadth of services within the organization and the number of workers involved, the need for centralization of automation design and deployment increases commensurately. Innovation can actually proceed faster in many organizations being managed from a CoE or automation ‘lighthouse’ than through trial and error at the desktop level. Add in the additional demands on automation systems that result from global organizations demanding localized automations and in-language service, and that scale factor becomes a critical component in achieving peak fiscal return from an RPA initiative
  • Most RPA providers rely on integration partners for ‘right-speed’ deployment and support. Across the RPA sector, strong partnerships have evolved between RPA software developers and major integrators and consulting service providers, and for good reason – the latter bring experience in change management, process design, and implementation at scale to the former’s technological innovations. This has quickly become a proven combination, and one that is returning significant fiscal and operational value to enterprise-scale organizations. Short-circuiting that value return chain by cutting partner perspective and capability out of the equation might again save some dollars and time in the short run, but will end up being more costly as RPA is scaled up.

RPA presents IT and business leaders with an alluring combination of immediacy of access, significant potential fiscal returns, and low to non-existent stack requirements on deployment. Organizations that have jumped into the deep end of enterprise automation from the ‘hit record and go’ perspective might see some immediate fiscal returns, but ultimately, they are selling short the full promise of professionally-managed automation projects executed in partnership between lines of business and IT. Providers like UiPath that are emphasizing speeding up implementation are doing so with a structured framework in mind – so that once the process is designed for scale, and implementation rules and procedures are put in place, the actual software component of the solution can proceed into deployment as quickly as possible.

But in the end, a few additional weeks or even months spent in up-front work can better enable enterprise-level organizations to achieve their peak automation return. Moreover, this approach saves costly rework and redesign stages that inevitably stretch a ‘hit record and go’ implementation out to the same project timeline, or often much longer, than a more structured approach. As strange as it may sound, the best practices in RPA deployment involve slowing down… in order to go faster. 

]]>
<![CDATA[Infosys’ Testing Practice Update: AI, Chatbots & Blockchain]]>

 

We recently caught up with Infosys to discuss where its Infosys Validation Solutions (IVS) testing practice is currently investing. This is a follow-up to a similar discussion we had with Infosys back in July 2016 that centered on applying AI and making sense of the data that client organizations have.

Our most recent discussion looked at technologies such as AI, chatbots, and blockchain. The focus of IVS has expanded from immediate opportunities within software testing to Infosys’ overall development of new IT services offerings.

AI: more use cases are the priority

AI remains a priority for IVS, with attention to date having centered on developing use cases in test case optimization and defect prediction. Its PANDIT IP correlates new software releases with past defects, feature changes, and test cases, and determines which part of the new release’s code is responsible for defects. IVS points out that the implementation (identifying the lines of code responsible for a defect) is relatively difficult. IVS is taking a gradual approach and starting with COTS, the underlying rationale being that new releases of COTS are much better documented than custom applications: identifying the part of the code responsible for a bug is therefore easier, and the bug is likely to be in the custom code of the COTS.
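
PANDIT’s internals are not public, so the following Python sketch is only a generic stand-in for the kind of correlation described above: ranking the modules changed in a new release by their historical defect density:

    # Generic stand-in for release/defect correlation; PANDIT's internals are
    # not public, so this only illustrates the shape of the problem.
    from collections import Counter

    def rank_suspect_modules(changed_modules, historical_defects):
        """changed_modules: modules touched in the new release.
        historical_defects: (module, defect_id) pairs from past releases."""
        defect_counts = Counter(module for module, _ in historical_defects)
        # Changed code with the densest defect history is tested first.
        return sorted(changed_modules, key=lambda m: defect_counts[m], reverse=True)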

Chatbots: testing response validity

The use of chatbots/virtual agents challenges the traditional functional testing model, which largely relies on executing a test case against a process (e.g. a user tries to log in to a website) and making sure the transaction outcome is valid (e.g. the user is indeed logged in). With chatbots, the goal is not so much process testing but response testing (see the sketch after this list), for example:

  • Interpreting questions correctly
  • Dealing with the wide range of expression options end-users have for the same idea
  • Selecting the most appropriate response from a high number of potential responses.
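
A minimal sketch of such response testing, where classify_intent() is a hypothetical hook into the virtual agent under test, asserts that many phrasings of the same idea resolve to the same intent:

    # Sketch of response testing: many phrasings of one idea should resolve
    # to the same intent. classify_intent() is a hypothetical hook into the
    # agent under test.
    PHRASINGS = [
        "What's my balance?",
        "How much money do I have?",
        "Show me my account balance",
    ]

    def test_balance_intent(classify_intent):
        results = {phrase: classify_intent(phrase) for phrase in PHRASINGS}
        assert all(intent == "check_balance" for intent in results.values()), results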

Of course, as with any ML, this requires multiple iterations with SMEs for the virtual agent to learn, in addition to using language libraries; this is a work in progress with early PoCs with clients.

Blockchain: integration complexity & business rules testing

The complexities with blockchain are different from those with chatbot testing. With blockchain, as with IoT, the complexity lies in its principles: a decentralized architecture, and many parties/items involved. IVS is assessing how to conduct testing around authentication and security, and communication across nodes, while also making sure transactions are processed and replicated across nodes.

Looking ahead, there will be a challenge with functional testing in validating the underlying business logic/rules while also complying with different local business regulations and languages. IVS is developing approaches to validate these contracts and is in the early phases of PoCs with clients.

Conclusion: the challenge is to automate testing of complex software at scale

The challenge of testing chatbots, blockchain, IoT, and physical robots is not so much about effective functional and non-functional testing as about moving the testing of such technologies to an industrial level, using automation software that only partially exists today.

The good news is that the ecosystem of testing start-ups is vibrant, and larger software testing services providers like Infosys are investing now in preparation for the surge in adoption of such technologies. 

]]>
<![CDATA[Adventures in Blockchain: Capgemini Focuses on Helping Clients Develop Their Roadmap]]>

In this blog, I look at Capgemini’s Blockchain initiatives and what segments they are focusing on for further development with their financial services clients.

Initially, Blockchain engagements were focused on: 

  • Using POCs to develop an understanding of the capabilities and limitations of distributed ledger technology (DLT)
  • Developing business use cases, trying POCs to determine if there is an effective business application of the technology
  • Conducting due diligence on vendors to understand the supplier ecosystem.  

Recently, financial institutions have been narrowing the range of use cases and vendors they are willing to consider. They are looking to drive forward one or more use cases to full production, and their focus with Blockchain services vendors is to develop a selective roadmap for operational deployment of a few high priority engagements.

Capgemini’s Blockchain services & use cases

Capgemini has been pursuing Blockchain for two and a half years, and it has a group of 25+ engineers working on Blockchain initiatives, with seven engagements currently in play. Capgemini’s Blockchain practice believes successful initiatives require a combination of business domain and technology expertise, and it focuses on five areas:

  • Technology expertise: especially DLT, cybersecurity, communications, and data management
  • Domain expertise:
    • Structured finance: trade finance and factoring, non-listed, non-codified bilateral agreements
    • Payments: real-time international payments transactions, including compensation, settlement, and reporting
    • Capital markets: post-trade automation (including optimized collateral operations), syndicated & commercial lending, and non-listed securities
    • Insurance and reinsurance: focused on European companies for smart contract management 
    • Digital identity: security and personal identity for access to the DLT
  • Program management: DLT projects are complex and agile, with the client and vendor working together on the project
  • Alliance partners: cloud providers, and product vendors. Capgemini participates on industry panels, especially on Hyperledger Fabric, to create and support roadmap development
  • Partner on business: platform-based operations delivery, and the creation and governance of the utility that will provide service to clients.

Currently, Capgemini works with four key technology stacks:

  • Symbiont
  • Hyperledger
  • R3 Corda
  • Ripple.

Capgemini believes its understanding of the current state environment within a given client (both business processes and technology processes) is a differentiator. Further, that understanding is required to effectively reimagine processes using any advanced technology, especially Blockchain.

Ultimately, Capgemini wants to act as a universal integrator, partnering with technology providers to support clients redesigning their business with Blockchain-centric services that also leverage complementary capabilities like AI and machine learning. Capgemini is aiming to serve as the transformation partner for its clients, with distributed ledger technology as the transaction framework for deploying next-generation, collaborative operating models. Working with key partners, it will continue to evolve core technical competencies in Blockchain for its clients, such as:

  • Blockchain as-a-service
  • Security as-a-service
  • Identity management as-a-service.  

Conclusions

To date, most Blockchain services vendors have been:

  • Delivering POC engagements to clients as clients work to identify opportunities to use Blockchain technologies, or…
  • Building Blockchain POCs for utilities they might productize for clients.

Capgemini is pursuing a third path of building on its extensive work with client legacy systems, and coupling that domain knowledge of the client with its own ability to coordinate multiple technology vendors to create faster, more effective business restructuring around Blockchain capabilities.

Ultimately, as Blockchain technology matures, Capgemini will transition to providing Blockchain infrastructure services focused on security and technology platform outsourcing. While the technology is still at a very early stage, adoption increasingly looks likely to be led by tier-one institutions. The technology will mature rapidly, and infrastructure providers will be harvesting most of the revenues being created for vendors in Blockchain.

]]>
<![CDATA[Nvidia Draws on Gaming Culture to Compete for AI Chip Leadership]]>

 

Nvidia faces stiff new competition for the leadership position in the AI processing chip market. But the firm has a significant competitive advantage: a culture of innovation and production efficiency that was developed to address the demanding needs of a wholly different market.

Intel and Google have been making waves in the AI processing chip market, the former with the acquisitions of Nervana Systems and Mobileye, the latter with the new Tensor Processing Unit (TPU) announcement. Both are moves intended to compete more directly with Nvidia in the burgeoning market for AI processing chips.

James Wang of investment firm ARK recently set forth his long-term bet on the industry – and it favors Nvidia. Wang posits that products like TPU will be less efficient than Nvidia GPUs for the foreseeable future, arguing that “…until TPUs demonstrate an unambiguous lead over GPUs in independent tests, Nvidia should continue to dominate the deep-learning data center.”

Wang is right, but his opinion may not actually go far enough in explaining why Nvidia should enjoy a sustainable advantage over other relative newcomers, despite their resources and experience in chipbuilding. That advantage, by the way, doesn’t have a thing to do with Google’s chip fabrication expertise, or Intel’s understanding of the needs of the AI market. It’s a deeper factor that’s seated firmly in Nvidia’s culture.

Cutting-edge engineering & savvy pricing: key strengths forged in the gaming cauldron

By the time 2017 dawned, Nvidia owned just over three-quarters of the graphics card segment (76.7%), compared with main competitor AMD’s one-quarter (23.2%). But that wasn’t always the case. In fact, for much of the past decade, Nvidia held an uncomfortable leadership position in the marketplace against AMD, sometimes leading by as few as ten points of market share (2Q10).

During that time, Nvidia understood that a misstep against AMD in bringing new products forth could yield the market leader position, and even send the company into an unrecoverable decline if gamers – a tough audience to say the least – lost confidence in Nvidia’s vision.

As such, Nvidia learned many of the principles of design thinking the hard way. They learned to fail fast, to find new segments in the market and exploit them – as they did with the GTX 970, a product that stunned the marketplace by being priced underneath its predecessor at launch – and to take and hold ground with innovation and rapid-cycle development. More importantly, they learned how to demonstrate value to a gamer community that wanted to buy long-term performance security when it was time for a hardware refresh. In short, they learned to understand the wants and needs of an extraordinarily demanding consumer public, in the form of gamers, and relentlessly squeezed their competition out with a combination of cutting-edge engineering and savvy segment pricing.

Much of the real-world output from that cultural core of relentless engineering improvement is the remarkable pace of platform efficiency that Nvidia has achieved in its GPU chips. The company maintained close ties with leading game publishing houses, and as a result kept clearly in mind what sort of processing speed – as well as heat output and energy draw – cutting-edge games were going to require. At multiple points in time, the standards for supporting new games have meaningfully advanced inside eighteen months. This often mandated that Nvidia turn over a new top-end GPU processing platform on a blistering production timeline.

In response, Nvidia turned to parallel computing, an ideal fit for GPUs, which already offered significantly more cores than their CPU cousins. As it turned out, Nvidia had put itself on the fast track to dominating the AI hardware market, since GPUs are far better suited for applications, like AI, that demand computing tasks work in parallel. In serving one market, Nvidia built a long-term engineering and fabrication roadmap nearly perfectly suited for another.

The competition is hot, but is Nvidia poised to win?

Fast forward to 2017, and some are questioning whether Nvidia is in the fight of its life now with new, aggressive competitors seeking to take away part – or all – of its AI GPU business. While Wang has pushed his chips into the center of the table on Nvidia, others are unconvinced that Nvidia can hold its lead – especially with fifteen other firms actively developing Deep Learning chips. That roster includes such notable brands as Bitmain, a leading manufacturer of Bitcoin mining chips; Cambricon, a startup backed by the Chinese government; and Graphcore, a UK startup that hired a veritable ‘who’s who’ of AI talent. 

There’s no shortage of innovation and talent at these organizations, but hardware is a business that rewards sustained performance improvement over time at steadily reducing cost per incremental GFLOPS (where a GFLOP is one billion floating point operations per second). The first of these components is certainly an innovation-centric factor, but the second rewards organizations that have kept pace not only with the march of performance demands, but the need to justify hardware refresh with lower operating costs. Given that this is an area where Nvidia shines, as a function of its cultural evolution under identical circumstances in gaming, the sector’s long-term bet on Nvidia is the correct call. 

 

Dave Mayer is currently working on a major global project evaluating RPA & AI technology. To find out more, contact Guy Saunders.

]]>
<![CDATA[HCL's 3-Lever Approach to Business Process Automation: Risk & Control Analysis; Lean & Six Sigma; Cognitive Automation]]> HCL has undertaken ~200 use cases spanning finance & accounting, contact and product support, cross-industry customer onboarding, and claims processing, using products including Automation Anywhere, Blue Prism, UiPath, WorkFusion, and HCL’s proprietary AI tool Exacto.

This blog summarizes NelsonHall’s analysis of HCL's approach to Business Process Automation covering HCL’s 3-lever approach, its Integrated Process Discovery Technique, its AI-based information extraction tool Exacto, the company’s offerings for intelligent product support, and its use of its Toscana BPMS to drive retail banking digital transformation.

3-Lever Approach Combining Risk & Control Analysis, Lean & Six Sigma, and Cognitive Automation

  • The 3-lever approach forms HCL’s basis for any “strategic automation intervention in business processes”. The automation is done using third-party RPA technologies together with a number of proprietary HCL tools, including Exacto, a cutting-edge Computer Vision and Machine Learning based tool, and iAutomate for run book automation

  • HCL starts by conducting a 3-lever automation study and then creates comprehensive to-be process maps. As part of this 3-lever study, HCL also conducts complexity analysis to create the RPA and AI roadmap for organizations using its process discovery toolkit. For example, HCL has looked at their entire process repository for several major banks and classified their business processes into four quadrants based on scale and level of standardization

  • When generating the “to be” process map, HCL’s Integrated Process Discovery Technique places a high emphasis on ensuring appropriate levels of compliance for the automated processes and on avoiding the automation of process steps that can be eliminated

  • The orchestration of business processes is done using HCL’s proprietary orchestration platform, Toscana©. Toscana© supports collaboration, analytics, case management, and process discovery, and incorporates a content manager, a business rules management system, a process simulator, a process modeler, process execution engines, and an integrated offering including social media monitoring & management.

Training Exacto AI-based Information Extraction Tool for Document Triage within Trade Processing, Healthcare, Contract Processing, and Invoice Processing

  • HCL’s proprietary AI-enabled machine learning solution, Exacto, is used to automatically extract and interpret information from a variety of information sources. It also has natural language and image-based automated knowledge extraction capabilities

  • HCL has partnered with a leading U.S. University to develop its own AI algorithms for intelligent data extraction and interpretation for solving industry level problems, including specialist algorithms in support of trade processing, contract management, healthcare document triage, KYC, and invoice processing

  • Trade processing is one of the major areas of focus for HCL. Within capital markets trade capture, HCL has developed an AI/ML solution, Exacto | Trade. This solution is able to capture inputs from incoming fax-based transaction instructions for various trade classes such as derivatives, FX, and margins, with an accuracy of over 99%.

Combining Watson-based Cognitive Agent with Run Book Automation to Provide “Intelligent Product Support”

  • HCL has developed a cognitive solution for Intelligent Product Support based on the cognitive agent LUCY, Intelligent Autonomics for run book automation, and Smart Analytics with MyXalytics for dashboards and predictive analytics. LUCY is currently being used in support of IT services by major CPG, pharmaceuticals, and high-tech firms, and in support of customer service for a major bank and a telecoms operator

  • HCL’s iAutomate tool is used for run book automation, and HCL has already automated 1,500+ run books. The tool uses NLP, ML, pattern matching, and text processing to recommend the “best matched” run book for a given ticket description (see the sketch after this list). HCL estimates that it currently achieves “match rates” of around 87%-88%

  • HCL estimates that it can automate 20%-25% of L1 and L2 transactions and has begun automating internal IT infrastructure help-desks.
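
The ‘best matched’ recommendation referenced in the list above can be illustrated with a far simpler stand-in than HCL’s NLP/ML approach; the sketch below scores hypothetical run book descriptions against a ticket using plain string similarity:

    # Simple stand-in for 'best matched' run book lookup; HCL's iAutomate
    # uses NLP/ML, whereas this sketch uses stdlib string similarity.
    from difflib import SequenceMatcher

    def best_run_book(ticket_text, run_books):
        """run_books: dict mapping run book name -> description (hypothetical)."""
        def score(description):
            return SequenceMatcher(None, ticket_text.lower(), description.lower()).ratio()
        # Return the run book whose description best matches the ticket.
        return max(run_books, key=lambda name: score(run_books[name]))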

Positioning its Toscana Platform to Drive Digital Transformation in Retail Banking

  • HCL is embarking on digital transformation through this approach and has created predefined domain-specific templates in areas including retail banking, commercial lending, mortgages, and supply chain management. Within account opening for a bank, HCL has achieved an ~80% reduction in AHT and a 40% reduction in headcount

  • In terms of bank automation, HCL has, for one major bank, reduced the absolute number of FTEs associated with card services by 48% – an effective 63% decrease once the accompanying increase in workload is taken into account. Elsewhere, for another bank, HCL has undertaken a digital transformation including implementation of Toscana©, resulting in a 46% reduction in the number of FTEs, the implementation of a single view of the customer, an 80% reduction in cycle time, and a reduction in the “rejection rate” from 12% to 4%.

]]>
<![CDATA[Fast Data: The Smart Will Get Faster... and the Fast Will Get Smarter]]>

 

Fast Data is the emerging hot topic of discussion for business leaders seeking to get ahead of the next wave of data utilization. But Fast Data isn't just an evolution of Big Data; it's a market force unto itself that's asking more of traditional and start-up vendors in both traditional DBMS and AI.

I spent a (surprisingly snowy) morning this week talking with AI and Big Data thought leaders at the Global Data Summit 2017 in Colorado. While there’s no shortage of topics to hold their current interest, none was a higher business priority than solving the challenge of managing Fast Data through the application of AI. The consensus is certainly that the organizations that can best address this challenge will also be those best positioned to compete and win overall. But how best to get their arms around this opportunity and move forward effectively?

First, it's important to distinguish between the challenges of leveraging Big Data and Fast Data. Big Data is generally data at rest; it's explored at (relative) leisure, and doesn’t change so quickly or accumulate so rapidly that offline analytics become impossible. AI has no shortage of applications in Big Data, but in that environment, it's more the ability of an AI platform to manage complexity and work at scale that offers value.

Fast Data, by contrast, accumulates quickly and can change substantively within the course of a day or even an hour. Think adtech here, or online gaming, or vendor pricing with commodity costs as an input; vast amounts of data need to be ingested, analyzed, and understood by the second in order to secure the right ad placement at peak value, or to manage complex MMO games, or to ensure that pricing continuously secures competitive advantage at acceptable margin.

Fast Data becomes Big Data quickly, just by nature of its accumulation rate, and while it's often valuable to query the Big Data that Fast Data becomes to understand trends and cyclicality, Fast Data will always yield its peak value at the millisecond level. It’s the freshest layer that offers the most insight. The Big Data value proposition to retailers, for instance, is looking for cyclicality of demand and regional demand preferences over time; the Fast Data value proposition is understanding the products a shopper is looking at right now and making real-time recommendations for, say, footwear and accessories to match. AI can accomplish both tasks, but often needs to be set about different tasks – with different priorities and ground truths – to succeed. The implications for every phase of the organizational data analysis and workflow management platform – from MDM and data hygiene to machine learning and AI application – are immense.
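
To make the ‘freshest layer’ idea concrete, here is a minimal Python sketch (the class name and horizon are hypothetical) of a sliding window over a shopper’s click stream, the kind of structure a real-time recommendation step could consult:

    # Sketch of the 'freshest layer' idea: a sliding window over a shopper's
    # click stream that a real-time recommendation step could consult.
    import time
    from collections import deque

    class FreshViewWindow:
        def __init__(self, horizon_seconds=60):
            self.horizon = horizon_seconds
            self.events = deque()  # (timestamp, product_id)

        def observe(self, product_id):
            now = time.time()
            self.events.append((now, product_id))
            # Expire anything older than the horizon; only fresh views matter.
            while self.events and self.events[0][0] < now - self.horizon:
                self.events.popleft()

        def current_interest(self):
            return {pid for _, pid in self.events}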

In response, expect to start seeing considerably more focus from major AI platform vendors not just on depth of understanding by their products, but speed of reaction as well. Organizations big and small in the traditional data sector, from Oracle to VoltDB, are developing and marketing smarter Fast Data solutions, while AI leaders – like IBM and Wipro – are building capabilities for faster data management within their AI platforms.

Servicing this rapidly-growing need for Fast Data management will be a convergent effort: the smart will get faster… and the fast will get smarter.

 

Dave Mayer is a Senior Analyst responsible for NelsonHall's RPA & Cognitive Services research program, covering the areas of robotic process automation (RPA), artificial intelligence, cognitive business, and machine learning. He is currently working on a major global project evaluating RPA & AI technology. To find out more about the project, contact Dave Mayer or Guy Saunders.

]]>
<![CDATA[Adventures in Blockchain: Virtusa Focuses on Security & Privacy Issues in Permission-Based Environments]]>

Most Blockchain use cases have focused on reducing the need for (and cost of) infrastructure. And in Virtusa’s case, the vendor has focused on engagements where it can combine Blockchain technology with other emerging technologies such as QR codes, IoT, and encryption algorithms to deliver enhanced security and cost savings for environments lacking adequate supporting infrastructure. Here I take a look at Virtusa’s Blockchain initiatives.

Virtusa’s Blockchain services & use cases

Virtusa has been pursuing Blockchain for 3 years, and it has a group of 20+ engineers working on Blockchain initiatives, with 35 additional engineers in training in Hyderabad, who will be fully deployed by Q4 2017. Virtusa provides consulting and pilot services including:

  • Strategy and design:
    • Workshops for awareness and adoption
    • Use case creation and validation
    • Advisory on technology and vendors
    • Research on 400+ Blockchain startups  
  • Sandbox:
    • Cloud-hosted experimentation
    • ~7 Blockchain variants, including: R3 Corda, Ethereum, Multichain, Chain.com, Hyperledger, Quorum, and VP Blockchain
    • APIs to key platforms (primarily CRM and ERP)
    • Testing capabilities with very large datasets
  • Accelerators:
    • 100+ pre-compiled use cases across multiple industries
    • Solution accelerators (financial industry examples listed only): payments, credit monitoring, check fraud, trade finance, OTC derivatives, interest rate swaps, and covenant management
  • Advanced:
    • Security (keyless cryptography, and homomorphic & format-preserving encryption)
    • Industry steering council participation in ISO TC-307 Blockchain and distributed ledger technology

To date, Virtusa has worked on ~100 use cases with clients, of which ~50 have been moved into pilots and remain active engagements. Of the active use cases, ~40 are in the financial services industry. Currently, Virtusa is working on three key use cases to develop them into operational deployments. The top three business patterns that establish strong use cases are:

  • Provenance: check books or other financial instruments can be validated as authentic from a chain of ownership. Example: use of QR codes on checks for retail bank customers to reduce check fraud (see the sketch after this list)
  • Chain of custody: KYC/AML checks on transactions moving through an ecosystem. Example: rather than conducting comprehensive KYC/AML checks each time updates are required, banks can conduct checks from the last verified point in the Blockchain
  • Permission-based sharing of information: third parties can now share information securely based on homomorphic encryption (low cost) and format preserving encryption (used extensively today in the cards processing business) and benefit from the blockchain enforcement of rules to remove the need for a trusted third party. Example: use of IoT to log usage of farm equipment leased to multiple parties. 
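
As a generic illustration of the provenance pattern in the first bullet above (not Virtusa’s implementation), each ownership transfer in the sketch below links to the prior record, so a check’s chain of ownership can be validated end-to-end:

    # Generic provenance sketch for the check example above (not Virtusa's
    # implementation): each transfer links to the prior record.
    import hashlib
    import json

    def add_transfer(chain, check_id, new_owner):
        prev_hash = chain[-1]["hash"] if chain else "GENESIS"
        record = {"check": check_id, "owner": new_owner, "prev": prev_hash}
        record["hash"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
        chain.append(record)

    def chain_is_valid(chain):
        for i, record in enumerate(chain):
            expected_prev = chain[i - 1]["hash"] if i else "GENESIS"
            body = {k: record[k] for k in ("check", "owner", "prev")}
            if record["prev"] != expected_prev:
                return False
            if record["hash"] != hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest():
                return False
        return True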

Virtusa is moving all three of these use cases into production with its clients over the next ten months. It believes that its most differentiated offering is the permission-based sharing of information, due to its access to very low-cost, strong encryption technology. All three of these engagements are based in APAC/Middle East markets. Deployment of operational Blockchain environments in the mature markets of the U.S. and Europe is less likely in the short run due to strong existing infrastructure and the need to establish industry standards. However, changes in the mature markets, such as Brexit in Europe, and recent announcements of production-ready releases (e.g. Hyperledger Fabric version 1) are likely to drive adoption, because those changes will either require costly new infrastructure or a group of partners sharing a Blockchain environment.

Conclusions

The case for Blockchain operations is developing fastest where institutions operate with little infrastructure (physical or institutional) and services vendors can combine multiple technologies beyond Blockchain itself, to deliver the functionality of a mature marketplace without the industry-wide investment required to create a mature marketplace. This favors business cases where banks operate in an emerging market or where a new bank product is getting deployed which does not have competitors in the market today.

By developing a set of use cases for Blockchain in banking, Virtusa can support clients who differentiate themselves by unique product offerings. Virtusa can help those clients reduce their time to market, which will provide the longest time in market with a product which has no close competitive offerings. By adapting the mix of technology products it combines with Blockchain technologies, Virtusa will also benefit from time in market with few or no close competitive service offerings.   

]]>
<![CDATA[Adventures in Blockchain: TCS Focuses on the Building Blocks of a Successful Blockchain Ecosystem]]>

Many Blockchain services vendors have observed that up to 75% of proofs of concept for Blockchain fail to meet their goals. Analysis of the drivers of such widespread failure indicates that the initial use case was flawed because it was constructed to justify experimentation rather than solve business challenges. TCS, however, has focused its Blockchain efforts on developing use cases that can drive successful adoption and, more importantly, define the ecosystem for successfully meeting a use case’s key performance criteria. In this latest blog on current Blockchain activities in the financial services industry, I look at TCS’ approach to Blockchain in banking.

TCS’ Blockchain initiatives

TCS has been pursuing Blockchain for 3 years, and it has a group of 100+ engineers working on Blockchain initiatives across all industries. In banking, TCS’ Blockchain group is based in Chennai. TCS’ primary goal is to develop effective Blockchain use cases for the banking industry, and to date has successfully developed 150+ uses cases across all industries.

The use cases for banks segment into key areas of interest for banks:

  • Trade settlement (securities, FX, payments, etc.)
  • KYC/AML
  • Trade services (import/export). 

The largest demand for Blockchain services so far is for KYC/AML services. The key drivers for these areas of interest are processes where one of the following conditions apply:

  • Process requires frequent document re-verification: KYC requires re-verification periodically, and for each new product sale. Trade finance requires re-verification as the documents pass along a chain of activities, with multiple counterparties
  • Timelines and chain of activities must be attested: dispute resolution in trade settlement and trade services requires the ability to trace back to the point in time where a discrepancy in the interpretation of activity occurred.

The processes are primarily closed-loop transactions.

TCS offers consulting, ITS, and process audit services for Blockchain activities. In financial services, TCS has Blockchain initiatives in retail banking, investment banking, capital markets, and commercial lending. While TCS has not yet completed the implementation of a Blockchain project in operations delivery, it has done several POCs for customers in payments, securities settlement, trade finance, “know your customer” (KYC), and supply chain finance. It is currently involved in a live Blockchain operations environment for a large global bank (for Blockchain support of payments), providing audit support for the project. This allows TCS to enhance its understanding of what does and does not work in a live Blockchain environment, of which there are few, and none at scale, at present.

TCS works with major Blockchain technology vendors, including Ericsson-Guardtime, IBM, and Microsoft, and with associations (e.g., the MIT Media Lab Digital Currency Initiative), as well as through its COIN partners. It has a proprietary Blockchain solution, which it deploys as required in its POCs but does not sell as a standalone solution.

Conclusions

Global financial institutions are heavily experimenting with Blockchain to understand how and where to use it in their business – or even better, how to use it to change their business model. However, our research shows 70% to 80% of Blockchain POCs fail to meet their initial business case. The biggest challenge in Blockchain is understanding what makes a good business case, and getting stakeholders to cooperate on adoption. The technology, despite its arcane and novel characteristics, is not the primary impediment to adoption.

TCS is focusing its Blockchain efforts on developing a granular understanding of how Blockchain works, and of when it succeeds in a business environment. This approach will create efficiency in Blockchain adoption for financial institutions, because they will waste less effort on “a solution in search of a problem” and spend more resources applying the right solution to business challenges. TCS is not there yet, but it is headed in the right direction.

]]>
<![CDATA[Amelia Enhances its Emotional, Contextual, and Process Intelligence to Outwit Chatbots]]>

IPSoft's Amelia

 

NelsonHall recently attended the IPSoft analyst event in New York, with a view to understanding the extent to which the company’s shift into customer service has succeeded. It immediately became clear that the company is accelerating the major shift in focus of recent years from autonomics to cognitive agents. While IPSoft began in autonomics in support of IT infrastructure management, and many Amelia implementations are still in support of IT service activities, IPSoft now clearly has its sights on the major prize in the customer service (and sales) world, positioning its Amelia cognitive agent as “The Most Human AI”, with a much greater range of emotional, contextual, and process “intelligence” than the perceived competition in the form of chatbots.

Key Role for AI is Human Augmentation Not Human Replacement

IPSoft was at pains to point out that AI is the future, and that human augmentation is a major trend that will separate the winners from the losers in the corporate world. In demonstrating the point, Nick Bostrom from the Future of Humanity Institute at Oxford University discussed the results of a survey of ~300 AI experts that sought to identify the point at which high-level machine intelligence (the point at which unaided machines can accomplish any task better and more cheaply than human workers) would be achieved. This survey concluded that there is a 50% probability that this will be achieved within 50 years, and a 25% probability that it will happen within 20-25 years.

On a more conciliatory note, Dr. Michael Chui suggested that AI is essential to maintaining living standards, and that the key role for AI for the foreseeable future is human augmentation rather than human replacement.

According to McKinsey Global Institute (MGI), “about half the activities people are paid almost $15tn in wages to do in the global economy have the potential to be automated by adapting currently demonstrated technology. While less than 5% of all occupations can be automated entirely, about 60% of all occupations have at least 30% of constituent activities that could be automated. More occupations will change than can be automated away.”

McKinsey argues that automation is essential to maintaining GDP growth and standards of living, estimating that of the 3.5% per annum GDP growth achieved on average over the past 50 years, half was derived from productivity growth and half from growth in employment. Assuming that growth in employment largely ceases as populations age over the next 50 years, an approximate doubling in automation-driven productivity growth will be required to maintain historical levels of GDP growth.
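
The arithmetic behind that doubling is simple enough to spell out; the figures below are the round numbers quoted above, not new estimates:

    # ~3.5% annual GDP growth over the past 50 years, split roughly half/half
    # between productivity growth and employment growth.
    gdp_growth = 3.5
    productivity = gdp_growth / 2   # ~1.75% from productivity
    employment = gdp_growth / 2     # ~1.75% from employment growth

    # If employment growth falls to ~zero as populations age, productivity
    # alone must supply the full 3.5% -- roughly double its historical rate.
    required_productivity = gdp_growth
    print(required_productivity / productivity)  # -> 2.0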

Providing Empathetic Conversations Rather than Transactions

The guiding principles behind Amelia are to provide conversations rather than transactions, to understand customer intent, and to deliver a to-the-point and empathetic response. Overall, IPSoft is looking to position Amelia as a cognitive agent at the intersection of systems of engagement, systems of record, and data platforms, incorporating:

  • Conversational intelligence, encompassing intelligent understanding, empathetic response, & multi-channel handling. IPSoft has recently added additional machine learning and deep learning capabilities
  • Advanced analytics, encompassing performance analytics, decision intelligence, and data visualization
  • Smart workflow, encompassing dynamic process execution and integration hub, with UI integration (planned)
  • Experience management, to ensure contextual awareness
  • Supervised automated learning, encompassing automated training, observational learning, and industry solutions.

For example, it is possible to upload documents and SOPs in support of automated training, and Amelia will advise on the best machine learning algorithms to be used. Using supervised learning, Amelia submits what it has learned to the SME for approval, but only uses this new knowledge once approved by the SME, to ensure high levels of compliance. Amelia also learns from escalations to agents, and automated consolidation of these new learnings will be built into the next Amelia release.
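
A minimal sketch of such an approval gate (hypothetical names; the actual Amelia workflow is proprietary) would hold candidate learnings in a pending state until an SME signs off:

    from dataclasses import dataclass, field

    @dataclass
    class LearningStore:
        pending: list = field(default_factory=list)    # learned, not yet usable
        approved: list = field(default_factory=list)   # usable in live conversations

        def submit(self, learning: str):
            """The agent proposes new knowledge, e.g. from an escalated chat."""
            self.pending.append(learning)

        def approve(self, learning: str, sme: str):
            """Only SME-approved knowledge goes live, preserving compliance."""
            self.pending.remove(learning)
            self.approved.append((learning, sme))

    store = LearningStore()
    store.submit("mortgage-rate questions -> route to rates FAQ flow")
    store.approve("mortgage-rate questions -> route to rates FAQ flow", sme="ops_lead")
    print(store.approved)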

IPSoft is continuing to develop an even greater range of algorithms by partnering with universities. These algorithms remain usable across all organizations, with the introduction of customer data leading to the development of client-specific customer service models.

Easier to Teach Amelia Banking Processes than a New Language

An excellent example of the use of Amelia was discussed by a Nordic bank. The bank initially applied Amelia to its internal service desk, starting with a pilot in support of 600 employees in 2016 covering activities such as unlocking accounts and password guidance, before rolling it out to 15,000 employees in Spring 2017. This was followed by the application of Amelia to customer service, with a silent launch taking place in December 2016 and Amelia being rolled out in support of branch office information, booking meetings, banking terms, products and services, mobile bank IDs, and account opening. The bank had considered using offshore personnel, but chose Amelia based on its potential to roll out in a new country within a month and its 24x7 availability. Amelia currently serves ~300 customers per day over chat.

The bank was open with its customers about its use of AI, indicating on its website that its new chat stream was based on the use of “digital employees with artificial intelligence”. The bank found that while customers in general seemed pleased to interact via chat, the use of AI unexpectedly led to totally new customer behaviors, both good and bad, with some people who hated the idea of robots acting much more aggressively. On the other hand, Amelia was highly successful with individuals who were reluctant to phone the bank or visit a branch.

Key lessons learnt by the bank included:

  • The high level of acceptance of Amelia by customer service personnel, who regarded Amelia as taking away boring “Monday-morning” tasks and allowing them to focus on more meaningful conversations with customers, rather than as threatening their livelihoods
  • It was easier than expected to teach Amelia the banking processes, but harder than expected to convert to a new language such as Swedish, with the bank perceiving that each language is essentially a different way of thinking. Amelia was perceived to be optimized for English and converting Amelia to Swedish took three months, while training Amelia on the simple banking processes took a matter of days.

Amelia is now successfully handling ~90% of requests, though ~30% of these are intentionally routed to a live agent, for example for deeper mortgage discussions.

Amelia Avatar Remains Key to IPSoft Branding

While the blonde, blue-eyed Amelia avatar is likely to be highly acceptable in Sweden, this stereotype could be less acceptable elsewhere, and the tradition within contact centers is to match the profile of the agent to that of the customer. While Amelia is clearly designed to be highly empathetic in terms of language, it may be more discordant in terms of appearance.

However, the appearance of the Amelia avatar remains key to IPSoft’s branding. While IPSoft is redesigning the Amelia avatar to capture greater hand and arm movements for greater empathy, and some adaptation of clothing and hairstyle is permitted to reflect brand values, IPSoft is not currently prepared to allow fundamental changes to gender or skin color, or to allow multiple avatars to be used to develop empathy with individual customers. This might need to change as IPSoft becomes more confident of its brand and the market for cognitive agents matures.

Partnering with Consultancies to Develop Horizontal & Vertical IP

At present, Amelia is largely vanilla in flavor, and the bulk of implementations are being conducted by IPSoft itself. IPSoft estimates that Amelia has been used in 50 instances, covering ~60% of customer requests with ~90% accuracy. Overall, IPSoft estimates that it takes 6 months to help an organization build an Amelia competence in-house, 9 days to go live, and 6-9 months to scale up from an initial implementation.

Accordingly, it is key to the future of IPSoft that Amelia can develop a wide range of semi-productized horizontal and vertical use cases and that partners can be trained and leveraged to handle the bulk of implementations.

At present, IPSoft estimates that its revenues are 70:30 services:product, with product revenues growing faster than services revenues. While IPSoft is currently carrying out the majority (~60%) of Amelia implementations itself, it is increasingly looking to partner with the major consultancies such as Accenture, Deloitte, PwC, and KPMG to build baseline Amelia products around horizontal and industry-specific processes, for example working with Deloitte in HR. In addition, IPSoft has partnered with NTT in Japan, with NTT offering a Japanese-language, cloud-based virtual assistant, COTOHA.

IPSoft’s pricing mechanisms consist of:

  • A fixed price per PoC development
  • Production environments: a charge for implementation, followed by a price per transaction.

While Amelia is available in both cloud and onsite, IPSoft perceives that the major opportunities for its partners lie in highly integrated implementations behind the client firewall.

In conclusion, IPSoft is now making considerable investments in developing Amelia with the aim of becoming the leading cognitive agent for customer service, and its high emphasis on “conversations and empathetic responses” differentiates the software from more transactionally-focused cognitive software.

Nonetheless, it is early days for Amelia. The company is beginning to increase its emphasis on third-party partnerships, which will be key to scaling adoption of the software. However, these are currently focused on the major consultancies. This is fine while cognitive agents are in the first throes of adoption, but downstream, IPSoft is likely to need the support of, and partnerships with, the major contact center outsourcers, who currently control around a third of customer service spend and who are influential in assisting organizations with their digital customer service transformations.

]]>
<![CDATA[Adventures in Blockchain: Wipro Focuses on Rapid Innovation with Ethereum & Hyperledger]]>

This is the second in a series of blogs on current activities, use cases, POCs, and pilots with Blockchain in the financial services industry. In this one, I look at some of what Wipro is doing to support banks and financial services companies in deploying Blockchain solutions.

Blockchain technology & services

Wipro has been active for the past three years in offering Blockchain consulting and development. During that time, it has worked primarily with Ethereum and Hyperledger to develop its Blockchain solutions. Wipro has decided to remain agnostic about technology partners because of the rapid pace of development and innovation in Blockchain technology, but it does have partnerships for cloud-delivered Blockchain services. Current partnerships for cloud-delivered Blockchain services include:

  • IBM Bluemix
  • Microsoft Azure DevTest Labs
  • AWS.

In Blockchain, Wipro provides the following sets of services:

  • Advisory: engagement with thought leaders and CXOs to ideate strategies, plan roadmaps, and build use cases
  • Technology: building POCs, pilots and production solutions with clients
  • Infrastructure: BLaaS – Blockchain Lab-as-a-Service (which allows clients’ internal teams to experiment and co-develop with Blockchain technology)
  • Blockchain network services: to build Blockchain networks.

Use cases & POCs

Wipro has developed use cases and POCs across industries. In banking and financial services (excluding its insurance use cases), Wipro has focused its efforts on five critical use cases to date:

  • Banking:
    • Skip trace
    • Cross-border payments
    • Trade finance
  • Capital markets:
    • Triparty collateral management
    • Delivery-versus-Payment (DVP)

Each of these use cases has active POCs deployed on Ethereum and Hyperledger. Blockchain POCs can also draw on additional technologies; for example, skip trace could be deployed in concert with the Wipro HOLMES Artificial Intelligence Platform, using predictive analytics to suggest where the skipped person may have gone.

Business executives at clients are the primary buyers of Blockchain engagements. They want POCs that provide flexibility, quick deployment, and scalability. To help achieve these goals, Wipro has been engaged in the following initiatives:

  • Flexibility and quick deployment: Wipro has been developing a set of use case frameworks that identify what works, including required technical tools, business cases, and product ecosystems. These frameworks of best practices codify learnings, as well as challenges to rapid, effective deployment of Blockchain technology
  • Scalability: Wipro was a launch partner for the Enterprise Ethereum Alliance. In that capacity, Wipro has done extensive scalability testing on various variants of Blockchain technology, including Ethereum and Hyperledger, giving it the expertise to understand the possibilities and requirements for scaling a Blockchain solution to production-grade, enterprise-level deployments.

Also, Wipro actively promotes and expands its Blockchain partnerships to broaden its capabilities in this rapidly developing ecosystem. 

Summary

The key to successful business use of Blockchain technology is the size of the network using the Blockchain. Network size is driven by adoption, which is in turn driven by the cost incurred and the potential value received. Technology services vendors must work with their clients on building that ecosystem for it to succeed. Vendors will have the biggest impact on cost reduction by reducing ideation and buildout costs. However, insight into how the technology interacts with business operations will provide precision on how value will be delivered. For prospective network participants, value delivered is an even more compelling factor than cost in their decision process.

It will take several years for successful Blockchain ecosystems to reach large-scale operational adoption. The primary driver of successful adoption will be the development of large, effective ecosystems of participants. Technology services vendors have a large part to play in identifying a realistic roadmap and supporting the realization of that journey.

]]>
<![CDATA[Adventures in Blockchain: Genpact Tackles O2C]]>

This is the first in an occasional series of blog articles over the next year on Blockchain initiatives related to the financial services industry. Blockchain is an emerging technology for which there are as yet no operational deployments, with initiatives still primarily at the consulting and design stage. Pilots have been deployed, but they remain relatively rare despite the rapid growth in experimentation and POC trials.

This article focuses on Genpact’s Blockchain initiatives, leveraging its extensive F&A operations experience to develop Blockchain capabilities that can improve financial outcomes, customer experience and operations costs. Genpact has decided to focus on order-to-cash (O2C) processing to begin with because it has the following characteristics:

  • Multi-party process, where coordination across parties on technology and process structure is currently lacking, or is customized on a bilateral rather than a universal basis
  • Lack of consistent data structure and data management frameworks between parties
  • Need to drive customer experience by providing operational transparency
  • Impact of the process on financial metrics like cash flow and bottom line profits for a company.

Genpact believes any successful solution for O2C will have:

  • Blockchain: a distributed ledger, which will require counterparties to adopt a common taxonomy and technology platform. To encourage common adoption, it is necessary to minimize the cost and complexity of deployment and maintenance
  • Smart contracts: computer protocols that are triggered by specific events and programmed to execute a sequence of actions. Smart contracts aim to provide security and to reduce transaction costs through automation (a toy illustration follows this list).
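
As a toy illustration of “triggered by specific events, executing a sequence of actions”, the sketch below simulates an event-triggered payment release in Python rather than a real contract language such as Hyperledger chaincode; all names and conditions are illustrative:

    class ToySmartContract:
        """Event-triggered payment release for an O2C shipment, simulated off-chain."""

        def __init__(self, buyer: str, supplier: str, amount: int):
            self.buyer, self.supplier, self.amount = buyer, supplier, amount
            self.events = set()

        def record(self, event: str):
            self.events.add(event)
            # Payment releases only once both trigger conditions are attested.
            if {"goods_delivered", "invoice_matched"} <= self.events:
                print(f"Release {self.amount} from {self.buyer} to {self.supplier}")

    contract = ToySmartContract("manufacturer", "supplier", 10_000)
    contract.record("invoice_matched")
    contract.record("goods_delivered")  # second event triggers the action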

Genpact has started by developing the solution for the manufacturing industry, given its experience and client base in this industry. The manufacturing industry is looking to reduce the high costs of O2C processing. Even so, adoption challenges will have to be addressed until use of Blockchain becomes an industry standard: successful adoption requires both the buyer (manufacturer) and the vendor (supplier) to adopt a Blockchain platform. To facilitate adoption, Genpact’s approach is to segment the client’s customers to identify the few large customers who would be most willing to adopt this transformative solution given its benefits, and for whom it will deliver a significant percentage of the overall benefits.

Banks’ involvement in the payments part of the O2C cycle would automate the end-to-end process value chain. Adoption by the banks should be easier, given their high levels of interest in Blockchain initiatives: banks stand to benefit from improved customer experience in their payments services and from reduced risk from disputed payments.

Genpact is developing the solution on the Hyperledger Blockchain platform, an open source collaboration hosted by the Linux Foundation. Design work is done primarily at client sites in collaboration with client teams, with solution development done by Genpact teams in Palo Alto, CA and Bangalore, India; development teams work virtually with client teams when joint development takes place. Genpact is currently working with multiple clients who want to be early adopters, developing different POCs with each.

Blockchain adoption is growing very rapidly for business case development and POC trials. Full operational deployment remains a future aspiration for all vendors of this technology and supporting services. Domain expertise and opportunity prioritization are critical to getting Blockchain initiatives off the ground. Genpact has developed a strategy to go after a highly focused target market with a high value proposition for its Blockchain initiatives. Next, it will need to convert early experiments into compelling operational business cases to drive adoption and a successful business. 

]]>
<![CDATA[NelsonHall Takes Transformational Approach to the Role of the CFO]]> The nature of finance & accounting is changing rapidly with the advent of new technologies. RPA has already shown its potential to reduce transactional F&A costs by 20% while improving quality of service, and the application of cognitive technologies over the next few years will multiply this impact several times over. And, with the advent of machine learning, processes will increasingly become knowledge-based and self-learning.

And, at the same time that basic accounting processes are being automated, so is financial reporting and analytics. Here, natural language generation coupled with analytics is leading to automatic reporting and interpretation of results while predictive and prescriptive analytics are increasingly identifying appropriate company behavior.

Finally, as the operational accounting and reporting processes become automated, the outsourcing vendors are increasingly moving upstream into financial planning and analysis.

So, the nature of finance & accounting is undergoing dramatic transformation. But can the same be said for the role of the CFO? So far this seems to be relatively unchanged, and we believe the time is right for a corresponding transformation.

Hence, NelsonHall tasked its HR department with identifying a new CFO for the modern age. In the spirit of design thinking and achieving 10X impact, we thought, “Why not change the role, so that involvement with the CFO, rather than increasing the stress of all concerned (as has often been our experience, due to the usual requests for budget cuts and increased performance), actually lowered the stress of all concerned and enhanced the mental health of the organization?”. This would be a truly transformational outcome.

So we set out with a charter to change the role of the CFO from stress-inducing to stress-reducing and to measure the falls in blood pressure of personnel after encounters with the CFO. And while we don’t yet have definitive quantitative results, I think we can confidently assert that this approach is working in the initial pilots.

We decided to look beyond the CFO stereotype of someone with traditional finance skills and a laser focus on analysis, reporting and control. In fact (and this might be a useful tip for executive recruitment agencies), we used the latest thinking in talent acquisition and “consumerized” our hiring process. The result was a generation Z hire (born after 2000) who displays none of the uptight characteristics normally associated with a CFO. We believe he’s a real cool cat.

His background is unknown (background checking was something of an issue), but then why adopt traditional hiring techniques when you are seeking to be transformational? Having said that, he is street-wise and knows what he wants out of his career and life in general.

In terms of daily routine, he turns up for work at about 07.30 in the morning. He commences his duties in corporate stress reduction by welcoming each employee with a purr, and following breakfast (we presume his second breakfast) he purrs even more. Purring actually has healing properties – it makes the human heart rate slow down, it lowers blood pressure and stress, and it boosts the immune system, enabling humans to better cope with day-to-day tasks. So, a truly transformational impact in the role of the CFO.

We thought you might be interested to find out more about our CFO (Chief Feline Officer) Leo’s typical day in the office, so we have outlined this below as an example to all organizations considering taking this approach:

 

07.30: Leo arrives at work, bright eyed and bushy tailed, looking forward to his second breakfast.

 

07.31: “Let me in please……I want my second breakfast.” He’s keen.

 

07.35: “That’s better……I feel I can start the day how I mean to go on.”

 

07.45: “Now what shall I do?......That’s a nice photograph of me……I’m quite handsome, aren’t I?” Who said CFOs were posers?

 

08:00: Zzzzzz

 

11.30: “Is it lunch time yet?......Oh well, I might go back to sleep!”

 

14.00: Zzzzzzzzzzzzzzzz

 

15:00: “Not impressed with the task list for today……anyway, we’ve gone digital, so I don’t need this paper, but it’s great to sit on!  Maybe I’ll get some more sleep.”

 

17:30: Zzzzzzzzzzzzzzzzzzzzzz

 

18:30: Time to leave, but this guy is a workaholic. “The only way you're going to get me out of the door is to feed me some more cat biscuits!”

 

We hope that this brief case study shows you how you might adapt the role of the CFO within your own organization to achieve a truly transformational impact on corporate well-being.

]]>
<![CDATA[Application of RPA & AI Technologies in Learning BPS]]>

 

My colleague, Gary Bragar, recently discussed RPA and AI initiatives in HR, including payroll, recruiting, and learning. Within learning BPS, the majority of RPA investments have been made at a basic level within learning administration, specifically around training scheduling. For example, it previously took ~40 FTEs to manage the entire scheduling process for ~1k classrooms, including identifying classrooms based on availability, identifying onsite facilitators for training days, sending notifications, etc. Through RPA, the same workload can be completed in 15 minutes.  

Vendors such as Raytheon Professional Services (RPS) and IBM, however, have applied RPA and AI in more advanced ways throughout the learning lifecycle. IBM, for example, is currently expanding RPA to the design and development of learning content via its Cognitive Content Collator (C3). IBM is leveraging Watson to interpret structured and unstructured data, drastically reducing the number of man-hours spent annually on tagging and chunking content and then matching it to curricula, competences, and goals. Specifically, it takes ~50k man-hours to tag, chunk, curate, and map structured courses for ~10k hours of learning content; with IBM’s C3, these activities are completed in 55 hours.
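
C3 itself is proprietary, but the shape of the tag-and-chunk step can be sketched naively in Python, with keyword matching standing in for Watson’s NLP (the tag-to-curriculum mapping below is invented for illustration):

    # Split content into paragraph "chunks", tag each chunk by keyword, and
    # map tags to curricula -- a crude stand-in for Watson-driven curation.
    CURRICULUM_TAGS = {
        "compliance": "Regulatory onboarding track",
        "safety": "Plant safety track",
    }

    def chunk_and_tag(document: str):
        for chunk in document.split("\n\n"):
            tags = [t for t in CURRICULUM_TAGS if t in chunk.lower()]
            yield {"chunk": chunk[:60],
                   "curricula": [CURRICULUM_TAGS[t] for t in tags]}

    doc = "All staff must complete compliance training.\n\nSafety drills run monthly."
    for item in chunk_and_tag(doc):
        print(item)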

With respect to AI and cognitive, IBM has launched ‘Personalized Learning,’ which offers a consumer-grade experience for learners that provides recommendations to employees based on job role, business group, skill set, and personal learning history to encourage continuous employee development and skill growth. The experience includes ‘content channels’ that support a variety of needs and interests to facilitate simpler browsing, as well as a five-star rating system, and will include virtual job coaches that pull content for an individual to help them develop certain skills.

While organizational interest in RPA and AI technologies is high, overall adoption rates for these technologies in learning BPS have been low for two reasons. First, RPA requires investment by organizations, which is often problematic since a company’s learning budget is typically small. In addition, RPA requires that an organization expose its technology and data to the service vendor, which it is often hesitant to do, since learning technology relationships are often separated from service relationships.

Current adopters of RPA in learning BPS tend to be from heavily regulated industries, including financial services, healthcare/pharma/life sciences, oil and gas, and automobile manufacturing. These organizations are realizing a significant reduction in training resources, freeing up more time for value-added activities.

Over the next year, adoption rates for RPA within learning BPS will increase, though RPA will still be applied mainly to learning administration services. To be successful, vendors will not only have to demonstrate the business case, expected ROI, and previous successful deployments of RPA, but will also need to have a consultative partnership in place within the client organization.

]]>
<![CDATA[NelsonHall’s Blogging Year: A Selection From 2016]]> NelsonHall analysts are regular bloggers, and while you might be familiar with a number of them, you might not be aware of the full range of topics that NelsonHall analysts blog about. We thought it was an opportune time to look back and pick out just a few of the many blog articles produced last year from different corners of NelsonHall research to give readers a flavour of the scope of our coverage.

 

 

We continue to keep abreast of unfolding developments in RPA and cognitive intelligence. In October and November, John Willmott wrote a sequence of three handy blogs on RPA Operating Model Guidelines:

Turning to Andy Efstathiou and some of his musings on FinTech and RPA developments in the Banking sector:

Regarding developments in Customer Management Services:

Fiona Cox and Panos Filippides have been keeping an eye on BPS in the Insurance sector. Two of their blogs looked at imminent vendor M&A activity:

Blogs in the HR Outsourcing domain have included innovation in RPO, and in employee engagement, learning at the beginning of the employee life cycle, talent advisory and analytics services, employer branding, improving the candidate experience, benefits administration and global benefits coverage, cloud-based HR BPS, and more! Here are a couple on payroll services, so often an overlooked topic, that you might have missed:

Dominique Raviart continues to keep a close eye on developments in Software Testing Services. For example:

Dominique also keeps abreast of unfolding developments in the IT Services vendor landscape. For example, in November he wrote about Dell Services: the Glue for "One NTT DATA" In North America.

Staying with IT Services, David McIntire:

Meanwhile, Mike Smart has been blogging about IoT. Here are two of his earlier ones:

And Rachael Stormonth continues to consider the significance of unfolding developments in the larger and more interesting IT Services and BPS vendors:

That’s just a small sample of the wide-ranging themes and hot topics covered by NelsonHall blog articles in our trademark fact-based, highly insightful style.

Keep up with the latest blogs from these and other NelsonHall analysts throughout 2017 here, and sign up to receive blog and other alerts by topic area, or update your preferences, here.

]]>
<![CDATA[Swiss Post Solutions – Applying Intelligent Automation to Move from Physical Document Management to Digital Workflow Management]]>

 

“We connect the physical and the digital worlds”

This current slogan for the Swiss Post Group is very relevant for its Swiss Post Solutions (SPS) subsidiary as it looks to fundamentally change the nature of its business from paper-based document management to digital workflow management.

SPS is one of the larger companies globally offering both inbound document management and outbound document production, as well as some multi-channel management services. To give a sense of scale, it scans over 1bn documents per annum on the inbound side, and prints around 1.2bn transactional documents (e.g. invoices, bank statements) per annum on the outbound side.

In the last two years, since Joerg Vollmer started as CEO, there has been a major shift in positioning at SPS, with less of an emphasis on the company’s legacy capabilities in inbound and outbound document management services, and a very strong focus on ‘Intelligent Automation’, combining existing SPS capabilities such as scanning, OCR, data capture & extraction with RPA and AI, to support clients in benefiting from the digital revolution in document management. Far from being threatened by digitization, SPS is approaching it as an opportunity to be grasped to help it evolve both its market positioning, and its portfolio. In particular, SPS is extending its capability along the value chain from standalone document management into digital workflow management and processing.

So much for the positioning – where is SPS on this journey?

The first evidence of a transformation at SPS is in its financials. Document processing and document output services are both based on activities that traditionally have had wafer thin margins. SPS’ own margin had been improving, but slowly. Since Vollmer’s arrival in January 2015, there has been a major improvement. Operating profit (the company’s primary financial target set by Swiss Post Group) has grown, with a 50% y/y increase in Q1-Q3 2016, and operating margin has increased from 1.8% in 2014 to 3.6% in Q1-Q3 2016.

And while Intelligent Automation is yet to make a major impact on company financials, SPS already has ~20 clients where it has applied RPA and AI technology, principally using UiPath. In addition, SPS has been an early adopter of Celaton inSTREAM software for unstructured documents, for example for email categorization and key data extraction, and is the first BPS provider to take Celaton outside the U.K., installing inSTREAM in its Document Processing Center and Shared Service Center Banking.

Continuing this journey, SPS is evaluating technologies such as IBM Watson to refine its ability to extract key information from large documents. It is also potentially interested in using platforms such as IPSoft's Amelia to enable users to request and find key information within a large range of documents.

Importance of Vietnam – offshore, plus continuous improvement and intelligent automation

NelsonHall recently attended an SPS event in Vietnam, where we had the opportunity to visit a flagship center in Ho Chi Minh City. While the major theme of the visit was how SPS has started to introduce RPA (primarily UiPath) to automate tasks like scanning, data capture, and data extraction, along with simple AI (Celaton) to extract data from unstructured documents, SPS’ Vietnam centers have initially made an impact in terms of labor arbitrage and continuous improvement.

Vietnam offers, inter alia, relatively low labor costs, a sizeable (nearly 55m) and literate labor force, good physical infrastructure in the major cities, government tax exemptions, and a stable macro-economic environment. SPS has just over 1,000 employees based in the country, which appears to have played an important part in the margin improvement. SPS shared some stats revealing that since 2014 the average number of documents processed per employee has increased by 124%, with slightly fewer people processing twice as many documents, fewer people deployed on basic data entry and document processing, and more on complex BPS tasks. So far, this has been achieved largely through continuous improvement rather than RPA/AI. Vietnam will clearly continue to be important to the future of SPS; plans for 2017 will nearly double the number of employees in this delivery hub.

As well as being a major offshore delivery capability for SPS, the Ho Chi Minh City location has a Network Operations Center which also operates as a customer experience center for showing clients and prospects a future vision of what they could achieve with digitized, closed-loop document management. The importance of this positioning should not be underestimated – it is central to SPS’ future as a BPS provider.

Extending beyond document management into wider workflow management

As well as looking to use RPA and AI to further enhance its management of unstructured and semi-structured data within inbound document management, SPS is also looking to use RPA and AI technologies to extend downstream into workflow management, focusing on back-office and industry-specific processes. For example:

  • Within F&A, in accounts payable activities, extending beyond invoice scanning and data capture/indexing into end-to-end invoice processing
  • In the insurance sector, using RPA to upload structured data on complex claims generated by SPS' OCR and document management software onto client systems. If all the necessary data and documents are present, bots start processing claims. In this case, the combined benefits from OCR and RPA plus an element of process redesign have already included a 60% reduction in manual processing and a 50% reduction in average processing time
  • In the Banking sector, supporting credit card collections by using a bot to check the current accounts of credit card debtors and block them to enable the credit card debt to be cleared.

Unsurprisingly, the verticals on which SPS is focusing are B2C sectors that have to manage very large volumes of documents, including insurance, retail banking, utilities and healthcare. Moving further into workflows requires closer industry and process knowledge than a traditional document management BPS provider might arguably need, and SPS has been working on this.

Building intelligent e-delivery platform for outbound document management

While a main focus of the event was on applying Intelligent Automation in inbound document processing, in SPS’ target sectors, outbound communications are also a key component in organizations’ digital transformation strategies.

Accordingly, SPS is developing capability to apply RPA and AI technologies to outbound document management, and is building an e-delivery platform to handle both print and digital output channels that formats data for each channel and sends printed and/or electronic communications as appropriate; current offerings include e-billing.

The Future – digitized closed-loop document management

In short, SPS has an ambitious roadmap to build a new digitized closed-loop document management capability incorporating third-party RPA and AI technologies, enabling it to move beyond traditional data capture and document output services, and become a more important player downstream in workflow management and processing. This means it can potentially capture components of work (that previously would otherwise have been heavily manual) in some middle- and back-office process areas that can now be automated. Moving into areas such as invoice and claims processing to address work that historically was not often accessible to document management BPS vendors is clearly a major opportunity for SPS.

So with these capabilities, will SPS move into full-scope industry-specific or back-office BPS? This is highly unlikely in the near term. While it can offer some level of multi-channel, SPS has no contact center capabilities, and voice remains a key component of industry-specific or back-office BPS. What is important for SPS in the next few years is that its clients regard its services as important to their digital initiatives, and that it can use automation to retain and extend the scope of existing document management engagements.

In summary, SPS is moving beyond traditional document management into wider digital workflow management and processing, and ultimately will provide digitized closed-loop document management services. This will support clients in their digital transformation agendas by applying a powerful combination of global delivery, plus continuous improvement, plus industry-specific process knowledge, plus intelligent automation.

Further details on SPS initiatives in RPA and AI

]]>
<![CDATA[RPA Operating Model Guidelines, Part 3: From Pilot to Production & Beyond – The Keys to Successful RPA Deployment]]>

As well as conducting extensive research into RPA and AI, NelsonHall is also chairing international conferences on the subject. In July, we chaired SSON’s second RPA in Shared Services Summit in Chicago, and we will also be chairing SSON’s third RPA in Shared Services Summit in Braselton, Georgia on 1st to 2nd December. In the build-up to the December event we thought we would share some of our insights into rolling out RPA. These topics were the subject of much discussion in Chicago earlier this year and are likely to be the subject of further in-depth discussion in Atlanta (Braselton).

This is the third and final blog in a series presenting key guidelines for organizations embarking on an RPA project, covering project preparation, implementation, support, and management. Here I take a look at the stages of deployment, from pilot development, through design & build, to production, maintenance, and support.

Piloting & deployment – it’s all about the business

When developing pilots, it’s important to recognize that the organization is addressing a business problem and not just applying a technology. Accordingly, organizations should consider how they can make a process better and achieve service delivery innovation, and not just service delivery automation, before they proceed. One framework that can be used in analyzing business processes is the ‘eliminate/simplify/standardize/automate’ approach.

While organizations will probably want to start with some simple and relatively modest RPA pilots to gain quick wins and acceptance of RPA within the organization (and we would recommend that they do so), it is important as the use of RPA matures to consider redesigning and standardizing processes to achieve maximum benefit. So begin with simple manual processes for quick wins, followed by more extensive mapping and reengineering of processes. Indeed, one approach often taken by organizations is to insert robotics and then use the metrics available from robotics to better understand how to reengineer processes downstream.

For early pilots, pick processes where the business unit is willing to take a ‘test & learn’ approach, and live with any need to refine the initial application of RPA. Some level of experimentation and calculated risk taking is OK – it helps the developers to improve their understanding of what can and cannot be achieved from the application of RPA. Also, quality increases over time, so in the medium term, organizations should increasingly consider batch automation rather than in-line automation, and think about tool suites and not just RPA.

Communication remains important throughout, and the organization should be extremely transparent about any pilots taking place. RPA does require a strong emphasis on, and appetite for, management of change. In terms of effectiveness of communication and clarifying the nature of RPA pilots and deployments, proof-of-concept videos generally work a lot better than the written or spoken word.

Bot testing is also important, and organizations have found that bot testing is different from waterfall UAT. Ideally, bots should be tested using a copy of the production environment.

Access to applications is potentially a major hurdle, with organizations needing to establish virtual employees as a new category of employee and give the appropriate virtual user ID access to all applications that require a user ID. The IT function must be extensively involved at this stage to agree access to applications and data. In particular, they may be concerned about the manner of storage of passwords. What’s more, IT personnel are likely to know about the vagaries of the IT landscape that are unknown to operations personnel!

Reporting, contingency & change management key to RPA production

At the production stage, it is important to implement an RPA reporting tool to:

  • Monitor how the bots are performing
  • Provide an executive dashboard with one version of the truth
  • Ensure high license utilization.

There is also a need for contingency planning to cover situations where something goes wrong and work is not allocated to bots. Contingency plans may include co-locating a bot support person or team with operations personnel.

The organization also needs to decide which part of the organization will be responsible for bot scheduling. This can either be overseen by the IT department or, more likely, the operations team can take responsibility for scheduling both personnel and bots. Overall bot monitoring, on the other hand, will probably be carried out centrally.

It remains common practice, though not universal, for RPA software vendors to charge on the basis of the number of bot licenses. Accordingly, since an individual bot license can be used in support of any of the processes automated by the organization, organizations may wish to centralize an element of their bot scheduling to optimize bot license utilization.
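
A minimal sketch of why centralized scheduling raises license utilization: any queued process claims whichever licensed bot frees up first, rather than idling on a siloed bot (the process names and durations below are invented):

    from collections import deque

    def schedule(processes, licenses=3):
        """Assign queued processes to whichever bot license frees up first."""
        queue, assignments = deque(processes), []
        free_at = [0.0] * licenses              # next-free time per license
        while queue:
            name, duration = queue.popleft()
            i = min(range(licenses), key=lambda j: free_at[j])
            assignments.append((name, f"bot-{i}", free_at[i]))
            free_at[i] += duration
        return assignments

    work = [("invoice_match", 2.0), ("kyc_refresh", 1.0),
            ("claims_check", 3.0), ("address_change", 0.5)]
    for name, bot, start in schedule(work):
        print(f"{name} -> {bot} at t={start}")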

At the production stage, liaison with application owners is very important to proactively identify changes in functionality that may impact bot operation, so that these can be addressed in advance. Maintenance is often centralized as part of the automation CoE.

Find out more at the SSON RPA in Shared Services Summit, 1st to 2nd December

NelsonHall will be chairing the third SSON RPA in Shared Services Summit in Braselton, Georgia on 1st to 2nd December, and will share further insights into RPA, including hand-outs of our RPA Operating Model Guidelines. You can register for the summit here.

Also, if you would like to find out more about NelsonHall’s extensive program of RPA & AI research, and get involved, please contact Guy Saunders.

Plus, buy-side organizations can get involved with NelsonHall’s Buyer Intelligence Group (BIG), a buy-side only community which runs regular webinars on RPA, with your buy-side peers sharing their RPA experiences. To find out more, contact Matthaus Davies.  

This is the final blog in a three-part series. See also:

Part 1: How to Lay the Foundations for a Successful RPA Project

Part 2: How to Identify High-Impact RPA Opportunities

]]>
<![CDATA[HCL: Applying RPA to Reduce Customer Touch Points in Closed Book Life Insurance]]> This is the third in a series of blogs looking at how business process outsourcing vendors are applying RPA in the insurance sector.

HCL provides closed book life insurance outsourcing services, and is currently engaged in RPA initiatives with three insurance clients.

In order to capture customer data in a smarter, more concise way, HCL is using ‘enhancers’ at the front end, providing users with intuitive screens based on the selected administrative task. These input forms aim to request only the minimum necessary data, with RPA now being used to transfer the data to the insurance system, ALPS, via a set of business rules.

For example, one RPA implementation can recognize the product type, policy ownership, values, and payment methods, and can prepare and produce correspondence for the customer. If all rules are met, it is then able to move on to payment on the due date. This has been done with a view to reducing the number of touchpoints and engaging with the customer only when required. Indeed, HCL is working with its clients to devise a more exhaustive set of risk-based rules to further reduce the extent to which information needs to be gathered from customers.
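
In spirit, the rule gate HCL describes might look like the following sketch (the field names and the ALPS hand-off shown are hypothetical, not HCL’s actual implementation):

    REQUIRED_FIELDS = {"product_type", "policy_owner", "value", "payment_method"}

    def send_to_alps(form: dict):
        print("Bot keys", form, "into ALPS")   # stand-in for the RPA transfer step

    def process_request(form: dict) -> str:
        """Apply the business rules before handing data to the policy system."""
        missing = REQUIRED_FIELDS - form.keys()
        if missing:
            # Engage the customer only when required: ask just for the gaps.
            return f"Request from customer: {sorted(missing)}"
        send_to_alps(form)
        return "Correspondence prepared; payment queued for due date"

    print(process_request({"product_type": "endowment", "policy_owner": "J. Smith"}))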

Seeking a 25% cost take-out in high volume activities

On average, 11k customer enquiries are received by one HCL insurance contact center every month, and these were traditionally handed off to the back office to be resolved. However, HCL is now using RPA and business rules to enable more efficient handling of enquiries/claims with limited user input, with the aim of creating capacity for an additional 4.4k customer queries per month to be handled within the contact center.

Overall, within its insurance operations, HCL is applying RPA-based business rules to ~10 core process areas that together amount to around 60% of typical day-to-day activity. These process areas include:

  • Payments out, including maturities, surrenders, and transfers
  • Client information, including change of address or account information
  • Illustrations.

These processes are typically carried out by an offshore team, and the aspiration is to reduce the effort required to complete each of them by ~25%. In addition, HCL expects that capturing customer data in this new way will shorten the end-to-end journey by between 5% and 10%.

One lesson learned has been the need for robust and compatible infrastructure, both internally (ensuring that all systems and platforms operate on the same network) and with respect to client infrastructure, e.g. ensuring that HCL is using the same versions of Microsoft software, such as Internet Explorer, as the client environment.

]]>
<![CDATA[RPA Operating Model Guidelines, Part 2: How to Identify High-Impact RPA Opportunities]]>

 

As well as conducting extensive research into RPA and AI, NelsonHall is also chairing international conferences on the subject. In July, we chaired SSON’s second RPA in Shared Services Summit in Chicago, and we will also be chairing SSON’s third RPA in Shared Services Summit in Braselton, Georgia on 1st to 2nd December. In the build-up to the December event we thought we would share some of our insights into rolling out RPA. These topics were the subject of much discussion in Chicago earlier this year and are likely to be the subject of further in-depth discussion in Atlanta (Braselton).

This is the second in a series of blogs presenting key guidelines for organizations embarking on an RPA project, covering project preparation, implementation, support, and management. Here I take a look at how to assess and prioritize RPA opportunities prior to project deployment.

Prioritize opportunities for quick wins

An enterprise level governance committee should be involved in the assessment and prioritization of RPA opportunities, and this committee needs to establish a formal framework for project/opportunity selection. For example, a simple but effective framework is to evaluate opportunities based on their:

  • Potential business impact, including RoI and FTE savings
  • Level of difficulty (preferably low)
  • Sponsorship level (preferably high).

The business units should be involved in the generation of ideas for the application of RPA, and these ideas can be compiled in a collaboration system such as SharePoint prior to their review by global process owners and subsequent evaluation by the assessment committee. The aim is to select projects that have a high business impact and high sponsorship level but are relatively easy to implement. As is usual when undertaking new initiatives or using new technologies, aim to get some quick wins and start at the easy end of the project spectrum.
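
One way to operationalize such a framework is a simple weighted score, as in the sketch below; the weights and the 1-5 scales are illustrative only, not a NelsonHall standard:

    def rpa_priority(business_impact: int, difficulty: int, sponsorship: int) -> float:
        """Score 1-5 inputs; favor high impact and sponsorship, low difficulty."""
        return 0.5 * business_impact + 0.3 * sponsorship + 0.2 * (6 - difficulty)

    ideas = {
        "invoice matching": rpa_priority(business_impact=5, difficulty=2, sponsorship=4),
        "complex claims":   rpa_priority(business_impact=4, difficulty=5, sponsorship=3),
    }
    print(sorted(ideas.items(), key=lambda kv: kv[1], reverse=True))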

However, organizations also recognize that even those ideas and suggestions that have been rejected for RPA are useful in identifying process pain points, and one suggestion is to pass these ideas to the wider business improvement or reengineering group to investigate alternative approaches to process improvement.

Target stable processes

Other considerations that need to be taken into account include the level of stability of processes and their underlying applications. Clearly, basic RPA does not readily adapt to significant process change, and so, to avoid excessive levels of maintenance, organizations should only choose relatively stable processes based on a stable application infrastructure. Processes that are subject to high levels of change are not appropriate candidates for the application of RPA.

Equally, it is important that the RPA implementers have permission to access the required applications from the application owners, who can initially have major concerns about security, and that the RPA implementers understand any peculiarities of the applications and know about any upgrades or modifications planned.

The importance of IT involvement

It is important that the IT organization is involved, as their knowledge of the application operating infrastructure and any forthcoming changes to applications and infrastructure need to be taken into account at this stage. In particular, it is important to involve identity and access management teams in assessments.

Also, the IT department may well take the lead in establishing RPA security and infrastructure operations. Other key decisions that require strong involvement of the IT organization include:

  • Identity security
  • Ownership of bots
  • Ticketing & support
  • Selection of RPA reporting tool.

Find out more at the SSON RPA in Shared Services Summit, 1st to 2nd December

NelsonHall will be chairing the third SSON RPA in Shared Services Summit in Braselton, Georgia on 1st to 2nd December, and will share further insights into RPA, including hand-outs of our RPA Operating Model Guidelines. You can register for the summit here.

Also, if you would like to find out more about NelsonHall’s extensive program of RPA & AI research, and get involved, please contact Guy Saunders.

Plus, buy-side organizations can get involved with NelsonHall’s Buyer Intelligence Group (BIG), a buy-side only community which runs regular webinars on sourcing topics, including the impact of RPA. The next RPA webinar will be held later this month: to find out more, contact Guy Saunders.  

In the third blog in the series, I will look at deploying an RPA project, from developing pilots, through design & build, to production, maintenance, and support.

]]>
<![CDATA[RPA Operating Model Guidelines, Part 1: Laying the Foundations for Successful RPA]]>

 

As well as conducting extensive research into RPA and AI, NelsonHall is also chairing international conferences on the subject. In July, we chaired SSON’s second RPA in Shared Services Summit in Chicago, and we will also be chairing SSON’s third RPA in Shared Services Summit in Braselton, Georgia on 1st to 2nd December. In the build-up to the December event we thought we would share some of our insights into rolling out RPA. These topics were the subject of much discussion in Chicago earlier this year and are likely to be the subject of further in-depth discussion in Atlanta (Braselton).

This is the first in a series of blogs presenting key guidelines for organizations embarking on RPA, covering establishing the RPA framework, RPA implementation, support, and management. First up, I take a look at how to prepare for an RPA initiative, including establishing the plans and frameworks needed to lay the foundations for a successful project.

Getting started – communication is key

Essential action items for organizations prior to embarking on their first RPA project are:

  • Preparing a communication plan
  • Establishing a governance framework
  • Establishing an RPA center of excellence
  • Establishing a framework for allocation of IDs to bots.

Communication is key to ensuring that use of RPA is accepted by both executives and staff alike, with stakeholder management critical. At the enterprise level, the RPA/automation steering committee may involve:

  • COOs of the businesses
  • Enterprise CIO.

Start with awareness training to get support from departments and C-level executives. Senior leader support is key to adoption. Videos demonstrating RPA are potentially much more effective than written papers at this stage. Important considerations to address with executives include:

  • How much control am I going to lose?
  • How will use of RPA impact my staff?
  • How/how much will my department be charged?

When communicating to staff, remember to:

  • Differentiate between value-added and non value-added activity
  • Communicate the intention to use RPA as a development opportunity for personnel. Stress that RPA will be used to facilitate growth, to do more with the same number of people, and give people developmental opportunities
  • Use the same group of people to prepare all communications, to ensure consistency of messaging.

Establish a central governance process

It is important to establish a strong central governance process to ensure standardization across the enterprise, and to ensure that the enterprise is prioritizing the right opportunities. It is also important that IT is informed of, and represented within, the governance process.

An example of a robotics and automation governance framework established by one organization was to form:

  • An enterprise robotics council, responsible for the scope and direction of the program and for setting targets for efficiency and outcomes
  • A business unit governance council, responsible for prioritizing RPA projects across departments and business units
  • An RPA technical council, responsible for RPA design standards, best practice guidelines, and principles.

Avoid RPA silos – create a center of excellence

RPA is a key strategic enabler, so use of RPA needs to be embedded in the organization rather than siloed. Accordingly, the organization should consider establishing an RPA center of excellence, encompassing:

  • A centralized RPA & tool technology evaluation group. It is important not to assume that a single RPA tool will be suitable for all purposes and also to recognize that ultimately a wider toolset will be required, encompassing not only RPA technology but also technologies in areas such as OCR, NLP, machine learning, etc.
  • A best-practice function for establishing standards, such as the naming standards to be applied to RPA across processes and business units (a sketch of one possible convention follows this list)
  • An automation lead for each tower, to manage the RPA project pipeline and priorities for that tower
  • IT liaison personnel.
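
To make the naming-standard point concrete, here is a minimal sketch in Python of how a center of excellence might codify and check such a convention. The ‘<BU>-<PROCESS>-<NN>’ format, the business-unit codes, and the function name are all hypothetical illustrations, not standards prescribed by NelsonHall or any RPA vendor:

  import re

  # Hypothetical CoE naming standard for RPA processes: <BU>-<PROCESS>-<NN>,
  # e.g. "FIN-INVOICE-MATCH-01". The format and the approved business-unit
  # codes below are illustrative assumptions only.
  APPROVED_BUSINESS_UNITS = {"FIN", "HR", "PROC", "OPS"}
  NAME_PATTERN = re.compile(r"^(?P<bu>[A-Z]{2,4})-(?P<process>[A-Z0-9-]+)-(?P<seq>\d{2})$")

  def is_valid_process_name(name: str) -> bool:
      """Check an RPA process name against the CoE naming standard."""
      match = NAME_PATTERN.match(name)
      return bool(match) and match.group("bu") in APPROVED_BUSINESS_UNITS

  print(is_valid_process_name("FIN-INVOICE-MATCH-01"))  # True
  print(is_valid_process_name("invoice_match"))         # False: no BU prefix, wrong case

A shared check like this, run automatically whenever a new process is registered, is one way to keep naming consistent across business units without relying on manual review.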

Establish a bot ID framework

While establishing a framework for allocation of IDs to bots may seem trivial, it has proven not to be so for many organizations: some have found, for example, that including ‘virtual workers’ in the HR system is insurmountable. In some instances, organizations have resorted to basing bot IDs on the ID of the bot’s developer as a short-term fix, but this approach is far from ideal in the long term.
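
As one illustration of decoupling bot identities from both developers and the HR system, the Python sketch below allocates sequential bot IDs from a central register and records ownership for audit purposes. The ‘RPA-BOT-NNNN’ format and the field names are assumptions for illustration, not a specific RPA product’s scheme:

  from dataclasses import dataclass, field

  # A minimal sketch of a central bot ID register. Bot IDs are issued
  # independently of any human developer's ID, avoiding the short-term
  # fix described above. The ID format is an illustrative assumption.
  @dataclass
  class BotRegister:
      prefix: str = "RPA-BOT"
      _next_seq: int = 1
      _bots: dict = field(default_factory=dict)

      def register(self, process_name: str, owner_team: str) -> str:
          """Allocate a new bot ID and record its process and owning team."""
          bot_id = f"{self.prefix}-{self._next_seq:04d}"
          self._next_seq += 1
          self._bots[bot_id] = {"process": process_name, "owner": owner_team}
          return bot_id

  bot_register = BotRegister()
  print(bot_register.register("FIN-INVOICE-MATCH-01", "Finance automation team"))  # RPA-BOT-0001

However the IDs are generated, the essential point is that they are issued and owned centrally, so that a bot’s identity survives developer turnover and can be mapped to systems access in a controlled way.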

Organizations should also make centralized decisions about bot license procurement, and here the IT department, which has experience in software selection and purchasing, should be involved. In particular, the IT department may be able to play a substantial role in RPA software procurement and negotiation.

Find out more at the SSON RPA in Shared Services Summit, 1st to 2nd December

NelsonHall will be chairing the third SSON RPA in Shared Services Summit in Braselton, Georgia on 1st to 2nd December, and will share further insights into RPA, including hand-outs of our RPA Operating Model Guidelines. You can register for the summit here.

Also, if you would like to find out more about NelsonHall’s extensive program of RPA & AI research, and get involved, please contact Guy Saunders.

Plus, buy-side organizations can get involved with NelsonHall’s Buyer Intelligence Group (BIG), a buy-side-only community which runs regular webinars on sourcing topics, including the impact of RPA. The next RPA webinar will be held in November: to find out more, contact Matthaus Davies.

 

In the second blog in this series, I will look at RPA need assessment and opportunity identification prior to project deployment.

 

]]>
<![CDATA[HCL Technologies – Automotive, Autonomics & Partnerships Key Elements of its Future Growth]]> HCL Technologies’ (HCL) Twitter hashtag for its recent adviser and analyst event in Sweden was #HCLBigLeap, which led us to expect to hear some big news items. The jam-packed sessions did provide an overview of many recent and current developments – but the big news item was finally disclosed in an individual blog the following day (more of which later).

We heard from many clients at the event, all of whom expressed strong satisfaction with HCL: its “relationship beyond the contract” messaging is resonating well with clients.

Mode 1, 2, 3

New COO C Vijay Kumar started by discussing HCL’s ‘Mode 1, 2, 3’ growth strategy for its portfolio:

  • Mode 1 - ‘Agile & Lean and service-oriented’: this describes HCL’s existing core services portfolio (applications services, R&D services, BPO) which are increasingly being layered with DRYiCE tools. HCL expects Mode 1 activities will shift from 85% (we believe closer to 90%) of its current revenues to around 60% over the next few years
  • Mode 2 - ‘experience-centric and outcome-oriented’: units including BEYONDigital, IoTWorks, cloud and cyber. These are high (20-30%) growth opportunity businesses, and will be high priorities for organic and inorganic growth investments. Mode 2 also covers newer delivery models such as Agile and DevOps
  • Mode 3 - ‘ecosystem driven’: products & platforms, where growth is most likely to come from IP partnerships. Mode 3 also includes partnership constructs such as JVs and commercializing client captives.

This is an apparently simple way of describing a journey that all IT services providers are having to undertake as they evolve to remain competitive in the new world of IT. HCL's approach includes building independent teams for Mode 2 propositions which are taken to market through the Mode 1 businesses.

As well as highlighting progress it is making in some of its Mode 2 areas, HCL has also made some major investments recently that span two ‘Mode 1’ businesses where it has particular strengths - infrastructure services and engineering services - and where it is now applying Mode 3 partnership constructs such as the deal with Volvo.

Volvo deal brings in mainframe capabilities and is part of a broader push in the automotive sector

I got to drive the Volvo V90!

Gothenburg was an obvious choice of location for the event given the award to HCL earlier this year of a major IT infrastructure management services contract by Volvo Group, involving the acquisition of an IT captive that serves 40 clients in the Nordics and France and the transfer of ~2,500 personnel in 11 countries. The captive acquisition adds mainframe capabilities, a local delivery presence, and a pre-existing external client base, all of whom have migrated (Volvo estimates the transferred captive generates annual external revenues of ~$190m). We heard from Volvo CIO Olle Hogblom; the transition (a walk-in takeover in April) appears to have gone smoothly. Driving a major transformation in Volvo’s IT infrastructure was a key priority, and Hogblom highlighted that 52 transformation projects are already up and running.

Since HCL started targeting the Nordics in earnest in 2011, the company has been very successful in the region in winning IT infrastructure services rebids, with wins at the likes of Statoil and DNB in Norway and UPM in Finland. The Volvo deal was a big investment (media reports put it at $130m) by HCL, which is now pushing for sector, portfolio, and market expansion in Continental Europe.

In terms of target sectors and services, HCL is setting up an automotive CoE in Gothenburg on the back of the Volvo win. Other initiatives will enhance its capabilities in engineering services in the automotive sector. HCL is acquiring Geometric (not including Geometric’s 3DPLM JV with Dassault Systemes) in a share-swap deal worth around $190m. Excluding 3DPLM, the automotive sector accounts for ~47% of Geometric’s revenue. Geometric generates nearly 60% of its revenues from the U.S. and ~25% from Europe (where it acquired German automotive specialist 3cap technologies GmbH in 2013). Geometric will be HCL’s largest acquisition to date in engineering services and as such its importance should not be overlooked. HCL has also formed partnerships with the likes of sector specialists Movimento (over-the-air software updates) and Rightware (UI design software). We expect to hear much more from HCL over the next few years about its work around connected vehicles.

Will HCL Technologies be an engineering services consolidator, with additional acquisitions, perhaps specialists in other sectors?

Other obvious areas of potential interest for inorganic growth include:

  • Cyber, to enhance its IT infrastructure services offerings
  • In terms of markets, perhaps Germany again, or possibly France.

DRYiCE autonomics portfolio continues to expand with newer AI components

DRYiCE, HCL’s brand for its orchestration, operations analytics, automation and AI suite of modular components, has been expanding and now comprises 40 micro apps (both third party and proprietary) that between them support all of its service lines. DRYiCE includes well-established IT operations tools and newer RPA, AI, NLP, machine learning and analytics technologies. It has six layers:

  1. Sense and act, e.g. monitoring tools
  2. Prevent & heal, e.g. automated patch management
  3. Ideate & create, e.g. to support automated testing, smart releases etc.
  4. Engage and collaborate, e.g. Satori, an open source AI collaboration platform
  5. Visualize and insight, e.g. MyxAlytics predictive analytics
  6. Orchestrate and choreograph.

HCL characterizes its approach to developing and implementing DRYiCE modules as one of “pragmatic automation”: real-world, use-case driven, and thereby clearly outcome-focused.

The latest addition to DRYiCE is ‘Lucy’, a Watson-powered Level 0 service desk cognitive agent launched last quarter (also uses ServiceNow), and currently in pilots with four clients. Early uses are in chat; voice will surely follow as other use cases are found.

HCL claims that some elements of DRYiCE (we never found out if it is an acronym) are in use in over 50% of its accounts, which is not surprising given that some of the modules are well established. Its current push is to increase the level of automation in software and product testing services, very much in line with market trends (see NelsonHall's research on software testing).

HCL articulates the attributes of DRYiCE clearly, and is also ahead of many vendors in indicating what lies under the hood. Doubtless the suite will continue to expand, including in analytics and cognitive applications.

IoT: a Small but Growing Practice Leveraging Engineering Services

At the beginning of the year HCL launched a standalone IoT practice with ~100 people. The unit is now being augmented with the smart product development capabilities from its large engineering services practice. HCL’s current client credentials in IoT are mainly longstanding engineering services clients such as ABB and Xerox. Obvious sectors for expansion for HCL are automotive and aerospace.

Partnerships Key to Mode 3

HCL is looking to develop a products and platforms business through what it calls innovative IP partnerships.

The first of these was with CSC. It is over two years since HCL and CSC struck up an innovative partnership, the CeleritiFinTech JV, whose primary remit is to modernize CSC’s financial services software (see our blog here). We were not given an update on progress at the event, even though HCL has increased its investment in the JV. Given the CSC/HPES merger, the relationship will not now expand beyond the actual JV, and a go-to-market push is likely to be less than initially envisaged.

HCL’s newest significant partnership was alluded to in its last quarterly earnings call, when it referred to making a $350m investment over two years in a 15-year partnership with a “global technology vendor”. HCL could not publicly comment on this partnership at the event, but it has since become evident that the partner is IBM (with whom HCL is also collaborating on IoT, setting up an IoT incubation center in Noida). In short, HCL appears to be buying IBM Tivoli systems management tools and the Rational software modeling and development middleware in a deal that also involves personnel transfer – so in many respects it looks like a straight outsource (IBM is doing something similar with Persistent Systems). While HCL and IBM will work together on future product roadmaps for Tivoli and Rational, HCL will be responsible for ongoing product development and support. Clearly, the deal plays to HCL’s strengths in software engineering. Beyond this, further details, for example of the go-to-market arrangements, are not yet clear. In the short term, HCL will earn annual revenues of $30-40m from the deal. Longer term, there could also be broader opportunities for HCL in going to market directly with Tivoli and Rational products.

Notable by their absence: BPO and SaaS

  • HCL’s BPO business (now under 5% of revenues and declining) is looking increasingly non-core. The only way that HCL is going to build a new BPO business is through acquisition: outside a captive buy, this does not seem likely in the short term
  • Another area where HCL does not appear to have strong credentials is services around the major SaaS products. As it seeks to mitigate the cannibalization of traditional ADM activities by cloud and the reduction in large ERP projects, HCL does not seem to be looking at building up large enterprise apps SaaS practices.

Playing to its Strengths

In short, as it works on evolving its portfolio to align to new and emerging market opportunities, HCL is being bold but also playing to its core strengths in engineering services and IT infrastructure services. This is a story of renovation, rather than one of radical reinvention.

Note: in the NelsonHall Vendor Intelligence Program:

  • HCL Technologies is one of ~20 vendors individually profiled each quarter in the Quarterly Vendor Update Program
  • In future, HCL Technologies will also be included in the Key Vendor Assessment Program
  • For details, contact simon.rodd@nelson-hall.com
]]>