NelsonHall: Digital Transformation Technologies & Services blog feed
https://research.nelson-hall.com//sourcing-expertise/digital-transformation-technologies-services/?avpage-views=blog

NelsonHall's Digital Transformation Technologies & Services program is designed for organizations considering, or actively engaged in, the application of robotic process automation (RPA) and cognitive services such as AI to their business processes.

UiPath Launches Autopilot for Enhanced Bot Generation & Management

 

NelsonHall recently attended UiPath’s Forward VI event in Las Vegas, at which the company launched its Autopilot capabilities. While the company launched Autopilot across all its platform components, including Studio, Assistant, Apps, mining, and Test Manager, this blog focuses on its use within Studio and Assistant. 

Autopilot for Studio

A constant inhibitor to automation has been organizations’ capacity to build the automations themselves. In the early days, automation relied on skilled developers writing code. Platform vendors such as UiPath sought to tackle this inhibitor by launching low- and no-code IDEs such as UiPath Studio. However, while that development launched the current wave of automation, companies were still hamstrung by the limited number of employees trained to deliver results on these platforms.

Seeking to increase the number of employees who could develop automations, each of the platform vendors focused on increasing support for citizen developers, with UiPath launching StudioX as a simplified version of Studio; indeed, at Forward VI, Deloitte announced that it is committing to having 10% of its 415k employees trained on UiPath as part of a citizen developer drive.

Still, these efforts have not been enough to overcome the capacity issues within clients. To address them, UiPath has launched Autopilot for Studio, which lets users write bot requirements in natural language text, from which Autopilot then generates an automation in Studio.
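To make the pattern concrete, the sketch below shows a hypothetical natural-language requirement and the kind of step skeleton a text-to-workflow generator might emit; the requirement text and step names are invented for illustration and do not represent UiPath Studio’s actual activities or output.

    # Illustrative only: an invented requirement and a plausible generated
    # step skeleton; not UiPath Studio's actual activity names or format.
    requirement = (
        "Every morning, read unpaid invoices from the AP mailbox, "
        "extract supplier, amount, and due date, and add them to the tracker."
    )

    generated_skeleton = [
        "Trigger: schedule (daily, 08:00)",
        "Read messages from mailbox 'AP'",
        "For each attachment: extract supplier, amount, due date",
        "Append row to spreadsheet 'Invoice tracker'",
        "Notify requester on completion",
    ]

    for step in generated_skeleton:
        print(step)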

This will not only support citizen developers: experienced developers are often handed citizen-developed bots to bring up to enterprise grade, and with Autopilot we can expect the quality of those bots to be higher at handover. The demonstrations of Autopilot at the event also showed faster bot development than a typical automation developer could achieve.

Autopilot for Assistant

On the second day of Forward VI, we also saw a demo of Autopilot for Assistant. Assistant is the interface that lets desktop users see and run all their automations. For the most part, using this interface has meant selecting automations to run from a list of bots, or leveraging the Clipboard AI beta to quickly and automatically copy information from one interface or image into fields within applications.

With the launch of Autopilot for Assistant, users can now interact through natural language with the interface. This chat interface allows Autopilot to find bots that can answer users’ requests and run desktop automations live on the machine.

Importantly, this is not where the magic ends: if a user asks for something that cannot be completed using existing bots, Autopilot can apply the same intelligence as Autopilot for Studio, leveraging its connectors to build a bot that accomplishes the task, prompting the user for confirmation along the way. If the run is successful and the user believes the automation would be useful, they can add it to the automation hub along with an automatically generated description of the process and the steps performed, so that developers can perform any ‘last mile’ work on a bot that can then be reused in the future.
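A minimal sketch of this fallback flow, with all names and steps invented rather than taken from UiPath’s implementation: serve the request with an existing bot if one matches; otherwise generate candidate steps and confirm each with the user before running it.

    # Hypothetical sketch of the flow described above; not UiPath's actual API.
    def handle_request(request: str, catalog: dict[str, str]) -> str:
        if request in catalog:
            return f"running existing bot: {catalog[request]}"
        # No existing bot: generate candidate steps (stand-in for Autopilot)
        steps = ["launch browser", "search for the person", "send connection request"]
        for step in steps:
            if input(f"Confirm step '{step}'? (y/n) ") != "y":
                return "stopped by user"
        return "done; offer to publish the new automation for 'last mile' work"

    # An empty catalog forces the generate-and-confirm path
    print(handle_request("connect with a person on LinkedIn", catalog={}))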

In the demo, we saw Assistant being asked to connect with a specific person on LinkedIn. Autopilot interpreted the request, launched a browser, searched for the person, and, before requesting the connection, asked the user to confirm the action. In this manner, desktop automation through UiPath Assistant is less hamstrung by the availability of developers or even citizen developers.

How these developments compare to the competition

UiPath isn’t the first automation platform provider to launch a copilot for its IDEs: in May this year we saw Automation Anywhere launch its copilot capabilities, and last month Microsoft showed its copilot for Power Automate; both strive to offer bot generation through natural language.

Where UiPath is ahead of the competition is in the secondary use case above: building bots on the fly to support digital assistants. This could not only boost the output and quality of citizen developers, but reduce the need for them entirely.

Another difference we see with these offerings is their pervasiveness, with UiPath launching variants of its copilot across each platform component, including document understanding, test automation, apps, and mining. 

UiPath also shared a vision for the future that we did not hear from the competition: using these generative AI capabilities to build auto-healing robots, i.e. robots that automatically fix broken aspects using bot descriptions and the AI behind Autopilot, so reducing the ongoing cost of bot management.

Automation Anywhere Looks to Generative AI to Accelerate Automation

 

Automation Anywhere recently announced new capabilities for its platform powered by generative AI: Automation Co-Pilot for Business Users and for Automators, plus Document Automation. Here I take a look at these new capabilities and how they compare with those of other providers, and consider the wider implications of generative AI solutions for automation developers.

Automation Co-Pilot for Business Users

This is an update to Automation Co-Pilot (AARI), launched in October 2022, adding generative AI capabilities to automation assistants that can be embedded into business applications.

In the use case shown by Automation Anywhere, a doctor uses Co-Pilot to retrieve patient records and lab results, extracting data from PDF lab results which the doctor can validate, and then uses GPT to generate a summary of the information along with editable next-action recommendations.

For this solution, Co-Pilot was configured using Process Composer to add the generative AI step, select the AI model, and enter the prompt for the AI. These solutions can leverage generative AI from Microsoft Azure OpenAI, Google Vertex AI, Amazon Bedrock, and Nvidia NeMo.
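As a rough illustration of this configuration pattern (the field names below are invented for the sketch, not Automation Anywhere’s actual Process Composer schema), a generative AI step essentially binds a provider, a model, and a prompt:

    # Hypothetical sketch of a generative AI step definition; the field names
    # are illustrative, not Automation Anywhere's actual configuration schema.
    genai_step = {
        "provider": "azure_openai",  # or google_vertex_ai, amazon_bedrock, nvidia_nemo
        "model": "a model chosen by the client",
        "prompt": "Summarize the validated lab results and suggest next actions.",
        "credentials": "client-supplied key under the BYOL model",
    }

    for field, value in genai_step.items():
        print(f"{field}: {value}")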

Automation Co-Pilot for Business Users is now available under a ‘bring your own license’ (BYOL) model, with Amazon Bedrock support added in Q2.

Automation Co-Pilot for Automators

Automation Co-Pilot for Automators aims to support automation designers in building bots by embedding Co-Pilot into the IDE. Users enter bot requirements in natural language text, which Co-Pilot then uses to generate the skeleton of an automation flow in the designer. Users can then continue the conversation with Co-Pilot to update this flow.

Once the automation skeleton has been generated, Co-Pilot prompts the user to confirm whether the generated automation matches their requirements, feeding the response back as part of reinforcement learning.

Automation Co-Pilot for Automators will be available for preview in July 2023, with pricing and packaging to be announced.

Document Automation

The third new capability announced is the use of generative AI large language models to analyze, extract, and summarize information as part of Automation Anywhere’s document automation capabilities.

The company states that large language models with minimal customization can read through large documents to find specific clauses or to gain a general understanding of unstructured documents.

Generative AI for Document Automation is expected to launch in Q3 2023. The company is evaluating different generative AI solutions and models, as these will be offered as an embedded solution rather than BYOL.

How does this compare with other providers?

Generative AI is undoubtedly a hot topic inside and outside the automation space, and the two big themes have been connecting to these large language models within automations and using these models for ‘automating automation’.

In the last month, we’ve seen UiPath going down similar paths with the public preview of its connectors to OpenAI and Azure OpenAI, and with its Clipboard AI offering, which uses generative AI to quickly copy information between applications.

There has also been a lot of news related to code generation, including automation code, using these large language models. These efforts have tended to focus on generating Python code that can then be imported into automation IDEs, rather than being embedded into low-code designers as with Automation Co-Pilot for Automators.

Automation Anywhere has built data protection, security, and compliance requirements into its generative AI offerings. Much has been said about security concerns with developers using generative AI models, with some organizations such as Apple banning employees from using ChatGPT over fears that IP in these conversations is being used to train the large language models.

With these integrations to third parties, Automation Anywhere has stated that client data is secure and will not be used to train any shared models.

What will generative AI solutions mean for automation developers?

The generative AI solutions that the Automation Anywhere platform is leveraging can be very impressive, and the natural question following the Automation Co-Pilot for Automators example is whether the automation space still requires developers.

To that question, NelsonHall answers yes.

Generative AI, like low/no-code IDEs before it, is opening up automation development to more business users, requiring far less coding or development experience than the IDEs of several years ago. These advances have mainly added to automation developers’ toolkits while also allowing citizen developers to build an automation skeleton quickly. And while expanding developer resources to encompass business users who know the process can certainly help, early attempts have run into pitfalls: poor processes being automated inefficiently, and little investment in sharing these automations across the organization relative to a centralized automation CoE.

For larger, more complex automations that offer enterprise-wide impact, skilled automation developers are still required to translate business requirements into automations that leverage best-in-class methodologies, AI models, and supporting platforms, and that can be updated to match changing business requirements. In these situations, process understanding (task and process mining) platforms can be used to more fully understand the process and automatically create automation skeletons requiring last-mile work similar to that needed with generative AI.

IBM Converging Risk Scores to Optimize Cybersecurity Offering

 

NelsonHall recently attended an IBM Security analyst day in London. This covered recent developments such as IBM’s acquisition of Polar Security on May 16th to support the monitoring of data across hybrid cloud estates, and watsonx developments to support the move away from rule-based security. However, a big focus of the event was the subject of risk.

For the last few years, the conversation around cybersecurity has shifted to risk, highlighting the potential holes within a resiliency posture, for example, and asking questions such as ‘if ransomware were to shut down operations for six hours, what would the implications for the business be?’

IBM and a number of other providers, therefore, have been offering ‘risk scores’ related to aspects of an organization’s IT estate. These include risk scores from IBM’s Risk Quantification Service; from IBM Guardium, for risk related to the organization’s data, including its relationship with data security regulations; from IBM Verify, related to particular users; and from the recently acquired Randori, the company’s attack surface management solution.

Randori, acquired in June 2022, is a prime example of IBM’s strengths in understanding and reducing risks. Its two offerings, Randori Recon and Randori Attack, aim to discover how organizations are exposed to attackers and provide continuous automated red teaming of the organization’s assets.

After running discovered assets, shadow IT, and misconfigurations through Randori Attack’s red team playbooks, clients are presented with the risks through a patented ’Target Temptation’ model. In this way, organizations can prioritize the targets that are the most susceptible to attack and monitor the change in the level of risk on an ongoing basis.

IBM’s Risk Quantification service uses the NIST-certified FAIR model which decomposes risk into quantifiable components: the frequency at which an event is expected and the magnitude of the loss that is expected per event. In this manner, the service performs a top-level assessment of the client’s controls and vulnerabilities, makes assumptions such as the amount of sensitive information stolen during a breach based on prior examples, and produces a probability of loss and the costs related to that loss, including fines and judgments from regulatory bodies.
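As a worked illustration of the FAIR arithmetic (the figures below are invented, not drawn from IBM’s service):

    # Expected annual loss = loss event frequency x loss magnitude per event.
    loss_event_frequency = 0.25           # expected events per year (one every four years)
    loss_magnitude_per_event = 2_000_000  # expected loss per event, incl. fines ($)

    annualized_loss_expectancy = loss_event_frequency * loss_magnitude_per_event
    print(f"Annualized loss expectancy: ${annualized_loss_expectancy:,.0f}")  # $500,000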

This is not the first time we have seen this model, and a similar approach has been taken by other vendors offering cyber resiliency services. One such vendor is Unisys, which in 2018 offered its TrustCheck assessment, using security data and X-Analytics software to analyze the client's cyber risk posture and the associated financial impacts. These financial impacts were plotted against the threat likelihood of each event.

TrustCheck was used as a driver for the Unisys cybersecurity business; it related the expected loss to guidance on whether the value of securing the client's environment was greater than the cost of remediating a gap, and it conveyed this information to the C-level.

So what is the difference between IBM’s approach to risk and Unisys’ TrustCheck service?

IBM has been approaching its risk quantification from both ends – a bottom-up measuring of users, data, compliance, and the IT estate using platforms such as Guardium, Verify, and now Randori, and a top-down view within its Risk Quantification Service. At the analyst event in London, there was a clear indication that these risk scores would converge over time to provide a more accurate and consistent view of an organization’s risk: for example, using the outputs from Randori Recon to understand the client’s exposure; Guardium and Polar Security to understand what data is being held and where it could travel; and Verify to understand what user access exists. A consistent, accurate view of the client’s resiliency would then be used to drive decision-making.

This convergence of risk scores will not be an immediate development. Randori has just undergone a year of development to integrate its UX into QRadar for a unified experience, and upcoming development will include bringing it into the IBM Security QRadar suite as part of an Attack Surface Management (ASM) service before a consistent risk score service is complete. Likewise, the acquisition of Polar Security needs time to bed into the data security estate.

NelsonHall does, however, welcome any moves that result in more organizations knowing more about the risks to their business and the associated financial exposure; this knowledge gap has traditionally been a major stumbling block for organizations in understanding what remediation should be undertaken to raise security postures beyond the baseline of compliance requirements.

The Future of Process Discovery & Mining: 2023 and Beyond

 

Process discovery and mining platforms, which examine organizations’ process data as part of transformation initiatives, have become an increasingly critical part of process automation and reengineering journeys.

Here I look at what to expect in the process discovery and mining space in 2023 and beyond.

Continuous Monitoring

Traditionally, we have seen CoEs use process discovery and mining solely for single point-in-time process improvements as part of the transformation journey. Because of this, when an improved process is put in place, the focus (and licenses) of the process discovery and mining suites move on to the next project.

In 2023, we predict that these solutions will be used to support process analysis on an ongoing basis, with licenses applied to already reengineered processes to support KPI monitoring and continuous process improvement. Features are certainly being built into the platforms and pricing models that reinforce this move, such as the ability to use process discovery platforms to train users on parts of processes that cannot be automated, and unlimited usage licenses that aren’t tied to the number of users or the volume of process data ingested.

In this way, process discovery and mining solutions can provide a real-time view of actual process performance, augmenting business process management (BPM) platforms.

Automation and Low Code Application Development Links

Process discovery and mining have always been great lead-ins for automation, revealing what processes are in place before automating them; however, that connection has mainly been one-way, i.e. process discovery and mining platforms sending the skeletons of a process over to automation platforms to build an automation.

In 2023, we see process discovery platforms implementing more functionality in the reverse direction, taking automation logs from the automation tools back into the discovered process to track its overall performance on an ongoing basis, whether the steps of the process are automated or performed by a human.
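A minimal sketch of that round trip, using invented event shapes, is to merge bot execution logs with human task events into a single event log so that end-to-end performance can be tracked regardless of who performed each step:

    # Invented shapes for illustration; real platforms ingest richer logs.
    human_events = [
        {"case": "INV-001", "activity": "Approve invoice", "actor": "human", "ts": "09:00"},
    ]
    bot_events = [
        {"case": "INV-001", "activity": "Post to ERP", "actor": "bot", "ts": "09:05"},
    ]

    # One merged log per case, ordered by timestamp, feeds ongoing monitoring
    event_log = sorted(human_events + bot_events, key=lambda e: (e["case"], e["ts"]))
    for e in event_log:
        print(e["case"], e["ts"], e["activity"], f"({e['actor']})")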

Likewise, when a process cannot be fully automated and requires human effort, automation platforms are implementing low-code applications to collect the necessary information. We envisage process discovery and mining platforms not only building skeletons of the processes for automation, but also generating suggestions for these low-code applications.

Digital Process Twins

Usually, when we refer to digital twins, we are talking about a digital representation of a physical process enabled by IoT as part of Industry 4.0. However, at the end of last year, we saw one or two vendors moving towards the creation of digital process twins for business operations.

The digital process twin is the culmination of continuous monitoring of both a process and its automation. Using these features, process understanding solutions can become the future of BPM, providing real-time tracking of process performance and enabling opportunities such as preventative maintenance, leveraging root cause analysis to detect when a process is showing signs of straying from the target model.

Object-Centric Process Mining

Traditional process mining ascribes a single case notion to every step of the process, but this isn’t the best fit for every process.

For example, in car manufacturing, the production of a car could be held up by the materials for the glass in the windows, yet a single case notion cannot capture this: a car manufacturer will not ascribe the same case ID to silica arriving at the factory as to the car that will eventually be fitted with windows made from that silica consignment.

In object-centric process mining (OCPM), a single case notion is no longer the only link between the steps of a process. Instead, the case notion ceases to be the be-all-and-end-all, and each object, each aspect of the process, is tracked individually, with its own attributes, as part of the whole process.

In the car example, the object-centric process could then relate the case numbers of customer issues to the vehicle, to the order placed, to the windscreen, and to the delivery of silica.
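The sketch below contrasts the two event shapes; the identifiers are invented for illustration. A classic event carries a single case notion, while an object-centric event can reference several objects, each with its own lifecycle:

    # Classic process mining: one case notion links every step
    classic_event = {
        "case_id": "ORDER-123",
        "activity": "Fit windscreen",
        "timestamp": "2023-01-17T09:30",
    }

    # OCPM: one event referencing several related objects
    ocpm_event = {
        "activity": "Fit windscreen",
        "timestamp": "2023-01-17T09:30",
        "objects": {
            "vehicle": "VIN-9A8B7C",
            "customer_order": "ORDER-123",
            "windscreen": "PART-5521",
            "silica_consignment": "DELIVERY-0042",
        },
    }

    print(sorted(ocpm_event["objects"]))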

Such OCPM will expand the usefulness of process discovery and mining from processes that are fairly simple and related to a single case, such as a ticket number on an email, to a more complete view of the process.

***

In this quick look at the future of process discovery and mining, we acknowledge that the features described here may not be the core application of these platforms before the end of 2023, as organizations will continue to use existing functionality to target the bulk of legacy processes requiring quick fixes to reduce costs and perform one-off improvements to processes.

UiPath: Going Big with Integrated Business Automation Platform

 

NelsonHall recently attended UiPath’s Forward V event in Las Vegas. The company has recently passed $1bn in Annual Recurring Revenue (ARR), up 44% y/y, enabled through its base of 10.5k clients. This includes some very large clients such as Generali, and, similar to the other tier 1 automation platform providers, an exceptionally long tail of clients with <10 bots. UiPath’s messaging focused on how enhancements to its platform will enable clients to ‘Go Big’ on automation and reach the scale of Generali. This will be critical to UiPath’s long-term growth plans.

Continuous discovery with task & process mining

As such, some of the announcements during the event focused on what UiPath calls ‘continuous discovery’. While the company gained task and process mining capabilities through the acquisitions of ProcessGold and StepShot in 2019 (see the Forward III event blog), Chief Marketing Officer Bobby Patrick stated that back then the end-to-end nature of the platform was mostly ‘on paper’ and not in production, as these components had not been fully integrated. The company now states that the platform is fully integrated, allowing clients to discover the as-is state of processes, take actions to optimize and automate them, and then continuously monitor them for ongoing improvement.

What, to some degree, still remains on paper is the ability to harness the joint capabilities of task mining and process mining to offer a fuller definition of long, complicated processes; currently, UiPath has just a few dozen clients using both components. The general feedback from clients at the event was that these components are not being used in a ‘continuous discovery’ manner (i.e. continuously assessing a process over time for further opportunities for automation and efficiency gains). Instead, licenses are being cycled around the client organization, with different departments receiving a point-in-time assessment of their processes. We expect this to remain the case in all but very rare scenarios until task mining licenses decrease in cost.

Another recent acquisition is Re:infer, which brings communications mining functionality. Unlike task or process mining, Re:infer’s focus is solely on electronic communications within client organizations. The platform can analyze emails, chat logs, social messages, and more, using its NLP engine to create actionable business data and identify new opportunities for automation. Then, when automations are built and running, this NLP engine can be applied to inbound electronic communications to trigger automations within UiPath.
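A minimal sketch of this trigger pattern, with all names invented (this is not Re:infer’s actual API), is to classify an inbound message and start an automation only when the detected intent is automatable and the confidence is high:

    AUTOMATED_INTENTS = {"invoice_status", "change_of_address"}

    def nlp_classify(message: str) -> tuple[str, float]:
        """Stand-in for the communications-mining model: returns intent + confidence."""
        if "invoice" in message.lower():
            return ("invoice_status", 0.97)
        return ("other", 0.40)

    def on_inbound_email(message: str) -> str:
        intent, confidence = nlp_classify(message)
        if confidence >= 0.9 and intent in AUTOMATED_INTENTS:
            return f"trigger automation for intent: {intent}"
        return "route to human queue"

    print(on_inbound_email("Where is invoice 4711?"))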

The use of Re:infer is in its early days within UiPath, being only in private preview, and (somewhat like ProcessGold and StepShot in 2019) it has not yet been fully integrated into the platform. NelsonHall envisions that as Re:infer becomes more integrated into the general UiPath platform suite, it will have the opportunity to become the main NLP engine; for example, being integrated into task mining to better understand the emails and documents that employees are working on.

Go Big: theme of UiPath’s Forward V event

 

Enhancing automation development environments

The UiPath platform offers three development environments: Studio is an advanced product for experienced developers, while StudioX and Studio Web are for less experienced developers.

The most notable recent release is Studio Web, a browser-based automation development environment. The concept is not especially new, mirroring Automation Anywhere’s Automation 360 web IDE, but is a welcome one, especially as the company is continuously improving its offering to citizen developers.

Projects developed on Studio Web can be edited by users in Studio and StudioX. A target use case is a citizen developer logging into Studio Web and creating an automation; then, when the automation is ready to be proliferated, it can be sent to Studio for the client’s automation CoE to ensure the robustness of the bot.

Incorporated into Studio Web, along with Studio and StudioX, are ‘document understanding’ advancements. Along with overall enhancements to the extraction engine, supporting the likes of signature, barcode, and QR code extraction as well as improved table extraction, UiPath has added native support for verticalized pre-built models for processing tax, insurance, and transportation documents.

Separate from its automation Studios, UiPath has expanded the capabilities of its Apps component to simplify the creation of lightweight applications and, for the first time, to allow the creation of public-facing apps.

UiPath also announced a partnership with OutSystems, another low-code application provider. While UiPath believes its low-code application builder is suitable for a wide variety of applications, it has no plans to support high-complexity systems such as CRM applications; in these use cases, the company believes OutSystems can fill the gap.

One app created using UiPath Apps and customizable by clients is Automation Launchpad, a springboard to guide client citizen developers through their company’s automation program; for example, providing information on how they may submit an idea.

Lastly, UiPath has added a connection builder to Integration Service, its enabler of API automation. Similar to the process and task mining capabilities, Integration Service is now supported across all UiPath components, allowing users to employ APIs in addition to the suite’s traditional UX automation capabilities. The connection builder allows clients to build connections to in-house and specialized industry solutions.

Conclusion

These advancements will be critical in enabling more use cases for the platform, easier identification of automation opportunities, and more processes that can be automated. Specifically:

  • The process understanding module’s integration will support more organizations opting to discover and understand the processes that are ripe for automation
  • Re:infer will, as it integrates into the platform, allow for the detection of electronic conversations with a lot of inefficient back-and-forth
  • Public-facing apps open up the possibility for organizations to develop lightweight, citizen-led ways to interact with customers
  • And connection builder allows for more API automation.

These features, along with a change in sales GTM to support the clients who represent the best opportunities to become power users, will be key to supporting the array of clients that currently don’t take advantage of the platform’s wide capabilities. In speaking with clients at Forward V, each and every one had bandwidth issues: in most cases, hundreds of automation opportunities had been identified, but there was a lack of capacity to automate them all. Therefore, NelsonHall believes the immediate growth drivers will be the features that reduce the time to build automations; for example, encouraging more citizen development with Studio Web, or, within test automation, migrating test signatures from QA testing platforms such as HP ALM.

Capgemini Launches ’One Operations’ to Support CPG Enterprises in Driving Revenue Growth

 

Capgemini has launched a new digital transformation service, One Operations, with the specific goal of driving client revenue growth.

One Operations: Key Principles

Some of One Operations’ principles, such as introducing benchmark-driven best practice operations models, taking an end-to-end approach to operations across silos, and using co-invested innovation funds, are relatively well established in the industry. What is new is building on these principles to incorporate an overriding focus on delivering revenue growth. The business case for a One Operations assignment centers on facilitating the client’s revenue growth, taking a B2B2C approach oriented to the end customer and emphasizing the delivery of insights that enable client personnel to make earlier, customer-focused decisions.

Capgemini’s One Operations account teams involve consulting and operations working together, with Capgemini Invent contributing design and consulting and the operational RUN organization provided by Capgemini’s Business Services global business line.

Implementing a One Operations philosophy across the client organization and Capgemini is achieved through shared targets, which reduce vendor/client friction, and a continuously replenished co-invested innovation fund of ~10–15% of Capgemini revenues used to fund digital transformation.

One Operations is very industry-focused, and Capgemini is initially targeting selected clients within the CPG sector, looking to assist them in growing within an individual country or small group of countries by localizing within their global initiatives. The key to this approach is demonstrating to clients that Capgemini understands and can support both the ’grow’ and ’run’ elements of their businesses, and having an outcome-based conversation. Capgemini is typically looking to enable enterprises to achieve 4X growth by connecting the sales organization to the supply chain.

Assignments commence with working sessions brainstorming the possibilities with key decision-makers. The One Operations client team is jointly led by a full-time executive from Capgemini Invent and an executive from Capgemini’s Business Services. The Capgemini Invent executive remains part of the One Operations client team until go-live. The appropriate business sector expertise is drawn more widely from across the Capgemini group.

One Operations assignments typically have three phases:

  • Deployment planning (3–6 months) to understand the processes and associated costs and create the business case
  • Deployment (6–15 months) to create the ’day one’ operating model
  • Sustain, involving go-live and continuous improvement.

At this stage, Capgemini has two live One Operations assignments with further discussions taking place with clients.

Using End-to-End Process Integration to Speed Up Growth-Oriented Insights

Capgemini’s One Operations has three key design principles:

  • Re-inventing the organization, embedding a growth mindset by reducing business operations complexity and enabling an AI-augmented workforce to focus on customers and higher-value services
  • Increasing the level of end-to-end integration by improving data accuracy and incorporating AI to achieve ’touchless forecasting & planning’ and enable better decisions and speed of innovation. ’Frictionless’ end-to-end integration is used to support more connected decisions and planning across the value chain
  • Transforming at speed and scale.

These transformations involve:

  • Shaping the strategic transformation agenda through defining the target operating model based on peer benchmarks and using standardized operating model design, assets, and accelerators
  • Using a digital-first framework incorporating One Operations pre-configured digital process evaluation and digital twins
  • Deployment of D-GEM technology accelerators, including AI-augmented workforce solutions and Capgemini IP such as Tran$4orm, ranging from platforms to microtools
  • Augmented operations using Capgemini Business Services.

Changing the mindset within the enterprise involves freeing personnel from tactical transactional activities and providing relevant information supporting their new goals.

Capgemini aims to achieve the growth mindset in client enterprises by enabling an integrated end-to-end view from sales to delivery, facilitating teams with digital tools for process execution and growth-oriented data insights. Within this growth focus, Capgemini offers an omnichannel model to drive sales, augmented teams to enable better customer interactions, predictive technology to identify the next best customer actions, and data orchestration to reduce customer friction.

One Operations also enables touchless planning to improve forecast accuracy, increase the order fill rate, reduce time spent planning promotions, and accelerate cash collections to reduce DSO. Improving promotion accuracy and product availability is also key to revenue growth within CPG and retail environments.

Shortening Forecasting Process & Enhancing Quality of Promotional Decisions: Keys to Growth in CPG

The overriding aim within One Operations is to free enterprise employees to focus on their customers and business growth. In one example, Capgemini is looking to assist an enterprise in increasing its sales within one geography from ~$1bn to $4bn.

The organization needed to free up its operational energies to focus on growth and create an insight-driven consumer-first mindset. However, the organization faced the following issues:

  • 70% of its planning effort was spent analyzing past performance, and ~100 touches were required to deliver a monthly forecast
  • Order processing efficiency was below the industry average
  • Approx. 30% of its trucks were leaving the warehouse half-empty
  • Launching products was taking longer than expected.

Capgemini took a multidisciplinary approach end-to-end across plan-to-cash. One key to growth is the provision of timely information, and Capgemini is aiming to improve the transparency of business decisions. For example, the company has rationalized the coding of PoS data so that it can be directly interfaced with forecasting, shortening the forecasting process from weeks to days and enhancing the quality of promotional decisions.

Capgemini also implemented One Operations, leveraging D-GEM to develop a best-in-class operating model, resulting in a €150m increase in revenue, a 15% increase in forecasting accuracy, a 50% decrease in time spent setting up marketing promotions, and a 20% increase in order fulfillment rate.

WNS Repositions its Data & Analytics Practice to Assist Organizations in Becoming 'Insights-Led Enterprises'

 

Analytics has often been run as a series of periodic and siloed exercises. However, to respond to their customers in the smartest, fastest, most efficient manner, WNS perceives that organizations increasingly need to run their analytics always-on, in almost real-time, and on an enterprise rather than siloed basis. To do this and become ‘insights-led enterprises’, organizations’ analytics need to be supported by a suitable underlying enterprise data ecosystem, typically cloud-based.

WNS has had a strong Data & Analytics practice for many years, although in the past the scope of WNS’ analytics-led engagements was somewhat limited and frequently priced on an FTE basis. WNS now seeks to significantly broaden that scope, powered by data management, Artificial Intelligence (AI), and cloud, and to aggressively incorporate alternative and outcome-based pricing models. It has repositioned to work more upstream on client engagements and participate in larger data lake transformations, rebranding its Data & Analytics practice as WNS Triange.

Repositioning and Expanding Horizons as an ‘End-to-End Industry Analytics’ Player

This repositioning aims to establish WNS with a clear identity as ‘an end-to-end industry analytics player’ delivering outcomes and not just personnel, and to assist the practice in targeting transformational activity for functional business heads, CDOs, and CTOs outside of WNS BPS engagements. It also assists the Data & Analytics practice in establishing a stronger identity within WNS and attracting talent in a challenging talent market.

WNS is also aiming to change the scope of engagements from running individual use case analytics in silos to assisting organizations with the broader management of their underlying data ecosystems and rolling out analytics on an enterprise basis at scale.

Accordingly, WNS Triange is building data & analytics capability on the cloud, together with data/AI Ops capability to run large-scale data operations and governance at scale. These capabilities are supported by an Analytics CoE that brings together “best practices” on cloud, data, and AI, together with associated governance mechanisms and domain expertise.

Investing in High-End Consulting and Hyperscaler-Certified IP

WNS Triange currently has ~4,500 personnel and is being restructured into three components:

  • Triange Consult. WNS Triange is placing much greater emphasis on up-front consulting than previously and is increasingly recruiting and locating senior consultants onshore operating from its design labs. WNS has also built framework assets in support of Triange Consult in the past two years, covering areas such as analytics and AI strategy, data strategy, and data quality & governance strategy, together with domain-specific consulting
  • Triange NxT. WNS continues to focus on the creation of accelerators. These include SKENSE, Unified Analytics Platform, Insurance Analytics in a BOX, Emerging Brands and Trends, InsighTRAC, and Datazone.ai
  • Triange CoE, for analytics project and service implementation.

WNS has invested in platforms to address intelligent cloud data ops as well as in analytics AI models. These Triange NxT platforms assist WNS in delivering speed-to-execution and speed-to-value since these elements are pre-built models with tested connectors to third-party data and are being cloud-certified with the necessary governance and built-in security protocols.

For example, the Triange NxT Insurance Analytics Platform provides pre-trained AI and non-AI analytics models in support of insurance analytics related to claims, pricing, underwriting, fraud, customer marketing, and service & retention. These models are underpinned by APIs to leading insurance platforms, connectors to workflow systems, ML Ops, and what-if analyses. WNS also incorporates platforms from partnerships with start-ups and specialized data providers as part of its prepackaged solutions.

Key WNS platforms within Triange NxT include SKENSE for data extraction and contextualization, the Insurance Analytics Platform, InsighTRAC for procurement insights, and SocioSEER, a social media analytics platform. WNS is currently finalizing the certification of each platform on AWS and Azure and making them available in cloud marketplaces.

SKENSE platform-based solutions have been built to address a range of use cases across finance & accounting, customer interaction services, legal services, and procurement, as well as banking & financial services, shipping & logistics, healthcare, and insurance.

Increasing Use of Co-Innovation and Non-FTE Pricing Models

WNS Triange revenues have grown ~25% over the past year, and WNS is increasing its use of co-innovation and non-FTE pricing models.

For example, WNS has deployed its AI/ML platform to capture the quality control data from the various plants of an FMCG company, create summaries, change the data into a suitable format for generating insights, and return the summary notes and insights to the FMCG company’s data lake.
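A minimal sketch of that pipeline shape, with invented record fields (WNS has not published the platform’s internals): take raw plant quality records, produce summary notes, and emit structured output destined for the client’s data lake.

    def summarize_quality_records(records: list[dict]) -> list[dict]:
        """Turn raw quality-control records into summary notes for the data lake."""
        notes = []
        for rec in records:
            notes.append({
                "plant": rec["plant"],
                "summary": f"Batch {rec['batch']}: {rec['defects']} defects recorded",
            })
        return notes  # in practice, written back to the client's data lake

    records = [{"plant": "Plant A", "batch": "B-101", "defects": 3}]
    print(summarize_quality_records(records))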

This resulted in an 82% reduction in processing cost per document compared to what had previously been a very manual process.

WNS undertook the development of this IP largely at its own expense and now owns it, with the client paying some elements of the development fee and a licensing fee. In addition, WNS will pay the initial client a percentage of the revenue if this IP is sold to other CPG companies.

WNS also helped an insurance client automate the identification of subrogation opportunities in the claims processing workflow. WNS used MLOps frameworks to identify recovery opportunities based on historical data and to predict which opportunities in current transactional data have higher chances of recovery. This helped the client improve recovery rates by multiple percentage points.

Elsewhere, WNS is working with a media client to transform the enterprise into a digital media agency and reinvent its traditional approach to processes such as media planning and customer segmentation. Here, WNS is assisting the company with multiple data & analytics initiatives. In some cases, this involves the Triange Consult practice, in others provision of platforms, and in others, the application of the Triange CoE approach.

For example, WNS Triange Consult is helping the company establish an appropriate cloud architecture and organize its data appropriately, establish how to run machine learning ops, and identify the appropriate design for a complete reporting center.

The company’s data has traditionally been paper-based, so Triange NxT is using platforms to digitize its data and provide insights for real-time decision-making. WNS is also helping the company set up its training infrastructure for data & analytics.

This repositioning is underlined by systemic structural changes that will enable WNS to adopt a more consultative and enterprise-scale approach to analytics. While many organizations will still address analytics on a siloed case-by-case basis, and these use cases remain important, WNS now has the structure to go beyond individual use cases, further augmenting its traditional strengths in domain-based analytics and assisting organizations in adopting more systematic approaches to establishing and scaling their enterprise analytics infrastructures end-to-end with enterprise-level data, analytics, and AI.

The Role of Organizational Change Management in Digital Transformation

 

Digital transformation and the associated adoption of Intelligent Process Automation (IPA) remain at an all-time high. This is to be encouraged, and enterprises are now reinventing their services and delivery at a record pace. Consequently, enterprise operations and service delivery are increasingly becoming hybrid, with delivery handled by tightly integrated combinations of personnel and automations.

However, the danger with these types of transformation is the omnipresent risk in intelligent process automation projects of putting the technology first, treating people as a secondary consideration, and alienating the workforce through reactive communication and training programs. As many major IT projects have discovered over the decades, failure to adopt professional organizational change management procedures can lead to staff demotivation, poor system adoption, and significantly impaired ROI.

The greater the organizational transformation, the greater the need for professional organizational change management. This requires high workforce-centricity and taking a structured approach to employee change management.

In light of this trend, NelsonHall's John Willmott interviewed Capgemini's Marek Sowa on the company’s approach to organizational change management.

JW: Marek, what do you see as the difference between organizational change management and employee communication?

MS: Employee communication tends to be seen as communicating a top-down "solution" to employees, whereas organizational change management is all about empowering employees and making them part of the solution at an individual level.

JW: What are the best practices for successful organizational change management?

MS: Capgemini has identified three best practices for successful organizational change management, namely integrated OCM, active and visible sponsorship, and developing a tailored case for change:

  • Integrated OCM – OCM will be most effective when integrated with project management and involved in the project right from the planning/defining phase. It is critical that OCM is regarded as an integral component of organizational transformation and not as a communications vehicle to be bolted on to the end of the roll-out.
  • Active and visible sponsorship – C-level executives should become program sponsors and provide leadership in creating a new but safe environment for employees to become familiar with new tools and learn different practices. Throughout the project, leaders should make it a top priority to prove their commitment to the transformation process, reward risk-taking, and incorporate new behaviors into the organization's day-to-day operations.
  • Tailored case for change – The new solution should be made desirable and relevant for employees by presenting the change vision, outlining the organization's goals, and illustrating how the solution will help employees achieve them. It is critical that the case for change is aspirational, using evidence based on real data and a compelling vision, and that employees are made to feel part of the solution rather than threatened by technological change.

JW: So how should organizations make this approach relevant at the workgroup and individual level?

MS: A key step in achieving the goals of organizational change management is identifying and understanding all the units and personnel in the organization that will be impacted, both directly and indirectly, by the transformation. Each stakeholder or stakeholder group will likely find itself in a different place when it comes to perspective, concerns, and willingness to accept new ways of working. It is critical to involve each group in the transformation and get them engaged in shaping and driving it. One useful concept in OCM for achieving this is WIIFM (What's In It For Me), with WIIFM identified at a granular level for each stakeholder group.

Much of the benefit and expected ROI is tied to people accepting and taking ownership of the new approach and changing their existing ways of working. Successfully deployed OCM motivates personnel by empowering employees across the organization to continually improve and refine the new solution, stimulating revenue growth and securing ROI. People need to be aware of how the new solution is changing their work and be active in driving it; in this way, they make the organization a "powerhouse" for continuous innovation.

How an enterprise embeds change across its various silos is very important. In fact, in the context of AI, automation is not only about adopting new tools and software but mostly about changing the way the enterprise's personnel think, operate, and do business.

JW: How do you overcome employees' natural fear of new technology?

MS: To generate enthusiasm within the organization while avoiding making the vision seem unattainable or scary, enterprises need to frame and sell transformations incorporating, for example, AI as evolutions of something the employees are doing already: not merely "just the next logical step", but reinventions of the whole process from both the business and experience perspectives. They need to retain the familiarity that gives people comfort and confidence while reassuring them that the new tool or solution adds to their existing capability, allowing them to fulfill their true potential, something that is not automatable.

Capgemini Looks to Accelerate Process Transformation with the "Frictionless Enterprise"

 

The value of automation using tools such as RPA, and more recently intelligent automation, has been accepted for years. However, there is still a danger in many automation projects that, while each project is valuable in its own right, together they become disconnected islands of automation with limited connectivity and lifespans. Accordingly, while elements of process friction have been removed, the overall end-to-end process can remain anything but friction-free.

Capgemini has developed the "Frictionless Enterprise" approach in response to this challenge, an approach the company is now applying across all of Capgemini’s Business Services accounts.

What is the Frictionless Enterprise?

The Frictionless Enterprise is essentially a framework and set of principles for achieving end-to-end digital transformation of processes. The aim is to minimize friction in processes for all participants, including customers, suppliers, and employees across the entire value chain of a process.

However, most organizations today are far from frictionless. In most organizations, the processes were designed years ago, before AI achieved its current maturity level. Similarly, teams were traditionally designed to break people up into manageable groups organized by silo rather than by focusing on the horizontal operation they are there to deliver. Consequently, automation is often currently being used to address pain points in small process elements rather than transform the end-to-end process.

The Frictionless Enterprise approach requires organizations to be more radical in their process reengineering mindsets by addressing whole process transformation and by designing processes optimized for current and emerging technology.

Capgemini’s Business Services uses this approach to assist enterprises in end-to-end transformation from conception and design through to implementation and operation, with the engine room of Capgemini Business Services now focused on technology, rather than people, for transaction processing.

A change in mindset is critical for this to succeed. Capgemini is increasingly encouraging its clients to move from customer-supplier relationships to partnerships around shared KPIs and adopt dedicated innovation offices.

The five fundamentals of the Frictionless Enterprise

Capgemini views the Frictionless Enterprise as depending on five fundamentals: hyperscale automation, cloud agility, data fluidity, sustainable planet, and secure business.

Hyperscale automation

This ultimately means the ability to reach full touchless automation. Hyperscale automation depends on exploiting artificial intelligence and building a scalable and flexible architecture based on microservices and APIs.

Cloud agility

While the frictionless transformation approach is designed to work at the sub-process level and the overall process level, it is important that any sub-process changes are a compatible part of the overall journey.

Cloud agility emphasizes improving the process in ways that can be reused in conjunction with future process changes as part of an overall transformation. So any changes made to sub-processes addressing immediate pain points should be steps on the journey towards the final target end-to-end operating model rather than temporary throwaway fixes.

Accordingly, Capgemini aims to bring the client the tools, solutions, and skills that are compatible with the final target transformation. For example, tools must be ready to scale, and at present, API-based architectures are regarded as the best way to implement cloud-native integration. This has meant a change of emphasis in the selection of partners and the nature of relationships with them. Capgemini now spends much more time with vendors than it used to, and Capgemini’s Business Services has a global sales officer with a mandate to work with partners. In addition, this effort is now much more focused, with Capgemini concentrating on a limited set of strategic partners. All the solutions chosen are API-native and cloud-based, fully able to scale, with AI at the core. One example of a Capgemini partner is Kryon in RPA, since its platform can record processes as well as automate them.

Data fluidity

It's important within process transformations to use both internal and external data, such as IoT and edge data, efficiently and have a single version of the truth that is widely accessible. Accordingly, data lakes are a key foundational component in frictionless transformations.

However, while most enterprises have lots of data to leverage, they also have lots of data points that need to be fixed. Master data management is critical to successful transformation and remains an important part of transformed operations.

Digital twins are key to removing process friction and are used as the interface between how the business currently operates and how it needs to operate in the future. As well as providing an accurate view of the reality of current process execution, process mining also speeds up process transformation, enabling transformation consultants to focus on evaluation and prioritization of opportunities for change rather than collecting process data. Process mining can also help with maintaining best practice compliance post-transformation by monitoring how individual agents are using their systems, with the potential to guide them through proactive online training and removing the need to compensate for agent inefficiencies with automation.

Sustainable planet

It's also becoming extremely important when reviewing end-to-end processes to consider their impact on the planet across the whole value chain, including suppliers. For example, this covers both carbon impact and social aspects such as diversity, including ensuring a lack of bias in AI models. Sustainability is becoming increasingly important in financial reporting, and in response, Capgemini has added sustainability into its integrated architecture framework.

Secure business

Enterprises cannot undertake massive transformations unless they are guaranteed to be secure, and so the Frictionless Enterprise approach encompasses account security operations and cybersecurity compliance. Similarly, change management is of overwhelming importance within any transformation project, and the Frictionless Enterprise approach focuses on building trust and transparency with customers and partners to facilitate the transformation of the value chain.

A client example of Frictionless Enterprise adoption

Capgemini is helping a CPG company to apply the Frictionless Enterprise approach to its sales & distribution planning. The company was already upper quartile at each of the individual process elements such as supply planning and distribution planning in isolation, but the overall performance of its end-to-end planning process was inadequate. Accordingly, the company looked to improve its overall inventory and sales KPIs dramatically by reengineering its end-to-end order forecasting process. For example, improved prediction would help achieve more filled trucks, and improved inventory management has a direct impact on sustainability and levels of CO2 production.

The CPG company undertook planning quarterly, centrally forecasting orders. However, half of these central forecasts were subsequently changed by the company's local planners, firstly because the local planners had more detailed account information and did not believe the centrally generated forecasts, and secondly because quarterly forecasts were unable to keep up with day-to-day account developments.

So there was a big disconnect between the plan and the reality. To address this, Capgemini undertook a process redesign and proposed daily planning, entailing:

  • Planning daily, overnight, with machine learning used to forecast orders based on the levels of actual orders up until that point (see the sketch after this list)
  • Removing local planners' ability to change order forecasts but making them responsible for improving the quality of the master data underpinning the automated forecasts, such as identifying the correct warehouse used to deliver to a particular customer.
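As a minimal sketch of the nightly step in the first bullet above, with a naive run-rate calculation standing in for the actual machine learning model:

    def nightly_forecast(actual_orders: list[float], days_elapsed: int,
                         days_in_period: int) -> float:
        """Extrapolate full-period demand from the order run rate to date."""
        run_rate = sum(actual_orders) / max(days_elapsed, 1)
        return run_rate * days_in_period

    # Example: 12 days into a 30-day period, 120 units ordered per day so far
    print(nightly_forecast([120.0] * 12, days_elapsed=12, days_in_period=30))  # 3600.0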

This process redesign involved comprehensive automation of the value chain and the use of a data lake built on Azure as the source of data for all predictions.

Capgemini has now been awarded a 5-year contract with a contractual goal of completing the transformation in three years.

Capgemini’s Intelligent Process Automation, Part 2: Intelligent Transaction Routing & Digital Twins Maximize RoI Delivery

 

Part 1 of this blog focused on Capgemini’s structured approach to workforce motivation and upskilling when transitioning to a Frictionless Enterprise that leverages a digitally augmented workforce. This second part looks at how, when adopting a digitally augmented workforce, it is critical to ensure optimized routing of incoming queries and transactions between humans and machines, and to ensure that the expected RoI is delivered from automation projects.

Intelligent Routing of Transactions between Workforce and Machines

Intelligent query access and routing is essential to the successful deployment of a hybrid human/machine workforce, achieving the optimal allocation of transactions between personnel and machines. For example:

  • For a North American manufacturer, Capgemini combined RPA with multiple microservices from AWS and Google, plus its own code, to classify 41 categories of incoming accounts payable queries. Where classification is possible, the query is allocated to either a human or a machine. Queries that take the machine route have their text analyzed using NLP, and actions are then triggered to collect the information necessary to answer the query. If the confidence level in the response exceeds 95%, the answer is sent automatically; if not, the query and response are sent to a human for review and confirmation. This is an example of a digitally augmented workforce (a minimal routing sketch follows this list)
  • For another client, Capgemini reduced the cost per query of procure-to-pay queries from 180 cents to 17 cents by using a digitally augmented workforce. Capgemini’s AI Query Classifier uses NLP and ICR to extract the relevant information from the unstructured text, validate the query, and automate ticket creation. Its AI Workload Distribution then orchestrates the process and decides whether each case goes through automated or human resolution
  • Elsewhere, a client had a large team serving billable transactions in 24 languages, but 30%-40% of the transactions the team received were not relevant to it. Capgemini implemented 90% automated identification & indexing for 21 of these 24 languages. The data is validated, further data is retrieved where necessary, and the data is then revalidated. Business rules are then applied to identify whether each transaction is handled manually or automatically. Savings of ~75% of the total effort were achieved.
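
As a minimal sketch of the routing pattern in the first example above, the snippet below applies the 95% confidence threshold to decide between automatic response, human review, and a human queue. The classifier stub and all names are illustrative assumptions; the real implementation calls NLP microservices rather than keyword rules.

    from dataclasses import dataclass

    CONFIDENCE_THRESHOLD = 0.95

    @dataclass
    class Query:
        text: str

    def classify(query: Query) -> tuple[str, float]:
        # Stand-in for the NLP classification step (41 categories in
        # the example above); returns (category, confidence).
        if "invoice" in query.text.lower():
            return "invoice_status", 0.97
        return "unknown", 0.40

    def route(query: Query) -> str:
        category, confidence = classify(query)
        if category == "unknown":
            return "human_queue"       # no classification possible
        if confidence >= CONFIDENCE_THRESHOLD:
            return "auto_respond"      # answer sent automatically
        return "human_review"          # human reviews and confirms

    print(route(Query("Where is my invoice payment?")))  # auto_respond
    print(route(Query("Please call me back")))           # human_queue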

The use of machine translation is becoming increasingly important in these situations, and Capgemini is now working on machine language translation to reduce its dependency on nearshore centers employing large numbers of native speakers in multiple languages.

Pre-built automation assets are also important in combining best practices and intelligent automation. Here, Capgemini has introduced 890 by Capgemini, a catalog of analytics services that enables organizations to access analytical and AI solutions and datasets from within their own organization, from multiple curated third-party providers, and from Capgemini. Capgemini has focused on the provision of sector-specific solutions and currently offers ~110 sector solutions.

Introduction of Digital Twins Ensures Delivery of RoI from Technology Deployment

Capgemini’s approach to data-driven process discovery and excellence is based on combining process mining using process logs, task capture and task mining using desktop recorders, productivity analytics for each individual, and use of digital twins.

Tools used include FortressIQ, Celonis, and Capgemini’s proprietary Prompt tool. These tools are combined with Capgemini’s Digital Global Enterprise Model (D-GEM) platform to incorporate best-in-class processes and frictionless processing.

Digital twins are used to progress process discovery beyond digital snapshots, providing ongoing process watching, assessment, and definition of opportunities. They also allow Capgemini to simulate the real returns that will be achieved by the introduction of technology, highlighting any other process constraints that would otherwise be exposed and limit the expected RoI from automation initiatives; the sketch below illustrates the point.
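
A minimal sketch of why such simulation matters: speeding up one step raises end-to-end throughput only until the next constraint binds, which is exactly the kind of limit on expected RoI a twin can expose in advance. The step names and rates are illustrative.

    # Items processed per hour at each step of a simple linear process
    steps = {"intake": 80, "approval": 100, "payment": 150}

    def throughput(rates: dict) -> int:
        # End-to-end throughput is bound by the slowest step
        return min(rates.values())

    before = throughput(steps)   # bottleneck: intake at 80/h
    steps["intake"] = 400        # simulate automating the intake step
    after = throughput(steps)    # new bottleneck: approval at 100/h
    print(f"before={before}/h, after={after}/h; "
          f"gain capped by 'approval' at {steps['approval']}/h")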

Capgemini’s approach to process digital twin introduction is as follows:

  • Start with business mining, a combination of process mining, task mining, and Capgemini’s D-GEM platform
  • Benchmark the processes against D-GEM
  • Simulate the impact of introducing technology, calculate the business case, and ensure that the result achieved is close to what was anticipated by identifying any potential process bottlenecks that might reduce the technology deployment’s savings. These simulations also help accelerate the approval of intelligent automation projects and the scaling of digital transformation within the enterprise, since they increase management confidence in the certainty of project outcomes
  • Follow up with continuous improvement, identifying ongoing areas for improvement.

Also, during the pandemic, it has become increasingly difficult to run onsite workshops for automation opportunity identification, making it increasingly necessary to use digital twin process mining of individuals’ machines to build business cases remotely. This development may become standard practice post-pandemic if it proves to be a faster and more reliable basis for opportunity identification than interviewing SMEs.

Conclusion

In conclusion, the deployment of technology is arguably the easy part of intelligent process automation projects. Two more challenging elements have always been interpreting and routing unstructured transactions and queries, and identifying and delivering RoI. Capgemini’s Frictionless Enterprise approach, which leverages a digitally augmented workforce, addresses both these challenges by combining technologies for classifying and routing unstructured transactions and queries with process digital twins that ensure RoI delivery.

You can read Part 1 of this blog here.

]]>
<![CDATA[Capgemini's Intelligent Process Automation, Part 1: Significant Growth From Frictionless Enterprise Approach]]>

 

This is Part 1 of a two-part blog looking at Capgemini’s Intelligent Process Automation practice. Here I examine Frictionless Enterprise, Capgemini’s framework for intelligent process automation that focuses on the adoption of a digitally augmented workforce.

Digital transformation has been high on enterprise agendas for some years. However, COVID-19 has given the drive to digital transformation even greater impetus as organizations have increasingly looked to reduce cost, implement frictionless processing, and decouple their increasingly unpredictable business volumes from the number of servicing personnel required.

For Capgemini, this has resulted in unprecedented increases in Intelligent Process Automation bookings and revenue in 2020.

Frictionless Enterprise & the hybrid workforce

There is always a danger in intelligent automation projects of regarding people as secondary considerations and addressing the workforce through reactive change management. Capgemini's Frictionless Enterprise framework for intelligent process automation aims to avoid this pitfall by maintaining high workforce-centricity: it stresses the adoption of a digitally augmented workforce and the need to involve employees in the automation journey through a structured approach to workforce communication and upskilling.

Capgemini's Intelligent Automation Practice emphasizes the workforce communication and reskilling needed to achieve a digitally augmented or hybrid workforce. This involves putting humans at the center of the hybrid workforce and motivating and reskilling them.

The personnel-related stages in the journey towards a Frictionless Enterprise that leverages a digitally augmented workforce used by Capgemini are:

  • Design of the augmented workforce. On the design side, it is important to ask: "What is the impact of technology on the workforce, and how should the organization's competency model change? How is the workforce of the future defined?"
  • Building the augmented workforce
  • Creating the right context.

Client cases

In one client example, Capgemini assisted a major capital markets firm in designing and building its digitally augmented workforce, using a four-step process:

  1. Resource profiling
  2. Dedicated curriculum creation
  3. Pilot on 15% of resources
  4. Augmented workforce scaling.

Step 1: This involved identifying personnel with a statistics or mathematics background who could be potential candidates for, say, ML data analysis. These potential candidates were then interviewed and tested to ensure their ability, for example, to run a Monte Carlo simulation.

Having established the desired job profiles, these personnel were allocated to various job families, such as automation business analysts, data analysts, power users, and developers, with developers split into low code/no-code developers and advanced developers.

Step 2: A dedicated curriculum was created in support of each of the job families. However, to ensure the training was focused and to increase employee engagement and retention, each employee was tasked up-front with clearly defined projects to be undertaken following training. This kept the training relevant and avoided a demotivating disconnect between training and deployment.

Step 3: 15% of the entire team were then trained and deployed in their new roles. This figure ranges between 5% and 15% depending on the client, but it is important to deploy on a sub-set of the workforce before rolling out more widely across the organization. This has the dual advantages of testing the deployment and creating an aspirational group that other employees wish to join.

Step 4: Roll-out to the wider labor force. The speed of roll-out typically depends on the sector and company culture.

Capgemini has also helped a wealth management company enhance its ability to supply information from various sources to its traders by enhancing its capabilities in data management and automation. In particular, this required upskilling its workforce to address shortages of data, automation, and AI skillsets.

This involved a 3-year MDM Ops modernization program with dedicated workforce augmentation and upskilling for digitally displaced personnel, starting with three personnel groups.

This resulted in an average processing speed increase of 64% and an estimated data quality increase of 50%, and the approach was subsequently adopted more widely within the company's in-house operations.

AI Academy Practitioner's Program

Capgemini has created its AI Academy Practitioner's Program, an "industrialized approach" to AI training to support workforce upskilling. This program is mentor-led and customizable by sector and function to ensure that it supports the organization's current challenges.

The program's technical elements include:

  • "Qualifying" (6 hours over 3 days) for personnel who only need to be aware of the potential of AI
  • "Professional" (10 hours per week for 4 weeks), where personnel are provided with low-code tools to start developing something
  • "Expert" (10 hours per week for 4 weeks), incorporating custom AI & ML model building.

The program's functional courses include:

  • Data literacy (4 hours over 4 days)
  • Business functional (10 hours per week for 4 weeks)
  • Business influencer (CXO) (15 hours over 3 days)
  • Intelligent process automation (15 hours over 3 days), highlighting how the automation stack can be combined with AI.

Conclusion

In conclusion, the deployment of technology is arguably the easy part of intelligent process automation projects. A more challenging element has always been to motivate the workforce to come forward with ideas and enthusiastically adopt change. Capgemini's Frictionless Enterprise approach, which leverages a digitally augmented workforce, addresses this challenge by adopting an aspirational approach to upskilling the workforce and removing the disconnect between training and deployment.

]]>
<![CDATA[2020 Lessons & Future Success Factors: Q&A with Wipro’s Nagendra P Bandaru]]>

 

The following is a discussion between Nagendra (Nag) P Bandaru, President, Wipro Limited – Global Business Lines-iCore, and John Willmott, NelsonHall CEO, covering lessons learned in 2020 and success factors for 2021. Nag is responsible for Infrastructure and Cloud Services, Digital Operations and Platforms, and Risk Services and Enterprise Cybersecurity – three key strategic business lines of Wipro Limited that help global clients accelerate their digital journeys. Together, these business lines generate revenues of USD 4.3 billion, with more than 100,000 employees across 50 global delivery locations. He is a member of Wipro’s Executive Board, the apex leadership forum of the company. Nag is based in Plano, Texas.

JW: Nagendra, what was the most important lesson that you learned in 2020 in the face of COVID-19 and its impact on business?

Nag: What 2020 taught us was that no plans, great systems, or great processes would work in an unprecedented environment. It was a year of being resilient – of creating hope when you have no hope. For us, that was the starting point.

In a world where everything is constrained, how do you keep your operations live? Such situations often lend themselves to the danger of overemphasizing technology, frameworks, systems, and processes while underestimating people. But I believe that it is most important to have a resilient team. During BCP implementation in response to government-imposed lockdowns across the globe, the team’s leadership and decisive action were the cornerstone of our success. Some of our employees were stuck at home without access to their secure office desktops. Our teams worked closely with local administrations to obtain the appropriate government permissions and ship laptops to employees’ homes – often crossing inter-state borders – through ground transport. Then, we had to set up the software and security remotely and change all the dongles to direct-to-home broadband. At that time, it was about keeping things simple and working out the basics. We saw simplicity being redefined by the pandemic. Once we got the basics right, the bigger stuff began to fall into place. The experience taught us to focus on the basics.

Our global teams ensured that we moved from BCP to business as usual within a few weeks of the lockdown, with 93% of our colleagues working securely from their homes. This proved that business resilience is about having the right talent. My leaders also emerged strong and worthy in those extremely difficult times. It is important to build future leaders whose core capability is the ability to anticipate and prevent risk. This ability to manage risk is the biggest leadership trait a company needs to be successful.

JW: The IT and BPS industry was remarkably resilient in 2020 and has a very promising outlook for 2021. What do you see as the main growth opportunities?

Nag: A good client experience comes from executing well, where the client is able to rely on the supplier’s strong delivery organization. The pandemic proved to be a challenging environment for several suppliers, and, as a result, clients were quick to change their vendors. The speed of decision-making has become much faster now, with little patience for lengthy procurement cycles. We sensed this as soon as the crisis struck. That is why, when one of our customers, a large bank in the U.S., wanted to launch a full-fledged digital solution to support thousands of small businesses and their employees under the fiscal stimulus program initiated by the U.S. government, we worked round-the-clock to deliver the application in just 48 hours. Similarly, we helped our client, a postal services company in the Middle East, launch a special medicine delivery service as part of the government’s COVID response for citizens. For another client, we processed over 2.7 million claims in the first 3 weeks of the lockdown with nearly 100% accuracy. Our obsession was to deliver with minimal business disruption and support clients during their toughest time.

We now see that much of the growth is coming from vendor consolidation resulting from good execution. It rests primarily on the ability to understand the client’s critical needs when they require a tremendous amount of help. And many companies require that help now, which is why contracting decisions are being executed very, very fast. In some cases, accounts that used to take a decade to get into are now doing business in a week. Most often, the questions being asked are, “Can you do it correctly, and can you ramp up several hundred people?” If the answers are affirmative, you will get the business. Having said that, if you are unable to deliver within 15 days, you stand to lose the contract. Thus, the industry growth is also stemming from good execution.

Here, I would also note that the IT and ITES industry is recession-proof: volumes grow when client business is growing, and when there is a recession, clients undertake widespread initiatives to cut their operational costs, and these initiatives are transformative in nature.

There are also short-term growth engines such as the CARES Act in the US, together with regulatory changes and consolidation opportunities. Operational excellence opportunities, where customers want cost savings to readjust their cost bases and maintain their bottom lines, also offer growth.

However, the larger growth lies in business transformation, which has been a global trend even prior to the pandemic. The pandemic has forced industries and companies to accelerate tech investments. There is no denying that technology is at the core of this digital transformation. It powers both the front end, to gain better access to markets, and the back end, to improve efficiency and optimize costs. Customers who have invested in higher levels of RPA and AI, or Intelligent Automation, are finding that they are better positioned to provide elasticity in their operations. Going forward, much of the growth for the industry will be led by next-generation technologies and services, including digital, cloud, data, engineering, and cybersecurity. Wipro has a massive role to play in helping businesses get onto the cloud, as we have been investing in these areas for the last few years and they are integral to our strategy.

JW: How are your delivery structures and processes changing as a result of COVID-19?

Nag: Security of data and processes has kept everyone awake. Add to that the several ransomware attacks on companies, and it’s a nightmare. The single biggest threat to companies right now is risk, which is why risk tolerance is very low. You will no longer be excused or pardoned if you do anything remotely risky. That is why we are constantly investing from a controls perspective, from a process perspective, and from a systems perspective. I’ve never experienced the importance of risk and security at this magnitude. Today, I have four teams ensuring four-eyes checks on everything we do. Yet, there is vulnerability, and we are constantly on the alert. That is the biggest thing that has changed in our lives.

Security is another reason why, I believe, the IT industry may not be able to embrace work-from-home permanently in the long term. Security will remain a vulnerability despite heavy investment in software and the tightening of controls and processes. Therefore, I expect the future of delivery to be a distributed network of well-secured office environments of the supplier. However, this could involve a shift to small offices, rather than operating from large delivery centers, leading to a significant increase in the number of delivery locations. The future will see a “work-from-anywhere” model instead of a pure “work-from-home.”

JW: COVID-19 is also credited with a major increase in uptake of digital transformation. To what extent is this impacting your staff development programs?

Nag: The past nine months have seen unprecedented changes in the industry. There has been a big shift in the adoption of technology and its key role in making businesses resilient in the post-COVID world. While several changes were related to technology, some have been structural and are here to stay. Rapid digital transformation has created demand for new skills and flexibility. While employee wellness and safety have been our prime focus during the pandemic, we have also empowered employees to develop new skills through our robust internal upskilling initiatives. This helps build fungibility and keeps us agile. For example, there has been a major spike in loan administration volumes across both consumer and institutional loans, while other sectors such as auto claims and healthcare claims have seen considerably reduced volumes. In such situations, the fungibility of people becomes an important factor for quick ramp-up and fast delivery.

However, it is equally important to have a culture of obsession with employee experience. How we manage talent is going to be the single biggest determining factor for future success. I believe that skills and talent are at the center of that success, and the nature of education will have to evolve as we go into a very integrated, interdisciplinary world. Earlier, computing infrastructure used to be dominated by a small number of players. Today, the fragmentation in the cloud world is so immense that the large legacy fixed cost has been spread across many companies. This means that individuals will now need experience in a much wider range of technologies and software companies.

The gap between employable talent and educated talent is massive. Training is one area where budgets need to increase, across both skill-based and competency-based training. Skills requirements may change over time, but it is very important to give people experiences that enable them to develop competencies. That is why training has been the core focus of employee initiatives at Wipro.

Automation as a theme has been there for years. We have to understand that automation is a continuous journey of converting manual processes to straight-through processing. This, too, creates opportunities for employees to move up to roles that leverage their skills.

Our customers have always valued our passion for innovation, work ethics, and culture. They expect us to be the best at execution while being a proactive force of change. In order to be passionately committed to delivering lasting value and be the trusted partner to our clients in their transformation journey, we must continuously evolve. And for that, we will continue to focus on attracting, developing, and retaining the best talent in our industry.

]]>
<![CDATA[Capgemini's CIAP 2.0 Assists Enterprises in Rapid & Cost-Effective Scaling of Automation Initiatives]]>

 

Capgemini has just launched version 2 of the Capgemini Intelligent Automation Platform (CIAP) to assist organizations in offering an enterprise-wide and AI-enabled approach to their automation initiatives across IT and business operations. In particular, CIAP offers:

  • Reduced TCO and increased resilience through use of shared third-party components
  • Support for AIOps and DevSecOps
  • A strong focus on problem elimination and functional health checks.

Reduced TCO & increased ability to scale through use of a common automation platform

A common problem with automation initiatives is their distributed nature across the enterprise, with multiple purchasing points and a diverse set of tools and governance, reducing overall RoI and the enterprise's ability to scale automation at speed.

Capgemini aims to address these issues through CIAP, a multi-tenanted cloud-based automation solution that can be used to deliver "automation on tap." It consists of an orchestration and governance platform and the UiPath intelligent automation platform. Each enterprise has a multi-tenanted orchestrator providing a framework for invoking APIs and client scripts together with dedicated bot libraries and a segregated instance of UiPath Studio. A central source of dashboards and analytics is built into the front-end command tower.

While UiPath is provided as an integral part of CIAP, CIAP also provides APIs to integrate other Intelligent Automation platforms with the CIAP orchestration platform, enabling enterprises to continue to optimize the value of their existing use cases.

The central orchestration feature within CIAP removes the need for a series of point solutions, allowing automations to be more end-to-end in scope and removing the need for integration by the client organization. For example, within CIAP, event monitoring can trigger ticket creation, which in turn can automatically trigger a remediation solution.
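
As a minimal sketch of such an orchestration chain, the snippet below registers a remediation handler that an incoming event triggers via an automatically created ticket. The registry, handler, and event shape are illustrative assumptions, not CIAP’s actual API.

    from typing import Callable

    remediations: dict[str, Callable[[dict], None]] = {}

    def remediation(event_type: str):
        """Register an automated remediation for an event type."""
        def register(fn):
            remediations[event_type] = fn
            return fn
        return register

    @remediation("disk_full")
    def purge_temp_files(ticket: dict) -> None:
        print(f"ticket {ticket['id']}: purging temp files on {ticket['host']}")

    def on_event(event: dict) -> None:
        ticket = {"id": 101, "host": event["host"], "type": event["type"]}
        print(f"event '{event['type']}' created ticket {ticket['id']}")
        handler = remediations.get(event["type"])
        if handler:
            handler(ticket)  # automated fix, no point-solution glue code
        else:
            print("no automation available; routing to resolver group")

    on_event({"type": "disk_full", "host": "app-server-7"})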

Another benefit of this shared component approach is reducing TCO by improved sharing of licenses. The client no longer has to duplicate tool purchasing and dedicate components to individual automations; the platform and its toolset can be shared across each of infrastructure, applications, and business services departments within the enterprise.

CIAP is offered on a fixed-price subscription-based model based on "typical" usage levels, with additional charges only applicable where client volumes necessitate additional third-party execution licenses or storage beyond those already incorporated in the package.

Support for AIOps & DevSecOps

CIAP began life focused on application services, and the platform provides support for AIOps and DevSecOps, not just business services.

In particular, CIAP incorporates AIOps, using the client's application infrastructure logs for reactive and predictive resolutions. In terms of reactive resolutions, AIOps can identify the dependent infrastructure components and applications, identify the root cause, and apply any automation available.

CIAP also ingests logs and alerts and uses algorithms to correlate them, so that the resolver group only needs to address a smaller number of independent scenarios rather than each alert individually. The platform can also incorporate the enterprise's known error databases so that if an automated resolution does not exist, the platform can still recommend the most appropriate knowledge objects for use in resolution.
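
A minimal sketch of the correlation idea, assuming that alerts arriving close together in time may share a root cause. Real platforms also correlate on topology and learned patterns, so the coarse time-bucket grouping below is purely illustrative.

    from itertools import groupby

    alerts = [  # already ordered by timestamp (seconds)
        {"ts": 100, "component": "db-1", "msg": "high latency"},
        {"ts": 102, "component": "db-1", "msg": "connection pool exhausted"},
        {"ts": 104, "component": "api-gw", "msg": "5xx spike"},
        {"ts": 500, "component": "db-1", "msg": "disk warning"},
    ]

    WINDOW = 60  # alerts within the same window may share a root cause

    def scenario_key(alert: dict) -> int:
        return alert["ts"] // WINDOW  # coarse time bucket

    scenarios = [list(g) for _, g in groupby(alerts, key=scenario_key)]
    print(f"{len(alerts)} alerts reduced to {len(scenarios)} scenarios")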

Future enhancements include increased emphasis on proactive capacity planning, including proactive simulation of the impact of change in an estate and enhancing the platform's ability to predict a greater range of possible incidents in advance. Capgemini is also enhancing the range of development enablers within the platform to establish CIAP as a DevSecOps platform, supporting the life cycle from design capture through unit and regression testing, all the way to release within the platform, initially starting with the Java and .NET stacks.

A strong focus on problem elimination & functional health checks

Capgemini perceives that repetitive task automation is now well understood by organizations, and the emphasis is increasingly on using AI-based solutions to analyze data patterns and then trigger appropriate actions.

Accordingly, to extend the scope of automation beyond RPA, CIAP provides built-in problem management capability, with the platform using machine learning to analyze historical tickets to identify root causes and recurring problems and, in many cases, initiate remediation automatically. CIAP then aims to reduce the level of manual remediation on an ongoing basis by recommending emerging automation opportunities.

In addition to bots addressing incident and problem management, the platform also places a major emphasis within its bot store on sector-specific bots providing functional health checks for sectors including energy & utilities, manufacturing, financial services, telecoms, life sciences, and retail & CPG. One example in retail is where prices are copied from a central system to store PoS systems daily. However, unreported errors during this process, such as network downtime, can result in some items remaining incorrectly priced in a store PoS system. In response, Capgemini has developed a bot that compares the pricing between upstream and downstream systems at the end of each batch pricing update, alerting business users and triggering remediation where discrepancies are identified. Finally, the bot checks that remediation was successful and updates the incident management tool to close the ticket.
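
A minimal sketch of that pricing health check: compare upstream prices against a store PoS after the batch update, alert on discrepancies, remediate, and re-check before closing the ticket. The data structures are illustrative.

    central = {"SKU-1": 4.99, "SKU-2": 12.50, "SKU-3": 0.99}    # upstream
    store_pos = {"SKU-1": 4.99, "SKU-2": 11.99, "SKU-3": 0.99}  # downstream

    def find_discrepancies(upstream: dict, downstream: dict) -> dict:
        return {sku: (price, downstream.get(sku))
                for sku, price in upstream.items()
                if downstream.get(sku) != price}

    issues = find_discrepancies(central, store_pos)
    if issues:
        print(f"alerting business users: {issues}")
        # remediate by re-pushing the central price, then verify
        store_pos.update({sku: price for sku, (price, _) in issues.items()})
        assert not find_discrepancies(central, store_pos)
        print("remediation verified; closing incident ticket")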

Similarly, Capgemini has developed a validation script for the utilities sector, which identifies possible discrepancies in meter readings leading to revenue leakage and customer dissatisfaction. For the manufacturing sector, Capgemini has developed a bot that identifies orders that have gone on credit hold, and bots to assist manufacturers in shop floor capacity planning by analyzing equipment maintenance logs and manufacturing cycle times.

CIAP has ~200 bots currently built into the platform library.

A final advantage of using platforms such as CIAP, beyond their libraries and cost advantages, is operational resilience: they provide orchestrated mechanisms for plugging in the latest technologies in a controlled and cost-effective manner while unplugging or phasing out previous generations of technology, all of which further enhances time to value. This is increasingly important to enterprises as their automation estates grow to take on widespread and strategic operational roles.

]]>
<![CDATA[Automation Anywhere Launches AARI to Facilitate Bot Access to Employees]]>

 

NelsonHall recently attended Automation Anywhere's 2020 innovation day, where the company launched its Automation Anywhere Robotic Interface (AARI) digital assistant focused on making bot usage easier and more accessible to employees.

Automation Anywhere Robotic Interface

AARI “aims to elevate employees' workflows in the same manner as at-home digital assistants such as Alexa and Siri have enhanced their home life” and increase the adoption of RPA in the front and back office.

The AARI application allows users to:

  • Launch bots providing integrations to, for example, Salesforce, Google Sheets, and Microsoft Excel through a chat-based interface, in addition to the desktop application, mobile, and web interfaces. Automation Anywhere will also add voice support for accessing bots
  • Provide form-style entry into the bot, with the information then disseminated to the client’s business applications
  • Manage escalation scenarios.

Automation Anywhere expects CoEs to use AARI to create attended bots triggered using natural language conversations for workgroups or business users across front and back offices.

For example, in a contact center handling customer loans, once the CoE has established the logic behind loan terms and conditions, the workflow with AARI could be optimized to:

  • Collect the customer data from across platforms before the call, and present it in the contact center’s CRM platform of choice (e.g., Salesforce)
  • Provide forms during the call for the CX agent to populate with information from the conversation, which can then be used to populate the appropriate platforms, reducing the need for re-entry of information
  • Extract unstructured information from emailed PDFs using IQ Bot and run credit checks in the background
  • Suggest context-specific next-best actions incorporating business rules
  • On a natural language command from the CX agent to AARI, such as ‘send over the new loan terms to this customer,’ use the previously established logic to create a set of terms and conditions and email them to the customer.

Early adopter client examples include:

  • Colombian financial services firm Bancolombia using AARI to reduce in-branch wait times. The deployment of AARI was completed in one month, resulted in a $19m reduction in provision costs and a 59% reduction in response time, and delivered a 1,300% RoI in its first year
  • CX BPO firm TaskUs using AARI to improve employee experience, shorten training cycles, and improve agent performance for a San Antonio-based client, resulting in a 20-second reduction in AHT, a 3.4% improvement in CSAT, and a 2.7% improvement in call quality.

Interactions with AARI are created using standard drag-and-drop task items from the toolbox and can leverage Automation Anywhere's Discovery Bot and other features. AARI is priced at $35 per user per month.

How distinctive is the AARI concept?

  • UiPath’s Forms feature provides input form functionality similar to AARI’s, allowing users to design forms through which a user inputs data that is then disseminated across business applications; however, it does not allow users to launch bots through conversational input with a digital assistant
  • The NICE Employee Virtual Assistant (NEVA) acts as an automation finder to launch pre-existing processes and conversational AI-based scenarios, but lacks form-style entry
  • In the front-office space, the use of bots to integrate platforms to reduce data entry and swivel-chair activities is not new. Many CX services vendors have had this form of capability for some years, in addition to handling escalation scenarios as a hygiene factor. These platforms do not, however, offer the automation capabilities of an RPA implementation. The CX vendors also have roadmaps to include features such as chatbots that capture sensitive information and assess the customer's tone to provide answers tailored to their emotional response.

Automation Anywhere differs in bringing together the form data entry capabilities of the RPA providers and CX vendors, and the more niche ability to interact with bots through more conversational means.

]]>
<![CDATA[AntWorks Targets Breadth & Depth in Client Engagements, Partners & Curation Capabilities]]>

 

Last week, NelsonHall attended ANTENNA2020, AntWorks’ yearly analyst retreat. AntWorks has made considerable progress since its last analyst retreat, with estimated growth of ~260% in the three quarters ending January 2020, and it employed 604 personnel at the end of this period.

By geography, AntWorks’ most successful region remains APAC, closely followed by the Americas, with the company having an increasingly balanced presence across APAC, the Americas, and EMEA. By sector, AntWorks’ client base remains largely centered on BFSI and healthcare, which together account for ~70% of revenues.

The company’s success continues to be based on its ability to curate unstructured data, with all its clients using its Cognitive Machine Reading (CMR) platform and only 20% using its wider “RPA” functionality. Accordingly, AntWorks is continuing to strengthen its document curation functionality while starting to build point solutions and building depth into its partnerships and marketing.

Ongoing Strengthening of Document Curation Functionality

The company is aiming to “go deep” rather than “shallow and wide” with its customers, and cites the example of one client that started with a single unstructured document use case and has, over the past year, introduced an additional ten unstructured document use cases, resulting in revenues of $2.5m.

Accordingly, the company continues to strengthen its document curation capability, and recent CMR enhancements include signature verification, cursive handwriting, language extension, sentiment analysis, and hybrid processing. The signature verification functionality can be used to detect the presence of a signature in a document and verify it against signatures held centrally or on other documents and is particularly applicable for use in KYC and fraud avoidance where, for example, a signature on a passport or driving license can be matched with those on submitted applications.

This strategy of depth in document curation functionality resonated strongly with the clients speaking at the event. In one such case, it was the depth of the platform, allowing cursive handwriting and printed text to be analyzed together, that led a number of competitors to drop out early when tasked with building a POC that could extract cursive writing.

AntWorks also continues to extend the range of languages where it can curate documents; currently, 17 languages are supported. The company has changed the learning process for new languages to allow for quicker training on new languages, with support for Mandarin and Arabic available soon.

Hybrid processing enables multi-format documents containing, for example, text, cursive handwriting, and signatures to be processed in a single step.

Elsewhere, AntWorks has addressed a number of hygiene factors with QueenBOT, enhancing its business continuity management, auto-scaling, and security. Auto-scaling in QueenBOT allows bots to switch between processes if one process requires extra assistance to meet SLAs, effectively allowing bots to be “carpenters in the morning and electricians in the evening,” increasing both SLA adherence and bot utilization.

Another key hygiene factor addressed in the past year has been training material. AntWorks began 2019 with a thin training architecture, with just two FTEs supporting the rapidly expanding company; over the past year, the number of FTEs supporting training has grown to 25, supporting the creation of thousands of hours of training material. AntWorks also launched its internship program, starting in India, which added 43 FTEs in 2019. The ambition this year is to take the program global.

Announcement of Process Discovery, Email Agent & APaaS Offerings

Process discovery is an increasingly important element in intelligent automation, helping to remove the up-front cost involved in scaling use cases by identifying and mapping potential use cases.

AntWorks’ process discovery module enables organizations to both record the keystrokes taken by one or more users against multiple transactions or import keystroke data from third-party process discovery tools. From these recordings, it uses AI to identify the cycles of the process, i.e. the individual transactions, and presents the user with the details of the workflow, which can then be grouped into process steps for ease of use. The process discovery module can also be used to help identify the business rules of the process and assist in semi-automatic creation of the identified automations (aka AutoBOT).
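
A minimal sketch of one part of that step: splitting a recorded event stream into cycles (individual transactions) by detecting a recurring anchor event. Real process discovery applies ML across many recordings and users, so the heuristic and event names below are illustrative assumptions.

    events = [
        "open_crm", "search_customer", "copy_id", "paste_to_erp", "save",
        "open_crm", "search_customer", "copy_id", "paste_to_erp", "save",
        "open_crm", "search_customer", "copy_id", "email_exception",
    ]

    def split_cycles(events: list[str]) -> list[list[str]]:
        anchor = events[0]  # assume each transaction starts the same way
        cycles, current = [], []
        for e in events:
            if e == anchor and current:
                cycles.append(current)  # a new transaction begins
                current = []
            current.append(e)
        cycles.append(current)
        return cycles

    for i, cycle in enumerate(split_cycles(events), 1):
        print(f"cycle {i}: {cycle}")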

The process discovery module aims to offer ease of use compared to competitive products and can, besides identifying transaction steps, be used to assist organizations in calculating the RoI on business cases and in estimating the proportions of processes that can be automated, though AntWorks is understandably reluctant to underwrite these estimates.

One of the challenges for AntWorks over the coming year is to develop standardized use cases/point solutions based on its technology components, initially in horizontal form, and ultimately verticalized. Two of these just announced are Email Agent and Accounts Payable as-a-Service (APaaS).

Email Agent is a natural progression for AntWorks given its differentiation in curating unstructured documents; it is built on components from the ANTstein full stack and packaged for ease of consumption. It is a point solution designed solely to automate email traffic and encompasses ML-based email classification, sentiment analysis to support email prioritization, and extraction of actionable data. Email Agent can also respond contextually via templated static or dynamic content. AntWorks estimates that 40-50 emails are sufficient to train each use case, such as HR-related email.
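
A minimal sketch of the pipeline shape: classify, prioritize on sentiment, and reply from a template. The keyword rules below are illustrative stand-ins for the ML models AntWorks trains on those 40-50 emails per use case.

    TEMPLATES = {
        "leave_request": "Your leave request has been logged as {ref}.",
        "payslip_query": "Your payslip query {ref} is with HR operations.",
    }

    def classify(body: str) -> str:
        # Stand-in for ML-based email classification
        return "leave_request" if "leave" in body.lower() else "payslip_query"

    def priority(body: str) -> str:
        # Stand-in for sentiment analysis used to prioritize emails
        negative = ("urgent", "frustrated", "complaint")
        return "high" if any(w in body.lower() for w in negative) else "normal"

    def handle(body: str, ref: str) -> dict:
        category = classify(body)
        return {
            "category": category,
            "priority": priority(body),
            "reply": TEMPLATES[category].format(ref=ref),
        }

    print(handle("I am frustrated that my leave was not approved", "HR-7421"))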

The next step in the development of Email Agent is the production of verticalized solutions, training the model on specific verticals to understand the front-office relationships that organizations (such as those in the travel industry) have with their clients.

APaaS is a point solution consisting of a pre-trained configuration of CMR that extracts relevant information from invoices, which can then be passed via API into accounting systems such as QuickBooks. Through these point solutions offered on the cloud, AntWorks hopes to open up the potential of the SME market.
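
A minimal sketch of the hand-off from extraction to accounting, posting extracted invoice fields to an accounting system over HTTP. The endpoint, payload shape, and field names are hypothetical placeholders, not QuickBooks’ actual API.

    import json
    import urllib.request

    def post_invoice(fields: dict, endpoint: str) -> None:
        req = urllib.request.Request(
            endpoint,
            data=json.dumps(fields).encode(),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:  # network call; may raise
            print("accounting system responded:", resp.status)

    extracted = {  # illustrative output of the pre-trained CMR extraction
        "vendor": "Acme Supplies", "invoice_no": "INV-1009",
        "date": "2020-02-14", "total": 1250.00,
    }
    print(json.dumps(extracted, indent=2))
    # post_invoice(extracted, "https://accounting.example.com/api/invoices")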

Focusing on Quality of Partnerships, Not Quantity

Movement on AntWorks’ partner ecosystem (now ~66 partners) has been slower than expected, with only a handful of partners added since last year’s ANTENNA event, despite its expansion being a priority. Instead, AntWorks has been ensuring that the partnerships it does sign and maintain are deep and constructive. Examples of these deep partnerships include Bizhub and Accenture, two recently added partners that are helping train CMR in Korean and Thai respectively, in exchange for some timed exclusivity in those countries.

AntWorks is also partnering with SBI Group to penetrate the Southeast Asia marketplace, with SBI assisting AntWorks in implementing the ability to carry out data extraction in Japanese. Elsewhere, AntWorks has partnered with the SEED Group, based in Dubai and chaired by Sheikh Saeed Bin Ahmed Al Maktoum, to access the MENA (Middle East & North Africa) region.

Hugo Walkinshaw was recently brought in to lead the partnership ecosystem, and he has his work cut out for him: CEO Ash Mehra targets a ratio of direct sales to sales through partners of between 60:40 and 50:50, an ambitious target from the current 90:10 ratio. The aim is to achieve this through the current strategy of working very closely with partners, signing exclusive partnerships where appropriate, and targeting less mature geographies and emerging use cases, such as IoT, where AntWorks can establish a major presence.

In the coming year, expect AntWorks to add more deep partnerships focused on specific geographic presence in less mature markets and targeted verticals, and possibly with technology players to support future plans for running bots on embedded devices such as ships.

Continuing to Ramp Up Marketing Investment

AntWorks was relatively unknown 18 months ago but has made a major investment in marketing since then. AntWorks attended ~50 major events in 2019, and possibly 90 events in total counting all minor events. However, AntWorks’ approach to events is arguably even more important than the number attended, with the company keen to establish a major presence at each event it attends. AntWorks does not wish to be merely another small booth in the crowd, instead opting for larger spaces in which it can run demos to support interest from clients and partners.

This appears to have had the desired impact. Overall, AntWorks states that in the past year it has gone from being invited to RFIs/RFPs in 20% of cases to 80% and that it intends to continue to ramp up its marketing budget.

A series B round of funding, currently underway, is targeted at expanding its marketing investments as well as its platform capabilities. Should AntWorks utilize this second round of funding as effectively as its first round with SBI Investments two years ago, we expect it to act as a springboard for exponential growth and for these deep relationships, and for the company to continue to lead in middle- and back-office intelligent automation use cases with high volumes of complex or hybrid unstructured documents.

]]>
<![CDATA[UiPath: Forging Connections Between Business Users & Automation]]>

 

‘Reboot work’ was the slogan for UiPath’s recent Forward III partner event, a reference to rethinking the way we work. UiPath’s vision is to elevate employees above repetitive and tedious tasks to a world of creative, fulfilling work. The company’s vision is driven by an automation-first mindset, along with the concept of a bot for everyone and human-automation collaboration.

During the event, which attracted ~3K attendees, UiPath referenced ~50 examples of clients at scale, and pointed to a sales pipeline of more than $100m.

Previously, UiPath’s automation process had three phases: Build, Manage, and Run, using Studio, Orchestrator, and Attended and Unattended bots respectively. Its new products extend this process to six phases: Plan, Build, Manage, Run, Engage, and Measure. In this blog, I look at the six phases of the UiPath automation process and at the key automation products at each stage, including new and enhanced products announced at the event.

Plan phase (with Explorer Enterprise, Explorer Expert, ProcessGold, and Connected Enterprise)

By introducing the product lines Explorer and Connected Enterprise, UiPath aims to allow RPA developers to have a greater understanding of the processes to be automated when planning RPA development.

Explorer consists of three components: Explorer Enterprise, Explorer Expert, and ProcessGold. It provides new process mapping and mining functionality building on two UiPath acquisitions: the previously announced SnapShot, which now comes under the Explorer Enterprise brand, and the newly announced ProcessGold, whose existing clients include Porsche and EY. Both products construct visual process maps in data-driven ways: Explorer Enterprise (SnapShot) by observing the steps performed by a user for the process, and ProcessGold by mining transaction logs from various systems.

Explorer Enterprise performs task mining, with an agent sitting in the background of a user machine (or set of users’ machines) for 1-2 weeks. Explorer then collects details of the user activities, the effort required, the frequency of the activity, etc.

ProcessGold, on the other hand, monitors transaction logs and, following batch updates and 2-3 hours of construction, builds a process flow diagram. These workflow diagrams show the major activities of the process and the time/effort required for each step, and can be expanded to the individual task level. Additionally, at the activity level, the user has access to activity and edge sliders: the activity slider expands the detail of the activities, and the edge slider expands the number of paths that the logged users take, which can identify users straying from a golden path.
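
A minimal sketch of the core idea behind mining a process map from logs: group events by case, then count the directly-follows transitions between activities. Real tools add timings, variants, and the sliders described above; the log and activity names below are illustrative.

    from collections import Counter, defaultdict

    log = [  # (case_id, timestamp, activity) from system transaction logs
        (1, 1, "receive_order"), (1, 2, "check_credit"), (1, 3, "ship"),
        (2, 1, "receive_order"), (2, 2, "check_credit"),
        (2, 3, "manual_review"), (2, 4, "ship"),
    ]

    cases = defaultdict(list)
    for case_id, ts, activity in sorted(log):
        cases[case_id].append(activity)

    edges = Counter()
    for trace in cases.values():
        edges.update(zip(trace, trace[1:]))  # directly-follows pairs

    for (a, b), n in edges.items():
        print(f"{a} -> {b}: {n} case(s)")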

Administrators can then use the data from Explorer Enterprise and/or ProcessGold in Explorer Expert. Explorer Expert allows the admin users to enter deeper organizational insights, and either record a process to build or manually create a golden path workflow. These workflows act as a blueprint to build bots and can be exported to Word documents which can then be used by bot creators.

Connected Enterprise enables an organization to crowdsource ideas for which processes to automate, and aims to simplify the automation and decision-making pipelines for CoEs.

Automation ideas submitted to Connected Enterprise are accompanied by process information from the submitter in the form of nine standard questions, such as how rule-based the process is, how likely it is to change, and who owns it. This information is crunched to produce automation potential and ease of implementation scores to help decide the priority of the automation idea (a minimal scoring sketch follows below). These ideas are then curated by admins, who can ask the end-user for more information, including an upload of ProcessGold files.
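
A minimal sketch of turning questionnaire answers into those two scores. The questions, weights, and formulas are illustrative assumptions; Connected Enterprise’s actual scoring is internal to the product.

    answers = {  # 1 (low) to 5 (high) answers to standard questions
        "rule_based": 5,
        "input_digital": 4,
        "process_stability": 3,   # how unlikely the process is to change
        "volume": 4,
        "systems_involved": 2,    # fewer systems -> easier to automate
    }

    automation_potential = (answers["rule_based"] + answers["input_digital"]
                            + answers["volume"]) / 15
    ease_of_implementation = (answers["process_stability"]
                              + (6 - answers["systems_involved"])) / 10

    print(f"potential={automation_potential:.0%}, "
          f"ease={ease_of_implementation:.0%}")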

The additions of Explorer and Connected Enterprise allow developers to gain deeper insights into the processes to be automated, and business users to connect with RPA development.

Build phase (with enhanced Studio, plus new StudioX & StudioT)

New components to the build phase include StudioX and StudioT along with a number of enhancements to the existing Studio bot builder.

StudioX is a simplified version of the Studio component targeted at citizen developers and regular business users, which UiPath referred to as the ‘Excel power user’ level, for creating simpler bots as part of a push for citizen developers and a bot for every person.

StudioX simplifies bot development by removing the need for variables and by reducing the number of tasks that can be selected. Bots produced with StudioX can be opened in Studio; however, the reverse may not necessarily be the case, depending on the components used in Studio.

The build-a-bot demo session for StudioX focused on using Excel to copy data in and out of HR and finance systems and extracting and renaming files from an Outlook inbox to a folder. Using StudioX in the build-a-bot session was definitely an improvement over Studio for the creation of these simple bots.

StudioT, which is in beta and set for release in Q1 2020, will act as a version of Studio focused entirely on test automation. NelsonHall’s software testing research, including software testing automation, can be found here.

Further key characteristics of the existing build components include:

  • Long-running workflows, which can suspend a process, send a query to a human while freeing up the bot, and continue the bot once the human has provided input
  • Cloud, which has a 1-minute signup for the Community version of the aaS platform and (as of September 2019) has 240k users, up from 167k in June 2019
  • Queue triggers, which can automatically take action when items are added to the queue
  • More advanced debugging with breakpoint and watch panels
  • Taxonomy management
  • Validation stations.

With the introduction of StudioX, UiPath aims to democratize RPA development to the business users, at least in simple cases; and with long-running workflows, human-bot collaboration no longer requires bots to sit idle, hogging resources while waiting for responses.

Manage phase (with AI Fabric)

The Manage phase now allows users to manage machine learning (ML) models using AI Fabric, an add-on to Studio. It allows users to more easily select ML models, including models created outside of UiPath, and integrate them into a bot. AI Fabric, which was announced in April 2019, has now entered private preview.

Run phase (with enhancements to bots with native integrations)

Improvements to the run components leverage changes across the portfolio of Plan, Build, Manage, Run, Engage, and Measure, in particular for attended bots with Apps (see below). Other new features include:

  • Expanding the number of native integrations, for which UiPath and its partners are building hundreds of connectors to business applications such as Salesforce and Google to provide functionality including launching bots from the business application. Newer native applications are available via the UiPath Go! Storefront
  • A new tray will feature in the next release.

Engage phase (allowing users direct connection to bots with Apps)

Apps act as a direct connection for users to interact with attended bots through the use of forms, tasks, and chatbots. In Studio, developers can add a form with the new form designer to ask for inputs directly from the user. For example, combined with an OCR confidence score, a bot could trigger a form to be filled in should the OCR confidence be substandard due to a low-quality image.

Bots that encounter a need for human intervention through Apps will automatically suspend, add a task to the centralized inbox, and move on to running another job. When a human has completed the required interaction, the job is flagged to be resumed by a bot.
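
A minimal sketch of this confidence-gated, human-in-the-loop pattern: low OCR confidence suspends the job, raises a task in a central inbox, and the job is resumed once a person completes the form. The threshold and all names are illustrative assumptions.

    import queue

    human_inbox: "queue.Queue[dict]" = queue.Queue()
    OCR_THRESHOLD = 0.85

    def process_document(doc_id: str, ocr_text: str, confidence: float):
        if confidence < OCR_THRESHOLD:
            task = {"doc": doc_id, "draft": ocr_text, "status": "pending"}
            human_inbox.put(task)  # bot suspends and takes other work
            print(f"{doc_id}: suspended, task sent to human inbox")
            return task
        print(f"{doc_id}: processed straight through")

    task = process_document("inv-42", "T0TAL: 1O0.00", confidence=0.61)

    # Later, a person corrects the form and the job is flagged to resume
    task["draft"], task["status"] = "TOTAL: 100.00", "resolved"
    print(f"{task['doc']}: resumed with corrected text '{task['draft']}'")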

With the addition of Apps, the development required to capture inputs from business users is minimized, allowing for a deeper human-bot connection, reduced development timelines, and progress toward the goal of ‘a bot for every person’.

Measure phase (now with Insights to measure bot performance)

Insights expands UiPath’s reporting capabilities. Specifically, Insights features customizable dashboarding facilities for process and bot metrics. Insights also features the ability to send pulses, i.e. notifications, to users on metrics, such as if an SLA falls below a threshold. Dashboards can be filtered on processes and bots and can be shared through a URL or as a manually sent or scheduled PDF update.
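
A minimal sketch of a pulse: a tracked metric is checked against a threshold, and subscribers are notified on a breach. The metric name and notification channel are illustrative.

    def check_pulse(metric: str, value: float, threshold: float) -> None:
        if value < threshold:  # breach: notify subscribed users
            print(f"PULSE: {metric} at {value:.0%} is below {threshold:.0%}")

    check_pulse("invoice SLA adherence", value=0.91, threshold=0.95)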

What does this mean for the future of UiPath?

While UiPath and its competitors have long-standing partnerships with the likes of Celonis for process mining, the addition of native process mining through the acquisitions of SnapShot and ProcessGold, in addition to the expanded reporting capabilities, position UiPath as more of an end-to-end RPA provider.

With ProcessGold, NelsonHall believes that UiPath will continue the development of Explorer, which could lead to a nirvana state in which a client deploys ProcessGold, ProcessGold maps the processes and identifies areas that are ideal for automation, and Explorer Expert helps the bot creator to design this process by linking directly with Studio. While NelsonHall has had conversations with niche process mining and automation providers that are focusing on developing bots through a combination of transaction logs and recording users, UiPath is currently the best positioned of the big 3 intelligent automation platform providers to invest in this space.

StudioX is a big step towards enabling citizen developers. During our build-a-bot session, it was clear that the simplified version of the platform is more user-friendly, resulting in the NelsonHall team powering ahead of the instructor at points. However, we were somewhat concerned that, while StudioX opens up bot development to a wider set of personae, the slight disconnects between Studio and StudioX could frustrate users who learn StudioX and then want to leverage activities currently restricted to Studio (such as error handling). NelsonHall believes that the lines between Studio and StudioX will blur, with StudioX receiving simplified versions of functions currently restricted to Studio, enabling more bots to be passed between the two personae.

Conclusion

With the announcements at the Forward III event, it is clear that UiPath is enabling organizations to connect business users directly with automation, be that through citizen developers with StudioX, the Connected Enterprise Hub to forge stronger connections between business users and automation CoEs, Explorer to give the CoE a greater understanding of the processes, or Apps to provide direct access to the bot.

This multi-pronged approach to connecting the developer and automation to the business user will certainly reduce frustrations around bot development, and reduce the feeling among business users that automation is something thrust upon them rather than part of their organization's journey to a more efficient way of working.

]]>
<![CDATA[Genpact Acquires Rightpoint to Strengthen 'Experience' Capability]]>

 

Enterprise operations transformation requires three critically important capabilities:

  • Domain process expertise and the ability to identify new “digital” target operating models
  • Transformational technology capability, leveraging technologies such as cloud platforms and intelligent automation to elevate straight-through processing and self-service principles ahead of agent-based processing
  • Experience design and implementation, now highly important to optimize the experience across entire customer, employee, and partner populations.

Genpact has strong domain process expertise and, in recent years, has developed strong transformational technology capability but, despite its acquisition of TandemSeven, has historically possessed lower levels of capability in “experience” design and development.

However, TandemSeven’s experience capability was becoming highly important to Genpact even in core activities such as order management and collections, and Genpact recognized that “experience” was potentially a key differentiating factor for the company. Accordingly, having seen the benefits of integrating TandemSeven, Genpact increasingly looked to go up the value chain in experience capability by both enhancing and scaling its existing capabilities.

Rightpoint Judged to be Highly Complementary to TandemSeven

Rightpoint was then identified as a possible acquisition target by the Genpact M&A team, with Genpact judging that Rightpoint’s assets and capabilities were highly complementary to those of TandemSeven.

Rightpoint currently employs ~450 personnel and positions itself as a full-service digital agency offering multidisciplinary teams across strategy, design, content, engineering, and insights. The company was formed with the thesis that employee experience is paramount; it initially focused on employee experience, a key area for Genpact, and has developed an increasing emphasis on consumer experience in recent years.

Genpact perceives that Rightpoint can make a significant contribution to helping organizations “define the creative, define the interactive, and hence define a higher experience.” The company’s clients include Aon, Sanofi, M Health, Grant Thornton, Flywheel, and Walgreens. For example, Rightpoint has defined and designed the entire employee experience for Grant Thornton, where the company developed an employee information sharing and knowledge management platform. In addition, Rightpoint has assisted a large pharmaceutical company in creating a patient engagement application to encourage patients to monitor their insulin and sugar levels.

In addition to a complementary skillset, Rightpoint is also complementary to TandemSeven in industry presence. TandemSeven has a strong focus on financial services, with Rightpoint having a significant presence in healthcare and clients in consumer goods, auto, and insurance.

Maximizing the Synergies Between Genpact & Rightpoint

Genpact expects to grow both Rightpoint’s and its own revenues by exploiting the synergies between the two organizations.

One initial synergy being targeted by Genpact is providing end-to-end and “closed loop” services to its clients. Rightpoint employs both creative and technology personnel, with its creative personnel typically having a blend of technology capability allowing them to go from MVP to first product to roll-out. Rightpoint is a Microsoft Customer Engagement Alliance National Solution Provider, a Sitecore Platinum Partner, a certified Google Developer Agency, and also has partnerships with Episerver and Salesforce.

However, the company lacks the process and domain expertise that Genpact can bring to improve process target models and process controls & management. For example, for the pharmaceutical company mentioned above, Rightpoint could develop the app, while Genpact could run the app and provide the analytics to improve patient engagement, with Rightpoint then modifying the app accordingly.

Secondly, Genpact will support Rightpoint’s growth by bringing financial muscle to Rightpoint, facilitating:

  • An ability to invest in new technology capability in platforms such as Shopify and Adobe
  • The financial means to be able to spend a significant amount of time doing discovery work with clients and prospects, and hence targeting larger-scale assignments.

However, Genpact is being careful not to overstretch Rightpoint. The company intends to be highly disciplined in introducing Rightpoint to its accounts, initially targeting just those champion accounts where Rightpoint will enable Genpact to create a significant level of differentiation.

Genpact also perceives that it can learn from Rightpoint's delivery methodologies. Rightpoint has a strong methodology for driving agile delivery and makes extensive use of gig workers (~10-15% of its workforce), both areas where Genpact perceives it can apply Rightpoint practice to its wider business.

Rightpoint Will Retain its Identity, Culture & Management

The plan is to integrate Rightpoint and TandemSeven, with a porting of expertise and resources between the two companies, and with Ross Freedman heading the expanded Rightpoint capability and reporting to Genpact's transformation services lead.

In terms of the current organization, Rightpoint has an experience practice and a digital operations practice, including an offshore delivery center in Jaipur and technology practice groups. However, while the practices are national, most of Rightpoint's client delivery work is carried out in regional centers to give strong client proximity. The company's HQ is in Chicago, with regional centers in Atlanta, Boston, Dallas, Denver, Detroit, Los Angeles, New York, and Oakland.

In due course, Genpact will likely further restructure some of the delivery, with a greater proportion of non-client-facing activity being moved into offshore CoEs.

]]>
<![CDATA[Automation Anywhere’s Enterprise A2019, Simpler to Use, Quicker to Scale]]> ‘Anything Else is Legacy’ was the messaging presented at Automation Anywhere’s Enterprise A2019 launch, hosted in New York.

The event, the first under new CMO Riadh Dridi, showcased improvements in the new version of the Automation Anywhere platform around:

  • Experience – the most immediate change is in the UI. While prior versions utilized code, workflow, and mixed code/workflow views, the new version features a completely revamped workflow view that simplifies the UX with a low-code environment
  • Cloud – delivery now utilizes a completely web-based interface, allowing users to sign in and create bots in minutes with zero required installation. This speed of development was demonstrated live on stage, with SVP of products Abhijit Kakhandiki successfully racing to create a simple bot against the arrival of an Uber ordered by CEO Mihir Shukla. The bot used in this example was part of Automation Anywhere's RPA-aaS offering, hosted on Azure leveraging its partnership with Microsoft. Automation Anywhere was also keen to point out the ability to use the platform on-premise or in a private cloud, as deployed at JP Morgan Chase, the client speaking at the event
  • Ecosystem – Automation Anywhere highlighted its strong and growing ecosystem. With Microsoft, for example, the partnership has been operating for over a year and has so far featured the ability to embed Microsoft's AI tools into bots, plus the above-mentioned Azure partnership. The event featured a demonstration of the integration of Automation Anywhere into Office: a user was able to select and use bots from within Excel, as a single joined experience
  • Intelligent Automation – in addition to leveraging the ecosystem for its drag-and-drop third-party AI components, another improvement in A2019 is the integration of the capabilities gained through the Klevops acquisition earlier this year to improve assisted automation, providing greater bot-human collaboration across teams and workflows

The majority of these enhancements are already analyzed in NelsonHall’s profile of Automation Anywhere’s capabilities as part of the Intelligent Automation Platform NEAT assessment.

Using the above enhancements, Automation Anywhere estimates that whereas clients previously required 3-6 months to POC and a further 6-24 months to scale, it now takes 1-4 months to POC and 4-12 months to scale.

Absent from the event were enhancements to bot governance procedures, which become vitally important as access to bot building increases, and to the Bot Store, for which curation could still be an issue.

While the messaging of the event was ‘Anything Else is Legacy’, there are some points at which the announcement looks unfinished: the Office integration currently extends only to Excel, with the rest of the suite to follow, and the Community version of Automation Anywhere, which is how a large proportion of users dip their toes in the water of automation, is set to be updated to match A2019 later in Q4 2019. Likewise, while the improved workflow view is cleaner and easier to use than those of competitors, leading to quicker bot development, competitor platforms more easily handle complex, branching operations. Therefore, while A2019 can be ideal for organizations looking to have citizen developers build simple bots, organizations looking to automate more complex workflows should include competing platforms in their shortlists.

NelsonHall's profile on the Automation Anywhere platform can be found here.

The recent NEAT evaluation of Intelligent Automation Platforms can be found here.

]]>
<![CDATA[NelsonHall Launches Industry-First Intelligent Automation Platform Evaluation]]>


NelsonHall has just launched an industry-first evaluation of Intelligent Automation (IA) platforms, including platforms from Antworks, Automation Anywhere, Blue Prism, Datamatics, IPsoft, Jacada, Kofax, Kryon, Redwood, Softomotive, and UiPath.

As RPA and artificial intelligence converge to address more sophisticated use cases, we at NelsonHall feel it is now time for an evaluation of IA platforms on an end-to-end basis and based on the use cases to which IA platforms will typically be applied. Accordingly, NelsonHall has evaluated IA platforms against five use cases:

  • Ability for Business Process Owners to Develop Automations
  • Bot/Human Co-Working SSC Capability
  • Ease of IA Adoption & Scaling
  • End-to-End IA Capability
  • Overall.

Ability for Business Process Owners to Develop Automations – as organizations move to a ‘bot for every worker’, platforms must support business process owners in developing automations, rather than only select individuals within an automation CoE. Capabilities that support business process owners in developing an automation include a strong bot development canvas, a well-populated app/bot store, and process discovery functionality, all in support of speed of implementation.

Bot/Human Co-Working SSC Capability – in addition to traditional unassisted back-office automation and assisted individual automations, bots are increasingly required to provide end-to-end support for large-scale SSC and contact center automation. This increasingly requires bot/human rather than human/bot co-working, with the bot taking the lead in processing SSC transactions, queries and requests. The key capabilities here include conversational intelligence, ability to handle unidentified exceptions, and seamless integration of RPA and machine learning.

End-to-End IA Capability – the ability for a platform to support an automation spanning an end-to-end process, leveraging ML and artificial intelligence, either through native technologies or through partnerships. While many IA implementations remain highly RPA-centric, it is critical for organizations to begin to leverage a wider range of IA technologies if they are to address unstructured document processing and begin to incorporate self-learning in support of exception handling. Key capabilities here include computer vision/NLP, ability to handle unidentified exceptions, and seamless integration of RPA and machine learning in support of accurate document/data capture, reduced error rates, and improved transparency & auditability of operations.

Ease of IA Adoption & Scaling – the ability for organizations to roll out automations at scale. Key criteria here include the ability to leverage the cloud delivery of the IA platform and the strength of the bot orchestration/management platform.

Overall – a composite perspective of the strength of the IA platforms across capabilities, delivery options, and the benefits provided to clients.

No single platform is the most appropriate across all these use cases, and the pattern of capability varies considerably by use case. And this area is ill-understood, even by the vendors operating in this market, with companies that NelsonHall has identified as leaders unknown even to some of their peers. However, the NelsonHall Evaluation & Assessment Tool (NEAT) for IA platforms enables organizations to see the relative strengths and capabilities of platform vendors for all the use cases described above in a series of quadrant charts.

If you are a buy-side organization, you can view these charts, and even generate your own charts based on criteria that are important to you, FREE-OF-CHARGE at NelsonHall Intelligent Automation Platform evaluation.

The full project, including comprehensive profiles of each vendor and platform, is also available from NelsonHall by contacting either Guy Saunders or Simon Rodd.

]]>
<![CDATA[Democratizing RPA through the Connected Entrepreneur Enterprise]]>


Following on from the Blue Prism World Conference in London (see separate blog), NelsonHall recently attended the Blue Prism World conference in Orlando. Building on the significant theme around positioning the ‘Connected Entrepreneur Enterprise’, the vendor provided further details on how this links to the ‘democratization’ of RPA through organizations.

In the past, Blue Prism has seen automation projects stall when led from the bottom up (due to an inability to scale and to apply strong governance or best practices from IT), or from the top down (which has issues with buy-in and speed of deployment). However, its Connected Entrepreneur Enterprise story aims to overcome these issues by decentralizing automation. So how is Blue Prism enabling this?

Connected Entrepreneur Enterprise

The Connected RPA components, namely Blue Prism’s connected-RPA platform, Blue Prism Digital Exchange, Blue Prism Skills, and Blue Prism Communities, all aim to facilitate this. In particular, Blue Prism Communities acts as a knowledge-sharing platform through which Blue Prism envisions clients accessing forums for help in building digital workers (software robots), sharing best practices, and (via its new connection into Stack Overflow) collaborating on digital worker development.

Blue Prism Skills lightens the knowledge requirements for users beginning digital worker development, with the ability to drag and drop AI components, such as any number of computer vision solutions, into processes.

Decipher, a document processing capability developed by Blue Prism’s R&D lab, features ML that can be integrated into digital workers, which in turn can have skills, such as language detection from Google, dropped into the process. The ability to drag and drop these skills continues the work of allowing the business users who know the process best to quickly and easily build AI into digital workers. Additionally, Decipher introduces human-in-the-loop capability into Blue Prism to assist in cases where the OCR lacks confidence in its result. The beta version of Decipher is set to launch this summer with a focus on invoice processing.

Decipher will also factor into the new cloud-based and mobile-enabled dashboard capabilities, whose notification area, in addition to providing SLA alerts, provides alerts when queues for Decipher’s human-in-the-loop feature are backing up.

Client example

An example of Blue Prism being used to democratize RPA is marquee client EY, Blue Prism’s fifth-largest client, which spoke during the conference about its automation journey. During the 4.5-year engagement, EY has deployed 2k digital workers, with 1.3k performing client work and 700 working internally on 500 processes. Through the deployment of the digital workforce, EY has saved 2 million man-hours.

In democratizing RPA, EY federated automation to the business while using a centralized governance model and IT pipeline. A benefit of having an IT pipeline was that process automation did not proceed in a stop-start fashion.

When surveying its employees, EY found that the employees who had been involved in the development of RPA had the highest engagement.

Likewise, market surveys performed by Blue Prism with a partner found that, in 87% of cases in the U.S., employees are willing to reskill to work alongside a digital workforce.

Summary

There is further work to be done in democratizing RPA as part of this Connected Entrepreneur Enterprise. Blue Prism is currently looking into upgrading the underlying architecture, surveying its partners with regard to UI changes, and moving aspects of the platform to the cloud, starting with the dashboarding capability. Also, while Blue Prism has its university partnerships, these are often not heavily marketed and compete with other RPA vendors offering the likes of community editions to encourage learning.

]]>
<![CDATA[IPsoft Looks to Reduce Time to Value While Increasing Return on AI]]>


NelsonHall recently attended the IPsoft Digital Workforce Summit in New York and its analyst events in NY and London. For organizations unfamiliar with IPsoft, the company has around 2,300 employees, approximately 70% of them based in the U.S. and 20% in Europe. Europe is responsible for approximately 30% of the IPsoft client base, with clients relatively evenly distributed over the six regions: U.K., Spain & Iberia, France, Benelux, Nordics, and Central Europe.

The company began life with the development of autonomics for ITSM in the form of IPcenter, and in 2014 launched the first version of its Amelia conversational agent. In 2018, the company launched 1Desk, effectively combining its cognitive and autonomic capabilities.

The events outlined IPsoft’s positioning and plans for the future, with the company:

  • Investing strongly in Amelia to enhance its contextual understanding and maintain its differentiation from “chatbots”
  • Launching “Co-pilot” to remove the currently strong demarcation between automated and agent interactions
  • Building use cases and a partner program to boost adoption and sales
  • Positioning 1Desk and its associated industry solutions as end-to-end intelligent automation solutions, and as key to both the industry and the future of IPsoft.

Enhancing Contextual Understanding to Maintain Amelia’s Differentiation from Chatbots

Amelia has often suffered from being seen at first glance as "just another chatbot". Nonetheless, IPsoft continues to position Amelia as “your digital companion for a better customer service” and to invest heavily to maintain Amelia’s lead in functionality as a cognitive agent. Here, IPsoft is looking to differentiate by stressing Amelia’s contextual awareness and ability to switch contexts within a conversation, thereby “offering the capability to have a natural conversation with an AI platform that really understands you.”

Amelia goes through six pathways in sequence within a conversation to understand each utterance, and the pathway with the highest probability wins (see the sketch after this list). The pathways are:

  • Intent model
  • Semantic FAQ
  • AIML
  • Social talk
  • Acknowledge
  • Don’t know.
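
To illustrate the arbitration described above, here is a minimal Python sketch; the scorer is a hypothetical stand-in, since IPsoft has not published the models behind each pathway:

  # Minimal sketch of the pathway arbitration described above. score() is a
  # hypothetical stand-in for the per-pathway models, which are not public.
  PATHWAYS = ["intent model", "semantic FAQ", "AIML", "social talk",
              "acknowledge", "don't know"]

  def score(pathway: str, utterance: str) -> float:
      # A real system would invoke a trained classifier or rule set per pathway;
      # a small floor score makes "don't know" the fallback when nothing matches.
      return 0.05 if pathway == "don't know" else 0.0

  def route(utterance: str) -> str:
      # Evaluate every pathway for the utterance; the highest probability wins.
      return max(PATHWAYS, key=lambda p: score(p, utterance))

  print(route("I forgot my VPN password"))  # -> "don't know" with these stub scorers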

The platform also separates “entities” from “intents”, capturing both using Natural Language Understanding. Both intent and entity recognition are specific to the language used, though IPsoft is now simplifying implementation further by making processes language-independent and removing the need for the client to implement channel-specific syntax.

A key element in supporting more natural conversations is the use of stochastic business process networks, which means that Amelia can identify the required information as it is provided by the user, rather than having to ask for and accept items of information in a particular sequence as would be the case in a traditional chatbot implementation.
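
The contrast with a fixed-sequence dialogue can be sketched roughly as follows; the entity extractor below is a stub standing in for Amelia's NLU, not IPsoft code:

  # Order-independent slot filling: take whichever required items the user
  # volunteers, in any order, and prompt only for what is still missing.
  REQUIRED = {"origin", "destination", "date"}

  def extract_entities(utterance: str) -> dict:
      # Stub standing in for NLU entity recognition.
      found = {}
      if "from NYC" in utterance:
          found["origin"] = "NYC"
      if "to Boston" in utterance:
          found["destination"] = "Boston"
      if "tomorrow" in utterance:
          found["date"] = "tomorrow"
      return found

  def handle_turn(state: dict, utterance: str) -> str:
      state.update(extract_entities(utterance))
      missing = REQUIRED - state.keys()
      if missing:
          return "Could you give me your " + " and ".join(sorted(missing)) + "?"
      return "All details captured - proceeding."

  state = {}
  print(handle_turn(state, "I need a flight to Boston tomorrow"))  # asks for origin only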

Context switching is also supported within a single conversation, with users able to switch between domains, e.g. from IT support to HR support and back again in a single conversation, subject to the rules on context switching defined by the organization.

Indeed, IPsoft has always had a strong academic and R&D focus and is currently further enhancing and differentiating Amelia through:

  • Leveraging ELMo with the aim of achieving intent accuracy of >95% while using only half of the data required in other Deep Neural Net models
  • Using NLG to support Elaborate Question Asking (EQA) and Clarifying Question & Answer (CQA) to enable Amelia to follow up dynamically without the need to build business rules.

The company is also looking to incorporate sentiment analysis within voice. While IPsoft regards basic speech-to-text and text-to-speech as commodity technologies, it is looking to capture sentiment from voice, differentiate through use of SLM/SRGS technology, and improve Amelia’s emotional intelligence by capturing aspects of mood and personality.

Launching Co-pilot to Remove the Demarcation Between Automated Handling and Agent Handling

Traditionally, interactions have either been handled by Amelia, or by an agent if Amelia failed to identify the intent or detected issues in the conversation. However, IPsoft is now looking to remove this strong demarcation between chats handled solely by Amelia and chats handled solely by agents (or handed off to them in their entirety). The company has just launched “Co-pilot”, positioned as a platform to allow hybrid levels of automation and collaboration between Amelia, agents, supervisors, and coaches. The platform is currently in beta mode with a major telco and a bank.

The idea is to train Amelia on everything that an agent does, to make hand-offs warmer and to increase Amelia’s ability to partially automate, and ultimately handle, edge cases rather than just pass these through to an agent in their original form. Amelia will learn by observing agent interactions when escalations occur and through reinforcement learning via annotations during chat.

When Amelia escalates to an agent using Co-pilot, it will no longer just pass conversation details but will now also offer suggested responses for the agent to select. These responses are automatically generated by crowdsourcing every utterance that every agent has created and then picking those that apply to the particular context, with digital coaches editing the language and content of the preferred responses as necessary.
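
As a rough illustration of this kind of context-based suggestion, the sketch below ranks historical agent responses by the similarity of the queries that produced them, using scikit-learn TF-IDF as a stand-in for whatever models IPsoft actually employs:

  # Suggest agent responses by matching the live query against crowdsourced
  # (query, response) pairs. TF-IDF similarity is a stand-in, not IPsoft's model.
  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.metrics.pairwise import cosine_similarity

  history = [  # illustrative utterances harvested from past agent chats
      ("my card was charged twice", "I can refund the duplicate charge right away."),
      ("reset my online banking password", "I have sent a secure reset link to your email."),
      ("I want to close my savings account", "I can start the closure after verifying your identity."),
  ]
  queries = [q for q, _ in history]
  vectorizer = TfidfVectorizer().fit(queries)

  def suggest(live_query: str, top_n: int = 2):
      sims = cosine_similarity(vectorizer.transform([live_query]),
                               vectorizer.transform(queries))[0]
      ranked = sorted(zip(sims, history), key=lambda pair: -pair[0])
      return [response for _, (_, response) in ranked[:top_n]]

  print(suggest("I was charged two times for one purchase"))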

In the short term, this assists the agent by providing context and potential responses to queries; in the longer term, as this process repeats over queries of the same type, Amelia learns the correct answers, and ultimately this becomes a new Amelia skill.

Co-pilot is still at an early stage with lots of developments to come and, during 2019, the Co-pilot functionality will be enhanced to recommend responses based on natural language similarity, enable modification of responses by the agent prior to sending, and enable agents to trigger partial automated conversations.

This increased co-working between humans and digital chat agents is key to the future of Amelia since it starts to position Amelia as an integral part of the future contact center journey rather than as a standalone automation tool.

Building Use Cases & Partner Program to Reduce Time to Value

Traditionally, Amelia has been a great cognitive chat technology, but a relatively heavy-duty technology seeking a use case rather than an easily implemented general-purpose tool like the majority of RPA products.

In response, IPsoft is treading the same path as the majority of automation vendors and is looking to encourage organizations (well, at least mid-sized organizations) to hire a “digital worker” rather than build their own. The company estimates that its digital marketplace “1Store” already contains 672 digital workers, which incorporate back-office automation in addition to the Amelia conversational AI interface. For example, for HR, 1Store offers “digital workers” with the following “skills”: absence manager, benefits manager, development manager, onboarding specialist, performance record manager, recruiting specialist, talent management specialist, time & attendance manager, travel & expense manager, and workforce manager.

At the same time, IPsoft is looking to increase the proportion of sales and service through channel partners. Product sales currently make up 56% of IPsoft revenue, with 44% from services. However, the company is looking to steer this ratio further towards product by targeting 60% per annum growth in product sales and by increasing the proportion of personnel in product-related positions (currently approx. two-thirds), partly through reskilling existing services personnel.

IPsoft has been late to implement its partner strategy relative to other automation software vendors, attributing this early caution in part to the complexity of early implementations of Amelia. Early partners for IPcenter included IBM and NTT DATA, who embedded IPsoft products directly within their own outsourcing services and were supported with “special release overlays” by IPsoft to avoid disruption during product and service upgrades. This type of embedded solution partnership is now increasingly likely to expand to the major CX services vendors as these contact center outsourcers look to assist their clients with their automation strategies.

So, while direct sales still outweigh partner sales, IPsoft is now recruiting a partner/channel sales team with a view to reversing this pattern over the next few years. IPsoft has now established a partner program targeting alliance and advisory partners (where early partners included major consultancies such as Deloitte and PwC), implementation, solution, OEM, and education partners.

1Desk-based End-to-End Automation is the Future for IPsoft

IPsoft has about 600 clients, including approx. 160 standalone Amelia clients, and about a dozen deployments of 1Desk. However, 1Desk is the fastest-growing part of the IPsoft business with 176 enterprises in the pipeline for 1Desk implementations, and IPsoft increasingly regards the various 1Desk solutions as its future.

IPsoft is positioning 1Desk by increasingly talking about ROAI (the return on AI) and suggesting that organizations can achieve 35% ROAI (rather than the current 6%) if they adopt integrated end-to-end automation and bypass intermediary systems such as ticketing systems.

Accordingly, IPsoft is now offering end-to-end intelligent automation capability by combining the Amelia cognitive agent with “an autonomic backbone” courtesy of IPsoft’s IPcenter heritage and with its own RPA technology (1RPA) to form 1Desk.

1Desk, in its initial form, is largely aimed at internal SSC functions including ITSM, HR, and F&A. However, over the next year, it will increasingly be tailored to provide solutions for specific industries. The intent is to enable about 70% of the solution to be implemented “out of the box”, with vanilla implementations taking weeks rather than many months and with completely new skills taking approx. three months to deploy.

The initial industry solution from IPsoft is 1Bank. As the name implies, 1Bank has been developed as a conversational banking agent for retail banking and contains preformed solutions/skills covering the account representative, e.g. for support with payments & bills; the mortgage processor; the credit card processor; and the personal banker, to answer questions about products, services, and accounts.

1Bank will be followed during 2019 by solutions for healthcare, telecoms, and travel.

]]>
<![CDATA[Blue Prism Offers A Lever for Culture Change to Mature Enterprises]]> Blue Prism adopted the theme “Connected RPA – Powering the Connected Entrepreneur Enterprise” at its recent Blue Prism World conferences, the key components of connected-RPA being the Blue Prism connected-RPA platform, Blue Prism Digital Exchange, Blue Prism Skills, and Blue Prism Communities:

[Figure: Components of Blue Prism's connected-RPA]

Blue Prism is positioning itself by offering mature companies the promise of closing the gap with digital disruptors, both technically and culturally. The cultural aspect is important, with Blue Prism technology positioned as a lever to help organizations attract and inspire their workforce, giving digitally-savvy entrepreneurial employees the technology to close the “digital entrepreneur gap” and also closing the gap between senior executives and the workforce.

Within this vision, the Blue Prism roadmap is based around helping organizations to:

  • Automate more – here, Blue Prism is introducing intelligent automation skills, ML-based process discovery, and DX
  • Automate better – with more expansive and scalable automations
  • Automate together – by learning from the mistakes and achievements of others.

Introducing intelligent document processing capability

When analyzing the interactions on its Digital Exchange (DX), Blue Prism unsurprisingly found that the single biggest use, accounting for 60% of the items downloaded from DX, related to unstructured document processing.

Accordingly, Blue Prism has just announced a beta intelligent document processing program, Decipher. Decipher is positioned as an easy on-ramp to document processing: a workflow that can be used to ingest & classify unstructured documents. It can be used “out-of-the-box” without the need to purchase additional licenses or products, and organizations can also incorporate their own document capture technologies, such as Abbyy, or document capture services companies within the Decipher framework.

Decipher will clean documents to ensure that they are ready for processing, apply machine learning to classify the documents, and then extract the data. Finally, it will apply a confidence score to the validity of the extracted data and pass it to a business user where necessary, incorporating human-in-the-loop assisted learning.
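
The flow can be pictured with a short, self-contained sketch; every function here is a hypothetical stand-in rather than a Blue Prism API, and the confidence threshold is an assumed, configurable value:

  # Minimal sketch of the Decipher-style flow described above (assumed logic).
  CONFIDENCE_THRESHOLD = 0.85  # assumed value; real thresholds would be configurable

  def clean(doc: str) -> str:
      return doc.strip()                       # stands in for deskew/denoise/etc.

  def classify(doc: str) -> str:
      return "invoice" if "invoice" in doc.lower() else "other"

  def extract(doc: str, doc_type: str):
      # A real extractor would apply ML per document type; we fake one field.
      fields = {"total": "1,250.00"} if doc_type == "invoice" else {}
      return fields, (0.9 if fields else 0.3)

  def human_review(doc: str, fields: dict) -> dict:
      print("Routing to a business user for validation:", doc[:40])
      return fields                            # corrections also feed assisted learning

  def process_document(raw_doc: str):
      doc = clean(raw_doc)
      doc_type = classify(doc)
      fields, confidence = extract(doc, doc_type)
      if confidence < CONFIDENCE_THRESHOLD:
          fields = human_review(doc, fields)   # human-in-the-loop step
      return doc_type, fields

  print(process_document("INVOICE #442 ... total due 1,250.00"))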

Accordingly, Decipher is viewed by Blue Prism as a first step in the increasingly important move beyond rule-based RPA to introduce machine learning-based human-in-the-loop capability. Not surprisingly, Blue Prism recognizes that, as machine learning becomes more important, people will need to be brought into the loop much more than at present to validate “low-confidence” decisions and to provide assisted learning to the underlying models.

Decipher is starting with invoice processing and will then expand to handle other document types.

Improving control of assets within Digital Exchange (DX)

The Digital Exchange (DX) is another vital component in Blue Prism’s vision of connected-RPA.

Enhancements planned for DX include making it easier for organizations to collaborate and share knowledge, and facilitating greater security and control of assets by enabling an organization to control the assets available to itself. Assets will be able to be marked as private, effectively providing an enterprise-specific version of the Blue Prism Digital Exchange. Within DX, there will also be a “skills” drag-and-drop toolbar so that users, and not just partners, will be able to publish skills.

Blue Prism, like Automation Anywhere, is also looking to bring an e-commerce flavor to its DX: developers will be able to create skills and then sell them. Initially, Blue Prism will build some artifacts itself. Others will be offered free of charge by partners in the short term, with a view to enabling partners to monetize their assets in the near term.

Re-aligning architecture & introducing AI-related skills

Blue Prism has been working closely with cloud vendors to re-align its architecture, and in particular to rework its UI to appeal to a broader range of users and make Blue Prism more accessible to business users.

Blue Prism is also improving its underlying architecture to make it more scalable as well as more cloud-friendly. There will be a new, more native and automated means of controlling bots via a browser interface available on mobiles and tablets that will show the health of the environment in terms of meeting SLAs, and provide notifications showing where interventions are required. Blue Prism views this as a key step in moving towards provision of a fully autonomous digital workforce that manages itself.

Data gateways (available April 30, 2019 in v6.5) are also being introduced to make Blue Prism more flexible in its use of generated data: organizations will be able to take data from the Blue Prism platform and send it to machine learning and reporting tools.

However, Blue Prism will continue to use commodity AI and is looking to expand the universe of technologies available to organizations and bring them into the Blue Prism platform without the need for lots of coding. This is being done by continuing to expand the number of Blue Prism partners and by introducing the concept of Blue Prism Skills.

At Blue Prism World, the company announced five new partners:

  • Bizagi, for process documentation and modeling, connecting with both on-premise and cloud-based RPA
  • Hitachi ID Systems, for enhanced identity and access management
  • RPA Supervisor, an added layer of monitoring & control
  • Systran, providing digital workers with translation into 50 languages
  • Winshuttle, for facilitating transfer of data with SAP.

At the same time, the company announced six AI-related skills:

  • Knowledge & insight
  • Learning
  • Visual perception: OCR technologies and computer vision
  • Problem-solving
  • Collaboration: human interaction and human-in-the-loop
  • Planning & sequencing.

Going forward

Blue Prism recognizes that while the majority of users presenting at its conferences may still be focused on introducing rule-based processes (and on a show of hands, a surprisingly high proportion of attendees were only just starting their RPA journeys), the company now needs to take major strides in making automation scalable, and in more directly embracing machine learning and analytics.

The company has been slightly slow to move in this direction, but launched Blue Prism labs last year to look at the future of the digital worker, and the labs are working on addressing the need for:

  • More advanced process analytics and process discovery
  • More inventive and comprehensive use of machine learning (though the company will principally continue to partner for specialized use cases)
  • Introduction of real-time analytics directly into business processes.
]]>
<![CDATA[Automation Anywhere Monetizes Bot Store to Provide ‘Value as a 2-Way Street’]]> Automation Anywhere’s current Bot Store contains ~500 bots and has received ~40K downloads. In January 2019, these bots were complemented by Digital Workers, with bots being task-centric and Digital Workers being persona- and skill-centric.


So far, downloads from the Bot Store have been free-of-charge, but Automation Anywhere perceives that this approach potentially limits the value achievable from the Bot Store. Accordingly, the company is now introducing monetization to provide value back to developers contributing bots and Digital Workers to the Bot Store, and to increase the value that clients can receive. In effect, Automation Anywhere is looking to provide value as a two-way street.

The timing for introducing monetization to the Bot Store will be as follows:

  • April 16, 2019: announcement and start of sales process validation with a small number of bots and bot bundles priced within the Bot Store. Examples of “bot bundles” include a number of bots for handling email operations around Outlook or bots for handling common Excel operations
  • May 2019: Availability of best practice guides for developers containing guidelines on how to write bots that are modular and easy to onboard. Start of developer sign-up
  • Early summer 2019: customer launch through the direct sales channel. At this stage, bots and Digital Workers will only be available through the formal direct sales quotation process rather than via credit card purchases
  • Late summer 2019: launch of “consumer model” and Bot Store credit card payments.

Pricing, initially in US$ only, will be per bot or Digital Worker, with a 70:30 revenue split between the developer and Automation Anywhere, and with Automation Anywhere handling the billing and paying the developer monthly. Buyers will have a limited free trial period, initially 30 days but under review, and IP protection is being introduced so that buyers will not have access to the source code. The original developer will retain responsibility for building, supporting, maintaining, and updating their bots and Digital Workers. Automation Anywhere is developing some Digital Workers itself in order to seed the Bot Store with examples, but has no desire to develop Digital Workers itself in the medium term and may, once the concept is well-proven, hand over/license the Digital Workers it has developed to third-party developers.
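
As a simple worked example of the split (the list price is illustrative, taken from the per-annum range mentioned below for ISV Digital Workers):

  # 70:30 developer/Automation Anywhere split on an illustrative $12k p.a. listing
  list_price = 12_000                    # $ per annum (illustrative figure)
  developer_share = 0.70 * list_price    # $8,400 per annum to the developer
  platform_share = 0.30 * list_price     # $3,600 per annum to Automation Anywhere
  print(developer_share / 12)            # paid monthly: ~$700 per payout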

Automation Anywhere clearly expects that a number of smaller systems integrators will switch their primary business model from professional services to a product model, building bots for the Bot Store, and is offering developers the promise of a recurring revenue stream and global distribution, ultimately not only through the Bot Store but also through Automation Anywhere and its partners. Although payment will be monthly, developers will receive real-time transaction reporting to assist them in their financial management. For professional services firms retaining a strong professional services focus, but used to operating on a project basis, Automation Anywhere perceives that licensing and updating Digital Workers within this model could provide both a supplementary revenue stream and, possibly more importantly, a means to maintain an ongoing relationship with the client organization.

In addition to systems integrators, Automation Anywhere is targeting ISVs who, like Workday, can use the Bot Store and Automation Anywhere to facilitate deployment and operation of their software by introducing Digital Workers that go way beyond simple connectors. Although the primary motivation of these firms is likely to be reducing the time to value for their own products, Automation Anywhere expects ISVs to be cognizant of the cost of adoption and to price their Digital Workers at levels that provide both a reduced cost of adoption to the client and a worthwhile revenue stream to the ISV. Pricing of Digital Workers in the range of $800 up to $12k-$15k per annum has been mentioned.

So far, inter-enterprise bot libraries have largely been about providing basic building blocks that are commonly used across a wide range of processes. The individual bots have typically required little or no maintenance and have been disposable in nature. Automation Anywhere is now looking to transform the concept of bot libraries into bot marketplaces offering much higher, and longer-lived, value-add, putting bots on a similar footing to temporary staff with updateable skills.

The company is also aiming to steal a lead in the development of such bots and, preferably, Digital Workers by providing third parties with the financial incentive to develop for its own, rather than a rival, platform.

]]>
<![CDATA[Automation Anywhere Looking to 'Deliver the Digital Workforce for Everyone']]> Automation Anywhere, as with the RPA market in general, continues to grow rapidly. The company estimates that it now has 1,600 enterprise clients, encompassing 3,800 unique business entities across 90 countries with ~10,000 processes deployed. At end 2018, the company had 1,400 employees, and it expects to have 3,000 employees by end 2019.

The company was initially slow to go to market in Europe relative to Blue Prism and UiPath, but estimates it has more than tripled its number of customers in Europe in the past 12 months.

NelsonHall attended the recent Automation Anywhere conference in Europe, where the theme of the event was “Delivering Digital Workforce for Everyone” with the following sub-themes:

  • Automate Everything
  • Adopted by Everyone
  • Available Everywhere.

Automate Everything

Automation Anywhere is positioning as “the only multi-product vendor”, though it is debatable whether this is entirely true and also whether it is desirable to position the various components of intelligent automation as separate products.

Nonetheless, Automation Anywhere is clearly correct in stating that, “work begins with data (structured and unstructured) – then comes analysis to get insight – then decisions are made (rule-based or cognitive) – which leads to actions – and then the cycle repeats”.

Accordingly, “an Intelligent RPA platform is a requirement. AI cannot be an afterthought. It has to be part of current processes” and so Automation Anywhere comes to the following conclusion:

Intelligent digital workforce = RPA (attended + unattended) + AI + Analytics

Translated into the Automation Anywhere product range, this becomes:

[Figure: the Automation Anywhere product range]

Adopted by Everyone

Automation Anywhere clearly sees the current RPA market as a land grab and is working hard to scale adoption fast, both within existing clients and to new clients, and for each role within the organization.

The company has traditionally focused on the enterprise market with organizations such as AT&T, ANZ, and Bank of Columbia using 1,000s of bots. For these companies, transformation is just beginning as they now look to move beyond traditional RPA, and Automation Anywhere is working to include AI and analytics to meet their needs. However, Automation Anywhere is now targeting all sizes of organization and sees much of its future growth coming from the mid-market (“automation has to work for all sizes of organization”) and so is looking to facilitate adoption here by introducing a cloud version and a Bot Store.

The company sees reduced “time to value” as key to scaling adoption. In addition to a Bot Store of preconfigured bots, the company has now introduced the concept of downloadable “Digital Workers” designed around personas, e.g. Digital SAP Accounts Payable Clerk. Automation Anywhere had 14 Digital Workers available from its Bot Store as at mid-March 2019. These go beyond traditional preconfigured bots and include pretrained cognitive capability that can process unstructured data relevant to the specific process, e.g. accounts payable.

In addition, Automation Anywhere believes that to automate at the enterprise-wide level you have to onboard your workforce very fast, so that you can involve more of the workforce sooner. Accordingly, the company is providing role-based in-product learning and interfaces.

To enable the various types of user to ramp up quickly, the coming version of Automation Anywhere will provide a customizable user interface to support the differing requirements of the business, IT, and developers, providing unique views for each. For example (see the sketch after this list):

  • The business user interface can be set up with a customized tutorial on how to build a simple bot using a Visio-like graphical interface, with advanced functionality hidden when business users start using the tool. Alternatively, the business user can use the recorder to create a visual representation of what needs to be done, including documenting cycle times and savings information, etc., then pass this requirement to a developer
  • Advanced developers, on the other hand, can be set up with advanced functionality including, for example, the ability to embed their own code in, say, Python
  • An IT user can manage user administration, including roles and privileges, and license management.
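
A minimal sketch of how such role-based views might be configured; the role names and feature flags are illustrative assumptions, not Automation Anywhere settings:

  # Illustrative role-based interface configuration (assumed structure).
  ROLE_VIEWS = {
      "business_user":      {"guided_tutorial": True,  "recorder": True,
                             "advanced_activities": False, "inline_code": False},
      "advanced_developer": {"guided_tutorial": False, "recorder": True,
                             "advanced_activities": True,  "inline_code": True},
      "it_admin":           {"user_management": True,  "license_management": True,
                             "bot_building": False},
  }

  def features_for(role: str) -> dict:
      # Unknown roles fall back to the most restricted (business user) view.
      return ROLE_VIEWS.get(role, ROLE_VIEWS["business_user"])

  print(features_for("advanced_developer"))  # includes inline_code, e.g. Python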

The Automation Anywhere University remains key to adoption for all types of user. Overall, Automation Anywhere estimates that it has trained ~100K personnel. The Automation Anywhere University has:

  • An association with 200 educational institutions
  • 26 training partners
  • 9 role-based learning tracks
  • 120 certified trainers
  • Availability in 4 course languages.

An increased emphasis on channel sales is also an important element in increasing adoption, with Automation Anywhere looking to increase the proportion of sales through partners from 50% to 70%. The direct sales organization consists of 13 field operating units broken down into pods, and this sales force will be encouraged to leverage partners with a “customer first/partner preferred” approach.

Partner categories include:

  • BPOs with embedded use of Automation Anywhere, and Automation Anywhere is now introducing tools that will facilitate support for managed service offerings
  • Global alliance partners (major consultancies and systems integrators)
  • The broader integrator community/local SIs
  • A distributor channel. Automation Anywhere is currently opening up a volume channel and has appointed distributors including TechData and ECS
  • Private Equity. Automation Anywhere has set up a PE practice to go after the more deterministic PEs who are very prescriptive with their portfolio companies.

In addition, Automation Anywhere is now starting to target ISVs. The company has a significant partnership with Workday to help the ISV automate implementation and reduce implementation times by, for example, assisting in data migration, and the company is hoping that this model can be implemented widely across ISVs.

Automation Anywhere is also working on a partner enablement platform, again seen as a requisite for achieving scale, incorporating training, community+, etc. together with a demand generation platform.

Customer success is also key to scaling. Here, Automation Anywhere claims a current NPS of 67 and a goal of exceeding the NPS of 72 achieved by Apple. With that in mind, Automation Anywhere has created a customer success team of 250 personnel, expected to grow to 600+ as the team tries to stay ahead of customer acquisition in its hiring. All functions within Automation Anywhere get their feedback solely through this channel, and all feedback to clients is through this channel. In addition, the sole aim of this organization is to increase the adoptability of the product and the organization’s NPS; the customer success team does not get involved in up-selling, cross-selling, or deal closure.

Available Everywhere

“Available Everywhere” encompasses both a technological and a geographic perspective. From a hosting perspective, Automation Anywhere is now available on cloud or on-premise, with the company clearly favoring cloud where its clients are willing to adopt this technology. In particular, the company sees cloud hosting as key to facilitating its move from the enterprise to increasingly address mid-market organizations.

At the same time, Automation Anywhere has “taken installation away” with the platform, whether on-premise or on cloud, now able to be accessed via a browser. The complete cloud version “Intelligent Automation Cloud” is aimed at allowing organizations to start their RPA journey in ~4 minutes, while considerably reducing TCO.


In terms of languages, the user interface is now available in eight languages (including French, German, Japanese, Spanish, Chinese, and Korean) and will adjust automatically to the location selected by the user. At the same time, the platform can process documents in 190 languages.

Automation Anywhere also provides a mobile application for bot management.

Summary

In summary, Automation Anywhere regards the keys to winning a dominant market share in the growth phase of the RPA market as being about simultaneously facilitating rapid adoption in its traditional large enterprise market and moving to the mid-market and SMEs at speed.

The company is facilitating ongoing RPA scaling in large enterprises by recognizing the differing requirements of business users, IT, and developers, and establishing separate UIs to increase their acceptance of the platform while increasingly supporting their need to incorporate machine learning and analytics as their use cases become more sophisticated. For the smaller organization, Automation Anywhere has facilitated adoption by introducing free trials, a cloud version to minimize any infrastructure hurdles, and a Bot Store to reduce development time and time to value.

]]>
<![CDATA[D-GEM: Capgemini’s Answer to the Problem of Scaling Automation]]> Finance & accounting is at the forefront of the application of RPA, with organizations attracted by its high volumes of transactional activity. Consequently, activities such as the movement and matching of data within purchase-to-pay have been a frequent start-point for organizational automation initiatives.

Organizations starting on RPA are initially faced with the challenges of understanding RPA tools and approaches and typically lack the internal skills necessary to undertake automation initiatives. Once these skills have been acquired, RPA is then often applied in a piecemeal fashion, with each use case considered by a governance committee on its own merits. However, once a number of deployments have been achieved, organizations then look to scale their automation initiatives across the finance function and are confronted by the sheer complexity, and impossibility, of managing the scaling of automation while maintaining a ‘piecemeal’ approach. At this point, organizations realize they need to modify their approach to automation and adopt a guiding framework and target operating model if they are to scale automation successfully across their finance & accounting processes.

In response to these needs, Capgemini has introduced its Digital Global Enterprise Model (D-GEM) to assist organizations in scaling automation across processes such as finance & accounting more rapidly and effectively.

Introducing D-GEM

The basic premise behind D-GEM is that organizations need both a vision and a detailed roadmap if they are to scale their application of automation successfully. Capgemini is taking an automation-first approach to solutioning, with the client vision initially developed in “Five Senses of Intelligent Automation” workshops. Here, Capgemini demos the various technologies and the possibilities from automation for clients, and establishes their new target operating model, taking into account:

  • The key outcomes sought within finance & accounting under the new target operating model. For example, key outcomes sought could be reduced DSO, increased working capital, and reduced close days
  • How the existing processes could be configured and connected better using “five senses”:
    • Act (RPA)
    • Think (analytics)
    • Remember (knowledge base)
    • Watch (machine vision & machine learning)
    • Talk (chatbot technology).

However, while the vision, goals, and technology are important, implementing this target operating model at scale requires an understanding of the underlying blueprint, and here Capgemini has developed D-GEM as the “practitioners’ guidebook”: a repository showing (e.g., for finance & accounting) what can be achieved and how to achieve it at a granular level (process level 4).

D-GEM essentially aims to provide the blueprint to support the use of automation and deliver the transformation. It is now being widely used within Capgemini and is being made available not just to the company’s BPO clients but for wider application by non-BPO clients within their SSCs and GBS organizations.

From GEM to D-GEM

Capgemini’s original GEM (Global Enterprise Model) was used for solutioning and driving transformation within BPO clients prior to the advent of intelligent automation technologies. Its transformation focus was on improving the end-to-end process and eliminating exceptions. It aimed to introduce best-in-class processes while optimizing the location mix and improving domain competencies and reflected the need to drive standardization and lean processes to deliver efficiency.

While the focus of D-GEM remains the introduction of “best-in-class” processes, best-in-class has now been updated to take into account Intelligent Automation technologies, and the transformation focus has changed to the application of automation to facilitate best-in-class. For example, industrialization of the inputs needs to be taken into account at an early stage if downstream processes are to be automated at scale. Alongside the efficiency focus on eliminating waste, D-GEM also looks to use technology to improve the user experience. For instance, rather than eliminating non-standard reporting, as has often been the focus in the past, deploying reporting tools and services on top of standardized inputs and data can enhance the user experience by allowing users to produce their own one-off reports based on consistent and accurate information.

D-GEM provides a portal for practitioners using the same seven levers as GEM, namely:

  • Grade Mix
  • Location Mix
  • Competencies
  • Digital Global Process Model
  • Technology
  • Pricing and Cost Allocations
  • Governance.

However, the emphasis within each of these levers has now changed, as explained in the following sections.

Role of the Manager Changes from Managing Throughput to Eliminating Exceptions

Within Grade Mix, Capgemini evaluates the impact of automation on the grade mix, including how to increase the manager’s span of control by adding bots as well as people, how to use knowledge to increase the capability at different grades, and how to optimize the team structure.

Under D-GEM, the role of the manager fundamentally changes. With the emphasis on automation-first, the primary role of the manager is now to assist the team in eliminating exceptions rather than managing the throughput of team members. Essentially, managers now need to focus on changing the way invoices are processed rather than managing the processing of invoices.

The needs of the agents also change as the profile of work changes with increased levels of task automation. Typically, agents now need a level of knowledge that enables them to act as problem-solvers and trainers of bots. Millennials typically have strong problem-solving skills, and Capgemini is using Transversal and the process knowledge base within D-GEM to skill people up faster and to ensure that Process Champions grow within each delivery team. Knowledge management tools thus have a key role to play in ensuring that knowledge is effectively dispersed and that able junior team members can expand their responsibilities more quickly.

The required changes in competency are key considerations within digital transformations, and it is important to understand how the competencies of particular roles or grades change in response to automation and how to ensure that the workforce knows how automation can enrich and automate their capabilities.

The resulting team structure is often portrayed as a diamond. However, Capgemini believes it is important not to end up with a top-heavy organization as a result of process automation. The basic pyramid structure doesn’t necessarily change, but the team now includes an army of robots, so while the span of managers will typically be largely unchanged in terms of personnel, they are now additionally managing bots. In addition, tools such as Capgemini’s “prompt” facilitate the management of teams across multiple locations.

Within Location Mix, as well as evaluating that the right processes are in the right locations and how the increased role of automation impacts the location mix, it is now important to consider how much work can be transitioned to a Virtual Delivery Center.

Process & Technology Roadmaps Remain Important

Within Digital Global Process Model, D-GEM provides a roadmap for best-practice processes powered by automation with integrated control and performance measures. Capgemini firmly believes that if an organization is looking to transform and automate at scale, then it is important to apply ESOAR (eliminate, standardize, optimize, automate, and then apply RPA and other intelligent automation technologies) first, not just RPA.

Finance & accounting processes haven’t massively changed in terms of the key steps, but D-GEM now includes a repository for each process, based on ESOAR, which shows which steps can be eliminated, what can be standardized, how to optimize, how to automate, how to robotize, and how to add value.

Within the Technology lever, D-GEM then provides a framework for identifying suitable technologies and future-proofing technology. It also indicates what technologies could potentially be applied to each process tower, showing a “five senses” perspective. For example, Capgemini is now undertaking some pilots applying blockchain to intercompany accounting to create an internal network. Elsewhere, for one German organization, Capgemini has applied Tradeshift and RPA on top of the organization’s ERP to achieve straight-through processing.

In addition, as would be expected, D-GEM includes an RPA catalog, listing the available artifacts by process, together with the expected benefits from each artifact, which greatly facilitates the integration of RPA into best practices.

Governance is also a critical part of transformation, and the Governance lever within D-GEM suggests appropriate structures to drive transformation, what KPIs should be used to drive performance, and how roles in the governance model change in the new digital environment.

Summary

Overall, D-GEM has taken Capgemini’s Global Enterprise Model and updated it to address the world of digital transformation, applying automation-first principles. While process best practice remains key, best practice is now driven by a “five senses” perspective and how AI can be applied in an interconnected fashion across processes such as finance and accounting.

]]>
<![CDATA[AntWorks Positioning BOT Productivity and Verticalization as Key to Intelligent Automation 2.0]]> Last week, AntWorks provided analysts with a first preview of its new product ANTstein SQUARE, to be officially launched on May 3.

AntWorks’ strategy is based on developing full-stack intelligent automation, built for modular consumption, and the company’s focus in 2019 is on:

  • BOT productivity, defined as data harvesting plus intelligent RPA
  • Verticalization.

In particular, AntWorks is trying to dispel the idea that Intelligent Automation needs to consist of three separate products from three separate vendors across machine vision/OCR, RPA, and AI in the form of ML/NLP, and show that AntWorks can offer a single, though modular, “automation” across these areas end-to-end.

Overall, AntWorks positions Intelligent Automation 2.0 as consisting of:

  • Multi-format data ingestion, incorporating both image and text-based object detection and pattern recognition
  • Intelligent data association and contextualization, incorporating data reinforcement, natural language modelling using tokenization, and data classification. One advantage claimed for fractal analysis is that it facilitates the development of context from images such as company logos and not just from textual analysis and enables automatic recognition of differing document types within a single batch of input sheets
  • Smarter RPA, incorporating low code/no code, self-healing, intelligent exception handling, and dynamic digital workforce management.

Cognitive Machine Reading (CMR) Remains Key to Major Deals

AntWorks’ latest release, ANTstein SQUARE, is aimed at delivering BOT productivity by combining intelligent data harvesting with cognitive responsiveness and intelligent real-time digital workforce management.

ANTstein data harvesting covers:

  • Machine vision, including, to name a modest sub-set, fractal machine learning, fractal image classifier, format converter, knowledge mapper, document classifier, business rules engine, workflow
  • Pre-processing image inspector, where AntWorks demonstrated the ability of its pre-processor to sharpen text and images, invert white text on a black background, remove grey shapes, and adjust skewed and rotated inputs, typically giving an 8-12% uplift (see the sketch after this list)
  • Natural language modelling.
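
To make these pre-processing steps concrete, here is a minimal sketch of equivalent image clean-up using OpenCV in Python. It is illustrative only – it assumes nothing about AntWorks’ actual implementation, and the function name is invented:

  import cv2
  import numpy as np

  def preprocess(path: str) -> np.ndarray:
      img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)

      # Invert white-on-black pages so text is dark on a light background
      if np.mean(img) < 127:
          img = cv2.bitwise_not(img)

      # Remove light grey shapes/background via Otsu binarization
      _, img = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

      # Estimate skew from the minimum-area rectangle around the text pixels
      # (minAreaRect angle conventions differ across OpenCV versions)
      coords = np.column_stack(np.where(img == 0)).astype(np.float32)
      angle = cv2.minAreaRect(coords)[-1]
      angle = -(90 + angle) if angle < -45 else -angle

      # Rotate the page back to horizontal
      h, w = img.shape
      M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
      return cv2.warpAffine(img, M, (w, h), flags=cv2.INTER_CUBIC,
                            borderMode=cv2.BORDER_REPLICATE)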

Clearly one of the major issues in the industry over the last few years has been the difficulty organizations have experienced in introducing OCR to supplement their initial RPA implementations in support of handling unstructured data.

Here, AntWorks has for some time been positioning its “cognitive machine reading” technology strongly against traditional OCR (and traditional OCR supplemented by neural network-based machine learning), stressing “superior” capabilities: pattern-based content-based object retrieval (CBOR) to “lift and associate all the content”, high accuracy of captured content, higher processing speeds, and the ability to train in production. AntWorks also takes a wide definition of unstructured data, covering not just typed text but also, for example, handwritten documents, signatures, and notary stamps.

AntWorks' Cognitive Machine Reading encompasses multi-format data ingestion; fractal network-driven learning for natural language understanding, using combinations of supervised, deep, and adaptive learning; and accelerators, e.g. for input of data into SAP.

Accuracy has so far been found to be typically around 75% for enterprise “back-office” processes, but the level of accuracy depends on the nature of the data: fractal technology is most appropriate where past data strongly correlates with future data and data variances are relatively modest. AntWorks regards fractal techniques as wholly inappropriate in use cases where the data has high variance, e.g. crack detection on aircraft or analysis of mining data. In such cases, where access to neural networks is required, AntWorks plans to open up APIs to third-party providers such as AWS.

Several examples of the use of AntWorks’ CMR were provided. In one of these, AntWorks’ CMR is used in support of sanctions screening within trade finance for an Australian bank, identifying the names of the parties involved and looking for banned entities. The bank estimates that 89% of entities could be identified with a high degree of confidence using CMR, with 11% having to be handled manually. This activity was previously handled by 50 FTEs, implying that, all else being equal, the residual manual workload equates to roughly 5-6 FTEs.

Fractal analysis also makes its own contribution to one of ANTstein’s USPs: ease of use. The business user uses “document designer” to train ANTstein on a batch of documents for each document type; fractal analysis requires fewer training cases than neural networks, and its datasets inherently have lower memory requirements since the system uses data localization and does not extract unnecessary material.

RPA 2.0 “QueenBOTs” Offer “Bot Productivity” through Cognitive Responsiveness, Intelligent Digital Automation, and Multi-Tenancy

AntWorks is positioning to compete against the established RPA vendors with a combination of intelligent data harvesting, cognitive bots, and intelligent real-time digital workforce management. In particular, AntWorks is looking to differentiate at each stage of the RPA lifecycle, encompassing:

  • Design: process listener and discoverer
  • Development: aiming to move towards low-code business user empowerment
  • Operation: including self-learning and self-healing exception handling to become more adaptive to the environment
  • Maintenance: incorporating code standardization into pre-built components
  • Management: based on “central intelligent digital workforce management”.

Beyond CMR, much of this functionality is delivered by QueenBOTs. Once the data has been harvested, it is orchestrated by the QueenBOT, with each QueenBOT able to orchestrate up to 50 individual RPA bots, referred to as AntBOTs.

The QueenBOT incorporates:

  • Cognitive responsiveness
  • Intelligent digital automation
  • Multi-tenancy.

“Cognitive responsiveness” is the ability of the software to adjust automatically to unknown exceptions in the bot environment, and AntWorks demonstrated the ability of ANTstein SQUARE to adjust in real-time to situations where non-critical data is missing or the portal layout has changed. In addition, where a bot does fail, ANTstein aims to support diagnosis on a more granular basis by logging each intermediate step in a process and providing a screenshot to show where the process failed.

AntWorks is aiming to put use case development into the hands of the business user rather than data scientists. For example, ANTstein doesn’t require the data science expertise for model selection typically needed when using neural network-based technologies; it performs its own model selection.

AntWorks also stressed ANTstein’s ease of use, both through pre-built components and through generating its own code via the recorder facility. One client speaking at the event is aiming to handle simple use cases in-house and outsource only the building of complex use cases.

AntWorks also makes a major play on reducing the cost of infrastructure compared to traditional RPA implementations. In particular, ANTstein addresses the issue of servers or desktops being allocated to, or controlled by, an individual bot by incorporating dynamic scheduling of bots based on SLAs rather than timeslots, and by enabling multi-tenant occupancy so that a user can use a desktop while it simultaneously runs an AntBOT, or several AntBOTs can run simultaneously on the same desktop or server.
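
To make the SLA-driven scheduling idea concrete, here is a hypothetical Python sketch – not AntWorks’ scheduler, with all job and host names invented – in which the job nearest its SLA breach is dispatched to whichever machine has free bot slots, rather than machines being reserved for individual bots:

  import heapq
  from datetime import datetime, timedelta

  queue = []  # (sla_deadline, job_name)
  heapq.heappush(queue, (datetime.now() + timedelta(minutes=30), "invoice_batch"))
  heapq.heappush(queue, (datetime.now() + timedelta(minutes=5), "kyc_check"))

  free_slots = {"desktop-01": 2, "server-01": 5}  # multi-tenant capacity

  while queue:
      deadline, job = heapq.heappop(queue)  # most urgent SLA first
      host = max(free_slots, key=free_slots.get)
      if free_slots[host] == 0:
          break  # no capacity anywhere; remaining jobs keep waiting
      free_slots[host] -= 1  # a bot shares the machine with other tenants
      print(f"dispatch {job} to {host} (due {deadline:%H:%M})")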

Building Out Vertical Point Solutions

A number of the AntWorks founders came from a BPO background, which gave them a focus on automating middle- and back-office processes and the recognition that bringing domain and technology together is critical to process transformation and to building a significant business case.

Accordingly, verticalization is a major theme for AntWorks in 2019. In addition to support for a number of horizontal solutions, AntWorks will be focusing on building point solutions in nine verticals in 2019, namely:

  • Banking: trade finance, retail banking account maintenance, and anti-money laundering
  • Mortgage (likely to be the first area targeted): new application processing, title search, and legal description
  • Insurance: new account set up, policy maintenance, claims handling, and KYC
  • Healthcare & life sciences: BOB reader, PRM chat, payment posting, and eligibility
  • Transportation & logistics: examination evaluation
  • Retail & CPG: no currently defined point solutions
  • Telecom: customer account maintenance
  • Media & entertainment: no currently defined point solutions
  • Technology & consulting: no currently defined point solutions.

The aim is to build point solutions (initially in conjunction with clients and partners) that will be 80% ready for consumption with a further 20% of effort required to train the bot/point solution on the individual company’s data.

Building a Partner Ecosystem for RPA 2.0

The company claims to have missed the RPA 1.0 bus by design (it commenced development of “full-stack” ANTstein in 2017) and is now trying to get out the message that the next generation of Intelligent Automation requires more than OCR combined with RPA to automate unstructured data-heavy, industry-specific processes.

The company is not targeting companies with small numbers of bot implementations but is ideally seeking dozens of clients, each with the potential to build into a $10m relationship. Accordingly, the bulk of the company’s revenues currently comes from, and is likely to continue to come from, CMR-centric sales to major enterprises, either direct or through relationships with major consultancies.

Nonetheless, AntWorks is essentially targeting three market segments:

  • Major enterprises with CMR-centric deals
  • RPA 2.0, through channels
  • Point solutions.

In the case of major enterprises, CMR is typically pulling AntWorks’ RPA products through to support the same use cases.

AntWorks is trying to dissociate itself from RPA 1.0, positioning strongly against the competition on the basis of “full stack”, and is somewhat ambivalent about utilizing a partner ecosystem that is already tied to the mainstream RPA products. Nonetheless, the company is in the early stages of building a partner ecosystem for its RPA product based on:

  • Referral partners
  • Authorized resellers
  • Managed Services Program, where partners such as EXL build their own solutions incorporating AntWorks
  • Technology Alliance partners
  • Authorized training partners
  • University partners, to develop a critical mass of entry-level automation personnel with experience in AntWorks and Intelligent Automation in general.

Great Unstructured Data Accuracy but Needs to Continue to Enhance Ease of Use

A number of AntWorks’ clients presented at the event and it is clear that they perceive ANTstein to deliver superior capture and classification of unstructured data. In particular, clients liked the product’s:

  • Superior natural language-based classification using limited datasets
  • Ability to use codeless recorders
  • Ability to deliver greater than 70% accuracy at PoC stage.

However, despite some of the product’s advantages in terms of ease of use, clients would like further fine-tuning of the product in areas such as:

  • The CMR UI/UX, which is not particularly user-friendly: the very long list of options is hard for business users to understand, and they require a shorter, more structured UI
  • Improved ease of workflow management, including the ability to connect to popular workflow tools.

So, overall, while users should not yet consider mass replacement of their existing RPA deployments, particularly where these are being used for simple rule-based process joins and data movement, ANTstein SQUARE is well worth evaluating by major organizations that have high-volume, industry-specific or back-office processes involving multiple types of unstructured documents in typed or handwritten form, and where achieving accuracy of 75%+ will have a major impact on business outcomes. Here, and in the industry solutions being developed by AntWorks, it probably makes sense to use the full ANTstein stack, utilizing both CMR and RPA functionality. In addition, CMR could be used in standalone form to facilitate extending an existing RPA-enabled process to handle large volumes of unstructured text.

Secondly, major organizations that have a major RPA roll-out still to conduct at scale, are becoming frustrated at their level of bot productivity, and are prepared to introduce a new RPA technology should consider evaluating AntWorks' QueenBOT functionality.

The Challenge of Differentiating from RPA 1.0

If it is to take advantage of its current functionality, AntWorks urgently needs to differentiate its offerings from those of the established RPA software vendors and its founders are clearly unhappy with the company’s past positioning on the majority of analyst quadrants. The company aimed to achieve a turnaround of the analyst mindset by holding a relatively intimate event with a high level of interaction in the setting of the Maldives. No complaints there!

The company is also using “shapes” rather than numbers to designate succeeding versions of its software – quirky, and potentially incomprehensible downstream.

However, these marketing actions are probably insufficient in themselves. To complement the merits of its software, the company needs to improve its messaging to its prospects and channel partners in a number of ways:

  • Firstly, the company’s tagline “reimagining, rethink, recreate” reflects the founders’ backgrounds and is arguably more suitable for a services company than for a product company
  • Secondly, establishing an association with Intelligent Automation 2.0 and RPA 2.0 is probably too incremental to attract serious attention.

Here the company needs to think big and establish a new paradigm to signal a significant move beyond, and differentiation from, traditional RPA.

]]>
<![CDATA[A First Look at Blue Prism’s New RPA Initiatives]]>

 

Today’s announcement from Blue Prism covers new product capabilities, new service and design support services, and a new go-to-market framework that underscores the importance of automation as a means to enable legacy organizations to compete with 'born-digital' startups. Blue Prism’s announcement is equal parts perspective, product, and process. Let’s examine each in turn.

Perspective

The perspective Blue Prism is bringing to the table today is the notion of empowering digital entrepreneurs within an organization (under the flag ‘connected RPA’) with the intent of either disruption-proofing that organization or at least enabling self-disruption as part of a deliberate strategy.

In Blue Prism’s view, this is best accomplished through a package of three organizational automation design concepts. The first is the federation of the center of excellence concept – which is not to say that existing CoEs are obsolete, but rather that they now serve as a lighthouse for other disciplinary CoEs within, for example, finance, production, and customer care. Pushing more organizational automation authority and responsibility outward into the organization, in Blue Prism’s view, enables legacy organizations to begin acting more like ‘born-digital’ disruptors.

The second such principle, enabled by the first, is the concept of significantly accelerating the process of moving from proof of concept to at-scale testing to enterprise deployment. Again, the company positions this as a means to emulate born-digital firms and build both proactive and reactive organizational change speed through rapid automation technology deployment.

And third, Blue Prism is emphasizing the value of peer-to-peer interaction among organizational automation executives, a plank of its strategy that is being served through the rollout of Blue Prism Community – an area in Blue Prism Digital Exchange for sharing best practices and collaborating on automation challenges.

Product

The product announcements supporting this new go-to-market perspective include a process discovery capability, which will be available on the Blue Prism website. For those readers who recall seeing Blue Prism announce a partner relationship with Celonis in September of 2018, this may come as a surprise, but the firm has every intention of maintaining that relationship; this new software offering is intended as a lighter process exploration tool with the ability to visualize and contextualize process opportunities.

Blue Prism is careful to distinguish here between process discovery – the identification of processes representing a good fit for automation – and process mining, a deeper capability offered by Celonis that includes analysis of the specific stepwise work done within those processes.

Blue Prism also announced today the availability of its London-based Blue Prism AI Research Lab and accompanying AI roadmap strategy, which focuses on three areas: understanding and ingesting data in a broader variety of formats, simplifying automation design, and improving the relationship between humans and digital workers in assisted automations.

In addition, in an effort to put its expanded product set in the hands of more organizations, Blue Prism is also going to open up access to the company’s RPA software, making it easy for people to get started, learn more, and explore what’s possible with an intelligent digital workforce.

Process

Finally, the process of engaging Blue Prism is changing as well. The company has established, through its experience in deployments, that the early stages of organizational automation initiatives are critical to the long-term success of such efforts, and has staged more support services and personnel into this period in response. Far from being a rebuke of channel partner efforts, this packaged service offering will actually increase the need for delivery partner resources ‘on the ground’ to service customers’ automation capabilities.

Blue Prism’s own customer success and services organization will offer Blue Prism expertise within customer programs through a series of pre-defined interventions that complement and augment the customers’ and partners’ efforts. The offering, entitled Success Accelerator, is designed around Blue Prism’s Robotic Operating Model (ROM), the company’s design and deployment framework. The intent of this new offering is to accelerate and accentuate client ROI by establishing sound automation delivery principles based on lessons Blue Prism has learned in its deployment history to date.

Summary

Blue Prism’s suite of product, process, and perspective announcements today underscores an emerging trend in the sector – namely, the awareness that automation offers real improvements in organizational speed and agility, two characteristics that will be important for legacy organizations to develop if they are to compete with fast, reactive, born-digital disruptive startups.

The connected RPA vision that Blue Prism has outlined highlights the evolving power of automation. It extends beyond the limits of traditional RPA, giving users a compelling automation platform which includes AI and cognitive features. Furthermore, the new roadmap, capabilities, and features being introduced today enable Blue Prism’s growing community of developers, customers, channel partners, and technology alliances.

]]>
<![CDATA[Get Ready for Quantum Computing: 5 Steps to Take in 2019]]>

 

IBM recently announced the first ‘commercial-ready’ quantum computer, the 20-qubit Q System One. The date is certainly worth recording in the annals of computing history. But, in much the same way that mainframes, micros, and PCs all began with an ‘iron launch’ and then required a long pragmatic use case maturity curve, so too will this initial offering from IBM be the first step on a long evolution path. With so much conjecture and contemplation happening in the industry surrounding this announcement, let’s unpack what IBM’s announcement means – and how organizations should be reacting.

First, although Q System One is being billed as commercial-ready, that designation means that the product is ready for usage on a traditional cloud computing basis, not necessarily that it is ready to contribute meaningfully to solving business problems (although the device will certainly mature quickly in both capability and speed). What Q System One does offer is a keystone for the industry to begin working with quantum technology in much the same way that any other cloud utility supercomputing devices are available, and a testbed for beginning to explore and develop quantum code and quantum computing strategies. As such, while Q System One may not outperform traditional cloud computing resources today, its successors will likely do so in short order – perhaps as soon as 2020.

As I noted in my blockchain predictions blog for 2019, quantum computing has long been the shadow over blockchain adoption, owing to the concern that quantum computing will make blockchain’s security aspect obsolete. That watershed lies years in our future, if indeed it arrives at all, and it is important to note that quantum computing can as easily be tasked with enhancing cryptographic strength as with breaking it down. As a result, expect the impact of quantum computing on blockchain to net out to a zero-sum game, with quantum capabilities powering ever-more evolved cryptographic standards in much the same way that the cybersecurity arms race has proceeded to date.

With this in mind, what should organizations have on their quantum readiness roadmaps? The short answer is that quantum readiness is more the beginning of many long-term projects rather than the consummation of any short-term ones, so quantum is more a component of IT strategy than near-term tactical change. Here are five recommendations I’m making for beginning to ready your organization for quantum computing during 2019.

Migrate to SHA-3 – and build an agile cybersecurity faculty

There is no finish line for cybersecurity, especially with quantum capabilities on the horizon, but when I speak with enterprise organizations on the subject, I recommend that a combination of NIST and RSA/ECC technologies approximates to something that will be quantum-proof for the foreseeable future. Migration off SHA-2 is a strong prescription regardless, given the flaws that platform shared with its predecessor. But perhaps more important than the construction of a cryptographic standard to meet quantum’s capabilities is the design of an agile cybersecurity faculty that can shorten the time to transition from one standard to the next. Quantum computing will produce overnight gains in both security and exposure as the technology evolves; being ready to take swift counteraction will be key in the next decade of information technology.
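
As a minimal sketch of what such agility can look like in code (illustrative only; the registry constant and function name are my own), Python’s standard hashlib has shipped SHA-3 since Python 3.6, and routing all hashing through a single indirection point turns a standards migration into a one-line change:

  import hashlib

  CURRENT_ALGO = "sha3_256"  # migrate by changing this, not every call site

  def digest(data: bytes, algo: str = CURRENT_ALGO) -> str:
      # hashlib.new() accepts any algorithm name the build supports,
      # so swapping standards requires no changes at call sites
      return hashlib.new(algo, data).hexdigest()

  print(digest(b"quantum readiness starts with crypto agility"))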

Begin asking entirely new questions in a Quantum CoE

Traditional computing technology has taught us clear phase lines of the possible and impossible with respect to solving business problems. Quantum, over the course of the next decade, will completely redraw those lines, with more capability coming online with each passing year (and, eventually, quarter). Tasks like modeling new supply chain algorithms, new modes of product delivery, even new projections of complex M&A activity in a sector over a long forecast span will become normal requests by 2030.

Make sure data hygiene and MDM protocols are quantum-ready

Already, there have been multiple technologies – Big Data, automation, and blockchain are just three – that have strongly suggested the need to ensure that organizations are running on clean, reliable data.

As business task flow accelerates, and more cognitive automation and smart contracts touch and interact with information as the first actor in the process chain, it is increasingly vital to ensure that these technologies are handling quality data. Quantum may be the last such opportunity to bring the car into the pit for adjustments before racing at full speed commences in sectors like retail, telecom, technology, and logistics. This is a to-do that benefits a broad array of technological deployment projects, so while it may not be relevant for quantum computing until the next decade begins, the benefits will begin to accrue from these efforts today.

Aim at a converged point involving data, analytics, automation & AI

Quantum computing is often discussed in the context of moonshot computing problems – and, indeed, the technology is currently best deployed against problems outside the realm of capability for legacy iron. But quantum will also power the move from offline or nearline processing to ‘now’ processing, so tasks that involve putting insights from Big Data environments to work in real-time will also fall within reach over the course of the next decade. What you may find from a combination of this action and the two prior is that some of the questions and projects you had slated for a quantum computing environment may actually be addressable today through a combination of cognitive technologies.

Reach out to partners, suppliers & customers to build a holistic quantum perspective

Legacy enterprise computing grew up as a ‘four-walls’ concept in part because of the complexity of tackling large, complex business optimization problems that involved moving parts outside the organization. Quantum does not automatically erase those boundary lines from an integration perspective, but the next decade will see more than enough computing power come online to optimize long, global supply chain performance challenges and cross-border regulatory and financing networks. Again, efforts in this area can also benefit organizational initiatives today; projects in IoT and blockchain, in particular, can achieve greater benefits when solutions are designed with partners, suppliers, regulators, and financiers involved up front.

Conclusion

Quantum computing is not going to change the landscape of enterprise IT tomorrow, or next month, or even next year. But when it does effect that change, organizations should expect its new capabilities to be game-changers – especially for those firms that planned well in advance to take advantage of quantum computing’s immense power.

This short checklist of quantum-readiness tasks can provide a framework for pre-quantum technology projects, too – making them an ideal roster of 2019 ‘to-dos’ for enterprise organizations.

]]>
<![CDATA[7 Blockchain Predictions for 2019]]>

Blockchain has progressed considerably as an emerging technology during 2018. Many of 2017’s PoCs have become deployed commercial solutions as standards have begun to solidify, and more organizations have begun to explore the potential of distributed ledger architecture and smart contracting.

We are still at the very beginning of the lifecycle of this particular technology, and nowhere close to seeing its full potential yet. But as the year comes to an end, what might 2019 bring in terms of distributed ledger maturity and trends? Here are my seven predictions for blockchain in 2019.

The use case landscape shakes out

Blockchain has a clear goodness-of-fit spectrum, as I have written about in a previous blog, and to date, that spectrum has often been tested at the low end with mixed results. Blockchain has clear strengths in a number of well-defined use cases, most notably supply chain and parts management, multiparty shipping and logistics tasks, remittances, securities clearance, and more. As 2019 dawns, we will begin to see providers focus less on exploring the use case spectrum and more on building more capability into those use cases that have been proven to be blockchain-relevant.

Use cases become playbooks

The secondary benefit of a more focused approach on the part of blockchain service providers is the swift emergence of proven playbooks for specific blockchain applications. Already, providers are beginning to slash the number of discussed use cases as it becomes obvious that cold-chain pharma, farm-to-fork agricultural provenance, and airplane parts sourcing and documentation, for example, are functionally multiple iterations of the same basic design with domain knowledge added per the specific deployment.

Interoperability fades as a limiting factor

Blockchain has presented something of a Betamax/VHS (or Blu-Ray/HD-DVD, for younger readers) quandary to date, with multiple standards each offering a unique source of value but on a mutually exclusive basis. But more providers are beginning to focus on hybrid blockchain solutions and platform interoperability, and the announcement in late October that Hyperledger Fabric will be able to execute smart contracts written for Ethereum certainly signals that we are entering the next phase of the market, in which multiple market leaders will need to play responsibly in the sandbox for this technology to take deep root.

Throughput speeds improve – but DB-like operation at-speed/at-scale is more likely in 2020

Blockchain’s primary drawback up to this point is that it can operate at speed, or at scale, but not both. That is slowly changing, with more blockchain accelerators emerging in the marketplace (Microsoft’s CoCo being just one example), and greater attention being paid to purpose-built platforms (like Symbiont Assembly) that are architected for at-speed/at-scale operation. Sharding and layer-2 protocols, both under exploration by Ethereum, show promise for keeping the core value of a distributed ledger system and adding the ability to accelerate transaction throughput to near-database speeds.
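
The intuition behind sharding can be sketched generically (this is not Ethereum’s actual design; the shard count and addresses are invented): transactions are partitioned by a stable hash of the sender’s address, so each shard processes only its own slice in parallel:

  import hashlib

  NUM_SHARDS = 4

  def shard_for(address: str) -> int:
      # stable assignment: the same address always lands on the same shard
      h = hashlib.sha256(address.encode()).digest()
      return int.from_bytes(h[:4], "big") % NUM_SHARDS

  for addr in ("0xa11ce", "0xb0b", "0xca401"):
      print(addr, "-> shard", shard_for(addr))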

Quantum computing comes in from the cold

QC has been the hobgoblin looming over blockchain in the media for years, almost always framed as a technology that sits in opposition to blockchain – either as a security threat or a technology that will make the distributed ledger concept obsolete. But, like most technologies, it will emerge from the threatening media gloom to take its place at the solution table, in the form of a blockchain acceleration and security-improving offering. Quantum computing is still some way off from making a material difference in the IT landscape, but 2019 will bring a dose of sanity in removing the oppositional rhetoric from its emerging presence.

Automation, AI & IoT combine with blockchain for next-gen digital transformation

Blockchain is often discussed as if immutability and transaction security are its primary value proposition. But smart contracting and autonomous action within a DLT environment are at least as important in terms of overall value to the enterprise – and these are capabilities enriched and informed by other emerging technologies, including IoT, artificial intelligence, and cognitive automation. Increasingly, these four technologies are combining to form the basis of next-generation digital transformation for organizations seeking results beyond the limited promise of the initial wave of early transformational work (circa 2014-2017).

Convergence sets the stage for a viable long-term replacement for ERP

What these combined technologies are capable of reaches beyond the ‘four walls’ of the transformational enterprise; they enable whole supply chains to work together as extended ERP fabrics, and to incorporate financial, regulatory, and technology entities surrounding the production and distribution cycle. The discussions around these possibilities are just beginning as 2018 draws to a close, but we expect 2019 to bring more blueprinting and ecosystem construction conversations.

One final, overarching perspective for blockchain and DLT in general: we are progressing past the point of questioning whether these technologies have a role to play in the broader business IT ecosystem. When deployed against the right business challenges, on the right architecture for the task, with the right partner, blockchain is capable of remarkable improvements – and becomes a more strategic technology when considered as a transformational component alongside IoT, AI and automation. The future isn’t built exclusively on blockchain, but it is increasingly a part of the future of business transaction management.

]]>
<![CDATA[UiPath’s Go! Automation Marketplace Aims to Accelerate RPA Adoption in Enterprise Clients]]>

 

UiPath held its 2018 UiPathForward event October 3-4, 2018, in Miami, Florida. The focus of proceedings was the October release of the company’s software and a related trio of major announcements: a new automation marketplace, new investment in partner technology and marketing, and a new academic alliance program.

The analyst session included a visit from CEO Daniel Dines and an update on the company’s performance and roadmap. UiPath has grown from $1m ARR to $100m ARR in just 21 months, and the company is tracking toward $140m ARR for 2018, en route to Dines’ forecasted $200m ARR in early 2019. UiPath is adding nearly six enterprise clients a day and has begun staking a public claim – not without defensible merit – to being the fastest-growing enterprise software company in history.

During the event, UiPath announced a new academic alliance program, consisting of three sub-programs – one aimed at training higher education students for careers in automation, another providing educators with resources and examples to utilize in the classroom setting, and the third focused on educating youth in elementary and secondary educational settings. UiPath has a stated goal of partnering with ~1k schools and training ~1m students on its RPA platform.

The centerpiece of the event, however, was Release 2018.3 (Dragonfly), which was built around the launch of UiPath Go!, the company’s new online automation marketplace. It would be easy to characterize Go! as a direct response to Automation Anywhere’s Bot Store, but that would be overly simplistic. Where currently the Bot Store skews more toward apps as automation task solutions, Go! is an app store for particulate task components – so while the former might offer a complete end-to-end document processing bot, Go! would instead offer a set of smaller, more atomic components like signature verification, invoice number identification, address lookup and correction, etc.

The specific goal of Go! is to accelerate adoption of RPA in enterprise-scale clients, and the component focus of the offering is intended to fill in gaps in processes to allow them to be more entirely automated. The example presented was the aforementioned signature verification: given that a human might take two seconds to verify a signature, is it really worth automating this phase of the process? Not in and of itself, but failing to do so creates an attended automation out of an unattended one, requiring human input to complete. With Go!, companies can automate the large, obvious task phases from their existing automation component libraries, and then either build new components or download Go! components to complete the task automation in toto.
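
The component model can be pictured as small, single-purpose functions composed into one unattended flow. Below is a hypothetical Python sketch (the component names are invented, not actual Go! listings):

  def extract_invoice_number(doc: dict) -> dict:
      doc["invoice_no"] = "INV-0001"  # stand-in for an in-house component
      return doc

  def verify_signature(doc: dict) -> dict:
      doc["signature_ok"] = True  # stand-in for a downloaded component
      return doc

  def run_unattended(doc: dict, steps: list) -> dict:
      # with every gap filled by some component, no human input is needed
      for step in steps:
          doc = step(doc)
      return doc

  result = run_unattended({}, [extract_invoice_number, verify_signature])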

Dragonfly is designed to integrate Go! components into the traditional UiPath development environment, providing a means for automation architects to combine self-designed automation components with downloaded third-party components. Given the increased complexity of managing project automation software dependencies for automations built from both self-designed and downloaded components, UiPath has also improved the dependency and library management tools in 2018.3. For example, automation tasks that reuse components already developed can include libraries of such components stored centrally, reducing the amount of rework necessary for new projects.

In addition, the new dependencies management toolset allows automation designers to point projects at specific versions of automations and task components, instead of defaulting to the most recent, for advanced debugging purposes. Dragonfly also moves UiPath along the Citrix certification roadmap, as this release is designated Ready for Citrix, another step toward becoming Certified for Citrix. Finally, Dragonfly also adds new capabilities in VDI management, new localization capabilities in multiple languages, and UI improvements in the Studio environment.

In the interest of spurring development of Go! components, UiPath has designated $20m for investment in its partners during 2019. The investment is split between two funds, the UiPath Venture Innovation Fund and the UiPath Partner Acceleration Fund. The first of these is aimed directly at the Go! marketplace by providing incentives for developers to build UiPath Go! components. In at least one instance, UiPath has lent developers directly to an ISV along with funding to support such development. UiPath expects that these investment dollars will enable the Go! initiative to populate the store faster than a more passive approach of waiting for developers to share their automation code.

The second fund is a more traditional channel support fund, aimed at encouraging partners to develop on the UiPath platform and support joint marketing and sales efforts. The timing of this latter fund’s rollout, on the heels of UiPath’s deal registration/marketing and technical content portal announcement, demonstrates the company’s commitment to improving channel performance. Partners are key to UiPath’s ability to sustain its ongoing growth rate and the strength of its partner sales channels will be vital in securing the company’s next round of financing. (UiPath's split of partner/direct deployments is approaching 50/50, with an organizational goal of reaching 100% partner deployments by 2020.) Accordingly, it is clear that the company’s leadership team is now placing a strong and increasing emphasis on channel management as a driver of continued growth.

]]>
<![CDATA[7 Process Characteristics That Are Key to Blockchain Adoption]]>

 

With the rise of every new technology, there is a parallel rush among enterprises to incorporate it into the near-term IT roadmap, and for good reason: new technologies offer cost savings, CX improvement, better risk management, and a host of other benefits that look terrific in annual reports and quarterly earnings documentation.

Nowhere is that trend more prevalent than in blockchain, a technology that has created significant adoption pressure for enterprise clients amid a flurry of questions regarding platform selection, process redesign, and partner engagement. Blockchain’s benefits are much vaunted (security, immutability, fault tolerance, decentralization), but there is no shortage of drawbacks (throughput speed, a fragmented platform landscape, interoperability). Moreover, there is already talk of technologies that could replace or bypass the benefits of blockchain, from hashgraph to quantum computing, adding further to the murk surrounding the process of evaluating blockchain for organizational use.

In an environment spurring on organizations to adopt blockchain, there’s actually good cause to slow the overall technological rush and ensure that a blockchain solution is the right choice for a specific business challenge and commercial ecosystem. Blockchain is a transformative technology; it changes the fundamental way that transactions are encoded, stored, and tracked. In the right setting, it can be a material lever for unlocking value within the organization, in the supply chain, and among banking and regulatory interactors. In the wrong setting, it can be an expensive dead-end that diverts resources and time from a broader slate of digital transformation activities.

So how can organizations correctly sort the real blockchain opportunities out? In the course of my research in this area, I’ve identified seven key characteristics of business processes that form the basis of an organizational blockchain ‘goodness of fit’ checklist:

1. Transactional processes

Note that this is not the same as being financially transactional; any process where information changes hands between parties, even if compensation is not a part of the exchange, can be a candidate for blockchain deployment. Blockchain excels at documenting the transfer of value or information, and fiscal gains tend to accumulate with greater volumes of handshakes. As a result, the higher the transaction count in one cycle of a given process task, the more relevant blockchain becomes.

2. Frictional processes

Process friction can take many forms – from time delays in passing information from one party to the next, to per-message costs (such as SWIFT messaging expenses in financial services), to partner fatigue in disputing invoices or claims. The more time and expense accumulates within the process, the better a fit for blockchain technology a process is.

3. Non real-time, low-volume processes

Speed is not currently a significant blockchain platform strength, so processes that need to happen in real-time at scale may be a poor fit for the technology in its current form. While some specialized platforms – most notably Digital Asset, Symbiont, and Waves – offer compelling speed at scale, most of the big names in the platform space are not yet performing at speeds comparable to a relational database, so real-time processes happening in volume may be good candidates to consider when blockchain catches up in the next two years.

4. Simpler processes

The term smart contracts tends to be a confusing one in blockchain, as it suggests more intelligence than is really present in the technology currently. Smart contracts are smart in their management of the tasks surrounding a transaction, like document processing, notarization, and approval; a smart contract is self-executing in these areas and does not require additional input.

But for all that transactional intelligence, smart contracts remain relatively ‘dumb’ in terms of overall contract complexity. So, while most can follow relatively simple ‘if-then’ logic, complicated transactions with multiple forks and ‘fuzzy’ interpretation are beyond the current reach of most smart contract platforms. Again, this is a development priority for many platform providers, so expect to see this evolve swiftly in parallel with developments in AI – but at the time of writing, simpler processes are a better fit for blockchain implementation.
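
For a sense of where that boundary sits, here is a minimal, hypothetical sketch – written in Python rather than any actual smart contract language – of the simple ‘if-then’ settlement logic that current platforms handle comfortably; anything with multiple forks or ‘fuzzy’ interpretation quickly exceeds this model:

  from datetime import datetime

  def settle(escrow: dict, now: datetime) -> str:
      # deterministic 'if-then' logic: well within reach of smart contracts
      if escrow["delivery_confirmed"] and escrow["docs_notarized"]:
          return "release_payment_to_seller"
      if now > escrow["deadline"]:
          return "refund_buyer"
      return "wait"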

5. Oppositional processes

Transparency and trust are cornerstone components of an effective blockchain implementation, particularly so when there is an element of opposed goals in a process environment (payor versus payee being the most common such example). When all parties can monitor and oversee the documented process of content or payment through a process transparently from end to end, trust is improved and disputes tend to decline both in volume and in time required for resolution.

6. Fragmented processes

Intra-organizational applications of blockchain can produce meaningful benefits, but the real value is unlocked when a blockchain connects multiple parties operating in different domains – for example, in an ocean cargo management setting, exporters, banks, insurers, regulators, shipping providers, importers, and distributors. In such an environment, where responsibility and input are being passed among many organizations, the relevance of a blockchain solution increases considerably.           

7. Risk-accumulative processes

Corporate risk management is an accumulative function to begin with, as the audit task normally demands a large volume of signed and documented data – so the ability to produce the supporting documentation without significant organizational effort or data reconstruction is a vital task. Blockchain offers historically unparalleled data immutability and signed witness status, making it an exceptionally good fit for processes that accumulate large volumes of risk-relevant exchanges over time.

In conclusion

What can operations and IT executives take from this in planning for blockchain deployment? Currently, the most compelling fiscal and performance returns are coming from highly transactional processes with considerable process friction, prioritizing real-time transparency in low transaction volume, with minimal complexity, high levels of fragmentation, and considerable risk exposure.

However, it is critical to maintain a perspective on the role of implementing blockchain for these processes within the scope of a broader digital transformation initiative; blockchain demands many of the same transformation readiness checkpoints (big data capability, master data management and hygiene, and automation readiness) that other transformational initiatives do.

Finally, in assessing blockchain’s weaknesses, keep a weather eye on the horizon: blockchain’s two principal shortcomings to date (managing real-time transaction volume at scale and handling complex smart contracts) will increasingly become priorities for the major platform providers over the next two years.

]]>
<![CDATA[IPsoft’s Challenging Vision for Cognitive Automation]]>

 

I recently attended IPsoft’s Digital Workforce Summit in New York City, an intriguing event that in some ways represented a microcosm of the challenges clients are experiencing in moving from RPA to cognitive automation.

The AI challenge

Chetan Dube loomed large over proceedings. IPsoft’s president and CEO was onstage more than is common at events of this type, chairing several fireside chats himself in addition to his own technology keynote, and participating (with sleeves rolled up) at the analyst day that followed. He brought a clear challenge to the stage, while at the same time conveying the complexity and capability of IPsoft’s flagship cognitive products, Amelia and 1DESK, and making them understandable to the audience, in part by framing them in terms of commercial value and ROI.

RPA vendors have a simpler form of this challenge, but both robotic process automation and cognitive automation vendors have a hill to climb in gaining clients’ trust in the underlying technology and reassuring service buyers that automation will be both a net reducer of cost and a net creator of jobs (rather than a net displacer of them).

From a technological perspective, RPA sounds from the stage (and sells) much more like enterprise software than neuroscience or linguistics, so the overall pitch sits much more in the wheelhouse of IT buyers. The product does what it says on the tin, and the cavalcade of success stories that appear on event stages are designed to put clients’ concerns to rest. To be sure, RPA is by no means easy to implement, nor is it yet a mature offering in toto, but the bulk of the technological work needed to achieve a basic business result has been done. And overall, most vendors are working on incremental and iterative improvements to their core technology at this time.

AI differs in that it is still at the start of the journey towards robust, reliable customer-facing solutions. While Amelia is compelling technology (and is performing competently in a variety of settings across multiple industries), the version that IPsoft fields in 2025 will likely make today’s version seem almost like ELIZA by comparison, if Dube’s roadmap comes to fruition. He was keen to stress that Amelia is about much more than just software development, and he spent a lot of time explaining aspects of the core technology and how it was derived from cognitive theory. The underlying message, broadly supported by the other presenters at the event, was clearly one of power through simplicity.

IPsoft’s vision

The messaging statements coming from the stage during the event portrayed a diverse and wide-ranging vision for the future of Amelia. Dube sees Amelia as an end-to-end automation framework, while Chief Cognitive Officer Edwin van Bommel sees Amelia as a UI component able to escape the bounds of the chatbox and guide users through web and mobile content and actions. Chief Marketing Officer Anurag Harsh focused on AI through the lens of the business, and van Bommel presented a mature model for measuring the business ROI of AI.

Digging deeper, some of what Dube had to say was best read metaphorically. At one point he announced that by 2025 we will be unable to pass an employee in the hallway and know if he or she is human or digital. That comment elicited some degree of social media protest. But consider that what he was really saying is that most interaction in an enterprise today is performed electronically – in that case, ‘the hallways’ can be read as a metaphor for ‘day-to-day interaction’.

The question discussed by clients, prospects, and analysts was whether Dube was conveying a visionary roadmap or fueling hype in an often overhyped sector. Listening to his words and their context carefully, I tend towards the former. Any enterprise technology purchase demands three forms of reassurance from the vendor community:

  • That the product is commercially ready today and can take up the load it is promising to address
  • That the company has a long-term roadmap to ensure that a client’s investment stays relevant, and the product is not overtaken by the competition in terms of capacity and innovation
  • And perhaps most importantly, that the roadmap is portrayed realistically and not in an overstated fashion that might cause clients to leave in favor of competitors’ offerings.

I took away from Digital Workforce Summit that Dube was underscoring the first and second of these points, and doing so through transparency of operation and vision.

There are only two means of conveying the idea that you sell a complex product which works simply from the user perspective – you either portray it as a black box and ask that clients trust your brand promise, or you open the box and let clients see how complex the work really is. IPsoft opted for the latter, showing the product’s operation at multiple levels in live demonstrations. Time and again, Dube reminded the audience that it is unnecessary to grasp evolved scientific principles in order to take advantage of technologies that use those principles – so light switches work, in Dube’s example, without the user needing to grasp Faraday’s principles of induction. It still benefits all parties involved to see the complexity and grasp the degree to which IPsoft has worked to make that complexity accessible and actionable.

Conclusion

The challenge, of course, is that clients attend events of this kind to assess solutions. The majority of attendees at Digital Workforce Summit were there to learn whether IPsoft’s Amelia, in its latest form, is up-to-speed to manage customer interactions, and will continue to evolve apace to become a more complete conversational technology solution and fulfill the company’s ROI promises.

I came away with the sense that both are true. Now it is up to the firm’s technology group to translate Dube’s sweeping vision into fiscally rewarding operational reality for clients.

]]>
<![CDATA[Infosys Announces Blockchain-Powered Nia Provenance to Manage Complex Supply Chains]]>

 

EdgeVerve, an Infosys product subsidiary, this week announced a new blockchain-powered application for supply chain management as part of its product line. Nia Provenance is designed to address the challenges faced by organizations managing complex supply chain networks with multiple IT stacks engaged across multiple stakeholders. Here I take a quick look at the new application and its potential impact.

Supply chain traceability, transparency & trust

Nia Provenance is designed to provide traceability of products from source of origin to point of purchase with full transparency at every point along the supply chain. The product establishes trust through the utilization of a version of Bitcore, the blockchain architecture used by Bitcoin. While this can be a relatively simple task in agribusiness and other supply environments in which a product involves only processing as it moves through the supply chain, environments such as consumer electronics or medical devices are much more complex, involving integration and assembly of multiple components along the way. The ability to isolate a specific component and trace it to its source of origin, through phases of value addition timestamped on a blockchain ledger, is invaluable in case of recall or consumer danger.
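
Conceptually, this kind of traceability rests on hash-chained, timestamped records. The sketch below is a generic Python illustration of the idea – the field names are invented, and this is not the Nia Provenance data model:

  import hashlib, json, time

  def add_event(chain: list, event: dict) -> None:
      # each record commits to its predecessor, so history cannot be
      # rewritten without invalidating every subsequent hash
      prev = chain[-1]["hash"] if chain else "0" * 64
      body = {"ts": time.time(), "prev": prev, **event}
      body["hash"] = hashlib.sha256(
          json.dumps(body, sort_keys=True).encode()).hexdigest()
      chain.append(body)

  ledger = []
  add_event(ledger, {"stage": "grower", "lot": "IDN-4417", "cert": "organic"})
  add_event(ledger, {"stage": "roaster", "lot": "IDN-4417"})
  add_event(ledger, {"stage": "importer", "lot": "IDN-4417"})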

Transparency in Nia Provenance is provided through proof of process as the product or commodity moves through the system – so attributes that must be agreed on at specific phases of the supply chain, such as conflict-free or locally-sourced, can be seen in the system as they are accumulated. Similarly, regulatory inspections and certifications are more easily tracked and audited through a blockchain solution like Nia Provenance.

Finally, trust is gained in a system with a combination of data immutability, equality in network participation as a result of decentralization of the overall SCM ledger, and cryptographic information security. Over time, the benefits of a blockchain SCM environment accrue both to the organizational bottom line, in the form of cost savings, and to the organization’s brand as a function of increased consumer trust in the brand promise.

Agribusiness client case

As one example of how Nia Provenance is being leveraged in the real world, a global agribusiness firm undertook a proof of concept for its coffee sourcing division in Indonesia to track the journey of coffee from the growing site, through the roasting plant, the blend manufacturer, the quality control operation, the logistics providers, and on to the importer. This enabled the trader to provide trusted accreditation and certification information to the importer for properties such as organic or fair trade status, or that the coffee was grown using sustainable agriculture standards.

Providing strategic blockchain reach

Nia Provenance provides Infosys with three important sources of strategic blockchain ‘reach’ in an increasingly competitive market, because:

  • It is platform-agnostic and purpose-built to dock with multiple blockchain architectures. A supply chain solution that relies too heavily on the specific capabilities of one common blockchain architecture or another – for example, Ethereum or Hyperledger – would encounter difficulty working with other upstream or downstream architectures. By keeping the DLT technology in an abstraction layer, Nia Provenance eases the process of incorporating different blockchain architectures in a complex SCM task environment
  • It is designed to benefit multiple supply chain stakeholders, not just the client. Blockchain adoption becomes more appealing to upstream and downstream stakeholders, as well as horizontal entities like banks, insurers and regulators, when the ecosystem is built with clear benefits for them as well as the organizing entity. Nia Provenance is designed from the ground up with a mindset inclusive of suppliers, inspectors, insurers, shippers, traders, manufacturers, banks, distributors, and end customers
  • It is designed to span multiple industries. Although the platform has its origins in agribusiness, Nia Provenance looks to be up to the task of SCM applications in manufacturing, consumer goods/FMCG, food and beverage, and specialized applications such as cold-chain pharmaceuticals.

Summary

Supply chain provenance is a core application for blockchain, and one that we expect to be a clear value delivery vehicle for blockchain technology through 2025. The combination of – as Infosys puts it – traceability, transparency, and trust that blockchain provides is a compelling proposition. Nia Provenance offers a solution across a broad variety of industry applications for organizations seeking lower cost and greater security in their supply chain operations.

]]>
<![CDATA[The Advantages of Building a Bespoke Blockchain Platform]]>

 

For all the discussion in the blockchain solution industry around platform selection (are they choosing Fabric or Sawtooth? Quorum or Corda?), you’d be forgiven for thinking that every provider’s first stop is the open-source infrastructure shelf. But the reality is that blockchain is more a concept than a fixed architecture, and the platforms mentioned do not encompass the totality of use case needs for solution developers. As a result, some solution developers have elected to start with a blank sheet of paper and build blockchain solutions from the ground up.

One such company is Symbiont, which started down this road much earlier than most. Faced with the task of building a smart contracts platform for the BFSI industry, the company examined what was available in prebuilt blockchain platform infrastructure and did not see its solution requirements represented in those offerings – so it built its own. Symbiont’s concerns centered on the two areas of scalability and security, and for the firm’s target accounts in capital markets and mortgages, those were red-letter issues.

The company addressed these concerns with Symbiont Assembly, the company’s proprietary distributed ledger technology. Assembly was designed to address three specific demands of high-volume transactional processes in the financial services sector: fault tolerance, volume management, and security.

Supporting fault tolerance

Assembly addresses the first of these through the application of a design called Byzantine Fault Tolerance (BFT). Where some blockchain platforms allow for node failure within a distributed ledger environment, platforms using BFT broaden that definition to include the possibility of a node acting maliciously, and can control for actions taken by these nodes as well. Symbiont’s implementation of BFT is based on the BFT-SMaRt protocol.
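
The arithmetic underlying BFT designs generally – a property of the consensus family, not a claim about Symbiont’s specific parameters – is that a network of n nodes tolerates f malicious nodes only when n ≥ 3f + 1, with quorums sized so that any two quorums overlap in at least one honest node:

  def max_faulty(n: int) -> int:
      # n >= 3f + 1  implies  f = (n - 1) // 3
      return (n - 1) // 3

  def quorum(n: int) -> int:
      # 2f + 1 votes when n = 3f + 1: any two quorums share an honest node
      return n - max_faulty(n)

  for n in (4, 7, 10):
      print(f"{n} nodes: tolerates {max_faulty(n)} faulty, quorum of {quorum(n)}")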

Volume management

In addressing the volume demands of financial services processing, the choice of the BFT-SMaRt protocol was again important, as it enables Assembly to consistently reach performance levels in the range of ~80k transactions per second.

This has two specific benefits, one obvious and one less so. First, it means that Assembly can manage the very high-volume transaction pace of applications in specialized financial trading markets without scale concerns. Second, it means that in lower-volume environments, the extra ‘headroom’ that BFT-SMaRt affords Assembly can be used to store related data on the ledger without the need to resort to a centralized data store to hold, for example, scanned legal documents that support smart contracts.

Addressing security concerns

The same BFT architecture that supports Assembly’s fault tolerance also provides an additional layer of security, in that malicious node activity is actively identified and quarantined, while ‘honest’ nodes can continue to communicate and transact via consensus. Add in encryption of data, whereby Assembly creates a private security ledger within the larger ledger, and the result is a robust level of security for applications with significant risk of malicious activity in high-value trading and exchange.

Advantages of building a bespoke blockchain platform

Building its own blockchain platform cost Symbiont many hours and R&D dollars that competitors did not have to spend, but ultimately this decision provides Symbiont with three strategic advantages over competitors:

  • Assembly is purpose-built for BFT-relevant, high-volume environments. As a result, the platform has performance and throughput benefits for applications in these environments compared with broader-use blockchain platforms that are intended to be used across a variety of business DLT needs. To some degree this limits the flexibility of the platform in other use cases, but just as a Formula One engine is a bespoke tool for a specific job, so too is Assembly specifically designed to excel in its native use case environment. That provides real benefits to users electing to build their banking DLT applications on the Assembly architecture
  • Symbiont can provide for third-party smart contract writing, should it elect to do so. While this is not in the roadmap for the moment, and Symbiont appears content to build client solutions on proprietary deliverables from the contract-writing layer through the complete infrastructure of the solution, the company could elect to allow clients to write their own smart contracts ‘at the top of the stack’. Symbiont does intend to keep the core Assembly platform proprietary to the company for the foreseeable future
  • Assembly may attract less malicious activity interest than traditional platforms. The rising number of blockchain projects based on HyperLedger and Ethereum is certain to attract more malicious activity, given the commonality of the architecture across a broad base of deployments. In much the same way that Windows historically attracted more virus incursions than less common OS platforms, Assembly will tend to attract less attention than platforms with broader user bases. Moreover, Assembly’s BFT foundations will enable it to deal more effectively with those events that do occur.

Summary

Symbiont isn’t alone in developing its own proprietary blockchain technology architecture rather than choosing from the broadly available offerings in the space, and as blockchain enters the mainstream of enterprise business, other provider organizations will surely go the same route.

What Symbiont has established is an exemplar for developing a purpose-built blockchain platform, beginning with the specific needs of the task environment at scale, and proceeding to address those needs carefully in the development process. 

]]>
<![CDATA[6 Ways to Prepare for Cognitive Automation During RPA Implementation]]>

 

2017 brought a surge of RPA deployments across industries, and in 2018 that trend has accelerated as more and more firms begin exploring the many benefits of a digital workforce. But even as some firms are just getting their RPA projects started, others are beginning to explore the next phase: cognitive automation. And a common challenge for firms is the desire to begin planning for a more intelligent digital workforce while automating simpler rule-based processes today.

Having spoken with organizations at different stages of their journeys from BPM to RPA and on to cognitive, we see a set of tasks that companies can begin during RPA implementation to ensure that they are well positioned for the machine learning-intensive demands of cognitive automation:

Design insight points into the process for machine learning

Too often, the concept of straight-through processing (STP) gets conflated with the idea of measuring task automation only on completion. But for learning platforms, it is vital to understand exactly where variance and exceptions arise in the process – so allow your RPA platform to document its progress in detail from task inception to task completion.

At each stage, provide a data outlet to track the task’s variance on a stage-by-stage basis. A cognitive platform can then learn where, within each task, variance is most likely to arise – and it may be the case that the work can be redesigned to give straightforward subtasks to a lower-cost RPA platform while cognitive automation handles the more complex subtasks.
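
A minimal sketch of what such an insight point might look like follows; the stage names, statuses, and in-memory log are hypothetical illustrations rather than any particular vendor’s API:

```python
# Hypothetical sketch: emit a telemetry record at each stage of an
# automated task so a learning platform can see where variance and
# exceptions arise. Stage names and statuses are illustrative assumptions.
import time
from collections import Counter

STAGE_LOG = []

def log_stage(task_id: str, stage: str, status: str, detail: str = "") -> None:
    """Record one stage outcome: 'ok', 'variance', or 'exception'."""
    STAGE_LOG.append({"task_id": task_id, "stage": stage,
                      "status": status, "detail": detail, "ts": time.time()})

log_stage("T-001", "fetch_invoice", "ok")
log_stage("T-001", "extract_fields", "variance", "unexpected date format")
log_stage("T-001", "post_to_erp", "exception", "missing vendor ID")

# After a batch of tasks, see which stages generate the most variance:
for (stage, status), n in Counter((r["stage"], r["status"])
                                  for r in STAGE_LOG).items():
    print(f"{stage}: {status} x{n}")
```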

Build a robot with pen & paper first

One of the basic measures for determining whether a process can be managed by BPM, by RPA, or by cognitive automation is the degree to which it can be expressed as a function of rigorous rules. So, begin by building a pen-and-paper robot – a list of the rules by which a worker, human or digital, is expected to execute against the task.

Consider ‘borrowing’ an employee with no familiarity with the involved task to see if the task is genuinely as straightforward and rule-bounded as it seems – or whether, perhaps, it involves a higher order of decision-making that could require cognitive automation or AI.
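
As a simple illustration, the ‘pen-and-paper robot’ can be expressed directly as code: every step is a testable rule. The invoice-routing rules below are hypothetical; the point is that if any step resists being written this way, the task likely calls for cognitive automation rather than plain RPA:

```python
# Hypothetical sketch of a 'pen-and-paper robot': the task expressed as
# explicit, testable rules. If any step cannot be written as a rule like
# these, the task likely needs cognitive automation rather than plain RPA.
def route_invoice(invoice: dict) -> str:
    # Rule 1: reject anything missing a PO number
    if not invoice.get("po_number"):
        return "return-to-sender"
    # Rule 2: auto-approve small amounts
    if invoice["amount"] <= 500:
        return "auto-approve"
    # Rule 3: amounts above the approval limit need a human
    if invoice["amount"] > 10_000:
        return "escalate-to-manager"
    # Everything else follows the standard queue
    return "standard-queue"

assert route_invoice({"po_number": "PO-7", "amount": 120}) == "auto-approve"
```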

Use the process to revisit the existing work design

In many organizations, tasks have ‘grown up’ inorganically around inputs from multiple stakeholders and have been amended and revised on the fly as the pace of business has demanded. But the migration first to RPA and then on to cognitive automation is a gift-wrapped opportunity to revisit how, where, and when work is done within an organization.

Can key task components be time-shifted to less expensive computing cycles overnight or on weekends? Can whole tasks be re-divided into simpler and more complex components and allocated to the lowest-cost tool for the job?

Dock the initiative with in-house ML & data initiatives

Cognitive automation does not have to remain isolated to individual task areas or divisions within an organization. Often, ML initiatives produce better results when given access to other business areas to learn from. What can cognitive automation learn about customer service tasks from paying a ‘virtual visit’ to the manufacturing floor via IoT? Potentially a great deal: if specific products or parts are difficult to machine to tolerance within an allowed margin of error, they may be disproportionately common sources of customer complaints and RMAs.

Similarly, a credit risk-scoring ML platform can learn from patterns of exception management in credit applications being managed in a cognitive automation environment. For ML initiatives, enabling one implementation to learn from others is a key success factor in producing ‘brilliant’ organizational AI.

Revisit the organizational data hygiene & governance models

Data scientists will be the first to underscore the importance of introducing clean data into any environment in which decision-making will be a task stage. Data with poor hygiene, and with low levels of governance surrounding the data cleaning and taxonomy management function, will create equally poor results from cognitive automation technology that utilizes it to make decisions.

Cognitive software is no different from humans in this respect; garbage in, garbage out, as the old saying goes. As a result, a comprehensive review of organizational data hygiene and governance models will pay dividends down the road in cognitive work.

Discuss your vendor’s existing technology & roadmap in cognitive & AI

Across the RPA sector, cognitive is a central concept for most vendors’ 2018-2020 roadmaps. Scheduling a working session now on migrating the organization from RPA to cognitive automation provides clients with insight on their vendor’s strengths and capability set. It also enables vendors to get a close look at ‘on the ground’ cognitive automation needs in different organizational task areas.

That’s win/win – and it helps ensure that an existing investment in vendor technology is well-positioned to take the organization forward into cognitive based on a sound understanding of client needs.

 

NelsonHall conducts continuous research into all aspects of RPA and AI technologies and services as part of its RPA & Cognitive Services research program. A major report on RPA & AI Technology Evaluation by Dave Mayer has just been published, and coming soon is a major report on Business Process Transformation through RPA & AI by John Willmott. To find out more, contact Guy Saunders.

]]>
<![CDATA[Application of RPA & AI to Unstructured Data Processing: The Next Big Milestone for Shared Services]]>

 

Shared Services Centers (SSCs) have made progress in the initial application of RPA, gained some experience in its application, and are typically now looking to scale their use of RPA widely across their operations. However, although organizations have often undertaken some level of standardization and simplification of their processes to facilitate RPA adoption, one stumbling block that still frequently inhibits greater levels of automation and straight-through processing is an inability to process unstructured data. And this is limiting the value organizations are currently able to realize from automation initiatives.

NelsonHall recently interviewed 127 SSC executives across industries in the U.S. and Europe to understand the progress made in adopting RPA & AI, along with their satisfaction levels and future expectations. To quote from one executive interviewed, “I think the main strategy in the past has been to avoid unstructured data or pre-process it to make it structured. Now we are beginning to embrace the challenge of unstructured data and are growing an internal understanding of how to piece together automation.”

Low Satisfaction in Handling Unstructured Data Widespread in SSCs

This is an important next step. Unstructured data remains rife in organizations within customer and supplier emails and documents, with, for example, supplier invoices taking on a myriad of supplier-dependent formats and handwritten material far from extinct within customer applications.

This need to process unstructured data impacts not just mailroom document management, but a wide range of shared services processes. By industry sector, the processes that have a combination of high levels of unstructured data and a significant level of dissatisfaction with its capture and processing are:

  • Retail & Commercial Banking: new account set-up and customer service
  • P&C Insurance: fraud detection, claims processing, mailroom document management, policy maintenance, and customer service
  • Telecoms: customer service.

Within finance & accounting shared services, the same issues are found within supplier & catalog management, purchase invoice processing, and 3-way matching.

So, it is highly important that SSCs get to grips with handling unstructured documents and data within these process areas. However, this is unknown territory for many SSCs; they are typically in the early stages of automating handling of unstructured data and lack expertise in effective identification and implementation of suitable technologies. In addition, SSCs often lack the necessary experience in process change management and speed of process change when handling RPA & AI projects. Indeed, SSCs have often struggled in the early stages of automation with the challenge of realizing the expected cost savings from this technology. Applying automation is one thing; realizing its benefits through effective process change management, and ensuring that unexpected exceptions don’t derail the process and the associated cost realization, has sometimes been a significant issue.

Combining OCR & Machine Learning is Critical to Processing Unstructured Data

Accordingly, it is critical that SSCs now automate data classification and extraction from their unstructured documents. At present, 80% of SSCs across sectors are still manually classifying documents, with OCR only used modestly and not to its full potential. However, there are strong levels of intention to adopt OCR and RPA & AI technologies in support of processing unstructured data within SSCs during 2018 and 2019, as shown below:

 

SSCs are considering a broad range of technologies for processing unstructured data, with OCR clearly a key technology, but further supported by machine learning in its various forms for effective text classification and extraction. To quote from one executive interviewed, “We want to speed up deployment of automation within the mailroom, we want more OCR and natural language processing in place.”
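
By way of illustration, the pipeline these executives describe can be sketched in a few lines, assuming pytesseract for the OCR step and scikit-learn for the text classification step; the document classes and training snippets below are hypothetical:

```python
# Illustrative sketch of the OCR + machine-learning pipeline described
# above, assuming pytesseract for OCR and scikit-learn for text
# classification. Document classes and training data are hypothetical.
import pytesseract
from PIL import Image
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# 1. OCR: turn a scanned document into raw text
def ocr(path: str) -> str:
    return pytesseract.image_to_string(Image.open(path))

# 2. Train a simple text classifier on already-labeled documents
train_texts = ["invoice number 1234 total due", "claim for water damage ..."]
train_labels = ["invoice", "claim"]
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)

# 3. Classify a new scanned document, then route it to the right queue
# print(clf.predict([ocr("scan_0001.png")]))  # path is hypothetical
```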

Need for Improved Turnaround Times Now the Main Driving Force

However, in terms of benefits achievement, there is currently quite a significant difference between organizations’ automation aspirations and what they have already achieved. While organizations placed a high initial emphasis within their automation initiatives on cost savings, and the achievement of cost savings remains very important to SSCs, the focus of executives within SSCs has now increasingly turned to improving process turnaround times.

Within the telecoms sector, this leads to a high expectation of improved customer satisfaction. However, executives within property & casualty and finance & accounting SSCs tend to attach an equal or higher importance to the impact of these technologies on employee satisfaction – by automating some of the least satisfying types of work within the organization, thus allowing personnel to focus on higher value-added aspects of the process (i.e. other than finding and entering data from customer documents and invoices).

The principal benefits sought by SSCs from implementing RPA & AI in support of processing of unstructured data are shown below:

 

70% of SSCs Highly Likely to Purchase Operational Service Covering Unstructured Data Processing

While automation is often depicted as having an adverse impact on the outsourcing industry, the reality is often quite the opposite, and organizations seek help in effectively deploying new digital technologies. Indeed, this is certainly the case with unstructured data processing.

SSCs will tend to implement unstructured data handling in-house where the information being handled is highly sensitive, where security is critically important, and where regulation or the set-up of internal systems inhibits use of a third-party service. However, elsewhere, where these constraints do not apply, SSC executives express a high level of intent to purchase external services in support of document classification and extraction of unstructured data. ~70% of SSCs are highly likely to purchase operational services for document processing, including document classification and extraction of unstructured data, while only a minority express a high intent to implement in-house or via a systems integrator.

 

NelsonHall conducts continuous research into all aspects of RPA and AI technologies and services as part of its RPA & Cognitive Services research program. A major report on RPA & AI Technology Evaluation by Dave Mayer has just been published, and coming soon is a major report on Business Process Transformation through RPA & AI by John Willmott. To find out more, contact Guy Saunders.

]]>
<![CDATA[Kryon’s Rebranding Focuses on the Business Benefits of RPA]]>

 

Kryon has today launched a new brand presence, along with a new strategic perspective on RPA focused on delivering business benefits. The former Kryon Systems (now simply Kryon) will now be organized around a three-pronged approach the company refers to as ‘Discover, Automate, Optimize’.

As part of this brand migration, several aspects of Kryon’s go-to-market approach will change, as described below.

Focusing on the human side of the RPA equation

Kryon’s former branding package included limited personification of the RPA offering under the Leo name, and also featured an anthropomorphized robot ‘mascot’ in much of the company’s promotional and industry relations materials. That component of the company’s branding has been eliminated from its new visual identity, which now focuses much more on the human side of the RPA equation and the concept of integrating RPA into a hybrid human-digital workforce.

A new focus on business benefits rather than technological innovation

As more RPA features begin to become ‘table stakes’ within the sector, NelsonHall has expected vendors to begin the shift from focusing on product features to business outcomes. Kryon joins that trend with its rebranding, which will include more case studies and success stories represented as a function of business KPIs, while keeping the technological conversation within the context of real-world improvements in cost, efficiency, and quality.

A new framework for the brand

The ‘Discover, Automate, Optimize’ theme speaks to Kryon’s three primary offering areas:

  • Process discovery (already soft-launched, but due for a more formal product rollout in early summer of 2018)
  • Traditional RPA
  • Analytics/AI.

To date, these have been marketed as components, but under the new branding they become part of a larger solution intended to reposition Kryon as an end-to-end provider of business process optimization solutions.

A clear effort to differentiate its offerings

Kryon has sometimes suffered in terms of its ability to break out from the pack of RPA providers and carve out a differentiated and sustainable niche for itself. Under the new brand positioning, the company is making a clear effort to differentiate its offerings based on the ability to do more than automate simple, repetitive tasks.

The company talks about enabling human workers to be mindful and focused on creative tasks by eliminating background work entirely through the application of RPA combined with AI and machine learning. While other firms offer similar messaging, Kryon’s new branding package treats repetitive work as ‘background noise’ to be removed from the typical employee’s workday.

A new name, logo & tagline

While these are often secondary in importance from a technology and business analyst’s perspective, it is worth mentioning what is and what, importantly, is not included in Kryon’s visual rebrand. Gone is the word ‘Systems’ from the old Kryon logo, in a clear effort to migrate the firm towards a broader service mandate.

The tagline ‘Be Your Future’ is added in place of ‘Systems’, again suggesting a broadening of the brand. Finally, the letter ‘O’ in the logo is given a half-gold, half-blue treatment to emphasize the hybrid human/digital nature of its offering.

Summary

2018 and 2019 are expected to be watershed years in the RPA sector, as competitive positioning begins to come into focus and leadership niches become occupied as the sector matures. Kryon is taking clear steps to include itself in the ‘tier one’ vendor conversation through a set of brand migration moves that position the company to compete well into the next decade.

]]>
<![CDATA[Redwood Introduces Disruptive New RPA Pricing Model]]>

 

Today, Redwood announced a new pricing model for its RPA software in which users pay only for units of work completed, and on a cost basis equivalent to efficient human work on the same task. As a result, if a Redwood robot sends an email, or retrieves specific data, or performs reconciliation work, the organization is charged on completion for specific amounts relevant to the parallel human cost of execution in a ‘perfect work efficiency’ environment.

This is a fundamental change from the prevalent model in the industry of paying for licenses for RPA software and estimating how many licenses will be necessary to perform specific tasks. While other pricing models exist – ranging from paying for the process rather than the robot, to buying robots outright as owned software properties – this is the first time that pricing is available both on completion and on a granular, task-centric basis. In essence, Redwood is enabling organizations implementing RPA to pay on a piecework basis, and only after the work is performed.

The new pricing model will mark the second major transition in the company’s client contracting model in the last five years. Historically, Redwood sold its software on a perpetual licensing basis, which changed over time to a more traditional annual licensed offering (although some clients are still on perpetual licenses). Redwood will need to manage a transition period in which clients can switch to the utility pricing model on the anniversary of their licenses, which may introduce some unevenness to the company’s financial performance during 2018-2019.

There are more implications for Redwood, and for the RPA industry, as a result of deploying this new pricing model:

The new model changes the revenue & profit mix for Redwood…

The company expects to see some flattening of topline revenue as a result of this change, but improved margins, with an overall increase in transaction volume. Redwood believes that by reducing barriers to entry in RPA through enabling payment by the task, and after the fact, more prospective clients will adopt the Redwood solution. This is a logical evolution of the Redwood business model in that it promotes Redwood’s library of prebuilt robots to a larger prospective audience and smooths the on-ramp to Redwood adoption for more organizations.

…and demands that Redwood’s pricing model be appealing

The company has researched levels of productivity and cost in both Western and offshore economies and modeled a function that prices Redwood tasks at roughly 20 Euro cents per moderate-duty task (retrieving a report, reconciling data, sending an email, etc.) based on a perfectly-efficient Western worker performing 156 such tasks per hour for a fully-loaded employment cost of €50k. (A low-cost economy worker performs half as many such tasks per hour for half the cost in Redwood’s model.)
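
The arithmetic behind that figure is easy to verify. A minimal worked version follows; note that the ~1,600 annual working hours is our assumption for illustration, not a number Redwood has published:

```python
# Worked version of Redwood's stated pricing assumptions. The ~1,600
# annual working hours figure is our assumption to make the arithmetic
# visible; Redwood's exact inputs may differ.
fully_loaded_cost_eur = 50_000   # Western worker, per year (stated)
tasks_per_hour = 156             # perfectly efficient worker (stated)
hours_per_year = 1_600           # assumption (~200 days x 8 hours)

cost_per_task = fully_loaded_cost_eur / (tasks_per_hour * hours_per_year)
print(f"~EUR {cost_per_task:.2f} per task")   # ~EUR 0.20, i.e. ~20 cents

# Redwood's low-cost-economy model: half the tasks/hour at half the cost,
# which works out to the same ~20 cents per task
low_cost = (fully_loaded_cost_eur / 2) / ((tasks_per_hour / 2) * hours_per_year)
print(f"~EUR {low_cost:.2f} per task")
```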

In order for Redwood to unlock the full potential value of this new pricing model, these assumptions and metrics need to be appealing to buyers.

Redwood creates more pressure on the traditional licensing model

This is still a relatively young industry in terms of establishing pricing and contracting norms, so disruptive acts (and Redwood’s new pricing model will certainly be disruptive at some level) create pressure on ‘safer’, more traditional modes of client engagement. Redwood holds a degree of advantage in that the company has an extensive library of ~35,000 prebuilt robots that it can price and sell on this model, as opposed to RPA providers whose software is customized and deployed within the client organization. It will be more difficult for traditional RPA providers to cost-effectively match the Redwood model in the market.

Reporting & invoicing challenges are addressed through Redwood Robotics itself

Transitioning from a license-based contracting structure to a high-resolution, granular use-based contracting structure would normally be a steep challenge for a software organization accustomed to annual licensing, given the degree of reporting and invoicing complexity involved. Fortunately for Redwood, these processes are being handled in their entirety by additional automations, deployed to the client organization at no charge, which monitor and document Redwood automation usage and generate regularly-scheduled invoices for the client.

Summary

Redwood has put forth a compelling new framework for equating robotic and human labor costs, and for enabling organizations to pay only for work done rather than paying for the abstraction layer inherent to a robot license.

In effect, Redwood offers piecework rates in a market dominated by ‘salaried-FTE’ model robots. While this is unlikely to become the norm for RPA pricing, it provides Redwood with a new, and potentially sustainable, source of competitive differentiation.

]]>
<![CDATA[UiPath Gains Unicorn Status with Series B Funding; To Expand into AI]]>

 

This morning, UiPath announced that the company will be receiving $153m in Series B funding from a consortium including the company’s existing investors, with two new names involved – Kleiner Perkins and CapitalG, the late-stage growth venture capital fund financed by Alphabet Inc.

The latter is of note as this arm of Google focuses on profit-centric investment rather than acquiring to serve Google’s overall strategic goals. Its notable investments to date have included Gusto (then ZenPayroll) in 2015, Airbnb and Snap in 2016, and Lyft in 2017. As a result of this investment, Laela Sturdy of CapitalG and John Doerr of Kleiner Perkins will be joining UiPath’s strategic advisory board.

This latest round of financing is meaningful on several fronts:

It places UiPath into unicorn territory

This round of funding places UiPath’s market valuation in the vicinity of $1.1bn, implying that the company has grown from seed funding to unicorn status in just 36 months. By contrast, fellow RPA unicorn Blue Prism was founded in 2001 and only recently crossed into unicorn status with a market value of $1.02bn.

…which requires more resources to support rapid growth

While this is impressive supernormal growth in its own right, and a rate that suggests that UiPath has taken considerable share in the past twelve months, it carries with it its own slate of challenges, as referenced in the profile of UiPath that NelsonHall published earlier this year. The company’s level of growth needs infrastructural backfill in multiple areas, from R&D to sales and marketing. This is a company that is adding 2.5 customers a day on its existing funding levels and operating cashflow. What might UiPath’s organic growth trajectory look like with significantly deeper sales, marketing, deployment, and R&D capabilities? We are about to find out.

It positions the company to acquire in the AI space

The company now boasts a combined war chest of ~$200m in cash, more than enough for a tactical bolt-on or two in the areas of cognitive automation and AI. UiPath has already established partnerships with Celonis and Enate, so the company is likely to look outside of those firms’ service footprints for acquisitions. Specifically, UiPath is looking for capabilities in the areas of natural language processing, machine learning, and identity recognition. There will be no shortage of good candidates for UiPath to choose from in these areas, but betting correctly and acquiring for maximum value will be critical in positioning UiPath for success.

It ties the company closer to Google

The CapitalG investment certainly suggests a closer relationship between UiPath and Google, which might have already manifested in UiPath’s decision to utilize Google Cloud for its cloud machine learning initiative. Given Blue Prism’s alignment with IBM, the major RPA providers are beginning to find their technology partners for long-term competition in the segment.

Google will be able to provide UiPath with a host of competitive advantages in terms of technology licensure, partner ecosystem development, and market presence. It would be interesting to see where UiPath might be in a year’s time with a closer relationship with Google’s TensorFlow team, for example, or with its Generative Adversarial Networks working groups.

It likely launches the next wave of innovation in the segment

Armed with a substantive war chest of cash with which to build and acquire new capabilities, UiPath’s actions during 2018 are not likely to go unanswered by other segment leaders. As a result, UiPath’s next moves will likely signal the beginning of the next stage of evolution in the RPA sector – one we expect to bring out the best in technological innovation among those leaders. We see UiPath as a leader in that evolutionary process.

]]>
<![CDATA[7 Essential Tasks Prior to Any RPA Implementation]]>

 

With every new software release from RPA sector leaders, there is always much to be excited about as vendors continue to push the technological boundaries of workplace automation. Whether those new capabilities focus on cognition, or security, or scalability, the technology available to us continues to be a source of inspiration and innovative thinking in how those new capabilities can be applied.

But success in an RPA deployment does not depend entirely on the technology involved. In fact, the implementation design framework for RPA is often just as important – if not more so – in determining whether a deployment is successful. Install the most cutting-edge platform available into a subpar implementation design framework, and no amount of technological innovation can overcome that hindrance.

With this in mind, here are seven tasks that should be part of any RPA implementation plan before organizations put pen to paper to sign up with an RPA platform vendor.

Create a cohesive vision of what automation will achieve

Automation is the ultimate strict interpretation code: it does precisely as it’s told, at speed, and in volume. But it must be pointed at the right corporate challenges, with a long-term vision for what it is (and is not) expected to do in order to be successful in that mission. That process involves asking some broad-ranging questions up-front:

  • What stakeholders are involved – internally and externally – in the automation initiative?
  • What are our organization’s expectations of the initiative?
  • How will we know if we succeed or fail?
  • What metrics will drive those assessments?
  • Where will this initiative go next within our organization?
  • Will we involve our supply chain partners or technology allies in this process?

Ensure a staff model that can scale at the speed of enterprise automation

We tend to spend so much time talking about FTE reduction in the automation sector that we overlook the very real issue of FTE sourcing (in volume!) in relation to the implementation of automation at enterprise scale. Automation needs designers, coders, project managers, and support personnel, all familiar with the platform and able to contribute new code and thoughtware assets at speed.

Some vendors are addressing this issue head-on with initiatives like Automation Anywhere University, UiPath Academy, and Blue Prism Learning and Accreditation, and others have similar initiatives in the works. It is also important that organizational HR professionals be briefed on the specific skillsets necessary for automation-related hires; this is a relatively new field, and partnering up-front on talent acquisition can yield meaningful benefits down the road.

Plan in detail for a labor outage

The RPA sector is rife with reassurances about digital workers: they never go on strike; they don’t sleep or require breaks; they don’t call in sick. But things do go wrong. And while the RPA vendors offer impressive SLAs with respect to getting clients back online quickly, sometimes it’s necessary to handle hours, or even days, of automated work manually. Having mature high-availability and disaster recovery capability built into the platform – as Automation Anywhere included in Enterprise Release 11 – mitigates these concerns to a degree, but planning for the worst means just that.

Connect with the press and the labor community

Don’t skip this section because it sounds like organized labor management only, although that’s a factor too. Automation stories get out, and local and national press alike are eager to cover RPA initiatives at large organizations. It’s a hot-button topic and an easily accessible story.

Unfortunately, it’s also all too easy to take an automation story and run with the sensationalist aspects of FTE displacement and cost reduction. By interacting with journalists and labor leaders in advance of launching an automation initiative, you’re owning the story before it can be owned elsewhere in the content chain.

Have a retraining and upskilling initiative parallel to your automation COE

Automation can quickly reduce the number of humans necessary in a work area by half or even more. What is your organization’s plan for redeployment of that human capital to other, higher-value tasks? Who occupies those task chairs now – and what will they be doing?

Once the task of automation deployment is complete, there is still process work to be done in finding value-added work for humans who have a reduced workload due to automation. Some organizations are finding and unlocking new sources of enterprise value in doing so – for example, front-line workers who have their workloads reduced through automation can often ‘see the forest’ better and can advise their superiors on ways to streamline and improve processes.

Similarly, automation can bring together working groups on tasks that have connected automations between departments, allowing for new conversations, strategies, and processes to take shape.

Have an articulation plan for RPA and other advanced technologies

RPA and cognitive automation do more than improve the quality and consistency of work – they also improve the quality and consistency of task-related data. That is an invaluable characteristic of RPA from the organizational data and analytics perspective, and one that is often overlooked in the planning process.

While it might take days for a service center to spot a trend in common product complaints, RPA platforms could see the same trend in hours, combine that data in an organizational data discovery environment with IoT data from the production line, and identify a product fault faster and more efficiently than a traditional workforce might. When designing an automation initiative, it is vital to take these opportunities into account and plan for them.

Create a roadmap to cognitive automation and beyond

RPA is no more a destination than business rules engines were, or CRM, or ERP. These were all enabling technologies that oriented and guided organizations towards greater levels of agility, awareness and capability. Similarly, deploying RPA provides organizations with insight into the complexity, structure and dependencies of specific tasks. Working towards task automation yields real clarity, on a workflow-by-workflow basis, of what level of cognition will be necessary to achieve meaningful automation levels.

While many tasks can be achieved by current levels of vendor RPA capability, others will require more evolved cognitive automation, and some will be reserved for the future, when new AI capabilities become available. By designating relevant work processes to their automation ‘containers’, an enterprise roadmap to cognitive automation and AI begins to take shape.
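
One way to make that roadmap concrete is to score each workflow on how rule-bounded it is and assign it to a container accordingly. The thresholds and scoring field in the sketch below are illustrative assumptions, not a formal methodology:

```python
# Hypothetical sketch: triage workflows into automation 'containers'
# based on how rule-bounded each one is. Thresholds and the scoring
# field are illustrative assumptions.
def automation_container(workflow: dict) -> str:
    """workflow['rule_bounded'] in [0, 1]: share of steps that are pure rules."""
    score = workflow["rule_bounded"]
    if score >= 0.9:
        return "RPA-today"
    if score >= 0.6:
        return "cognitive-automation"
    return "future-AI"

portfolio = [
    {"name": "3-way matching",      "rule_bounded": 0.95},
    {"name": "claims adjudication", "rule_bounded": 0.70},
    {"name": "complaint triage",    "rule_bounded": 0.40},
]
for wf in portfolio:
    print(wf["name"], "->", automation_container(wf))
```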

]]>
<![CDATA[7 Predictions for RPA in 2018]]>

 

The RPA sector is defined as one of rapid technological evolution, and every year it seems like what we thought to be bleeding-edge capability in January turns out to be proven and deployed technology long before year’s end. With this rapid pace of growth and maturation in mind, where might the RPA sector be by the end of 2018? Here are seven predictions.

The first wave of automation-inclusive UI design

To date, RPA has been adaptive in nature – automation software has done the interpretive labor to ‘see’ the application screen as humans do. But as more and more repetitive-task work becomes automated, software designers will begin taking the strengths and weaknesses of computer vision into account in designing applications that will be shared between human and digital workers. This will show up in small ways at first, particularly in interface areas that are challenging for RPA software to learn quickly, but over the course of 2018, ‘hybrid workforce UI design’ will become a new standard for enterprise software vendors.

Process mining makes RPA more accessible for midmarket & emerging large market segments

Early adopters of RPA have already established that detailed process mapping is key to successful task automation across the extended enterprise. For Fortune 1000 firms, that can be fairly straightforward, with retained consulting and systems integration partners on hand to assist in the process of mapping task flows for RPA implementation. Smaller firms, however, don’t always have the luxury of engaging large consulting firms to assist in this process – so vendors developing their own automated process mapping technology, or partnering with third-party providers like Celonis, will find demand booming in the midmarket.

Human skill bottleneck hits providers without education/certification plans

It’s ironic that human skill capital will end up as the limiting factor in the growth rate of successful RPA implementations, but 2018 will close with a clear shortage of qualified automation designers and deployment management professionals. Those organizations (like UiPath, Blue Prism, and Automation Anywhere) that saw this coming early on and established academic settings for the education and certification of on-platform skilled practitioners, will thrive. But those lacking these programs may find themselves in a skill bottleneck in the market – one that will begin to materially inhibit growth.

RPA becomes a designed-in factor for disruptors

In conversations I had with organizations implementing RPA during 2H17, one common factor came to the fore: that their initial FTE rationalization gains had already been realized, and going forward, they were looking to RPA as a means to manage significant growth in their operations.

For organizations coming to market as disruptors, this trend is even more pronounced, and organizations with designs on being disruptive forces are increasingly building automation capabilities into their growth plans from the ground up. Building an organization on a foundation of a hybrid human-digital workforce is a different endeavor entirely from retrofitting an existing company with automation – and as a result, we should begin seeing some real innovation in organizational design beginning this year.

Japan becomes the adoption template geo for big bets

To date, Japan has produced some of the largest implementations of RPA, with UiPath’s late 2017 deployment at SMBC pushing the envelope still further. Japan is betting big on RPA to become a sustainable source of competitive differentiation, and as more large organizations there implement large-scale RPA projects, the best practices library for RPA deployment at scale will expand in kind.

Companies worldwide looked to Japan for guidance in implementing robotics once before, during the rise of robotic manufacturing in the automotive sector. 2018 will see a second such wave.

RPA proves its case as a source of compliance gains

RPA has been marketed with a number of different value creation characteristics already, with the obvious cost reduction and quality improvement factors taking center stage. But RPA has significant benefits to offer organizations in regulated industries, most notably in the ability to secure access to sensitive information, systematize the process of accessing and modifying that information, and standardize the documentation and audit logging work associated with it.

2018 will be the year that organizations begin to see meaningful returns from adopting RPA as a solution to compliance task challenges.

Demand for specialist implementation navigators grows significantly

RPA implementation has been a partnered endeavor since the technology first arrived on the scene, with software vendors allying themselves closely with large consulting firms and systems integrators to optimize their client deployments. But demand is emerging for focused, automation-centric services, and right on time, the industry is seeing a surge of new RPA specialist service providers like Symphony and Agilify.

As buying organizations begin to ask more of their new – or revamped – RPA implementations, demand for these providers’ services will grow swiftly during 2018.

]]>
<![CDATA[CSS Corp’s Contelli Automation Platform Driving Improvements in Enterprise Network Management]]>

 

As 2018 begins, the RPA sector is starting to produce more segment specialists from within its vendor base. Whereas just two years ago the sector was still finding its footing in addressing common back- and front-office application automation, enterprise customers today have the luxury of building best-of-breed solutions that often incorporate two or more vendors working in concert to automate a broader spectrum of tasks.

CSS Corp’s Contelli is a relatively new automation platform, but one that is gaining attention for its capability set in a complex and high-value enterprise support area – namely, automated network management. Contelli received an elevated role at CSS in the wake of the company’s late 2016 reorganization, which saw CSS' board elect to change the direction of the firm. As part of this strategic direction change (one that saw an influx of new management talent take place in the executive suite), the company transitioned from a corporate focus heavy on legacy IT services to one centered on customer engagement and digital transformation. That transition also included an elevated role for CSS' automation platform, which was rebranded from AIMS (Automated Infrastructure Management Solution) to Contelli.

The product continuously analyzes client IT operations and uses network traffic data, paired with algorithmic analysis of historical data, to predict downtime, reconfigure traffic for improved efficiency, dynamically provision and de-provision IT assets, and resolve repetitive support tasks. CSS estimates ~30-40% improvements in operational efficiency in IT operations, and ~45% to ~65% reduction in FTEs, in typical deployments of Contelli IT Management Engine.
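
CSS has not published the internals of Contelli’s prediction engine, but the general pattern of flagging trouble from telemetry can be illustrated simply: compare a device’s current readings against its historical baseline and act when the deviation is extreme. The thresholds and figures below are illustrative assumptions only, not CSS’ algorithm:

```python
# Generic illustration (not CSS' algorithm) of predicting trouble from
# network telemetry: flag a device when current traffic deviates sharply
# from its historical baseline. Thresholds and values are assumptions.
from statistics import mean, stdev

def anomaly_score(history: list, current: float) -> float:
    """How many standard deviations the current reading sits from baseline."""
    mu, sigma = mean(history), stdev(history)
    return abs(current - mu) / sigma if sigma else 0.0

history_mbps = [410, 395, 420, 405, 415, 400]  # hypothetical per-device baseline
current_mbps = 640

if anomaly_score(history_mbps, current_mbps) > 3:
    print("raise pre-emptive ticket / reroute traffic")
```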

Although Contelli’s brand name may be a new one in the market, the platform has already achieved success. For a leading managed network services provider with 450k network devices under management, Contelli software provided the client with a 25% improvement in average handle time for open ticket calls, a 22% improvement in case closure rate, and, perhaps most importantly, a 100% success rate in case audits performed on work Contelli automated.

Three factors make Contelli an appealing offering for organizations seeking to reduce their network management costs:

  • It touches a broad range of KPIs. Network optimization isn’t always realized by identifying a few significant sources of cost savings and quality improvement potential; often, the task involves incremental improvement of multiple KPIs, from throughput and traffic efficiency to asset provisioning speed, to support ticket resolution turnaround cycle. Contelli’s position within the network management stack enables the product to offer a broad array of improvements in KPIs across multiple task areas
  • It learns continuously from network data. Automating a fluid process is among the steepest challenges in intelligent automation today. As variables change within the task area to be automated, the RPA platform of choice must not only be able to adapt on the fly, but learn entirely new sets of events and exceptions as topologies and assets evolve. Contelli’s development team has invested considerable time and resources in the product’s machine learning layer to enable dynamic network management automation
  • It is a focus area for CSS’ Innovation Labs. Contelli is a mature offering today, but CSS has significant plans to improve and upgrade the product’s machine learning capabilities in the company’s Innovation Labs, an R&D environment for continuous improvement of the platform. CEO Manish Tandon has circled Innovation Labs in red as a key strategic plank for the company’s evolution, and Contelli is slated for considerable time ‘up on the lift.’

Contelli isn’t a ‘one stop shop’ for front- and back-office enterprise automation, but for organizations seeking to self-fund a larger-scale RPA initiative with a broad slate of KPI improvements in a critical business task area, it’s an appealing choice for network management administrators. 

]]>
<![CDATA[Intelligent Automation Summit Takeaways: Four Alternative Gain Frameworks for RPA]]>

 

At the Intelligent Automation (IA) event in New Orleans, December 6-8, snow in the Big Easy air was not the only surprise. As expected, there was plenty of technological innovation on show in the exhibition hall, but the event also played host to some energized discussions on human-centric gains to be realized from RPA implementation – suggesting that we are indeed moving into the next phase of considering automation holistically in the enterprise.

Specifically, many presentations and conversations shared a theme of human enablement within the enterprise – positioning the organization for greater long-term success, rather than focusing on the short-term fiscal gains of reductions in force and reduced cost to serve specific processes. Here are four automation gain frameworks I took away from the event that are focused on areas other than raw FTE reduction.

Automation as a disruption buffer

‘Disrupt or be disrupted’ has become a mantra for many change management executives across industries, and it was invoked numerous times during the IA event in relation to automation’s role as a buffer to disruptive change – in both directions. An automated workforce can quickly scale up (or down) as needed without costly and time-consuming facility management and workforce rationalization tasks. While there was some discussion regarding the downside containment role of RPA, far more participants at the event were looking to RPA as a tool to effectively manage explosive growth in their sectors.

Automation as a ‘hazmat bot’

The idea of using bots to handle sensitive processes and data emerged as a strong theme for the near-term RPA sector roadmap. Where bots were once trusted less with ‘low-touch’ environment data in highly-regulated industries like BFSI and healthcare, the dialog is beginning to turn in favor of sending bots, rather than humans, to touch and manipulate that data.

The rationale is sound: bots can be coded with very narrowly-defined rights and credentials, self-document their own work without exception, and produce their own audit trails. Expect to see this trend gain steam in 2018 and beyond. ‘We send bots into nuclear reactors and onto other planets,’ one attendee told me. ‘We treat the data core in card issuance with no less of a hazmat perspective – where we can minimize human contact, we will, for everyone’s benefit.’
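
That ‘hazmat bot’ pattern reduces, in code, to two properties: narrowly-scoped rights and unconditional self-documentation. A hypothetical sketch follows; the scopes, identifiers, and in-memory audit sink are illustrative, not any vendor’s API:

```python
# Hypothetical sketch of the 'hazmat bot' pattern: a bot holding only
# narrowly-scoped rights that self-documents every action. Scopes and
# the audit sink are illustrative assumptions.
import time

AUDIT_TRAIL = []

class ScopedBot:
    def __init__(self, bot_id: str, allowed_actions: set):
        self.bot_id = bot_id
        self.allowed = allowed_actions

    def perform(self, action: str, record_id: str) -> None:
        if action not in self.allowed:
            raise PermissionError(f"{self.bot_id} lacks right: {action}")
        # ... touch the sensitive record here ...
        AUDIT_TRAIL.append({"bot": self.bot_id, "action": action,
                            "record": record_id, "ts": time.time()})

bot = ScopedBot("card-core-bot-01", {"read_pan_masked"})
bot.perform("read_pan_masked", "ACCT-991")   # logged in the audit trail
# bot.perform("export_pan", "ACCT-991")      # would raise PermissionError
```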

Automation as a workflow stress diagnostic

The very process of automating workflows within the organization produces a wealth of usable data, and nowhere is that more evident than in analyzing those workflows for exception management stress points. In a given workflow, there are usually clearly defined and straightforward task components, and those that produce more than an average volume of exceptions. By mapping these workflows and using them to understand similar tasks in other areas of the organization, companies can leverage automation data to identify those phases of a workflow that are creating exception management stress for employees, and add support via process redesign, digitization, or assisted automation.

Automation as human capital churn ‘coolant’

Related to the previous point is the idea that RPA is beginning to serve as a very real source of ‘coolant’ for burnout-prone repetitive task areas in the organization by continuously separating work into automation-relevant and human-relevant. Eliminating the most burnout-causing task stages from the human workday reduces the proclivity for turnover and the total cost to the organization of managing the human side of the workforce.

Summary

Productivity, quality, and fiscal gains are often the first three topics of conversation when organizations discuss launching an RPA initiative. But automation has much more to offer, not only to the organizational bottom line, but to the human employees in the enterprise as well. As this sector’s technology offerings evolve and mature, so too do the use cases and benefit frameworks within customer organizations.

]]>
<![CDATA[Adventures in Blockchain: Mphasis Focuses on Client Revenue Growth, Supporting Compelling Use Cases]]>

 

In this article, I look at Mphasis’ Blockchain initiatives and at the segments they are focusing on for further development with their financial services clients. Mphasis began its Blockchain initiatives in 2016, initiating internal experiments and POCs to understand the technology and how it can be applied to business challenges.

Mphasis is working with a global financial services company on POCs and an approach to bringing a customer identity solution to the financial services market, in order to address consumer data challenges in a global environment. The customer and Mphasis are working to address multiple issues including:

  • Solution construct, design approach, and related technology considerations in selecting the right Blockchain technology from options such as BigchainDB, HyperLedger, Ethereum, and Multichain, along with network transaction currency and conversion to fiat, and engagement layer and access point technologies
  • Industry ecosystem participation considerations – incentives, privacy protections, regulatory compliance considerations, trust and risk, and access point technologies to join the network
  • POC prototype and demo – for an initial MVP.

The POC took 7 weeks to demonstrate that the technology works and compliance is achievable. The solution was set up as a multi-node environment that enables the industry participants to transact, by enabling functions such as set-up and administration, search, crypto-payments, transaction administration, analytics, regulatory oversight and access.

Since then, Mphasis has built an ecosystem of Blockchain tools and best practices, and conducted multiple POCs. Clients are narrowing the range of use cases they wish to pursue further and are driving some of those into production.

Mphasis’ Blockchain services & use cases

Mphasis has a core group of 10+ engineers working on Blockchain initiatives who are based in Bangalore. Key attributes of Mphasis’ Blockchain ecosystem include:

  • POCs completed to date: 12, of which 50% were client requested and 50% internally undertaken
  • Clients engaging on Blockchain: 7 across banking, insurance, and airlines
  • COE founded: 2016
  • Platforms employed: Ethereum, Hyperledger, Multichain, and Bigchain.

Mphasis focuses on the Ethereum and Hyperledger platforms in its Blockchain work, and expects to add a capability in Quorum soon. Key POCs to date include:

  • Trade finance for banks: enabling a decentralized network between importer, exporter, port authorities, and banks. Key issues addressed include document verification, fraudulent activity incidence, and document losses
  • Mortgage document management: the goal is to store documents on the DLT as a customer goes through the loan application process. This will allow vendors (e.g. insurance companies) to access the documents and speed up TAT, which will reduce cost of origination and improve customer experience
  • Record keeping: enabling a single version of the truth, with additional components including IOT and smart contracts
  • Patient health records: enabling confidential sharing of patient records with intended participants only
  • Baggage-as-a-service: a distributed, decentralized system that lets passengers track their bags during travel using a mobile device
  • Group insurance claims: stakeholders including hospitals, insureds, insurer, and third-parties transact and exchange documents to enable fast settlement of claims
  • Contract management: digital signing of documents on a Blockchain network to ensure transparency
  • KYC registry: enabling a KYC market utility using Blockchain.

Going forward, Mphasis will focus on:

  • Consulting for clients considering Blockchain initiatives
  • Delivering Blockchain implementations (POC or operational) with integrated application suites to reduce time to market and increase platform efficiency
  • Delivering operational support for Blockchain environments based on its solution experience
  • Continuing to create use cases around KYC registry, mortgage document management, trade finance, baggage-as-a-service, and group insurance claims.

Conclusions

To date, most Blockchain services vendors have been focused on enabling small groups of direct stakeholders to use Blockchain to eliminate the need for third-party support. Mphasis has focused instead on enabling stakeholders to bring in third-parties as customers, and use Blockchain as a highly secure, reliable self-service tool. This should allow data holders, the sponsors of these initiatives, to monetize their investments in customer data and documents. This will allow Mphasis eventually to transition its Blockchain services towards operations support and cybersecurity. By supporting its clients’ efforts to drive revenue growth, Mphasis is able to support compelling use cases for employing this technology.

]]>
<![CDATA[In RPA Deployment, Slow Down... To Go Faster]]>

 

RPA software offers users the tantalizing possibility of being able to simply 'hit record and go' at the beginning of an enterprise automation initiative. But organizations that are seeing the greatest returns are slowing the initial process down, and framing their initiatives as they would treat any major technology migration.

At UiPath’s recent User Summit in New York City, one of the hottest topics was the right pace of RPA implementation, with UiPath’s customer and partner panels devoting a considerable amount of time to the topic. And the message was clear: RPA is a technology that encourages an implementation rate faster than the customer might want to sign up for.

That very idea is a strange one for most veteran IT and business executives, who are used to IT project implementations going slower than expected, with fiscal returns further in the future than they might have hoped. So when a technology like RPA does come along that promises to enable users to ‘hit record and go’, why shouldn’t beleaguered line of business heads take those promises at face value and get moving with automation today? After all, automation is often part of a larger digital transformation initiative, with expectations that projects will be self-funding through savings. Shouldn’t technologies, like RPA, that generate material cost reductions be implemented as quickly as possible?

It’s a fair question. But there are four simple reasons why RPA projects should still be managed in a stepwise fashion, like any other IT or business project:

  • Technical debt mounts quickly in too-quick RPA implementations. The ‘hit record and go’ philosophy might offer some minimal return in a short period of time, but federating the automation creation process means that multiple users often create similar automations for similar tasks, wasting time and resources later in consolidating different versions of the same robot down to a single bot. In addition, individual users often create related-task bots based on their original automation scripts, multiplying the task of bot consolidation later. Often, organizations find that they have to start over completely, and only then do they undertake a more formal approach
  • Installing RPA through a traditional project framework brings stakeholders together. Automation is a technology that has the potential to bring IT and business stakeholders together in an enterprise service delivery partnership – or drive them apart with turf battles and finger-pointing. Establishing rules up front for which business units should be involved in automation design, which in automation coding, which in automation governance, and which in automation innovation establishes ground rules that all parties involved can respect and buy into for the long term, avoiding larger-scale conflict that can emerge when the process is entered into too quickly up front
  • Designing for scale demands both innovation and centralization. As automation demand scales, both in terms of breadth of services within the organization and the number of workers involved, the need for centralization of automation design and deployment increases commensurately. In many organizations, innovation can actually proceed faster when managed from a CoE or automation ‘lighthouse’ than through trial and error at the desktop level. Add in the additional demands on automation systems that result from global organizations demanding localized automations and in-language service, and that scale factor becomes a critical component in achieving peak fiscal return from an RPA initiative
  • Most RPA providers rely on integration partners for ‘right-speed’ deployment and support. Across the RPA sector, strong partnerships have evolved between RPA software developers and major integrators and consulting service providers, and for good reason – the latter bring experience in change management, process design, and implementation at scale to the former’s technological innovations. This has quickly become a proven combination, and one that is returning significant fiscal and operational value to enterprise-scale organizations. Short-circuiting that value return chain by cutting partner perspective and capability out of the equation might again save some dollars and time in the short run, but will end up being more costly as RPA is scaled up.

RPA presents IT and business leaders with an alluring combination of immediacy of access, significant potential fiscal returns, and low to non-existent stack requirements on deployment. Organizations that have jumped into the deep end of enterprise automation from the ‘hit record and go’ perspective might see some immediate fiscal returns, but ultimately they are selling short the full promise of professionally-managed automation projects executed in partnership between lines of business and IT. Providers like UiPath that are emphasizing speeding up implementation are doing so with a structured framework in mind – so that once the process is designed for scale, and implementation rules and procedures are put in place, the actual software component of the solution can proceed into deployment as quickly as possible.

But in the end, a few additional weeks or even months spent in up-front work can better enable enterprise-level organizations to achieve their peak automation return. Moreover, this approach saves the costly rework and redesign stages that inevitably stretch a ‘hit record and go’ implementation out to the same project timeline as a more structured approach, and often a much longer one. As strange as it may sound, the best practices in RPA deployment involve slowing down… in order to go faster. 

]]>
<![CDATA[Infosys’ Testing Practice Update: AI, Chatbots & Blockchain]]>

 

We recently caught up with Infosys to discuss where its Infosys Validation Solutions (IVS) testing practice is currently investing. This is a follow-up to a similar discussion we had with Infosys back in July 2016 that centered on applying AI and making sense of the data that client organizations have (see here).

Our most recent discussion looked at technologies such as AI, chatbots, and blockchain. The focus of IVS has expanded from immediate opportunities within software testing to Infosys’ overall development of new IT services offerings.

AI: more use cases are the priority

AI remains a priority for IVS, with attention to date having centered on developing use cases in test case optimization and defect prediction. Its PANDIT IP correlates new software releases with past defects, feature changes, and test cases, and determines which part of the new release’s code is responsible for defects. IVS points out that the implementation (identifying the lines of code responsible for a defect) is relatively difficult, so it is taking a gradual approach, starting with COTS; the underlying rationale is that new releases of COTS are much better documented than custom applications, so the code responsible for a bug is easier to identify and is likely to be in the custom code of the COTS implementation.
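
As an illustration only (PANDIT itself is proprietary Infosys IP, and its approach is not public), a crude version of this correlation idea, ranking the files changed in a release by their historical defect density, might look like the following sketch; the file names and defect IDs are invented:

```python
from collections import Counter

# Hypothetical inputs: files changed in the new release, and a history
# of past defects mapped to the files they were traced back to.
changed_files = ["billing/invoice.py", "core/ledger.py", "ui/forms.py"]
past_defects = [
    ("billing/invoice.py", "DEF-101"),
    ("billing/invoice.py", "DEF-215"),
    ("core/ledger.py", "DEF-330"),
]

# Count historical defects per file, then rank the changed files by that
# history -- files with a dense defect record are the likeliest suspects.
defect_history = Counter(f for f, _ in past_defects)
ranked = sorted(changed_files, key=lambda f: defect_history[f], reverse=True)

for f in ranked:
    print(f"{f}: {defect_history[f]} past defect(s)")
```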

Chatbots: testing response validity

The use of chatbots/virtual agents challenges the traditional functional testing model, which largely relies on executing a test case against a process (e.g. a user tries to log in to a website) and verifying that the transaction outcome is valid (e.g. the user is indeed logged in). With chatbots, the goal is not so much process testing as response testing, for example:

  • Interpreting questions correctly
  • Dealing with the wide range of expression options end-users have for the same idea
  • Selecting the most appropriate response from a large number of potential responses.

Of course, as with any ML, this requires multiple iterations with SMEs for the virtual agent to learn, in addition to using language libraries; this is a work in progress, with early client PoCs.
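
For illustration, a minimal paraphrase-coverage test might look like the sketch below; classify_intent is a hypothetical stand-in for the NLU layer under test, and the intent label and utterances are invented:

```python
def classify_intent(utterance: str) -> str:
    # Stand-in for the chatbot's NLU layer; a real test would call the
    # deployed service's API here instead.
    u = utterance.lower()
    if "balance" in u or "how much money" in u:
        return "check_balance"
    return "unknown"

# Many phrasings of the same idea should all map to one intent.
paraphrases = {
    "check_balance": [
        "What's my balance?",
        "How much money do I have?",
        "Show me my account balance",
    ],
}

def test_paraphrase_coverage():
    for intent, utterances in paraphrases.items():
        for u in utterances:
            assert classify_intent(u) == intent, f"misread: {u!r}"

test_paraphrase_coverage()
print("all paraphrases mapped to the expected intent")
```

In practice, the paraphrase sets would come from SMEs and language libraries, and would grow with each learning iteration.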

Blockchain: integration complexity & business rules testing

The complexities with blockchain are different from those with chatbot testing. With blockchain, as with IoT, the complexity lies in its principles: a decentralized architecture, and many parties/items involved. IVS is assessing how to conduct testing around authentication and security, and communication across nodes, as well as making sure transactions are processed and replicated across all nodes.

Looking ahead, functional testing will be a challenge: the underlying business logic/rules must be tested while also complying with different local business regulations and languages. IVS is developing approaches to validate these smart contracts and is at an early PoC phase with clients.
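
As a hedged illustration of the replication check described above (not IVS’s actual tooling), the sketch below compares ledger fingerprints across nodes; get_ledger is a hypothetical helper standing in for a platform SDK call, and the node URLs are invented:

```python
import hashlib
import json

def get_ledger(node_url):
    # Hypothetical stand-in: a real test would query the node's RPC/REST
    # endpoint via the platform's client SDK.
    return [{"tx": "T1", "amount": 100}, {"tx": "T2", "amount": 250}]

def ledger_fingerprint(ledger):
    # Hash the canonical JSON form so two nodes' views compare cheaply.
    return hashlib.sha256(json.dumps(ledger, sort_keys=True).encode()).hexdigest()

nodes = ["http://node-a:8545", "http://node-b:8545", "http://node-c:8545"]
fingerprints = {n: ledger_fingerprint(get_ledger(n)) for n in nodes}

# After a transaction commits, every node should report an identical state.
assert len(set(fingerprints.values())) == 1, f"divergent ledgers: {fingerprints}"
print("all nodes agree on ledger state")
```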

Conclusion: the challenge is to automate testing of complex software at scale

The challenge of testing chatbots and blockchain (and also IoT and physical robots) is not so much about effective functional and non-functional testing as about moving the testing of such technologies to an industrial level, using automation software that today exists only partially.

The good news is that the ecosystem of testing start-ups is vibrant, and larger software testing services providers like Infosys are investing now in preparation for the surge in adoption of such technologies. 

]]>
<![CDATA[Adventures in Blockchain: Capgemini Focuses on Helping Clients Develop Their Roadmap]]>

In this blog, I look at Capgemini’s Blockchain initiatives and the segments it is focusing on for further development with its financial services clients.

Initially, Blockchain engagements were focused on: 

  • Using POCs to develop an understanding of the capabilities and limitations of distributed ledger technology (DLT)
  • Developing business use cases, trying POCs to determine if there is an effective business application of the technology
  • Conducting due diligence on vendors to understand the supplier ecosystem.  

Recently, financial institutions have been narrowing the range of use cases and vendors they are willing to consider. They are looking to drive forward one or more use cases to full production, and their focus with Blockchain services vendors is to develop a selective roadmap for operational deployment of a few high priority engagements.

Capgemini’s Blockchain services & use cases

Capgemini has been pursuing Blockchain for two and a half years, and it has a group of 25+ engineers working on Blockchain initiatives, with seven engagements currently in play. Capgemini’s Blockchain practice believes successful initiatives require a combination of business domain and technology expertise, and it focuses on five areas:

  • Technology expertise: especially DLT, cybersecurity, communications, and data management
  • Domain expertise:
    • Structured finance: trade finance and factoring, non-listed, non-codified bilateral agreements
    • Payments: real-time international payments transactions, including compensation, settlement, and reporting
    • Capital markets: Post Trade Automation (including optimized Collateral operations), Syndicated & Commercial Lending, and Non-Listed Securities
    • Insurance and reinsurance: focused on European companies for smart contract management 
    • Digital identity: security and personal identity for access to the DLT
  • Program management: DLT projects are complex and agile, with the client and vendor working together on the project  
  • Alliance partners: cloud providers, and product vendors. Capgemini participates on industry panels, especially on Hyperledger Fabric, to create and support roadmap development
  • Partner on business: platform-based operations delivery, including creation and governance of the utility that will provide service to clients.

Currently, Capgemini works with four key technology stacks:

  • Symbiont
  • Hyperledger
  • R3 Corda
  • Ripple.

Capgemini believes that understanding the current-state environment within a given client (both business processes and technology processes) is a differentiator. Further, that understanding is required to effectively reimagine processes using any advanced technology, especially Blockchain.  

Ultimately, Capgemini wants to act as a universal integrator, partnering with technology providers to support clients in redesigning their business with Blockchain-centric services that also leverage complementary capabilities like AI and machine learning. Capgemini is aiming to serve as the Transformation Partner for its clients, with Distributed Ledger Technology as the transaction framework for deploying next-generation, collaborative operating models. Working with key partners, it will continue to evolve and deliver core technical Blockchain competencies to its clients, such as:

  • Blockchain as-a-service
  • Security as-a-service
  • Identity management as-a-service.  

Conclusions

To date, most Blockchain services vendors have been:

  • Delivering POC engagements to clients as clients work to identify opportunities to use Blockchain technologies, or…
  • Building Blockchain POCs for utilities they might productize for clients.

Capgemini is pursuing a third path of building on its extensive work with client legacy systems, and coupling that domain knowledge of the client with its own ability to coordinate multiple technology vendors to create faster, more effective business restructuring around Blockchain capabilities.

Ultimately, as Blockchain technology matures, Capgemini will transition to providing Blockchain infrastructure services focused on security and technology platform outsourcing. While the technology is still at a very early stage, adoption increasingly looks set to be led primarily by tier-one institutions. The technology will mature rapidly, and infrastructure providers will harvest most of the revenues being created for vendors in Blockchain.  

]]>
<![CDATA[Nvidia Draws on Gaming Culture to Compete for AI Chip Leadership]]>

 

Nvidia faces stiff new competition for the leadership position in the AI processing chip market. But the firm has a significant competitive advantage: a culture of innovation and production efficiency that was developed to address the demanding needs of a wholly different market.

Intel and Google have been making waves in the AI processing chip market, the former with the acquisitions of Nervana Systems and Mobileye, the latter with the new Tensor Processing Unit (TPU) announcement. Both are moves intended to compete more directly with Nvidia in the burgeoning market for AI processing chips.

James Wang of investment firm ARK recently set forth his long-term bet on the industry – and it favors Nvidia. Wang posits that products like TPU will be less efficient than Nvidia GPUs for the foreseeable future, arguing that “…until TPUs demonstrate an unambiguous lead over GPUs in independent tests, Nvidia should continue to dominate the deep-learning data center.”

Wang is right, but his opinion may not actually go far enough in explaining why Nvidia should enjoy a sustainable advantage over other relative newcomers, despite their resources and experience in chipbuilding. That advantage, by the way, doesn’t have a thing to do with Google’s chip fabrication expertise, or Intel’s understanding of the needs of the AI market. It’s a deeper factor that’s seated firmly in Nvidia’s culture.

Cutting-edge engineering & savvy pricing: key strengths forged in the gaming cauldron

By the time 2017 dawned, Nvidia owned just over three-quarters of the graphics card segment (76.7%), compared with main competitor AMD’s one-quarter (23.2%). But that wasn’t always the case. In fact, for much of the past decade, Nvidia held an uncomfortable leadership position in the marketplace against AMD, sometimes leading by as few as ten points of market share (2Q10).

During that time, Nvidia understood that a misstep against AMD in bringing new products forth could yield the market leader position, and even send the company into an unrecoverable decline if gamers – a tough audience to say the least – lost confidence in Nvidia’s vision.

As such, Nvidia learned many of the principles of design thinking the hard way. They learned to fail fast, to find new segments in the market and exploit them – as they did with the GTX 970, a product that stunned the marketplace by being priced underneath its predecessor at launch – and to take and hold ground with innovation and rapid-cycle development. More importantly, they learned how to demonstrate value to a gamer community that wanted to buy long-term performance security when it was time for a hardware refresh. In short, they learned to understand the wants and needs of an extraordinarily demanding consumer public, in the form of gamers, and relentlessly squeezed their competition out with a combination of cutting-edge engineering and savvy segment pricing.

Much of the real-world output from that cultural core of relentless engineering improvement is the remarkable pace of platform efficiency that Nvidia has achieved in its GPU chips. The company maintained close ties with leading game publishing houses, and as a result kept clearly in mind what sort of processing speed – as well as heat output and energy draw – cutting-edge games were going to require. At multiple points in time, the standards for supporting new games have meaningfully advanced inside eighteen months. This often mandated that Nvidia turn over a new top-end GPU processing platform on a blistering production timeline.

In response, Nvidia turned to parallel computing, an ideal fit for GPUs, which already offered significantly more cores than their CPU cousins. As it turned out, Nvidia had put itself on the fast track to dominating the AI hardware market, since GPUs are far better suited for applications, like AI, that demand computing tasks work in parallel. In serving one market, Nvidia built a long-term engineering and fabrication roadmap nearly perfectly suited for another.
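
To make the parallelism point concrete, here is a minimal sketch using PyTorch purely as an example framework (an assumption; no particular library is implied by the above). A single large matrix multiply, the core operation of deep learning, decomposes into millions of independent multiply-accumulates that spread naturally across thousands of GPU cores:

```python
import torch

# Fall back to CPU so the sketch runs anywhere; on a GPU the same code
# fans the work out across thousands of cores.
device = "cuda" if torch.cuda.is_available() else "cpu"

a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
c = a @ b  # one call, executed in parallel across all available cores

print(f"computed a 4096x4096 matmul on: {device}")
```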

The competition is hot, but is Nvidia poised to win?

Fast forward to 2017, and some are questioning whether Nvidia is in the fight of its life now with new, aggressive competitors seeking to take away part – or all – of its AI GPU business. While Wang has pushed his chips into the center of the table on Nvidia, others are unconvinced that Nvidia can hold its lead – especially with fifteen other firms actively developing Deep Learning chips. That roster includes such notable brands as Bitmain, a leading manufacturer of Bitcoin mining chips; Cambricon, a startup backed by the Chinese government; and Graphcore, a UK startup that hired a veritable ‘who’s who’ of AI talent. 

There’s no shortage of innovation and talent at these organizations, but hardware is a business that rewards sustained performance improvement over time at steadily reducing cost per incremental GFLOPS (where GFLOPS denotes billions of floating point operations per second). The first of these components is certainly an innovation-centric factor, but the second rewards organizations that have kept pace not only with the march of performance demands, but with the need to justify hardware refresh through lower operating costs. Given that this is an area where Nvidia shines, as a function of its cultural evolution under identical circumstances in gaming, the sector’s long-term bet on Nvidia is the correct call. 
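
To make the cost-per-GFLOPS arithmetic concrete, here is a worked example with invented prices and throughput figures (not actual Nvidia pricing):

```python
# Two hypothetical generations of accelerator.
old_card = {"price_usd": 5000, "gflops": 10_000}
new_card = {"price_usd": 6000, "gflops": 18_000}

cost_per_gflops_old = old_card["price_usd"] / old_card["gflops"]  # $0.50
cost_per_gflops_new = new_card["price_usd"] / new_card["gflops"]  # ~$0.33

# Incremental view: what does each *additional* GFLOPS cost a buyer who
# upgrades? This is the number a hardware-refresh business case rests on.
incremental_cost = (new_card["price_usd"] - old_card["price_usd"]) / (
    new_card["gflops"] - old_card["gflops"]
)

print(f"old: ${cost_per_gflops_old:.2f}/GFLOPS, new: ${cost_per_gflops_new:.2f}/GFLOPS")
print(f"incremental: ${incremental_cost:.3f} per additional GFLOPS")
```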

 

Dave Mayer is currently working on a major global project evaluating RPA & AI technology. To find out more, contact Guy Saunders.

]]>
<![CDATA[HCL's 3-Lever Approach to Business Process Automation: Risk & Control Analysis; Lean & Six Sigma; Cognitive Automation]]> HCL has undertaken ~200 use cases spanning finance & accounting, contact, product support, cross-industry customer onboarding, and claims processing, using products including Automation Anywhere, Blue Prism, UiPath, WorkFusion, and HCL’s proprietary AI tool Exacto.

This blog summarizes NelsonHall’s analysis of HCL's approach to Business Process Automation covering HCL’s 3-lever approach, its Integrated Process Discovery Technique, its AI-based information extraction tool Exacto, the company’s offerings for intelligent product support, and its use of its Toscana BPMS to drive retail banking digital transformation.

3-Lever Approach Combining Risk & Control Analysis, Lean & Six Sigma, and Cognitive Automation

  • The 3-lever approach forms HCL’s basis for any “strategic automation intervention in business processes”. The automation is done using third-party RPA technologies together with a number of proprietary HCL tools, including Exacto, a cutting-edge Computer Vision and Machine Learning based tool, and iAutomate for run book automation

  • HCL starts by conducting a 3-lever automation study and then creates comprehensive to-be process maps. As part of this 3-lever study, HCL also conducts complexity analysis to create the RPA and AI roadmap for organizations using its process discovery toolkit. For example, HCL has reviewed the entire process repositories of several major banks and classified their business processes into four quadrants based on scale and level of standardization

  • When generating the “to be” process map, HCL’s Integrated Process Discovery Technique places a high emphasis on ensuring appropriate levels of compliance for the automated processes and on avoiding the automation of process steps that can be eliminated

  • The orchestration of business processes is done using HCL’s proprietary orchestration platform, Toscana©. Toscana© supports collaboration, analytics, case management, and process discovery, and incorporates a content manager, a business rules management system, a process simulator, a process modeler, process execution engines, and an integrated offering including social media monitoring & management.

Training Exacto AI-based Information Extraction Tool for Document Triage within Trade Processing, Healthcare, Contract Processing, and Invoice Processing

  • HCL’s proprietary AI-enabled machine learning solution, Exacto, is used to automatically extract and interpret information from a variety of information sources. It also has natural language and image-based automated knowledge extraction capabilities

  • HCL has partnered with a leading U.S. university to develop its own AI algorithms for intelligent data extraction and interpretation to solve industry-level problems, including specialist algorithms in support of trade processing, contract management, healthcare document triage, KYC, and invoice processing

  • Trade processing is one of the major areas of focus for HCL. Within capital markets trade capture, HCL has developed an AI/ML solution, Exacto | Trade, which captures inputs from incoming fax-based transaction instructions for various trade classes such as Derivatives, FX, and Margins with accuracy of over 99% (a toy sketch of this kind of field extraction follows this list).
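
As a toy illustration of the extraction task only (Exacto itself is a Computer Vision and ML product, not a regex engine), pulling fields from an OCR'd trade instruction might look like the sketch below; the field names and sample text are invented:

```python
import re

# Invented sample of OCR'd text from a fax-based trade instruction.
ocr_text = """
Trade Class: FX
Notional: 1,250,000 USD
Value Date: 2017-09-21
Counterparty: ACME BANK PLC
"""

patterns = {
    "trade_class": r"Trade Class:\s*(\w+)",
    "notional": r"Notional:\s*([\d,]+\s*[A-Z]{3})",
    "value_date": r"Value Date:\s*([\d-]+)",
    "counterparty": r"Counterparty:\s*(.+)",
}

# Extract each field, leaving None where the document lacks it.
trade = {}
for field, rx in patterns.items():
    match = re.search(rx, ocr_text)
    trade[field] = match.group(1).strip() if match else None

print(trade)
```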

Combining Watson-based Cognitive Agent with Run Book Automation to Provide “Intelligent Product Support”

  • HCL has developed a cognitive solution for Intelligent Product Support based on the cognitive agent LUCY, Intelligent Autonomics for run book automation, and Smart Analytics with MyXalytics for dashboards and predictive analytics. LUCY is currently being used in support of IT services by major CPG, pharmaceutical, and high-tech firms, and in support of customer service for a major bank and a telecoms operator

  • HCL’s iAutomate tool is used for run book automation, and HCL has already automated 1,500+ run books. It uses NLP, ML, pattern matching, and text processing to recommend the “best matched” run book for a given ticket description (a sketch of this kind of matching follows this list). HCL estimates that it currently achieves “match rates” of around 87%-88%

  • HCL estimates that it can automate 20%-25% of L1 and L2 transactions and has begun automating internal IT infrastructure help-desks.
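
As a sketch of the general matching technique only (HCL’s implementation is proprietary, and TF-IDF similarity is an assumption for illustration), recommending the best-matched run book for a ticket might look like this; the run book titles and ticket text are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

run_books = [
    "Restart application server after out-of-memory error",
    "Reset user password in Active Directory",
    "Clear full disk on database volume",
]

ticket = "app server crashed with OOM, needs restart"

# Vectorize run books and ticket into one TF-IDF space, then rank by cosine.
vectorizer = TfidfVectorizer().fit(run_books + [ticket])
rb_vectors = vectorizer.transform(run_books)
ticket_vector = vectorizer.transform([ticket])

scores = cosine_similarity(ticket_vector, rb_vectors)[0]
best = max(range(len(run_books)), key=lambda i: scores[i])
print(f"best match ({scores[best]:.2f}): {run_books[best]}")
```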

Positioning its Toscana Platform to Drive Digital Transformation in Retail Banking

  • HCL is driving digital transformation through its Toscana platform, and has created predefined domain-specific templates in areas including retail banking, commercial lending, mortgages, and supply chain management. Within account opening for a bank, HCL has achieved ~80% reduction in AHT and a 40% reduction in headcount

  • In terms of bank automation, HCL has, for one major bank, reduced the absolute number of FTEs associated with card services by 48%, a 63% effective decrease once the accompanying increase in workload is factored in. Elsewhere, for another bank, HCL has undertaken a digital transformation including implementation of Toscana©, resulting in a 46% reduction in the number of FTEs, the implementation of a single view of the customer, an 80% reduction in cycle time, and a reduction in the “rejection rate” from 12% to 4%.

]]>
<![CDATA[Fast Data: The Smart Will Get Faster... and the Fast Will Get Smarter]]>

 

Fast Data is the emerging hot topic of discussion for business leaders seeking to get ahead of the next wave of data utilization. But Fast Data isn't just an evolution of Big Data; it's a market force unto itself that's asking more of traditional and start-up vendors in both traditional DBMS and AI.

I spent a (surprisingly snowy) morning this week talking with AI and Big Data thought leaders at the Global Data Summit 2017 in Colorado. While there’s no shortage of topics to hold their current interest, none was a higher business priority than solving the challenge of managing Fast Data through the application of AI. The consensus is certainly that the organizations that can best address this challenge will also be those best positioned to compete and win overall. But how best to get their arms around this opportunity and move forward effectively?

First, it's important to distinguish between the challenges of leveraging Big Data and Fast Data. Big Data is generally data at rest; it's explored at (relative) leisure, and doesn’t change so quickly or accumulate so rapidly that offline analytics become impossible. AI has no shortage of applications in Big Data, but in that environment, it's more the ability of an AI platform to manage complexity and work at scale that offers value.

Fast Data, by contrast, accumulates quickly and can change substantively within the course of a day or even an hour. Think adtech here, or online gaming, or vendor pricing with commodity costs as an input; vast amounts of data need to be ingested, analyzed, and understood by the second in order to secure the right ad placement at peak value, or to manage complex MMO games, or to ensure that pricing continuously secures competitive advantage at acceptable margin.

Fast Data becomes Big Data quickly, just by nature of its accumulation rate, and while it's often valuable to query the Big Data that Fast Data becomes to understand trends and cyclicality, Fast Data will always yield its peak value at the millisecond level. It’s the freshest layer that offers the most insight. The Big Data value proposition to retailers, for instance, is looking for cyclicality of demand and regional demand preferences over time; the Fast Data value proposition is understanding the products a shopper is looking at right now and making real-time recommendations for, say, footwear and accessories to match.

AI can accomplish both tasks, but often needs to be set about different tasks – with different priorities and ground truths – to succeed. The implications for every phase of the organizational data analysis and workflow management platform – from MDM and data hygiene to machine learning and AI application – are immense.
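
As a minimal sketch of the Fast Data pattern, a sliding-window 'trending now' recommender keeps only the freshest layer of events; the window length, event shape, and product names are illustrative assumptions:

```python
import time
from collections import deque, Counter

WINDOW_SECONDS = 60
events = deque()  # (timestamp, product_viewed) pairs

def record_view(product, now=None):
    now = time.time() if now is None else now
    events.append((now, product))
    # Evict anything older than the window -- keep only the freshest layer.
    while events and events[0][0] < now - WINDOW_SECONDS:
        events.popleft()

def trending_now(top_n=3):
    # Rank products by views inside the current window only.
    return Counter(p for _, p in events).most_common(top_n)

for product in ["boots", "boots", "scarf", "boots", "belt"]:
    record_view(product)
print(trending_now())  # e.g. [('boots', 3), ('scarf', 1), ('belt', 1)]
```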

In response, expect to start seeing considerably more focus from major AI platform vendors not just on depth of understanding by their products, but speed of reaction as well. Organizations big and small in the traditional data sector, from Oracle to VoltDB, are developing and marketing smarter Fast Data solutions, while AI leaders – like IBM and Wipro – are building capabilities for faster data management within their AI platforms.

Servicing this rapidly-growing need for Fast Data management will be a convergent effort: the smart will get faster… and the fast will get smarter.

 

Dave Mayer is a Senior Analyst responsible for NelsonHall's RPA & Cognitive Services research program, covering the areas of robotic process automation (RPA), artificial intelligence, cognitive business, and machine learning. He is currently working on a major global project evaluating RPA & AI technology. To find out more about the project, contact Dave Mayer or Guy Saunders.

]]>