NelsonHall: Digital Transformation Technologies & Services blog feed https://research.nelson-hall.com//sourcing-expertise/digital-transformation-technologies-services/?avpage-views=blog NelsonHall's Digital Transformation Technologies & Services program is designed for organizations considering, or actively engaged in, the application of robotic process automation (RPA) and cognitive services such as AI to their business processes. <![CDATA[A First Look at Blue Prism’s New RPA Initiatives]]>

 

Today’s announcement from Blue Prism covers new product capabilities, new design and support services, and a new go-to-market framework that underscores the importance of automation as a means of enabling legacy organizations to compete with 'born-digital' startups. Blue Prism’s announcement is equal parts perspective, product, and process. Let’s examine each in turn.

Perspective

The perspective Blue Prism is bringing to the table today is the notion of empowering digital entrepreneurs within an organization (under the flag ‘connected RPA’) with the intent of either disruption-proofing that organization or at least enabling self-disruption as part of a deliberate strategy.

In Blue Prism’s view, this is best accomplished through a package of three organizational automation design concepts. The first is the federation of the center of excellence concept – which is not to say that existing CoEs are obsolete, but rather that they now serve as a lighthouse for other disciplinary CoEs within, for example, finance, production, and customer care. Pushing more organizational automation authority and responsibility outward into the organization, in Blue Prism’s view, enables legacy organizations to begin acting more like ‘born-digital’ disruptors.

The second such principle, enabled by the first, is the concept of significantly accelerating the process of moving from proof of concept to at-scale testing to enterprise deployment. Again, the company positions this as a means to emulate born-digital firms and build both proactive and reactive organizational change speed through rapid automation technology deployment.

And third, Blue Prism is emphasizing the value of peer-to-peer interaction among organizational automation executives, a plank of its strategy that is being served through the rollout of Blue Prism Community – an area in Blue Prism Digital Exchange for sharing best practices and collaborating on automation challenges.

Product

The product announcements supporting this new go-to-market perspective include a process discovery capability, which will be available on the Blue Prism website. For those readers who recall seeing Blue Prism announce a partner relationship with Celonis in September of 2018, this may come as a surprise, but the firm has every intention of maintaining that relationship; this new software offering is intended as a lighter process exploration tool with the ability to visualize and contextualize process opportunities.

Blue Prism is careful to distinguish here between process discovery – the identification of processes representing a good fit for automation – and process mining, a deeper capability offered by Celonis that includes analysis of the specific stepwise work done within those processes.

Blue Prism also announced today the availability of its London-based Blue Prism AI Research Lab and accompanying AI roadmap strategy, which focuses on three areas: understanding and ingesting data in a broader variety of formats, simplifying automation design, and improving the relationship between humans and digital workers in assisted automations.

In addition, in an effort to put its expanded product set in the hands of more organizations, Blue Prism is also opening up access to the company’s RPA software, making it easier for people to get started, learn more, and explore what’s possible with an intelligent digital workforce.

Process

Finally, the process of engaging Blue Prism is changing as well. The company has established, through its experience in deployments, that the early stages of organizational automation initiatives are critical to the long-term success of such efforts, and has staged more support services and personnel into this period in response. Far from being a rebuke of channel partner efforts, this packaged service offering will actually increase the need for delivery partner resources ‘on the ground’ to service customers’ automation capabilities.

Blue Prism’s own customer success and services organization will offer to embed Blue Prism expertise in customer programs through a series of pre-defined interventions that complement and augment the customers’ and partners’ efforts. The offering, entitled Success Accelerator, is designed around Blue Prism’s Robotic Operating Model (ROM), the company’s design and deployment framework. The intent of this new offering is to accelerate and increase client ROI by establishing sound automation delivery principles based on lessons Blue Prism has learned in its deployment history to date.

Summary

Blue Prism’s suite of product, process, and perspective announcements today underscores an emerging trend in the sector – namely, the awareness that automation offers real improvements in organizational speed and agility, two characteristics that legacy organizations will need to develop if they are to compete with fast-moving, reactive, born-digital startups.

The connected RPA vision that Blue Prism has outlined highlights the evolving power of automation. It extends beyond the limits of traditional RPA, giving users a compelling automation platform that includes AI and cognitive features. Furthermore, the new roadmap, capabilities, and features being introduced today empower Blue Prism’s growing community of developers, customers, channel partners, and technology alliances.

]]>
<![CDATA[Get Ready for Quantum Computing: 5 Steps to Take in 2019]]>

 

IBM recently announced the first ‘commercial-ready’ quantum computer, the 20-qubit Q System One. The date is certainly worth recording in the annals of computing history. But, in much the same way that mainframes, micros, and PCs all began with an ‘iron launch’ and then required a long pragmatic use case maturity curve, so too will this initial offering from IBM be the first step on a long evolution path. With so much conjecture and contemplation happening in the industry surrounding this announcement, let’s unpack what IBM’s announcement means – and how organizations should be reacting.

First, although Q System One is being billed as commercial-ready, that designation means that the product is ready for usage on a traditional cloud computing basis, not necessarily that it is ready to contribute meaningfully to solving business problems (although the device will certainly mature quickly in both capability and speed). What Q System One does offer is a keystone for the industry to begin working with quantum technology in much the same way that other cloud utility supercomputing resources are accessed, and a testbed for beginning to explore and develop quantum code and quantum computing strategies. As such, while Q System One may not outperform traditional cloud computing resources today, its successors will likely do so in short order – perhaps as soon as 2020.
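
For organizations that want to start experimenting now, the barrier to entry is modest: IBM’s open-source Qiskit SDK lets developers compose and simulate small circuits locally before submitting them to cloud-hosted hardware. The sketch below is purely illustrative, and assumes the Qiskit API as it stood around the time of writing (the simulator backend and shot count are arbitrary choices):

    from qiskit import QuantumCircuit, Aer, execute

    # A two-qubit Bell-state circuit: the 'hello world' of quantum code
    qc = QuantumCircuit(2, 2)
    qc.h(0)             # put qubit 0 into superposition
    qc.cx(0, 1)         # entangle qubit 1 with qubit 0
    qc.measure([0, 1], [0, 1])

    # Run locally on the simulator; a cloud-hosted IBM Q backend can be
    # substituted here once an account and API token are in place
    backend = Aer.get_backend('qasm_simulator')
    counts = execute(qc, backend, shots=1024).result().get_counts()
    print(counts)       # expect roughly equal counts of '00' and '11'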

As I noted in my blockchain predictions blog for 2019, quantum computing has long been the shadow over blockchain adoption, owing to the concern that quantum computing will make blockchain’s security aspect obsolete. That watershed lies years in our future, if it arrives at all, and it is important to note that quantum computing can just as easily be tasked with enhancing cryptographic strength as with breaking it down. As a result, expect the impact of quantum computing on blockchain to net out to roughly zero, with quantum capabilities powering ever-more evolved cryptographic standards in much the same way that the cybersecurity arms race has proceeded to date.

With this in mind, what should organizations have on their quantum readiness roadmaps? The short answer is that quantum readiness is more the beginning of many long-term projects rather than the consummation of any short-term ones, so quantum is more a component of IT strategy than near-term tactical change. Here are five recommendations I’m making for beginning to ready your organization for quantum computing during 2019.

Migrate to SHA-3 – and build an agile cybersecurity faculty

There is no finish line for cybersecurity, especially with quantum capabilities on the horizon, but when I speak with enterprise organizations on the subject, I recommend a combination of NIST-standardized and RSA/ECC technologies as an approximation of quantum-proof for the foreseeable future. Migrating off SHA-2 is a strong recommendation regardless, given the flaws that platform shared with its predecessor. But more important than the construction of a cryptographic standard to meet quantum’s capabilities is the design of an agile cybersecurity faculty that can shorten the time to transition from one standard to the next. Quantum computing will produce overnight gains in both security and exposure as the technology evolves; being ready to take swift counteraction will be key in the next decade of information technology.
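
At the code level, the hashing change itself is often the easy part – the organizational work lies in key management, certificate chains, and the agility to swap algorithms again later. As a minimal sketch, Python’s standard hashlib has shipped SHA-3 alongside SHA-2 since Python 3.6, so a well-abstracted hashing layer can switch with a one-line change:

    import hashlib

    message = b"quantum-readiness test payload"

    # Today's SHA-2 digest and its SHA-3 counterpart side by side
    sha2_digest = hashlib.sha256(message).hexdigest()
    sha3_digest = hashlib.sha3_256(message).hexdigest()

    print("SHA-256: ", sha2_digest)
    print("SHA3-256:", sha3_digest)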

Begin asking entirely new questions in a Quantum CoE

Traditional computing technology has taught us clear boundary lines between the possible and the impossible with respect to solving business problems. Quantum, over the course of the next decade, will completely redraw those lines, with more capability coming online with each passing year (and, eventually, quarter). Tasks like modeling new supply chain algorithms, new modes of product delivery, even new projections of complex M&A activity in a sector over a long forecast span will become normal requests by 2030.

Make sure data hygiene and MDM protocols are quantum-ready

Already, there have been multiple technologies – Big Data, automation, and blockchain are just three – that have strongly suggested the need to ensure that organizations are running on clean, reliable data.

As business task flow accelerates, and more cognitive automation and smart contracts touch and interact with information as the first actor in the process chain, it is increasingly vital to ensure that these technologies are handling quality data. Quantum may be the last such opportunity to bring the car into the pit for adjustments before racing at full speed commences in sectors like retail, telecom, technology, and logistics. This is a to-do that benefits a broad array of technological deployment projects, so while it may not be relevant for quantum computing until the next decade begins, the benefits will begin to accrue from these efforts today.

Aim at a converged point involving data, analytics, automation & AI

Quantum computing is often discussed in the context of moonshot computing problems – and, indeed, the technology is currently best deployed against problems outside the realm of capability for legacy iron. But quantum will also power the move from offline or nearline processing to ‘now’ processing, so tasks that involve putting insights from Big Data environments to work in real-time will also fall within reach over the course of the next decade. What you may find from a combination of this action and the two prior is that some of the questions and projects you had slated for a quantum computing environment may actually be addressable today through a combination of cognitive technologies.

Reach out to partners, suppliers & customers to build a holistic quantum perspective

Legacy enterprise computing grew up as a ‘four-walls’ concept in part because of the complexity of tackling large, complex business optimization problems that involved moving parts outside the organization. Quantum does not automatically erase those boundary lines from an integration perspective, but the next decade will see more than enough computing power come online to optimize long, global supply chain performance challenges and cross-border regulatory and financing networks. Again, efforts in this area can also benefit organizational initiatives today; projects in IoT and blockchain, in particular, can achieve greater benefits when solutions are designed with partners, suppliers, regulators, and financiers involved up front.

Conclusion

Quantum computing is not going to change the landscape of enterprise IT tomorrow, or next month, or even next year. But when it does effect that change, organizations should expect its new capabilities to be game-changers – especially for those firms that planned well in advance to take advantage of quantum computing’s immense power.

This short checklist of quantum-readiness tasks can provide a framework for pre-quantum technology projects, too – making them an ideal roster of 2019 ‘to-dos’ for enterprise organizations.

]]>
<![CDATA[7 Blockchain Predictions for 2019]]>

Blockchain has progressed considerably as an emerging technology during 2018. Many of 2017’s PoCs have become deployed commercial solutions as standards have begun to solidify and more organizations have begun to explore the potential of distributed ledger architecture and smart contracting.

We are still at the very beginning of the lifecycle of this particular technology, and nowhere close to seeing its full potential yet. But as the year comes to an end, what might 2019 bring in terms of distributed ledger maturity and trends? Here are my seven predictions for blockchain in 2019.

The use case landscape shakes out

Blockchain has a clear goodness-of-fit spectrum, as I have written about in a previous blog, and to date, that spectrum has often been tested at the low end with mixed results. Blockchain has clear strengths in a number of well-defined use cases, most notably supply chain and parts management, multiparty shipping and logistics tasks, remittances, securities clearance, and more. As 2019 dawns, we will begin to see providers focus less on exploring the use case spectrum and more on building more capability into those use cases that have been proven to be blockchain-relevant.

Use cases become playbooks

The secondary benefit of a more focused approach on the part of blockchain service providers is the swift emergence of proven playbooks for specific blockchain applications. Already, providers are beginning to slash the number of discussed use cases as it becomes obvious that cold-chain pharma, farm-to-fork agricultural provenance, and airplane parts sourcing and documentation, for example, are functionally multiple iterations of the same basic design with domain knowledge added per the specific deployment.

Interoperability fades as a limiting factor

Blockchain has presented something of a Betamax/VHS (or Blu-Ray/HD-DVD, for younger readers) quandary to date, with multiple standards each offering a unique source of value but on a mutually exclusive basis. But more providers are beginning to focus on hybrid blockchain solutions and platform interoperability, and the announcement in late October that Hyperledger Fabric will be able to execute smart contracts written for Ethereum certainly signals that we are entering the next phase of the market, in which multiple market leaders will need to play responsibly in the sandbox for this technology to take deep root.

Throughput speeds improve – but DB-like operation at-speed/at-scale is more likely in 2020

Blockchain’s primary drawback up to this point is that it can operate at speed, or at scale, but not both. That is slowly changing, with more blockchain accelerators emerging in the marketplace (Microsoft’s CoCo being just one example), and greater attention being paid to purpose-built platforms (like Symbiont Assembly) that are architected for at-speed/at-scale operation. Sharding and layer-2 protocols, both under exploration by Ethereum, show promise for keeping the core value of a distributed ledger system and adding the ability to accelerate transaction throughput to near-database speeds.

Quantum computing comes in from the cold

QC has been the hobgoblin looming over blockchain in the media for years, almost always framed as a technology that sits in opposition to blockchain – either as a security threat or a technology that will make the distributed ledger concept obsolete. But, like most technologies, it will emerge from the threatening media gloom to take its place at the solution table, in the form of a blockchain acceleration and security-improving offering. Quantum computing is still some way off from making a material difference in the IT landscape, but 2019 will bring a dose of sanity in removing the oppositional rhetoric from its emerging presence.

Automation, AI & IoT combine with blockchain for next-gen digital transformation

Blockchain is often discussed as if immutability and transaction security are its primary value proposition. But smart contracting and autonomous action within a DLT environment are at least as important in terms of overall value to the enterprise – and these are capabilities enriched and informed by other emerging technologies, including IoT, artificial intelligence, and cognitive automation. Increasingly, these four technologies are combining to form the basis of next-generation digital transformation for organizations seeking results beyond the limited promise of the initial wave of early transformational work (circa 2014-2017).

Convergence sets the stage for a viable long-term replacement for ERP

What these combined technologies are capable of reaches beyond the ‘four walls’ of the transformational enterprise; they enable whole supply chains to work together as extended ERP fabrics, and to incorporate financial, regulatory, and technology entities surrounding the production and distribution cycle. The discussions around these possibilities are just beginning as 2018 draws to a close, but we expect 2019 to bring more blueprinting and ecosystem construction conversations.

One final, overarching perspective for blockchain and DLT in general: we are progressing past the point of questioning whether these technologies have a role to play in the broader business IT ecosystem. When deployed against the right business challenges, on the right architecture for the task, with the right partner, blockchain is capable of remarkable improvements – and becomes a more strategic technology when considered as a transformational component alongside IoT, AI and automation. The future isn’t built exclusively on blockchain, but it is increasingly a part of the future of business transaction management.

]]>
<![CDATA[UiPath’s Go! Automation Marketplace Aims to Accelerate RPA Adoption in Enterprise Clients]]>

 

UiPath held its 2018 UiPathForward event October 3-4, 2018, in Miami, Florida. The focus of proceedings was the October release of the company’s software and a related trio of major announcements: a new automation marketplace, new investment in partner technology and marketing, and a new academic alliance program.

The analyst session included a visit from CEO Daniel Dines and an update on the company’s performance and roadmap. UiPath has grown from a $1m ARR to a $100m ARR in just 21 months, and the company is trending on a $140m ARR for 2018 en route to Dines’ forecasted $200m ARR in early 2019. UiPath is adding nearly six enterprise clients a day and has begun staking a public claim – not without defensible merit – to being the fastest-growing enterprise software company in history.

During the event, UiPath announced a new academic alliance program, consisting of three sub-programs – one aimed at training higher education students for careers in automation, another providing educators with resources and examples to utilize in the classroom setting, and the third focused on educating youth in elementary and secondary educational settings. UiPath has a stated goal of partnering with ~1k schools and training ~1m students on its RPA platform.

The centerpiece of the event, however, was Release 2018.3 (Dragonfly), which was built around the launch of UiPath Go!, the company’s new online automation marketplace. It would be easy to characterize Go! as a direct response to Automation Anywhere’s Bot Store, but that would be overly simplistic. Where currently the Bot Store skews more toward apps as automation task solutions, Go! is an app store for granular task components – so while the former might offer a complete end-to-end document processing bot, Go! would instead offer a set of smaller, more atomic components like signature verification, invoice number identification, address lookup and correction, etc.

The specific goal of Go! is to accelerate adoption of RPA in enterprise-scale clients, and the component focus of the offering is intended to fill in gaps in processes to allow them to be more entirely automated. The example presented was the aforementioned signature verification: given that a human might take two seconds to verify a signature, is it really worth automating this phase of the process? Not in and of itself, but failing to do so turns an unattended automation into an attended one, requiring human input to complete. With Go!, companies can automate the large, obvious task phases from their existing automation component libraries, and then either build new components or download Go! components to complete the task automation in toto.

Dragonfly is designed to integrate Go! components into the traditional UiPath development environment, providing a means for automation architects to combine self-designed automation components with downloaded third-party components. Given the increased complexity of managing project automation software dependencies for automations built from both self-designed and downloaded components, UiPath has also improved the dependency and library management tools in 2018.3. For example, automation tasks that reuse components already developed can include libraries of such components stored centrally, reducing the amount of rework necessary for new projects.

In addition, the new dependencies management toolset allows automation designers to point projects at specific versions of automations and task components, instead of defaulting to the most recent, for advanced debugging purposes. Dragonfly also moves UiPath along the Citrix certification roadmap, as this release is designated Ready for Citrix, another step toward becoming Certified for Citrix. Finally, Dragonfly also adds new capabilities in VDI management, new localization capabilities in multiple languages, and UI improvements in the Studio environment.

In the interest of spurring development of Go! components, UiPath has designated $20m for investment in its partners during 2019. The investment is split between two funds, the UiPath Venture Innovation Fund and the UiPath Partner Acceleration Fund. The first of these is aimed directly at the Go! marketplace by providing incentives for developers to build UiPath Go! components. In at least one instance, UiPath has lent developers directly to an ISV along with funding to support such development. UiPath expects that these investment dollars will enable the Go! initiative to populate the store faster than a more passive approach of waiting for developers to share their automation code.

The second fund is a more traditional channel support fund, aimed at encouraging partners to develop on the UiPath platform and at supporting joint marketing and sales efforts. The timing of this latter fund’s rollout, on the heels of UiPath’s deal registration/marketing and technical content portal announcement, demonstrates the company’s commitment to improving channel performance. Partners are key to UiPath’s ability to sustain its ongoing growth rate, and the strength of its partner sales channels will be vital in securing the company’s next round of financing. (UiPath's split of partner/direct deployments is approaching 50/50, with an organizational goal of reaching 100% partner deployments by 2020.) Accordingly, it is clear that the company’s leadership team is now placing a strong and increasing emphasis on channel management as a driver of continued growth.

]]>
<![CDATA[7 Process Characteristics That Are Key to Blockchain Adoption]]>

 

With the rise of every new technology, there is a parallel rush among enterprises to incorporate it into the near-term IT roadmap, and for good reason: new technologies offer cost savings, CX improvement, better risk management, and a host of other benefits that look terrific in annual reports and quarterly earnings documentation.

Nowhere is that trend more prevalent than in blockchain, a technology that has created significant adoption pressure for enterprise clients amid a flurry of questions regarding platform selection, process redesign, and partner engagement. Blockchain’s benefits are much vaunted (security, immutability, fault tolerance, decentralization), but there is also no shortage of drawbacks (throughput speed, a fragmented platform landscape, interoperability). Moreover, there is already talk of technologies that could replace or bypass the benefits of blockchain, from hashgraph to quantum computing, adding further to the murk surrounding the process of evaluating blockchain for organizational use.

In an environment spurring on organizations to adopt blockchain, there’s actually good cause to slow the overall technological rush and ensure that a blockchain solution is the right choice for a specific business challenge and commercial ecosystem. Blockchain is a transformative technology; it changes the fundamental way that transactions are encoded, stored, and tracked. In the right setting, it can be a material lever for unlocking value within the organization, in the supply chain, and among banking and regulatory interactors. In the wrong setting, it can be an expensive dead-end that diverts resources and time from a broader slate of digital transformation activities.

So how can organizations correctly sort the real blockchain opportunities out? In the course of my research in this area, I’ve identified seven key characteristics of business processes that form the basis of an organizational blockchain ‘goodness of fit’ checklist:

1. Transactional processes

Note that this is not the same as being financially transactional; any process where information changes hands between parties, even if compensation is not a part of the exchange, can be a candidate for blockchain deployment. Blockchain excels at documenting the transfer of value or information, and fiscal gains tend to accumulate with greater volumes of handshakes. As a result, the higher the transaction count in one cycle of a given process task, the more relevant blockchain becomes.

2. Frictional processes

Process friction can take many forms – from time delays in passing information from one party to the next, to per-message costs (such as SWIFT messaging expenses in financial services), to partner fatigue in disputing invoices or claims. The more time and expense accumulate within a process, the better a fit it is for blockchain technology.

3. Non real-time, low-volume processes

Speed is not currently a significant blockchain platform strength, so processes that need to happen in real-time at scale may be a poor fit for the technology in its current form. While some specialized platforms – most notably Digital Asset, Symbiont, and Waves – offer compelling speed at scale, most of the big names in the platform space are not yet performing at speeds comparable to a relational database, so real-time processes happening in volume may be good candidates to consider when blockchain catches up in the next two years.

4. Simpler processes

The term smart contracts tends to be a confusing one in blockchain, as it suggests more intelligence than is really present in the technology currently. Smart contracts are smart in their management of the tasks surrounding a transaction, like document processing, notarization, and approval; a smart contract is self-executing in these areas and does not require additional input.

But for all that transactional intelligence, smart contracts remain relatively ‘dumb’ in terms of overall contract complexity. So, while most can follow relatively simple ‘if-then’ logic, complicated transactions with multiple forks and ‘fuzzy’ interpretation are beyond the current reach of most smart contract platforms. Again, this is a development priority for many platform providers, so expect to see this evolve swiftly in parallel with developments in AI – but at the time of writing, simpler processes are a better fit for blockchain implementation.
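
To make the distinction concrete, the fragment below sketches – in plain, platform-agnostic Python rather than any particular smart contract language – the kind of self-executing ‘if-then’ settlement rule that current platforms handle comfortably; the function and field names are hypothetical:

    # Illustrative only: simple, deterministic 'if-then' settlement logic.
    # Anything requiring judgement or 'fuzzy' interpretation (e.g. assessing
    # partial delivery quality) remains beyond most platforms today.
    def settle_shipment(state):
        if state["delivery_confirmed"] and state["documents_notarized"]:
            return {"action": "release_payment", "amount": state["agreed_price"]}
        if state["days_overdue"] > 30:
            return {"action": "refund_deposit", "amount": state["deposit"]}
        return {"action": "wait"}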

5. Oppositional processes

Transparency and trust are cornerstone components of an effective blockchain implementation, particularly so when there is an element of opposed goals in a process environment (payor versus payee being the most common such example). When all parties can monitor and oversee the documented process of content or payment through a process transparently from end to end, trust is improved and disputes tend to decline both in volume and in time required for resolution.

6. Fragmented processes

Intra-organizational applications of blockchain can produce meaningful benefits, but the real value is unlocked when a blockchain connects multiple parties operating in different domains – for example, in an ocean cargo management setting, exporters, banks, insurers, regulators, shipping providers, importers, and distributors. In such an environment, where responsibility and input are being passed among many organizations, the relevance of a blockchain solution increases considerably.           

7. Risk-accumulative processes

Corporate risk management is an accumulative function to begin with, as the audit task normally demands a large volume of signed and documented data – so the ability to produce the supporting documentation without significant organizational effort or data reconstruction is a vital capability. Blockchain offers historically unparalleled data immutability and signed witness status, making it an exceptionally good fit for processes that accumulate large volumes of risk-relevant exchanges over time.
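
Taken together, the seven characteristics can serve as a rough screening checklist. The sketch below is one illustrative way to encode it – the equal weighting and the example profile are my assumptions, not a NelsonHall scoring model:

    # Rough goodness-of-fit screen across the seven characteristics above
    CRITERIA = [
        "transactional",       # 1. value or information changes hands
        "frictional",          # 2. delays, per-message costs, dispute fatigue
        "tolerates_latency",   # 3. not real-time/high-volume (today)
        "simple_logic",        # 4. expressible as straightforward if-then rules
        "oppositional",        # 5. parties with opposed goals (payor vs payee)
        "fragmented",          # 6. many organizations touch the process
        "risk_accumulative",   # 7. audit burden builds over time
    ]

    def blockchain_fit(profile):
        """Return the share of criteria a candidate process meets."""
        return sum(bool(profile.get(c)) for c in CRITERIA) / len(CRITERIA)

    # Example: a multiparty trade finance process that must run in real time
    profile = {c: True for c in CRITERIA}
    profile["tolerates_latency"] = False
    print(round(blockchain_fit(profile), 2))   # 0.86 - strong fit, watch throughput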

In conclusion

What can operations and IT executives take from this in planning for blockchain deployment? Currently, the most compelling fiscal and performance returns are coming from highly transactional, high-friction processes that prioritize real-time transparency, run at low transaction volumes, involve minimal complexity, span fragmented multi-party ecosystems, and carry considerable risk exposure.

However, it is critical to maintain a perspective on the role of implementing blockchain for these processes within the scope of a broader digital transformation initiative; blockchain demands many of the same transformation readiness checkpoints (big data capability, master data management and hygiene, and automation readiness) that other transformational initiatives do.

Finally, in assessing blockchain’s weaknesses, keep a weather eye on the horizon: blockchain’s two principal shortcomings to date (managing real-time transaction volume at scale and handling complex smart contracts) will increasingly become priorities for the major platform providers over the next two years.

]]>
<![CDATA[IPsoft’s Challenging Vision for Cognitive Automation]]>

 

I recently attended IPsoft’s Digital Workforce Summit in New York City, an intriguing event that in some ways represented a microcosm of the challenges clients are experiencing in moving from RPA to cognitive automation.

The AI challenge

Chetan Dube loomed large over proceedings. IPsoft’s president and CEO was onstage more than is common at events of this type, chairing several fireside chats himself in addition to his own technology keynote, and participating (with sleeves rolled up) at the analyst day that followed. He brought a clear challenge to the stage, while at the same time conveying the complexity and capability of IPsoft’s flagship cognitive products, Amelia and 1DESK, and making them understandable to the audience, in part by framing them in terms of commercial value and ROI.

RPA vendors have a simpler form of this challenge, but both robotic process automation and cognitive automation vendors have a hill to climb in gaining clients’ trust in the underlying technology and reassuring service buyers that automation will be both a net reducer of cost and a net creator of jobs (rather than a net displacer of them).

From a technological perspective, RPA sounds from the stage (and sells) much more like enterprise software than neuroscience or linguistics, so the overall pitch can be situated much more in the wheelhouse of IT buyers. The product does what it says on the tin, and the cavalcade of success stories that appear on event stages are designed to put clients’ concerns to rest. To be sure, RPA is by no means easy to implement, nor is it yet a mature offering in toto, but the bulk of the technological work to achieve a basic business result has been done. And overall, most vendors are working on incremental and iterative improvements to their core technology at this time.

AI differs in that it is still at the start of the journey towards robust, reliable customer-facing solutions. While Amelia is compelling technology (and is performing competently in a variety of settings across multiple industries), the version that IPsoft fields in 2025 will likely make today’s version seem almost like ELIZA by comparison, if Dube’s roadmap comes to fruition. He was keen to stress that Amelia is about much more than just software development, and he spent a lot of time explaining aspects of the core technology and how it was derived from cognitive theory. The underlying message, broadly supported by the other presenters at the event, was clearly one of power through simplicity.

IPsoft’s vision

The messaging statements coming from the stage during the event portrayed a diverse and wide-ranging vision for the future of Amelia. Dube sees Amelia as an end-to-end automation framework, while Chief Cognitive Officer Edwin van Bommel sees Amelia as a UI component able to escape the bounds of the chatbox and guide users through web and mobile content and actions. Chief Marketing Officer Anurag Harsh focused on AI through the lens of the business, and van Bommel presented a mature model for measuring the business ROI of AI.

Digging deeper, some of what Dube had to say was best read metaphorically. At one point he announced that by 2025 we will be unable to pass an employee in the hallway and know if he or she is human or digital. That comment elicited some degree of social media protest. But consider that what he was really saying is that most interaction in an enterprise today is performed electronically – in that case, ‘the hallways’ can be read as a metaphor for ‘day-to-day interaction’.

The question discussed by clients, prospects, and analysts was whether Dube was conveying a visionary roadmap or fueling hype in an often overhyped sector. Listening to his words and their context carefully, I tend towards the former. Any enterprise technology purchase demands three forms of reassurance from the vendor community:

  • That the product is commercially ready today and can take up the load it is promising to address
  • That the company has a long-term roadmap to ensure that a client’s investment stays relevant, and the product is not overtaken by the competition in terms of capacity and innovation
  • And perhaps most importantly, that the roadmap is portrayed realistically and not in an overstated fashion that might cause clients to leave in favor of competitors’ offerings.

I took away from Digital Workforce Summit that Dube was underscoring the first and second of these points, and doing so through transparency of operation and vision.

There are only two means of conveying the idea that you sell a complex product which works simply from the user perspective – you either portray it as a black box and ask that clients trust your brand promise, or you open the box and let clients see how complex the work really is. IPsoft opted for the latter, showing the product’s operation at multiple levels in live demonstrations. Time and again, Dube reminded the audience that it is unnecessary to grasp evolved scientific principles in order to take advantage of technologies that use those principles – so light switches work, in Dube’s example, without the user needing to grasp Faraday’s principles of induction. It still benefits all parties involved to see the complexity and grasp the degree to which IPsoft has worked to make that complexity accessible and actionable.

Conclusion

The challenge, of course, is that clients attend events of this kind to assess solutions. The majority of attendees at Digital Workforce Summit were there to learn whether IPsoft’s Amelia, in its latest form, is up to the task of managing customer interactions, and whether it will continue to evolve apace to become a more complete conversational technology solution and fulfill the company’s ROI promises.

I came away with the sense that both are true. Now it is up to the firm’s technology group to translate Dube’s sweeping vision into fiscally rewarding operational reality for clients.

]]>
<![CDATA[Infosys Announces Blockchain-Powered Nia Provenance to Manage Complex Supply Chains]]>

 

EdgeVerve, an Infosys product subsidiary, this week announced a new blockchain-powered application for supply chain management as part of its product line. Nia Provenance is designed to address the challenges faced by organizations managing complex supply chain networks with multiple IT stacks engaged across multiple stakeholders. Here I take a quick look at the new application and its potential impact.

Supply chain traceability, transparency & trust

Nia Provenance is designed to provide traceability of products from source of origin to point of purchase with full transparency at every point along the supply chain. The product establishes trust through the utilization of a version of Bitcore, the blockchain architecture used by Bitcoin. While this can be a relatively simple task in agribusiness and other supply environments in which a product involves only processing as it moves through the supply chain, environments such as consumer electronics or medical devices are much more complex, involving integration and assembly of multiple components along the way. The ability to isolate a specific component and trace it to its source of origin, through phases of value addition timestamped on a blockchain ledger, is invaluable in case of recall or consumer danger.

Transparency in Nia Provenance is provided through proof of process as the product or commodity moves through the system – so attributes that must be agreed on at specific phases of the supply chain, such as conflict-free or locally-sourced, can be seen in the system as they are accumulated. Similarly, regulatory inspections and certifications are more easily tracked and audited through a blockchain solution like Nia Provenance.

Finally, trust is gained in a system with a combination of data immutability, equality in network participation as a result of decentralization of the overall SCM ledger, and cryptographic information security. Over time, the benefits of a blockchain SCM environment accrue both to the organizational bottom line, in the form of cost savings, and to the organization’s brand as a function of increased consumer trust in the brand promise.

Agribusiness client case

As one example of how Nia Provenance is being leveraged in the real world, a global agribusiness firm undertook a proof of concept for its coffee sourcing division in Indonesia to track the journey of coffee from the growing site, through the roasting plant, the blend manufacturer, the quality control operation, the logistics providers, and on to the importer. This enabled the trader to provide trusted accreditation and certification information to the importer for properties such as organic or fair trade status, or that the coffee was grown using sustainable agriculture standards.

Providing strategic blockchain reach

Nia Provenance provides Infosys with three important sources of strategic blockchain ‘reach’ in an increasingly competitive market, because:

  • It is platform-agnostic and purpose-built to dock with multiple blockchain architectures. A supply chain solution that relies too heavily on the specific capabilities of one common blockchain architecture or another – for example, Ethereum or Hyperledger – would encounter difficulty working with other upstream or downstream architectures. By keeping the DLT technology in an abstraction layer, Nia Provenance eases the process of incorporating different blockchain architectures in a complex SCM task environment
  • It is designed to benefit multiple supply chain stakeholders, not just the client. Blockchain adoption becomes more appealing to upstream and downstream stakeholders, as well as horizontal entities like banks, insurers and regulators, when the ecosystem is built with clear benefits for them as well as the organizing entity. Nia Provenance is designed from the ground up with a mindset inclusive of suppliers, inspectors, insurers, shippers, traders, manufacturers, banks, distributors, and end customers
  • It is designed to span multiple industries. Although the platform has its origins in agribusiness, Nia Provenance looks to be up to the task of SCM applications in manufacturing, consumer goods/FMCG, food and beverage, and specialized applications such as cold-chain pharmaceuticals.

Summary

Supply chain provenance is a core application for blockchain, and one that we expect to be a clear value delivery vehicle for blockchain technology through 2025. The combination of – as Infosys puts it – traceability, transparency, and trust that blockchain provides is a compelling proposition. Nia Provenance offers a solution across a broad variety of industry applications for organizations seeking lower cost and greater security in their supply chain operations.

]]>
<![CDATA[The Advantages of Building a Bespoke Blockchain Platform]]>

 

For all the discussion in the blockchain solution industry around platform selection (are they choosing Fabric or Sawtooth? Quorum or Corda?), you’d be forgiven for thinking that every provider’s first stop is the open-source infrastructure shelf. But the reality is that blockchain is more a concept than a fixed architecture, and the platforms mentioned do not encompass the totality of use case needs for solution developers. As a result, some solution developers have elected to start with a blank sheet of paper and build blockchain solutions from the ground up.

One such company is Symbiont, which started down this road much earlier than most. Faced with the task of building a smart contracts platform for the BFSI industry, the company examined what was available in prebuilt blockchain platform infrastructure and did not see its solution requirements represented in those offerings – so it built its own. Symbiont’s concerns centered on the two areas of scalability and security, and for the firm’s target accounts in capital markets and mortgages, those were red-letter issues.

The company addressed these concerns with Symbiont Assembly, the company’s proprietary distributed ledger technology. Assembly was designed to address three specific demands of high-volume transactional processes in the financial services sector: fault tolerance, volume management, and security.

Supporting fault tolerance

Assembly addresses the first of these through the application of a design called Byzantine Fault Tolerance (BFT). Where some blockchain platforms allow only for node failure within a distributed ledger environment, platforms using BFT broaden that definition to include the possibility of a node acting maliciously, and can control for actions taken by such nodes as well. The Symbiont implementation of BFT is based on the BFT-SMaRt protocol.
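
As a back-of-envelope illustration of what that buys (the figures below are the standard replica math for PBFT/BFT-SMaRt-style protocols, not Symbiont’s published deployment parameters): a network needs at least 3f + 1 replicas to tolerate f arbitrarily faulty or malicious nodes, and 2f + 1 matching responses to commit.

    # Standard BFT replica arithmetic: n >= 3f + 1 to tolerate f faulty nodes
    def bft_tolerance(n_replicas):
        f = (n_replicas - 1) // 3     # faulty/malicious nodes tolerated
        quorum = 2 * f + 1            # matching votes needed to commit
        return {"replicas": n_replicas, "tolerated_faults": f, "quorum": quorum}

    for n in (4, 7, 10):
        print(bft_tolerance(n))
    # {'replicas': 4, 'tolerated_faults': 1, 'quorum': 3}
    # {'replicas': 7, 'tolerated_faults': 2, 'quorum': 5}
    # {'replicas': 10, 'tolerated_faults': 3, 'quorum': 7}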

Volume management

In addressing the volume demands of financial services processing, the decision to build on the BFT-SMaRt protocol was again important, as it enables Assembly to reach performance levels in the ~80k transactions per second range consistently.

This has two specific benefits, one obvious and one less so. First, it means that Assembly can manage the very high-volume transaction pace of applications in specialized financial trading markets without scale concerns. Secondly, it means that in lower-volume environments, the extra ‘headroom’ that BFT-SMaRt affords Assembly can be used to store related data on the ledger without the need to resort to a centralized data store to hold, for example, scanned legal documents that support smart contracts.

Addressing security concerns

The same BFT architecture that supports Assembly’s fault tolerance also provides an additional layer of security, in that malicious node activity is actively identified and quarantined, while ‘honest’ nodes can continue to communicate and transact via consensus. Add in encryption of data, whereby Assembly creates a private security ledger within the larger ledger, and the result is a robust level of security for applications with significant risk of malicious activity in high-value trading and exchange.

Advantages of building a bespoke blockchain platform

Building its own blockchain platform cost Symbiont many hours and R&D dollars that competitors did not have to spend, but ultimately this decision provides Symbiont with three strategic advantages over competitors:

  • Assembly is purpose-built for BFT-relevant, high-volume environments. As a result, the platform has performance and throughput benefits for applications in these environments compared with broader-use blockchain platforms that are intended to be used across a variety of business DLT needs. To some degree this limits the flexibility of the platform in other use cases, but just as a Formula One engine is a bespoke tool for a specific job, so too is Assembly specifically designed to excel in its native use case environment. That provides real benefits to users electing to build their banking DLT applications on the Assembly architecture
  • Symbiont can provide for third-party smart contract writing, should it elect to do so. While this is not in the roadmap for the moment, and Symbiont appears content to build client solutions on proprietary deliverables from the contract-writing layer through the complete infrastructure of the solution, the company could elect to allow clients to write their own smart contracts ‘at the top of the stack’. Symbiont does intend to keep the core Assembly platform proprietary to the company for the foreseeable future
  • Assembly may attract less malicious activity interest than traditional platforms. The rising number of blockchain projects based on Hyperledger and Ethereum is certain to attract more malicious activity based on the commonality of the architecture across a broader common base of technology. In much the same way that Windows historically attracted more virus incursions than less widely-used OS platforms, Assembly will tend to attract less attention than platforms with broader user bases. Moreover, Assembly’s BFT foundations will enable it to deal more effectively with those events that do occur.

Summary

Symbiont isn’t alone in developing its own proprietary blockchain technology architecture rather than choosing from the broadly available offerings in the space, and as blockchain enters the mainstream of enterprise business, other provider organizations will surely go the same route.

What Symbiont has established is an exemplar for developing a purpose-built blockchain platform, beginning with the specific needs of the task environment at scale, and proceeding to address those needs carefully in the development process. 

]]>
<![CDATA[6 Ways to Prepare for Cognitive Automation During RPA Implementation]]>

 

2017 brought a surge of RPA deployments across industries, and in 2018 that trend has accelerated as more and more firms begin exploring the many benefits of a digital workforce. But even as some firms are just getting their RPA projects started, others are beginning to explore the next phase: cognitive automation. And a common challenge for firms is the desire to begin planning for a more intelligent digital workforce while automating simpler rule-based processes today.

Having spoken with organizations at different stages of their journeys from BI to RPA and on to cognitive, I see a number of tasks that companies can begin during RPA implementation to ensure that they are well positioned for the machine learning-intensive demands of cognitive automation:

Design insight points into the process for machine learning

Too often, the concept of straight-through processing (STP) gets conflated with the idea of measuring task automation only on completion. But for learning platforms, it is vital to understand exactly where variance and exceptions arise in the process – so allow your RPA platform to document its progress in detail from task inception to task completion.

At each stage, provide a data outlet to track the task’s variance on a stage-by-stage basis. A cognitive platform can then learn where, within each task, variance is most likely to arise – and it may be the case that the work can be redesigned to give straightforward subtasks to a lower-cost RPA platform while cognitive automation handles the more complex subtasks.
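
In practice this can be as simple as having each bot emit a structured record per stage rather than a single pass/fail at the end. The sketch below is a minimal illustration; the stage names, statuses, and log format are assumptions rather than any vendor’s schema:

    import json, logging, time

    logging.basicConfig(level=logging.INFO, format="%(message)s")

    def log_stage(task_id, stage, status, detail=None):
        """Emit one structured record per stage for later ML analysis."""
        logging.info(json.dumps({
            "task_id": task_id,
            "stage": stage,        # e.g. 'ingest', 'validate', 'post'
            "status": status,      # 'ok', 'variance', or 'exception'
            "detail": detail,
            "ts": time.time(),
        }))

    # Usage inside a bot's workflow
    log_stage("INV-0042", "ingest", "ok")
    log_stage("INV-0042", "validate", "variance", detail="missing PO number")
    log_stage("INV-0042", "post", "exception", detail="ERP timeout")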

Build a robot with pen & paper first

One of the basic measures for determining whether a process can be managed by BPM, by RPA, or by cognitive automation is the degree to which it can be expressed as a function of rigorous rules. So, begin by building a pen-and-paper robot – a list of the rules by which a worker, human or digital, is expected to execute against the task.

Consider ‘borrowing’ an employee with no familiarity with the involved task to see if the task is genuinely as straightforward and rule-bounded as it seems – or whether, perhaps, it involves a higher order of decision-making that could require cognitive automation or AI.

Use the process to revisit the existing work design

In many organizations, tasks have ‘grown up’ inorganically around inputs from multiple stakeholders and have been amended and revised on the fly as the pace of business has demanded. But the migration first to RPA and then on to cognitive automation is a gift-wrapped opportunity to revisit how, where, and when work is done within an organization.

Can key task components be time-shifted to less expensive computing cycles overnight or on weekends? Can whole tasks be re-divided into simpler and more complex components and allocated to the lowest-cost tool for the job?

Dock the initiative with in-house ML & data initiatives

Cognitive automation does not have to remain isolated to individual task areas or divisions within an organization. Often, ML initiatives produce better results when given access to other business areas to learn from. What can cognitive automation learn about customer service tasks from paying a ‘virtual visit’ to the manufacturing floor via IoT? Much, potentially – for example, products or parts that are difficult to machine to tolerance within an allowed margin of error may be more common sources of customer complaints and RMAs.

Similarly, a credit risk-scoring ML platform can learn from patterns of exception management in credit applications being managed in a cognitive automation environment. For ML initiatives, enabling one implementation to learn from others is a key success factor in producing ‘brilliant’ organizational AI.

Revisit the organizational data hygiene & governance models

Data scientists will be the first to underscore the importance of introducing clean data into any environment in which decision-making will be a task stage. Data with poor hygiene, and with low levels of governance surrounding the data cleaning and taxonomy management function, will create equally poor results from cognitive automation technology that utilizes it to make decisions.

Cognitive software is no different than humans in this respect; garbage in, garbage out, as the old saying goes. As a result, a comprehensive visitation of organizational data hygiene and governance models will pay dividends down the road in cognitive work.
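
Even a lightweight, automated hygiene gate in front of the cognitive pipeline pays off. The sketch below is purely illustrative – the field names and the two checks are assumptions, standing in for whatever an organization’s MDM and governance rules actually require:

    # Minimal pre-ingestion hygiene check: flag missing fields and duplicates
    def hygiene_report(records, required_fields=("customer_id", "amount", "date")):
        issues = {"missing_fields": 0, "duplicates": 0}
        seen = set()
        for rec in records:
            if any(rec.get(f) in (None, "") for f in required_fields):
                issues["missing_fields"] += 1
            key = tuple(rec.get(f) for f in required_fields)
            if key in seen:
                issues["duplicates"] += 1
            seen.add(key)
        return issues

    print(hygiene_report([
        {"customer_id": "C1", "amount": 120.0, "date": "2018-05-01"},
        {"customer_id": "C1", "amount": 120.0, "date": "2018-05-01"},  # duplicate
        {"customer_id": "C2", "amount": None, "date": "2018-05-02"},   # missing amount
    ]))
    # -> {'missing_fields': 1, 'duplicates': 1}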

Discuss your vendor’s existing technology & roadmap in cognitive & AI

Across the RPA sector, cognitive is a central concept for most vendors’ 2018-2020 roadmaps. Scheduling a working session now on migrating the organization from RPA to cognitive automation provides clients with insight on their vendor’s strengths and capability set. It also enables vendors to get a close look at ‘on the ground’ cognitive automation needs in different organizational task areas.

That’s win/win – and it helps ensure that an existing investment in vendor technology is well-positioned to take the organization forward into cognitive based on a sound understanding of client needs.

 

NelsonHall conducts continuous research into all aspects of RPA and AI technologies and services as part of its RPA & Cognitive Services research program. A major report on RPA & AI Technology Evaluation by Dave Mayer has just been published, and coming soon is a major report on Business Process Transformation through RPA & AI by John Willmott. To find out more, contact Guy Saunders.

]]>
<![CDATA[Kryon’s Rebranding Focuses on the Business Benefits of RPA]]>

 

Kryon has today launched a new brand presence, along with a new strategic perspective on RPA focused on delivering business benefits. The former Kryon Systems (now simply Kryon) will now be organized around a three-pronged approach the company refers to as ‘Discover, Automate, Optimize’.

As part of this brand migration, several aspects of Kryon’s go-to-market approach will change, as described below.

Focusing on the human side of the RPA equation

Kryon’s former branding package included limited personification of the RPA offering under the Leo name, and also featured an anthropomorphized robot ‘mascot’ in much of the company’s promotional and industry relations materials. That component of the company’s branding has been eliminated from its new visual identity, which now focuses much more on the human side of the RPA equation and the concept of integrating RPA into a hybrid human-digital workforce.

A new focus on business benefits rather than technological innovation

As more RPA features become ‘table stakes’ within the sector, NelsonHall has been expecting vendors to shift their focus from product features to business outcomes. Kryon joins that trend with its rebranding, which will include more case studies and success stories expressed as a function of business KPIs, while keeping the technological conversation within the context of real-world improvements in cost, efficiency, and quality.

A new framework for the brand

The ‘Discover, Automate, Optimize’ theme speaks to Kryon’s three primary offering areas:

  • Process discovery (already soft-launched, but due for a more formal product rollout in early summer of 2018)
  • Traditional RPA
  • Analytics/AI.

To date, these have been marketed as components, but under the new branding they become part of a larger solution intended to reposition Kryon as an end-to-end provider of business process optimization solutions.

A clear effort to differentiate its offerings

Kryon has sometimes suffered in terms of its ability to break out from the pack of RPA providers and carve out a differentiated and sustainable niche for itself. Under the new brand positioning, the company is making a clear effort to differentiate its offerings based on the ability to do more than automate simple, repetitive tasks.

The company talks about enabling human workers to be mindful and focused on creative tasks by eliminating background work entirely through the application of RPA combined with AI and machine learning. While other firms offer similar messaging, Kryon’s new branding package treats repetitive work as ‘background noise’ to be removed from the typical employee’s workday.

A new name, logo & tagline

While these are often secondary in importance from a technology and business analyst’s perspective, it is worth mentioning what is and what, importantly, is not included in Kryon’s visual rebrand. Gone is the word ‘Systems’ from the old Kryon logo, in a clear effort to migrate the firm towards a broader service mandate.

The tagline ‘Be Your Future’ is added in place of ‘Systems’, again suggesting a broadening of the brand. Finally, the letter ‘O’ in the logo is given a half-gold, half-blue treatment to emphasize the hybrid human/digital nature of its offering.

Summary

2018 and 2019 are expected to be watershed years in the RPA sector, as competitive positioning comes into focus and leadership niches are claimed in a maturing market. Kryon is taking clear steps to include itself in the ‘tier one’ vendor conversation through a set of brand migration moves that position the company to compete well into the next decade.

]]>
<![CDATA[Redwood Introduces Disruptive New RPA Pricing Model]]>

 

Today, Redwood announced a new pricing model for its RPA software in which users pay only for units of work completed, and on a cost basis equivalent to efficient human work on the same task. As a result, if a Redwood robot sends an email, or retrieves specific data, or performs reconciliation work, the organization is charged on completion for specific amounts relevant to the parallel human cost of execution in a ‘perfect work efficiency’ environment.

This is a fundamental change from the prevalent model in the industry of paying for licenses for RPA software and estimating how many licenses will be necessary to perform specific tasks. While other pricing models exist – ranging from paying for the process rather than the robot, to buying robots outright as owned software properties – this is the first time that pricing is available both on completion and on a granular, task-centric basis. In essence, Redwood is enabling organizations implementing RPA to pay on a piecework basis, and only after the work is performed.

The new pricing model will mark the second major transition in the company’s client contracting model in the last five years. Historically, Redwood sold its software on a perpetual licensing basis, which changed over time to a more traditional annual license offering (although some clients are still on perpetual licenses). Redwood will need to manage a transition period in which clients can switch to the utility pricing model on the anniversary of their licenses, which may introduce some unevenness to the company’s financial performance during 2018-2019.

There are more implications for Redwood, and for the RPA industry, as a result of deploying this new pricing model:

The new model changes the revenue & profit mix for Redwood…

The company expects to see some flattening of topline revenue as a result of this change, but improved margins, with an overall increase in transaction volume. Redwood believes that by reducing barriers to entry in RPA through enabling payment by the task, and after the fact, more prospective clients will adopt the Redwood solution. This is a logical evolution of the Redwood business model in that it promotes Redwood’s library of prebuilt robots to a larger prospective audience and smooths the on-ramp to Redwood adoption for more organizations.

…and demands that Redwood’s pricing model be appealing

The company has researched levels of productivity and cost in both Western and offshore economies and modeled a function that prices Redwood tasks at roughly 20 Euro cents per moderate-duty task (retrieving a report, reconciling data, sending an email, etc.) based on a perfectly-efficient Western worker performing 156 such tasks per hour for a fully-loaded employment cost of €50k. (A low-cost economy worker performs half as many such tasks per hour for half the cost in Redwood’s model.)

In order for Redwood to unlock the full potential value of this new pricing model, these assumptions and metrics need to be appealing to buyers.
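
As a sanity check on those figures, the arithmetic is easy to reproduce. The sketch below assumes the €50k is an annual fully-loaded cost and that a worker delivers roughly 1,600 productive hours per year – both our assumptions for illustration, not figures Redwood has published.

def price_per_task(annual_cost_eur, tasks_per_hour, hours_per_year=1600.0):
    """Price of one task, assuming a perfectly efficient worker."""
    return annual_cost_eur / (hours_per_year * tasks_per_hour)

western = price_per_task(50_000, 156)   # €50k fully loaded, 156 tasks/hour
offshore = price_per_task(25_000, 78)   # half the cost, half the throughput

print(f"Western worker:  EUR {western:.2f} per task")   # ~0.20
print(f"Offshore worker: EUR {offshore:.2f} per task")  # ~0.20

Because the low-cost worker’s throughput and cost are both halved, the per-task price lands in the same place – which is presumably the point of the model.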

Redwood creates more pressure on the traditional licensing model

This is still a relatively young industry in terms of establishing pricing and contracting norms, so disruptive acts (and Redwood’s new pricing model will certainly be disruptive at some level) create pressure on ‘safer’, more traditional modes of client engagement. Redwood holds a degree of advantage here: the company has an extensive library of ~35,000 prebuilt robots that it can price and sell on this model, as opposed to RPA providers whose software is customized and deployed within the client organization. It will be more difficult for traditional RPA providers to cost-effectively match the Redwood model in the market.

Reporting & invoicing challenges are addressed through Redwood Robotics itself

Transitioning from a license-based contracting structure to a high-resolution, granular use-based contracting structure would normally be a steep challenge for a software organization accustomed to annual licensing, given the degree of reporting and invoicing complexity involved. Fortunately for Redwood, these processes are being handled in their entirety by additional automations, deployed to the client organization at no charge, which monitor and document Redwood automation usage and generate regularly-scheduled invoices for the client.
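
To make the mechanics concrete, a usage meter of the kind described only needs to count completed tasks by type and roll them up into a period invoice. The sketch below is purely illustrative – the task names, flat rate, and structure are our assumptions, not Redwood’s actual reporting automation.

from collections import Counter
from datetime import date

RATE_EUR = 0.20  # hypothetical flat rate per moderate-duty task

class UsageMeter:
    """Counts completed robot tasks and rolls them into a period invoice."""

    def __init__(self):
        self.completed = Counter()

    def record(self, task_type, count=1):
        # called by the monitoring automation whenever a robot finishes a task
        self.completed[task_type] += count

    def invoice(self, period):
        lines = [f"Invoice for {period} (generated {date.today()})"]
        total = 0.0
        for task, n in sorted(self.completed.items()):
            amount = n * RATE_EUR
            total += amount
            lines.append(f"  {task:<18} {n:>6} x EUR {RATE_EUR:.2f} = EUR {amount:,.2f}")
        lines.append(f"  TOTAL: EUR {total:,.2f}")
        return "\n".join(lines)

meter = UsageMeter()
meter.record("send_email", 1_200)
meter.record("reconcile_data", 450)
print(meter.invoice("2018-06"))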

Summary

Redwood has put forth a compelling new framework for equating robotic and human labor costs, and for enabling organizations to pay only for work done rather than paying for the abstraction layer inherent to a robot license.

In effect, Redwood offers piecework rates in a market dominated by ‘salaried-FTE’ model robots. While this is unlikely to become the norm for RPA pricing, it provides Redwood with a new, and potentially sustainable, source of competitive differentiation.

]]>
<![CDATA[UiPath Gains Unicorn Status with Series B Funding; To Expand into AI]]>

 

This morning, UiPath announced that the company will be receiving $153m in Series B funding from a consortium including the company’s existing investors, with two new names involved – Kleiner Perkins and CapitalG, the late-stage growth venture capital fund financed by Alphabet Inc.

The latter is of note as this arm of Google focuses on profit-centric investment rather than investing to serve Google’s overall strategic goals. Its notable investments to date have included Gusto (then ZenPayroll) in 2015, Airbnb and Snap in 2016, and Lyft in 2017. As a result of these investments, Laela Sturdy of CapitalG and John Doerr of Kleiner Perkins will be joining UiPath’s strategic advisory board.

This latest round of financing is meaningful on several fronts:

It places UiPath into unicorn territory

This round of funding places UiPath’s market valuation in the vicinity of $1.1bn, implying that the company has grown from seed funding to unicorn status in just 36 months. By contrast, fellow RPA unicorn Blue Prism was founded in 2001 and only recently crossed into unicorn status with a market value of $1.02bn.

…which requires more resources to support rapid growth

While this is impressive supernormal growth in its own right – and a rate that suggests UiPath has taken considerable share in the past twelve months – it carries its own slate of challenges, as referenced in the profile of UiPath that NelsonHall published earlier this year. This level of growth needs infrastructural backfill in multiple areas, from R&D to sales and marketing. This is a company that is adding 2.5 customers a day on its existing funding levels and operating cashflow. What might UiPath’s organic growth trajectory look like with significantly deeper sales, marketing, deployment, and R&D capabilities? We are about to find out.

It positions the company to acquire in the AI space

The company now boasts a combined war chest of ~$200m in cash, more than enough for a tactical bolt-on or two in the areas of cognitive automation and AI. UiPath already has established partnerships with Celonis and Enate, so the company is likely to look outside those firms’ service footprints for acquisitions. Specifically, UiPath is looking for capabilities in the areas of natural language processing, machine learning, and identity recognition. There will be no shortage of good candidates for UiPath to choose from in these areas, but betting correctly and acquiring for maximum value will be critical in positioning UiPath for success.

It ties the company closer to Google

The CapitalG investment certainly suggests a closer relationship between UiPath and Google, which might have already manifested in UiPath’s decision to utilize Google Cloud for its cloud machine learning initiative. Given Blue Prism’s alignment with IBM, the major RPA providers are beginning to find their technology partners for long-term competition in the segment.

Google will be able to provide UiPath with a host of competitive advantages in terms of technology licensure, partner ecosystem development, and market presence. It would be interesting to see where UiPath might be in a year’s time with a closer relationship with Google’s TensorFlow team, for example, or with its Generative Adversarial Networks working groups.

It likely launches the next wave of innovation in the segment

Armed with a substantial war chest for building and acquiring new capabilities, UiPath will not see its actions during 2018 go unanswered by other segment leaders. As a result, UiPath’s next moves will likely signal the beginning of the next stage of evolution in the RPA sector – one we expect to bring out the best in technological innovation among those leaders. We see UiPath as a leader in that evolutionary process.

]]>
<![CDATA[7 Essential Tasks Prior to Any RPA Implementation]]>

 

With every new software release from RPA sector leaders, there is always much to be excited about as vendors continue to push the technological boundaries of workplace automation. Whether those new capabilities focus on cognition, or security, or scalability, the technology available to us continues to be a source of inspiration and innovative thinking in how those new capabilities can be applied.

But success in an RPA deployment does not depend on the technology alone. In fact, the implementation design framework for RPA is often just as important – if not more so – in determining whether a deployment is successful. Install the most cutting-edge platform available into a subpar implementation design framework, and no amount of technological innovation can overcome that hindrance.

With this in mind, here are seven tasks that should be part of any RPA implementation plan before organizations put pen to paper to sign up with an RPA platform vendor.

Create a cohesive vision of what automation will achieve

Automation is the ultimate strict interpretation code: it does precisely as it’s told, at speed, and in volume. But it must be pointed at the right corporate challenges, with a long-term vision for what it is (and is not) expected to do in order to be successful in that mission. That process involves asking some broad-ranging questions up-front:

  • What stakeholders are involved – internally and externally – in the automation initiative?
  • What are our organization’s expectations of the initiative?
  • How will we know whether we have succeeded or failed?
  • What metrics will drive those assessments?
  • Where will this initiative go next within our organization?
  • Will we involve our supply chain partners or technology allies in this process?

Ensure a staff model that can scale at the speed of enterprise automation

We tend to spend so much time talking about FTE reduction in the automation sector that we overlook the very real issue of FTE sourcing (in volume!) in relation to the implementation of automation at enterprise scale. Automation needs designers, coders, project managers, and support personnel, all familiar with the platform and able to contribute new code and thoughtware assets at speed.

Some vendors are addressing this issue head-on with initiatives like Automation Anywhere University, UiPath Academy, and Blue Prism Learning and Accreditation, and others have similar initiatives in the works. It is also important that organizational HR professionals be briefed on the specific skillsets necessary for automation-related hires; this is a relatively new field, and partnering up-front on talent acquisition can yield meaningful benefits down the road.

Plan in detail for a labor outage

The RPA sector is rife with reassurances about digital workers: they never go on strike; they don’t sleep or require breaks; they don’t call in sick. But things do go wrong. And while RPA vendors offer impressive SLAs with respect to getting clients back online quickly, sometimes it is necessary to handle hours, or even days, of automated work manually. Having mature high-availability and disaster recovery capability built into the platform – as Automation Anywhere included in Enterprise Release 11 – mitigates these concerns to a degree, but planning for the worst means just that.

Connect with the press and the labor community

Don’t skip this section because it sounds like it only concerns organized labor relations – although that’s a factor too. Automation stories get out, and local and national press alike are eager to cover RPA initiatives at large organizations. It’s a hot-button topic and an easily accessible story.

Unfortunately, it’s also all too easy to take an automation story and run with the sensationalist aspects of FTE displacement and cost reduction. By interacting with journalists and labor leaders in advance of launching an automation initiative, you’re owning the story before it can be owned elsewhere in the content chain.

Have a retraining and upskilling initiative parallel to your automation CoE

Automation can quickly reduce the number of humans necessary in a work area by half or even more. What is your organization’s plan for redeployment of that human capital to other, higher-value tasks? Who occupies those task chairs now – and what will they be doing?

Once the task of automation deployment is complete, there is still process work to be done in finding value-added work for humans who have a reduced workload due to automation. Some organizations are finding and unlocking new sources of enterprise value in doing so – for example, front-line workers who have their workloads reduced through automation can often ‘see the forest’ better and can advise their superiors on ways to streamline and improve processes.

Similarly, automation can bring together working groups on tasks that have connected automations between departments, allowing for new conversations, strategies, and processes to take shape.

Have an articulation plan for RPA and other advanced technologies

RPA and cognitive automation do more than improve the quality and consistency of work – they also improve the quality and consistency of task-related data. That is an invaluable characteristic of RPA from the organizational data and analytics perspective, and one that is often overlooked in the planning process.

While it might take days for a service center to spot a trend in common product complaints, RPA platforms could see the same trend in hours, combine that data in an organizational data discovery environment with IoT data from the production line, and identify a product fault faster and more efficiently than a traditional workforce might. When designing an automation initiative, it is vital to take these opportunities into account and plan for them.
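
As a rough illustration of that cross-source check – with every field name, threshold, and reading invented for the example – complaint events captured by bots can be joined against production-line telemetry to flag a suspect batch:

from collections import Counter

# complaint events captured by RPA from service tickets: (product_batch, complaint_code)
complaints = [
    ("B1042", "seal_leak"), ("B1042", "seal_leak"), ("B1043", "scratch"),
    ("B1042", "seal_leak"), ("B1044", "seal_leak"), ("B1042", "seal_leak"),
]
# IoT readings from the production line: batch -> average line temperature
iot_readings = {"B1042": 84.2, "B1043": 71.5, "B1044": 72.1}

SPIKE_THRESHOLD = 3   # complaints per batch before we investigate
TEMP_LIMIT = 80.0     # assumed process limit

by_batch = Counter(batch for batch, _ in complaints)
for batch, n in by_batch.items():
    if n >= SPIKE_THRESHOLD and iot_readings.get(batch, 0.0) > TEMP_LIMIT:
        print(f"Flag batch {batch}: {n} complaints, line temperature {iot_readings[batch]}C")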

Create a roadmap to cognitive automation and beyond

RPA is no more a destination than business rules engines were, or CRM, or ERP. These were all enabling technologies that oriented and guided organizations towards greater levels of agility, awareness and capability. Similarly, deploying RPA provides organizations with insight into the complexity, structure and dependencies of specific tasks. Working towards task automation yields real clarity, on a workflow-by-workflow basis, of what level of cognition will be necessary to achieve meaningful automation levels.

While many tasks can be handled by current levels of vendor RPA capability, others will require more evolved cognitive automation, and some will be reserved for the future, when new AI capabilities become available. By assigning relevant work processes to their automation ‘containers’, an enterprise roadmap to cognitive automation and AI begins to take shape.

]]>
<![CDATA[7 Predictions for RPA in 2018]]>

 

The RPA sector is defined by rapid technological evolution, and every year it seems that what we thought was bleeding-edge capability in January turns out to be proven and deployed technology long before year’s end. With this rapid pace of growth and maturation in mind, where might the RPA sector be by the end of 2018? Here are seven predictions.

The first wave of automation-inclusive UI design

To date, RPA has been adaptive in nature – automation software has done the interpretive labor to ‘see’ the application screen as humans do. But as more and more repetitive-task work becomes automated, software designers will begin taking the strengths and weaknesses of computer vision into account in designing applications that will be shared between human and digital workers. This will show up in small ways at first, particularly in interface areas that are challenging for RPA software to learn quickly, but over the course of 2018, ‘hybrid workforce UI design’ will become a new standard for enterprise software vendors.

Process mining makes RPA more accessible for midmarket & emerging large market segments

Early adopters of RPA have already established that detailed process mapping is key to successful task automation across the extended enterprise. For Fortune 1000 firms, that can be fairly straightforward, with retained consulting and systems integration partners on hand to assist in the process of mapping task flows for RPA implementation. Smaller firms, however, don’t always have the luxury of engaging large consulting firms to assist in this process – so vendors developing their own automated process mapping technology, or partnering with third-party providers like Celonis, will find demand booming in the midmarket.

Human skill bottleneck hits providers without education/certification plans

It’s ironic that human skill capital will end up as the limiting factor in the growth rate of successful RPA implementations, but 2018 will close with a clear shortage of qualified automation designers and deployment management professionals. Those organizations (like UiPath, Blue Prism, and Automation Anywhere) that saw this coming early on and established academic settings for the education and certification of on-platform skilled practitioners will thrive. But those lacking these programs may find themselves in a skill bottleneck in the market – one that will begin to materially inhibit growth.

RPA becomes a designed-in factor for disruptors

In conversations I had with organizations implementing RPA during 2H17, one common factor came to the fore: that their initial FTE rationalization gains had already been realized, and going forward, they were looking to RPA as a means to manage significant growth in their operations.

For organizations coming to market as disruptors, this trend is even more pronounced, and organizations with designs on being disruptive forces are increasingly building automation capabilities into their growth plans from the ground up. Building an organization on a foundation of a hybrid human-digital workforce is a different endeavor entirely from retrofitting an existing company with automation – and as a result, we should begin seeing some real innovation in organizational design beginning this year.

Japan becomes the adoption template geo for big bets

To date, Japan has produced some of the largest implementations of RPA, with UiPath’s late 2017 deployment at SMBC pushing the envelope still further. Japan is betting big on RPA to become a sustainable source of competitive differentiation, and as more large organizations there implement large-scale RPA projects, the best practices library for RPA deployment at scale will expand in kind.

Companies worldwide looked to Japan for guidance in implementing robotics once before, during the rise of robotic manufacturing in the automotive sector. 2018 will see a second such wave.

RPA proves its case as a source of compliance gains

RPA has already been marketed on a number of different value creation characteristics, with the obvious cost reduction and quality improvement factors taking center stage. But RPA has significant benefits to offer organizations in regulated industries, most notably the ability to secure access to sensitive information, systematize the process of accessing and modifying that information, and standardize the documentation and audit logging work associated with it.

2018 will be the year that organizations begin to see meaningful returns from adopting RPA as a solution to compliance task challenges.

Demand for specialist implementation navigators grows significantly

RPA implementation has been a partnered endeavor since the technology first arrived on the scene, with software vendors allying themselves closely with large consulting firms and systems integrators to optimize their client deployments. But demand is emerging for focused, automation-centric services, and right on time, the industry is seeing a surge of new RPA specialist service providers like Symphony and Agilify.

As buying organizations begin to ask more of their new – or revamped – RPA implementations, demand for these providers’ services will grow swiftly during 2018.

]]>
<![CDATA[CSS Corp’s Contelli Automation Platform Driving Improvements in Enterprise Network Management]]>

 

As 2018 begins, the RPA sector is starting to produce more segment specialists from within its vendor base. Whereas just two years ago the sector was still finding its footing in addressing common back- and front-office application automation, enterprise customers today have the luxury of building best-of-breed solutions that often incorporate two or more vendors working in concert to automate a broader spectrum of tasks.

CSS Corp’s Contelli is a relatively new automation platform, but one that is gaining attention for its capability set in a complex and high-value enterprise support area – namely, automated network management. Contelli received an elevated role at CSS in the wake of the company’s late 2016 reorganization, which saw CSS' board elect to change the direction of the firm. As part of this strategic direction change (one that saw an influx of new management talent into the executive suite), the company transitioned from a corporate focus heavy on legacy IT services to one centered on customer engagement and digital transformation. That transition also included an elevated role for CSS' automation platform, which was rebranded from AIMS (Automated Infrastructure Management Solution) to Contelli.

The product continuously analyzes client IT operations and uses network traffic data, paired with algorithmic analysis of historical data, to predict downtime, reconfigure traffic for improved efficiency, dynamically provision and de-provision IT assets, and resolve repetitive support tasks. CSS estimates ~30-40% improvements in operational efficiency in IT operations, and ~45% to ~65% reduction in FTEs, in typical deployments of Contelli IT Management Engine.

Although Contelli’s brand name may be a new one in the market, the platform has already achieved success. For a leading managed network services provider with 450k network devices under management, Contelli software provided the client with a 25% improvement in average handle time for open ticket calls, a 22% improvement in case closure rate, and, perhaps most importantly, a 100% success rate in case audits performed on work Contelli automated.

Three factors make Contelli an appealing offering for organizations seeking to reduce their network management costs:

  • It touches a broad range of KPIs. Network optimization isn’t always realized by identifying a few significant sources of cost savings and quality improvement potential; often, the task involves incremental improvement of multiple KPIs, from throughput and traffic efficiency to asset provisioning speed, to support ticket resolution turnaround cycle. Contelli’s position within the network management stack enables the product to offer a broad array of improvements in KPIs across multiple task areas
  • It learns continuously from network data. Automating a fluid process is among the steepest challenges in intelligent automation today. As variables change within the task area to be automated, the RPA platform of choice must not only be able to adapt on the fly, but learn entirely new sets of events and exceptions as topologies and assets evolve. Contelli’s development team has invested considerable time and resources in the product’s machine learning layer to enable dynamic network management automation
  • It is a focus area for CSS’ Innovation Labs. Contelli is a mature offering today, but CSS has significant plans to improve and upgrade the product’s machine learning capabilities in the company’s Innovation Labs, an R&D environment for continuous improvement of the platform. CEO Manish Tandon has circled Innovation Labs in red as a key strategic plank for the company’s evolution, and Contelli is slated for considerable time ‘up on the lift.’

Contelli isn’t a ‘one stop shop’ for front- and back-office enterprise automation, but for organizations seeking to self-fund a larger-scale RPA initiative with a broad slate of KPI improvements in a critical business task area, it’s an appealing choice for network management administrators. 

]]>
<![CDATA[Intelligent Automation Summit Takeaways: Four Alternative Gain Frameworks for RPA]]>

 

At the Intelligent Automation (IA) event in New Orleans, December 6-8, snow in the Big Easy air was not the only surprise. As expected, there was plenty of technological innovation on show in the exhibition hall, but the event also played host to some energized discussions on human-centric gains to be realized from RPA implementation – suggesting that we are indeed moving into the next phase of considering automation holistically in the enterprise.

Specifically, many presentations and conversations shared a theme of human enablement within the enterprise – positioning the organization for greater long-term success, rather than focusing on the short-term fiscal gains of reductions in force and reduced cost to serve specific processes. Here are four automation gain frameworks I took away from the event that are focused on areas other than raw FTE reduction.

Automation as a disruption buffer

‘Disrupt or be disrupted’ has become a mantra for many change management executives across industries, and it was invoked numerous times during the IA event in relation to automation’s role as a buffer to disruptive change – in both directions. An automated workforce can quickly scale up (or down) as needed, without costly and time-consuming facility management and workforce rationalization tasks. While there was some discussion regarding the downside containment role of RPA, far more participants at the event were looking to RPA as a tool to effectively manage explosive growth in their sectors.

Automation as a ‘hazmat bot’

The idea of using bots to handle sensitive processes and data emerged as a strong theme for the near-term RPA sector roadmap. Where bots were once less trusted with sensitive, ‘low-touch’ environment data in highly-regulated industries like BFSI and healthcare, the dialog is beginning to turn in favor of sending bots, rather than humans, to touch and manipulate that data.

The rationale is sound: bots can be coded with very narrowly-defined rights and credentials, self-document their own work without exception, and produce their own audit trails. Expect to see this trend gain steam in 2018 and beyond. ‘We send bots into nuclear reactors and onto other planets,’ one attendee told me. ‘We treat the data core in card issuance with no less of a hazmat perspective – where we can minimize human contact, we will, for everyone's benefit.'
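
A minimal sketch of those two properties – an explicit allow-list of permitted actions and an audit entry for every attempt – might look like the following; it is illustrative only, not any vendor’s implementation, and all identifiers are hypothetical.

import json
from datetime import datetime, timezone

class ScopedBot:
    """A bot wrapper with an explicit allow-list and an append-only audit trail."""

    def __init__(self, bot_id, allowed_actions, audit_path):
        self.bot_id = bot_id
        self.allowed = set(allowed_actions)
        self.audit_path = audit_path

    def perform(self, action, record_id):
        permitted = action in self.allowed
        entry = {
            "bot": self.bot_id,
            "action": action,
            "record": record_id,
            "permitted": permitted,
            "at": datetime.now(timezone.utc).isoformat(),
        }
        with open(self.audit_path, "a") as f:  # every attempt is logged, allowed or not
            f.write(json.dumps(entry) + "\n")
        if not permitted:
            raise PermissionError(f"{self.bot_id} may not perform {action}")
        # ...the narrowly-scoped work on the sensitive record would happen here...

bot = ScopedBot("card-core-bot-01", {"read_balance", "update_address"}, "audit.log")
bot.perform("read_balance", "ACCT-0017")   # logged and allowed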

Automation as a workflow stress diagnostic

The very process of automating workflows within the organization produces a wealth of usable data, and nowhere is that more evident than in analyzing those workflows for exception management stress points. In a given workflow, there are usually clearly defined, straightforward task components, and others that produce a higher-than-average volume of exceptions. By mapping these workflows and using them to understand similar tasks in other areas of the organization, companies can leverage automation data to identify the phases of a workflow that are creating exception management stress for employees, and add support via process redesign, digitization, or assisted automation.
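
In practice, the diagnostic can be as simple as computing an exception rate per workflow step from bot run logs and flagging the steps that sit above the average. The step names and counts below are illustrative assumptions, not client data.

# step -> (executions, exceptions), aggregated from automation run logs
runs = {
    "validate_invoice": (10_000, 120),
    "match_po":         (10_000, 980),
    "post_to_erp":      (9_800, 60),
    "notify_supplier":  (9_750, 400),
}

rates = {step: exc / total for step, (total, exc) in runs.items()}
mean_rate = sum(rates.values()) / len(rates)

for step, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
    flag = "  <-- exception stress point" if rate > mean_rate else ""
    print(f"{step:<18} {rate:6.2%}{flag}")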

Automation as human capital churn ‘coolant’

Related to the previous point is the idea that RPA is beginning to serve as a very real source of ‘coolant’ for burnout-prone repetitive task areas in the organization by continuously separating work into automation-relevant and human-relevant components. Eliminating the most burnout-inducing task stages from the human workday reduces the propensity for turnover and the total cost to the organization of managing the human side of the workforce.

Summary

Productivity, quality, and fiscal gains are often the first three topics of conversation when organizations discuss launching an RPA initiative. But automation has much more to offer, not only to the organizational bottom line, but to the human employees in the enterprise as well. As this sector’s technology offerings evolve and mature, so too do the use cases and benefit frameworks within customer organizations.

]]>
<![CDATA[In RPA Deployment, Slow Down... To Go Faster]]>

 

RPA software offers users the tantalizing possibility of being able to simply 'hit record and go' at the beginning of an enterprise automation initiative. But organizations that are seeing the greatest returns are slowing the initial process down, and framing their initiatives as they would treat any major technology migration.

At UiPath’s recent User Summit in New York City, one of the hottest topics was the right pace of RPA implementation, with UiPath’s customer and partner panels devoting a considerable amount of time to the topic. And the message was clear: RPA is a technology that encourages an implementation rate faster than the customer might want to sign up for.

That very idea is a strange one for most veteran IT and business executives, who are used to IT project implementations going slower than expected, with fiscal returns further in the future than they might have hoped. So when a technology like RPA does come along that promises to enable users to ‘hit record and go’, why shouldn’t beleaguered line of business heads take those promises at face value and get moving with automation today? After all, automation is often part of a larger digital transformation initiative, with expectations that projects will be self-funding through savings. Shouldn’t technologies, like RPA, that generate material cost reductions be implemented as quickly as possible?

It’s a fair question. But there are four simple reasons why RPA projects should still be managed in a stepwise fashion, like any other IT or business project:

  • Technical debt mounts quickly in too-quick RPA implementations. The ‘hit record and go’ philosophy might offer some minimal return in a short period of time, but federating the automation creation process means that multiple users often create similar automations for similar tasks, wasting time and resources later in consolidating different versions of the same robot down to a single bot. In addition, individual users often create related-task bots based on their original automation scripts, multiplying the task of bot consolidation later. Often, organizations find that they have to start over completely, and only then do they undertake a more formal approach
  • Installing RPA through a traditional project framework brings stakeholders together. Automation is a technology that has the potential to bring IT and business stakeholders together in an enterprise service delivery partnership – or drive them apart with turf battles and finger-pointing. Establishing rules up front for which business units should be involved in automation design, which in automation coding, which in automation governance, and which in automation innovation establishes ground rules that all parties involved can respect and buy into for the long term, avoiding larger-scale conflict that can emerge when the process is entered into too quickly up front
  • Designing for scale demands both innovation and centralization. As automation demand scales both in terms of breadth of services within the organization and the number of workers involved, the need for centralization of automation design and deployment increases commensurately. Innovation can actually proceed faster in many organizations being managed from a CoE or automation ‘lighthouse’ than through trial and error at the desktop level. Add in the additional demands on automation systems that result from global organizations demanding localized automations and in-language service, and that scale factor becomes a critical component in achieving peak fiscal return from an RPA initiative
  • Most RPA providers rely on integration partners for ‘right-speed’ deployment and support. Across the RPA sector, strong partnerships have evolved between RPA software developers and major integrators and consulting service providers, and for good reason – the latter bring experience in change management, process design, and implementation at scale to the former’s technological innovations. This has quickly become a proven combination, and one that is returning significant fiscal and operational value to enterprise-scale organizations. Short-circuiting that value return chain by cutting partner perspective and capability out of the equation might again save some dollars and time in the short run, but will end up being more costly as RPA is scaled up.

RPA presents IT and business leaders with an alluring combination of immediacy of access, significant potential fiscal returns, and low to non-existent stack requirements on deployment. Organizations that have jumped into the deep end of enterprise automation from the ‘hit record and go’ perspective might see some immediate fiscal returns, but ultimately, they are selling short the full promise of professionally-managed automation projects executed in partnership between lines of business and IT. Providers like UiPath that are emphasizing speeding up implementation are doing so with a structured framework in mind – so that once the process is designed for scale, and implementation rules and procedures are put in place, the actual software component of the solution can proceed into deployment as quickly as possible.

But in the end, a few additional weeks or even months spent in up-front work can better enable enterprise-level organizations to achieve their peak automation return. Moreover, this approach saves costly rework and redesign stages that inevitably stretch a ‘hit record and go’ implementation out to the same project timeline, or often much longer, than a more structured approach. As strange as it may sound, the best practices in RPA deployment involve slowing down… in order to go faster. 

]]>
<![CDATA[Nvidia Draws on Gaming Culture to Compete for AI Chip Leadership]]>

 

Nvidia faces stiff new competition for the leadership position in the AI processing chip market. But the firm has a significant competitive advantage: a culture of innovation and production efficiency that was developed to address the demanding needs of a wholly different market.

Intel and Google have been making waves in the AI processing chip market, the former with the acquisitions of Nervana Systems and Mobileye, the latter with the new Tensor Processing Unit (TPU) announcement. Both are moves intended to compete more directly with Nvidia in the burgeoning market for AI processing chips.

James Wang of investment firm ARK recently set forth his long-term bet on the industry – and it favors Nvidia. Wang posits that products like TPU will be less efficient than Nvidia GPUs for the foreseeable future, arguing that “…until TPUs demonstrate an unambiguous lead over GPUs in independent tests, Nvidia should continue to dominate the deep-learning data center.”

Wang is right, but his opinion may not go far enough in explaining why Nvidia should enjoy a sustainable advantage over other relative newcomers, despite their resources and experience in chipmaking. That advantage, by the way, doesn’t have a thing to do with Google’s chip fabrication expertise, or Intel’s understanding of the needs of the AI market. It’s a deeper factor that’s seated firmly in Nvidia’s culture.

Cutting-edge engineering & savvy pricing: key strengths forged in the gaming cauldron

By the time 2017 dawned, Nvidia owned just over three-quarters of the graphics card segment (76.7%), compared with main competitor AMD’s one-quarter (23.2%). But that wasn’t always the case. In fact, for much of the past decade, Nvidia held an uncomfortable leadership position in the marketplace against AMD, sometimes leading by as few as ten points of market share (2Q10).

During that time, Nvidia understood that a misstep against AMD in bringing new products forth could yield the market leader position, and even send the company into an unrecoverable decline if gamers – a tough audience to say the least – lost confidence in Nvidia’s vision.

As such, Nvidia learned many of the principles of design thinking the hard way. They learned to fail fast, to find new segments in the market and exploit them – as they did with the GTX 970, a product that stunned the marketplace by being priced underneath its predecessor at launch – and to take and hold ground with innovation and rapid-cycle development. More importantly, they learned how to demonstrate value to a gamer community that wanted to buy long-term performance security when it was time for a hardware refresh. In short, they learned to understand the wants and needs of an extraordinarily demanding consumer public, in the form of gamers, and relentlessly squeezed their competition out with a combination of cutting-edge engineering and savvy segment pricing.

Much of the real-world output from that cultural core of relentless engineering improvement is the remarkable pace of platform efficiency that Nvidia has achieved in its GPU chips. The company maintained close ties with leading game publishing houses, and as a result kept clearly in mind what sort of processing speed – as well as heat output and energy draw – cutting-edge games were going to require. At multiple points in time, the standards for supporting new games have meaningfully advanced inside eighteen months. This often mandated that Nvidia turn over a new top-end GPU processing platform on a blistering production timeline.

In response, Nvidia turned to parallel computing, an ideal fit for GPUs, which already offered significantly more cores than their CPU cousins. As it turned out, Nvidia had put itself on the fast track to dominating the AI hardware market, since GPUs are far better suited for applications, like AI, that demand computing tasks work in parallel. In serving one market, Nvidia built a long-term engineering and fabrication roadmap nearly perfectly suited for another.

The competition is hot, but is Nvidia poised to win?

Fast forward to 2017, and some are questioning whether Nvidia is in the fight of its life now with new, aggressive competitors seeking to take away part – or all – of its AI GPU business. While Wang has pushed his chips into the center of the table on Nvidia, others are unconvinced that Nvidia can hold its lead – especially with fifteen other firms actively developing Deep Learning chips. That roster includes such notable brands as Bitmain, a leading manufacturer of Bitcoin mining chips; Cambricon, a startup backed by the Chinese government; and Graphcore, a UK startup that hired a veritable ‘who’s who’ of AI talent. 

There’s no shortage of innovation and talent at these organizations, but hardware is a business that rewards sustained performance improvement over time at steadily reducing cost per incremental GFLOPS (where one GFLOPS is one billion floating-point operations per second). The first of these components is certainly an innovation-centric factor, but the second rewards organizations that have kept pace not only with the march of performance demands, but also with the need to justify hardware refreshes with lower operating costs. Given that this is an area where Nvidia shines, as a function of its cultural evolution under identical circumstances in gaming, the sector’s long-term bet on Nvidia is the correct call.
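
As a footnote, the cost-per-GFLOPS comparison is straightforward to express; the prices and throughput figures below are invented for illustration and are not vendor specifications.

def cost_per_gflops(price_usd, gflops):
    """Dollars per GFLOPS of sustained throughput."""
    return price_usd / gflops

gen_a = cost_per_gflops(price_usd=6_000, gflops=10_000)   # hypothetical current card
gen_b = cost_per_gflops(price_usd=7_500, gflops=20_000)   # hypothetical successor

print(f"Gen A: ${gen_a:.3f}/GFLOPS   Gen B: ${gen_b:.3f}/GFLOPS")
print(f"Cost per GFLOPS falls by {(gen_a - gen_b) / gen_a:.0%} across the refresh")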

 

Dave Mayer is currently working on a major global project evaluating RPA & AI technology. To find out more, contact Guy Saunders.

]]>
<![CDATA[Fast Data: The Smart Will Get Faster... and the Fast Will Get Smarter]]>

 

Fast Data is the emerging hot topic of discussion for business leaders seeking to get ahead of the next wave of data utilization. But Fast Data isn't just an evolution of Big Data; it's a market force unto itself, one that is asking more of established and start-up vendors alike in both traditional DBMS and AI.

I spent a (surprisingly snowy) morning this week talking with AI and Big Data thought leaders at the Global Data Summit 2017 in Colorado. While there’s no shortage of topics to hold their current interest, none was a higher business priority than solving the challenge of managing Fast Data through the application of AI. The consensus is certainly that the organizations that can best address this challenge will also be those best positioned to compete and win overall. But how best to get their arms around this opportunity and move forward effectively?

First, it's important to distinguish between the challenges of leveraging Big Data and Fast Data. Big Data is generally data at rest; it's explored at (relative) leisure, and doesn’t change so quickly or accumulate so rapidly that offline analytics become impossible. AI has no shortage of applications in Big Data, but in that environment, it's more the ability of an AI platform to manage complexity and work at scale that offers value.

Fast Data, by contrast, accumulates quickly and can change substantively within the course of a day or even an hour. Think adtech here, or online gaming, or vendor pricing with commodity costs as an input; vast amounts of data need to be ingested, analyzed, and understood by the second in order to secure the right ad placement at peak value, or to manage complex MMO games, or to ensure that pricing continuously secures competitive advantage at acceptable margin.

Fast Data becomes Big Data quickly, just by nature of its accumulation rate, and while it's often valuable to query the Big Data that Fast Data becomes to understand trends and cyclicality, Fast Data will always yield its peak value at the millisecond level. It’s the freshest layer that offers the most insight. The Big Data value proposition to retailers, for instance, is looking for cyclicality of demand and regional demand preferences over time; the Fast Data value proposition is understanding the products a shopper is looking at right now and making real-time recommendations for, say, footwear and accessories to match. AI can accomplish both tasks, but often needs to be set about different tasks – with different priorities and ground truths – to succeed. The implications for every phase of the organizational data analysis and workflow management platform – from MDM and data hygiene to machine learning and AI application – are immense.
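
One way to picture the distinction is the same event stream answered two ways: a roll-up over the full history for the Big Data questions, and a sliding window that only the freshest events influence for the Fast Data ones. The sketch below is illustrative only; the window length, products, and field names are assumptions.

from collections import Counter, deque
import time

WINDOW_SECONDS = 60
archive = []       # full history: the "Big Data" store, analyzed at relative leisure
window = deque()   # the last minute only: the "Fast Data" view

def ingest(product, ts=None):
    ts = time.time() if ts is None else ts
    archive.append((ts, product))
    window.append((ts, product))
    cutoff = ts - WINDOW_SECONDS
    while window and window[0][0] < cutoff:   # expire anything older than the window
        window.popleft()

def batch_top(n=3):
    """Big Data question: what sells over the whole history?"""
    return Counter(p for _, p in archive).most_common(n)

def fast_top(n=3):
    """Fast Data question: what is the shopper looking at right now?"""
    return Counter(p for _, p in window).most_common(n)

for product in ["boots", "boots", "scarf", "boots", "gloves"]:
    ingest(product)
print("batch view:", batch_top())
print("fast view: ", fast_top())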

In response, expect to start seeing considerably more focus from major AI platform vendors not just on depth of understanding by their products, but speed of reaction as well. Organizations big and small in the traditional data sector, from Oracle to VoltDB, are developing and marketing smarter Fast Data solutions, while AI leaders – like IBM and Wipro – are building capabilities for faster data management within their AI platforms.

Servicing this rapidly-growing need for Fast Data management will be a convergent effort: the smart will get faster… and the fast will get smarter.

 

Dave Mayer is a Senior Analyst responsible for NelsonHall's RPA & Cognitive Services research program, covering the areas of robotic process automation (RPA), artificial intelligence, cognitive business, and machine learning. He is currently working on a major global project evaluating RPA & AI technology. To find out more about the project, contact Dave Mayer or Guy Saunders.

]]>
<![CDATA[Application of RPA & AI Technologies in Learning BPS]]>

 

My colleague, Gary Bragar, recently discussed RPA and AI initiatives in HR, including payroll, recruiting, and learning. Within learning BPS, the majority of RPA investments have been made at a basic level within learning administration, specifically around training scheduling. For example, it previously took ~40 FTEs to manage the entire scheduling process for ~1k classrooms, including identifying classrooms based on availability, identifying onsite facilitators for training days, sending notifications, etc. Through RPA, the same workload can be completed in 15 minutes.  

Vendors such as Raytheon Professional Services (RPS) and IBM, however, have used more advanced applications of RPA and AI throughout the learning lifecycle. IBM, for example, is currently expanding RPA to the design and development of learning content via its Cognitive Content Collator (C3). IBM is leveraging Watson to interpret structured and unstructured data to drastically reduce the number of man hours spent annually on tagging and chunking content and then matching it with curriculum, competence, and goals. Specifically, it takes ~50k man hours to tag, chunk, curate, and map structured courses for ~10k hours of learning content; with IBM’s C3, these activities are completed in 55 hours.

With respect to AI and cognitive, IBM has launched ‘Personalized Learning,’ which offers a consumer-grade experience for learners that provides recommendations to employees based on job role, business group, skill set, and personal learning history to encourage continuous employee development and skill growth. The experience includes ‘content channels’ that support a variety of needs and interests to facilitate simpler browsing, as well as a five-star rating system, and will include virtual job coaches that pull content for an individual to help them develop certain skills.

While organizational interest in RPA and AI technologies is high, overall adoption rates for these technologies in learning BPS have been low for two reasons. First, RPA requires investment by organizations, which is often problematic since a company’s learning budget is typically limited. In addition, RPA requires that an organization expose its technology and data to the service vendor, which many are hesitant to do, since learning technology relationships are often separated from service relationships.

Current adopters of RPA in learning BPS tend to be from heavily regulated industries, including financial services, healthcare/pharma/life sciences, oil and gas, and automobile manufacturing. These organizations are realizing a significant reduction in training resources, which is creating more time for value-added activities.

Over the next year, adoption rates for RPA within learning BPS will increase and still be applied mainly to learning administration services. To be successful, vendors will not only have to demonstrate the business case, expected ROI, and previous successful deployments of RPA, but will also need to have a consultative partnership in place within the client organization.

]]>