NelsonHall: Process Discovery & Mining blog feed
https://research.nelson-hall.com//sourcing-expertise/digital-transformation-technologies-services/process-discovery-mining/?avpage-views=blog

NelsonHall's Process Discovery and Mining program is designed for organizations considering, or actively engaged in, the application of process discovery and mining technologies as part of identifying processes for automation.

<![CDATA[Harnessing Process Mining Data to Transform the NHS]]>

 

In the U.K., the efficiency of the NHS, and particularly the reduction of waiting lists, continues to be a hot political topic. While promises of cash injections are met with skepticism, a recent event in London showcased how data-driven insights could help improve the NHS’ operational efficiency without the need for major financial outlays. As part of its 22-city tour, process mining vendor Celonis presented its vision for transformative applications within the NHS.

How Celonis can deliver NHS efficiencies

The Celonis process mining platform ingests process data from systems of record (for example, CRM systems) and supports organizations in visualizing and analyzing process flows to identify opportunities for process efficiency gains.
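
As an illustration of the kind of event-log analysis this involves, the minimal sketch below builds a directly-follows view of a process from case, activity, and timestamp records. The data, field names, and logic are illustrative assumptions, not Celonis’ actual implementation.

```python
from collections import Counter
from datetime import datetime

# Illustrative event log: each record is (case id, activity, timestamp), the
# minimum that process mining tools typically extract from systems of record.
event_log = [
    ("REF-001", "Referral received",    "2024-01-02 09:00"),
    ("REF-001", "Appointment booked",   "2024-01-03 11:30"),
    ("REF-001", "Reminder sent",        "2024-02-01 18:00"),
    ("REF-001", "Treatment attended",   "2024-02-02 10:00"),
    ("REF-002", "Referral received",    "2024-01-05 14:00"),
    ("REF-002", "Appointment booked",   "2024-01-06 10:00"),
    ("REF-002", "Reminder sent",        "2024-02-10 19:00"),
    ("REF-002", "Appointment canceled", "2024-02-11 08:00"),
]

# Group events by case and order them by timestamp.
cases = {}
for case_id, activity, ts in event_log:
    cases.setdefault(case_id, []).append((datetime.strptime(ts, "%Y-%m-%d %H:%M"), activity))

# Count directly-follows relations, the raw material of a discovered process map.
directly_follows = Counter()
for events in cases.values():
    events.sort()
    for (_, a), (_, b) in zip(events, events[1:]):
        directly_follows[(a, b)] += 1

for (a, b), count in directly_follows.most_common():
    print(f"{a} -> {b}: {count}")
```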

A standout example at the event was the use of Celonis at University Hospitals Coventry and Warwickshire (UHCW). Collaborating with IBM, UHCW implemented Celonis to address inefficiencies and reduce waiting lists. While the hospital was already aware of its problems, Dr Jacob Koris, one of the NHS’ representatives at the event and a fellow of the NHS’ Get It Right First Time initiative, stated that the insights from Celonis provided the data needed to guide process transformation programs. Below are some of the issues that were detected and what the trust has done to address them:

Initial challenges

Challenges included:

  • 6.3% of patients did not attend pre-booked treatments
  • 21% canceled treatments less than five days before the appointment
  • 7% of appointments were canceled by the hospital itself

Data-driven solution

Celonis was deployed, integrating patient booking data to help UHCW understand the underlying reasons why treatment appointments were missed or canceled, as well as to suggest and monitor process improvement plans.

The Celonis platform mapped out the process, which showed patients initially receiving text message reminders only the day before, or even the evening before, treatment. The platform identified these late reminders as a possible root cause of late cancellations by patients. UHCW then ran A/B tests comparing reminders sent four days before scheduled treatments against the original one-day reminders, with Celonis tracking the results.

This change reduced missed appointments from 10% to 4% and increased proactive communication from patients.
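
As a rough illustration of the kind of A/B readout this involves, the sketch below compares no-show rates between the two reminder timings. The underlying booking counts are invented for illustration and only mirror the 10% and 4% rates reported above.

```python
# Hypothetical A/B readout comparing reminder timings. The booking and no-show
# counts are illustrative, not UHCW's actual data; only the resulting rates
# mirror the figures reported above.
variants = {
    "reminder_1_day_before":  {"booked": 5000, "missed": 500},
    "reminder_4_days_before": {"booked": 5000, "missed": 200},
}

for name, counts in variants.items():
    rate = counts["missed"] / counts["booked"]
    print(f"{name}: {rate:.1%} missed ({counts['missed']}/{counts['booked']})")
```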

Impact

The improved appointment management allowed the hospital to reallocate previously vacant slots to other patients.

UHCW reduced its waiting list from 72,000 to 67,000, one of the few trusts to achieve a waiting list reduction in the past year.

UHCW’s success extends beyond reducing missed appointments. For example:

  • Cost savings and efficiency:
    • £1.4 million in annual benefits were realized from short-term interventions
    • 17,000 appointments were released by reducing wastage
    • 700 more patients per week were accommodated without increasing staff numbers
  • Operational improvements:
    • Addressed low-value clinical time use caused by inappropriate referrals
    • Enhanced hospital productivity by reducing high patient call volumes
    • Improved patient experience by decreasing waiting times for treatments.

The broader picture

The success at UHCW has led to the adoption of Celonis by other NHS hospitals, including University Hospital Dorset and Dorset County Hospital. These hospitals are also seeing significant benefits. University Hospital Dorset achieved an annualized benefit of £1.8m, while Dorset County Hospital reported an annualized benefit of £1.1m.

IBM is further supporting these hospitals by deploying watsonx GenAI in PoCs to enhance patient issue coding and automate the validation of clinical letters, reducing the need for human intervention at the hospitals.

Despite these successes, some NHS trusts and hospitals remain cautious due to past experiences with IT projects that promised but failed to deliver significant improvements. However, the tangible benefits realized by UHCW and other hospitals demonstrate the potential of tools like Celonis to not only replicate existing wins but also uncover new efficiencies.

For example, in conversations with IBM at the event, gainshare or other rewards-based contracting emerged as a possible incentive for other hospitals. And at IBM’s recent partner event focused on GenAI, the company highlighted a number of outcome-based contracts with other clients for process transformation using GenAI technologies.

Future use cases discussed by Dr Jacob Koris at the Celonis event include:

  • Reducing the need for multiple hospital visits by scheduling all necessary tests on the same day
  • Coordinating appointments for patients and their dependents to occur within the same appointment window.

Conclusion

The Celonis event in London highlighted how data-driven process improvements can help the NHS tackle long-standing issues such as waiting lists without the need for significant additional funding. And while the savings achieved so far make only a small dent in the overall funding gap, the potential impact of more hospitals applying technologies such as Celonis to more use cases to address inefficiencies is clear.

]]>
<![CDATA[IBM Converging Risk Scores to Optimize Cybersecurity Offering]]>

 

NelsonHall recently attended an IBM Security analyst day in London. This covered recent developments such as IBM’s acquisition of Polar Security on May 16th to support the monitoring of data across hybrid cloud estates, and watsonx developments to support the move away from rule-based security. However, a big focus of the event was the subject of risk.

Over the last few years, the conversation around cybersecurity has shifted toward risk: highlighting the potential holes within a resiliency posture, for example, and asking questions such as ‘if ransomware were to shut down operations for six hours, what would the implications be for the business?’

IBM and a number of other providers have therefore been offering ‘risk scores’ related to aspects of an organization’s IT estate. These include risk scores from IBM’s Risk Quantification Service; from IBM Guardium, for risk related to the organization’s data, including its compliance with data security regulations; from IBM Verify, for risk related to particular users; and from the recently acquired Randori, the company’s attack surface management solution.

Randori, acquired in June 2022, is a prime example of IBM’s strengths in understanding and reducing risks. Its two offerings, Randori Recon and Randori Attack, aim to discover how organizations are exposed to attackers and provide continuous automated red teaming of the organization’s assets.

After discovered assets, shadow IT, and misconfigurations are run through Randori Attack’s red team playbooks, clients are presented with the risks through a patented ‘Target Temptation’ model. In this way, organizations can prioritize the targets that are most susceptible to attack and monitor the change in the level of risk on an ongoing basis.

IBM’s Risk Quantification Service uses the NIST-certified FAIR model, which decomposes risk into quantifiable components: the frequency at which an event is expected and the magnitude of the loss expected per event. In this manner, the service performs a top-level assessment of the client’s controls and vulnerabilities, makes assumptions (such as the amount of sensitive information stolen during a breach, based on prior examples), and produces a probability of loss and the costs related to that loss, including fines and judgments from regulatory bodies.
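
As a worked illustration of that decomposition, the sketch below estimates an annualized loss expectancy by simulating event frequency and per-event loss magnitude. It is a minimal FAIR-style Monte Carlo, not IBM’s actual service, and every parameter value is an assumption chosen for illustration.

```python
import numpy as np

# Minimal FAIR-style sketch: risk = loss event frequency x loss magnitude.
# All parameters below are illustrative assumptions, not IBM's figures.
rng = np.random.default_rng(0)

loss_event_frequency = 0.4      # expected loss events per year (assumed)
magnitude_median = 450_000      # median cost per event, GBP (assumed)
magnitude_sigma = 1.0           # spread of the lognormal cost distribution (assumed)

n_years = 100_000
events_per_year = rng.poisson(loss_event_frequency, size=n_years)

# Sum the simulated per-event losses within each simulated year.
annual_loss = np.array([
    rng.lognormal(np.log(magnitude_median), magnitude_sigma, size=k).sum()
    for k in events_per_year
])

print(f"Annualized loss expectancy: £{annual_loss.mean():,.0f}")
print(f"95th percentile annual loss: £{np.percentile(annual_loss, 95):,.0f}")
```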

This is not the first time we have seen this model, and a similar approach has been taken by other vendors offering cyber resiliency services. One such vendor is Unisys, which in 2018 offered its TrustCheck assessment; this used security data and X-Analytics software to analyze the client's cyber risk posture and its associated financial impacts, which were then plotted against the threat likelihood of each event.

TrustCheck was used as a driver for the Unisys cybersecurity business: it related the expected loss to guidance on whether the value of securing the client's environment outweighed the cost of remediating a gap, and it conveyed this information to the C-level.

So what is the difference between IBM’s approach to risk and Unisys’ TrustCheck service?

IBM has been approaching its risk quantification from both ends: a bottom-up measurement of users, data, compliance, and the IT estate using platforms such as Guardium, Verify, and now Randori, and a top-down view within its Risk Quantification Service. At the analyst event in London, there was a clear indication that these risk scores would converge over time to provide a more accurate and consistent view of an organization’s risk: for example, using the outputs from Randori Recon to understand the client’s exposure; Guardium and Polar Security to understand what data is being held and where it could travel; and Verify to understand what user access exists. A consistent, accurate view of the client’s resiliency would then be used to drive decision-making.
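
To make the idea concrete, the sketch below shows one speculative way such signals could be normalized and weighted into a single per-asset view. The signal names, weights, and scores are invented for illustration; IBM has not published a scoring formula, so this is not its model.

```python
# Speculative sketch of converging bottom-up risk signals into one per-asset
# score. Signal names, weights, and values are illustrative assumptions only.
WEIGHTS = {"attack_surface": 0.40, "data_exposure": 0.35, "identity": 0.25}

# Normalized (0-1) signals per asset, e.g. from attack surface management,
# data security posture, and identity/access tooling respectively.
assets = {
    "customer-db":    {"attack_surface": 0.7, "data_exposure": 0.9, "identity": 0.4},
    "marketing-site": {"attack_surface": 0.8, "data_exposure": 0.2, "identity": 0.1},
}

for name, signals in assets.items():
    combined = sum(WEIGHTS[k] * signals[k] for k in WEIGHTS)
    print(f"{name}: combined risk score {combined:.2f}")
```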

This convergence of risk scores will not be an immediate development. Randori has just undergone a year of development to integrate its UX into QRadar for a unified experience, and it will next be brought into the IBM Security QRadar suite as part of an Attack Surface Management (ASM) service before a consistent risk score service is complete. Likewise, the acquisition of Polar Security needs time to bed into the data security estate.

NelsonHall does, however, welcome any moves that result in more organizations knowing more about the risks to their business and the associated financial exposure; this lack of visibility has traditionally been a major stumbling block for organizations in understanding what remediation should be undertaken to raise security postures beyond the baseline of compliance requirements.

]]>
<![CDATA[The Future of Process Discovery & Mining: 2023 and Beyond]]>

 

Process discovery and mining platforms, which examine organizations’ process data as part of transformation initiatives, have become an increasingly critical part of process automation and reengineering journeys.

Here I look at what to expect in the process discovery and mining space in 2023 and beyond.

Continuous Monitoring

Traditionally, we have seen centers of excellence (COEs) use process discovery and mining solely for single point-in-time process improvements as part of the transformation journey. Because of this, when an improved process is put in place, the focus (and licenses) for the process discovery and mining suites is moved on to the next project.

In 2023, we predict that these solutions will be used to support more process analysis on an ongoing basis, with licenses applied to already reengineered processes to support KPI monitoring and continuous process improvement. Features are certainly being built into the platforms and pricing models that reinforce this move, such as the ability to use process discovery platforms to train users on parts of processes that cannot be automated, and unlimited usage licenses that are not tied to the number of users or the volume of process data ingested.

In this way, process discovery and mining solutions can provide a real-time view of actual process performance, augmenting business process management (BPM) platforms.
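
A minimal sketch of what such ongoing monitoring could look like is shown below: a reengineered process KPI is compared against the baseline agreed at go-live and flagged when it drifts. The KPI, threshold, and figures are assumptions for illustration, not features of any specific platform.

```python
# Illustrative continuous-monitoring check on an already-reengineered process:
# compare the latest cycle times against the go-live baseline and flag drift.
# Baseline, threshold, and sample data are assumptions for illustration.
BASELINE_CYCLE_TIME_DAYS = 12.0
ALERT_THRESHOLD = 1.15   # alert if average cycle time exceeds baseline by 15%

recent_cycle_times_days = [11.5, 12.2, 13.8, 14.1, 13.9]

current = sum(recent_cycle_times_days) / len(recent_cycle_times_days)
if current > BASELINE_CYCLE_TIME_DAYS * ALERT_THRESHOLD:
    print(f"ALERT: average cycle time {current:.1f}d exceeds baseline {BASELINE_CYCLE_TIME_DAYS:.1f}d")
else:
    print(f"OK: average cycle time {current:.1f}d is within tolerance")
```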

Automation and Low Code Application Development Links

Process discovery and mining have always been great lead-ins for automation, revealing what processes are in place before automating them; however, that connection has mainly been one way, i.e. process discovery and mining platforms sending over the skeletons of a process to automation platforms to build an automation.

In 2023, we see process discovery platforms implementing more functionality in the reverse direction to take automation logs from the automation tools back into the discovered process to track the overall performance of the process on an ongoing basis, whether the steps of the process are automated or performed by a human.

Likewise, when a process cannot be fully automated and requires human effort, automation platforms are implementing low code applications to collect the necessary information. We envisage the process discovery and mining platforms not only building skeletons of the processes for automation, but also generating suggestions for those low code applications.
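
A minimal sketch of the reverse data flow described above follows: automation (bot) run logs are merged with human-performed events into a single event log keyed on case id, so the whole process can be tracked regardless of who, or what, performed each step. All field names and records are illustrative assumptions.

```python
# Illustrative merge of RPA bot run logs with human-performed events into one
# event log keyed on case id. Field names and records are assumptions.
human_events = [
    {"case": "INV-17", "activity": "Invoice received",  "ts": "2024-03-01T09:00", "performer": "human"},
    {"case": "INV-17", "activity": "Exception reviewed", "ts": "2024-03-02T14:10", "performer": "human"},
]
bot_logs = [
    {"case": "INV-17", "activity": "Invoice data extracted", "ts": "2024-03-01T09:05", "performer": "bot"},
    {"case": "INV-17", "activity": "Payment scheduled",      "ts": "2024-03-03T08:00", "performer": "bot"},
]

# ISO-8601 timestamps sort correctly as strings, so a plain sort suffices here.
merged = sorted(human_events + bot_logs, key=lambda e: (e["case"], e["ts"]))
for e in merged:
    print(f'{e["case"]}  {e["ts"]}  {e["activity"]:<24} [{e["performer"]}]')
```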

Digital Process Twins

Usually when we refer to digital twins, we are talking about a digital representation of a physical process enabled by IoT as part of Industry 4.0. However, at the end of last year, we saw one or two vendors moving towards the creation of digital process twins for business operations.

The digital process twin is the culmination of continuous monitoring of both a process and its automation. Using these features, process understanding solutions can be the future of BPM, providing real-time tracking of the performance of a process, and they can enable opportunities like preventative maintenance, leveraging root cause analysis to detect early signs that a process is straying from the target model.
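
The sketch below illustrates the kind of conformance check a digital process twin could run continuously: observed traces are compared against the target model’s allowed transitions and deviations are flagged. The target model and the observed trace are invented for illustration.

```python
# Illustrative conformance check against a target process model: each observed
# transition must be allowed by the model, otherwise it is flagged as a deviation.
# The model and the observed trace are assumptions for illustration.
TARGET_MODEL = {
    "Order received": {"Credit checked"},
    "Credit checked": {"Order approved", "Order rejected"},
    "Order approved": {"Goods shipped"},
}

observed_trace = ["Order received", "Order approved", "Goods shipped"]  # credit check skipped

deviations = [
    (a, b) for a, b in zip(observed_trace, observed_trace[1:])
    if b not in TARGET_MODEL.get(a, set())
]
print("Deviations from target model:", deviations or "none")
```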

Object-Centric Process Mining

Traditional process mining ascribes a single case notion to every step of the process, but this is not the best fit for every process.

For example, in car manufacturing, the production of a car could be held up by the materials for the glass in its windows, but a single case notion cannot capture that link: a car manufacturer will not assign the same case notion to silicon arriving at the factory as to the car that will eventually be fitted with windows made from that silicon consignment.

In object-centric process mining (OCPM), a single case notion is no longer the only linking piece in a process. Instead, the case notion ceases to be the be-all and end-all of the process, and each object, each aspect of the process, is tracked individually, with its own attributes, as part of the whole process.

In the car example, an object-centric process could then relate case numbers from customer issues to the vehicle, to the order the customer placed, to the windscreen, and to the delivery of silicon.

Such OCPM will expand the usefulness of process discovery and mining from fairly simple processes tied to a single case, such as a ticket number on an email, to a more complete view of the process.
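
The sketch below shows one way object-centric event data could be represented and traversed: each event references the objects it touches, and shared objects let a customer ticket be related back through the order, car, and windscreen to the original silicon delivery. The object types, ids, and traversal are illustrative, not any vendor’s data model.

```python
# Illustrative object-centric event data: each event references the objects it
# touches rather than a single case id. Object types and ids are assumptions.
events = [
    {"activity": "Silicon delivered",   "objects": {"delivery": "DLV-88"}},
    {"activity": "Windscreen produced", "objects": {"delivery": "DLV-88", "windscreen": "WS-3021"}},
    {"activity": "Windscreen fitted",   "objects": {"windscreen": "WS-3021", "car": "CAR-551"}},
    {"activity": "Car delivered",       "objects": {"car": "CAR-551", "order": "ORD-204"}},
    {"activity": "Issue raised",        "objects": {"order": "ORD-204", "ticket": "TCK-9"}},
]

def related_objects(events, obj_type, obj_id):
    """Follow shared objects across events to find everything linked to a start object."""
    related, frontier = set(), {(obj_type, obj_id)}
    while frontier:
        related |= frontier
        frontier = {
            pair
            for event in events
            if frontier & set(event["objects"].items())
            for pair in event["objects"].items()
        } - related
    return related

# Relate a customer ticket back to the silicon delivery via order, car, and windscreen.
print(sorted(related_objects(events, "ticket", "TCK-9")))
```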

***

In this quick look at the future of process discovery and mining, we acknowledge that the features described here may not be the core application of these platforms before the end of 2023. Organizations will continue to use existing functionality to target the bulk of legacy processes, applying quick fixes to reduce costs and performing one-off process improvements.

]]>