
Accenture's Teach & Test Methodology for AI Testing

In the past three years, testing service vendors have been looking at how to adopt more automation and apply AI to testing. Accenture, meanwhile, has been approaching things from a different angle, looking at how to test AI software itself rather than applying AI to the delivery of testing services.

Here I look briefly at Accenture's approach to AI testing, which uses a two-phase methodology called Teach and Test.

Teach phase: Identifying biased data

The broad principle of the Teach phase is to make the AI learn, iteratively, to produce correct output, based on an ML approach. Accenture highlights that before proceeding to this phase, a key priority is data validation, in particular making sure the data used for teaching is not biased, as this would, in turn, introduce bias into the AI outcomes. One example is text transcripts from contact centers, which may well include emotional elements or preconceived ideas. Another is a claims processing operation, where there may be bias in terms of gender and age.

To prevent potential bias, Accenture uses an ML approach for clustering terms and words in the training data and identifying which data sets are biased based on those word clusters.
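
As an illustration of what such term clustering might look like (a minimal sketch, not Accenture's actual implementation; the sensitive-term list, cluster count, and threshold are assumptions), the following Python snippet clusters training documents by TF-IDF and flags clusters whose top terms are dominated by sensitive vocabulary:

    # Illustrative sketch only: the sensitive-term list and threshold are
    # hypothetical, not part of Accenture's methodology.
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    SENSITIVE_TERMS = {"he", "she", "male", "female", "elderly", "young"}   # assumed example list

    def flag_biased_clusters(documents, n_clusters=5, threshold=0.3):
        """Cluster documents by TF-IDF and flag clusters whose top terms are
        dominated by sensitive vocabulary."""
        vectorizer = TfidfVectorizer()
        X = vectorizer.fit_transform(documents)
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
        terms = vectorizer.get_feature_names_out()

        flagged = []
        for i, centroid in enumerate(km.cluster_centers_):
            top_terms = [terms[j] for j in centroid.argsort()[::-1][:10]]
            sensitive_share = sum(t in SENSITIVE_TERMS for t in top_terms) / len(top_terms)
            if sensitive_share >= threshold:
                flagged.append((i, top_terms))
        return km.labels_, flagged

A data set whose records fall mainly into flagged clusters would then be a candidate for removal or neutralization in the curation step.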

Curating the data also has the benefit of bringing a process approach to ML: processes that can then be communicated to regulatory authorities to demonstrate how and why the data was cleaned.

One challenge in de-biasing data is that it reduces the amount of data that can be used for training. So, Accenture differentiates between which parts of a data set need to be removed and which parts can be neutralized (i.e. have some of the bias removed). Accenture agrees with the client on which bias elements need to be removed and which ones can remain.
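
To make the removal/neutralization distinction concrete, here is a minimal sketch; the term lists, replacement rules, and sample records are invented for illustration and are not Accenture's rules:

    REMOVE_TERMS = {"ssn", "credit_score"}     # example: records containing these are dropped entirely
    NEUTRALIZE_MAP = {                         # example: terms rewritten so the record can be kept
        "he": "they", "she": "they",
        "his": "their", "her": "their",
        "husband": "spouse", "wife": "spouse",
    }

    def curate_record(text):
        """Return None if the record must be removed, otherwise a neutralized copy."""
        tokens = text.lower().split()
        if any(t in REMOVE_TERMS for t in tokens):
            return None                        # removal: too sensitive/biased to keep
        return " ".join(NEUTRALIZE_MAP.get(t, t) for t in tokens)

    records = ["She said her credit_score was low", "He asked about his claim status"]
    curated = [c for c in (curate_record(r) for r in records) if c is not None]
    print(curated)                             # -> ['they asked about their claim status']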

Test phase: Identifying defects

During the Test phase, Accenture's priority is AI model scoring and evaluation. The idea here is that, even if Accenture cannot predict what the AI output for a single test input will be, it can still check the outputs across many inputs by taking a statistical approach.

This approach helps to identify the AI outputs that are materially different from the majority. If the AI model has defects that need to be fixed, Accenture will enhance the model and re-train it.
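
As a rough sketch of what such statistical scoring could look like (a simple z-score outlier check on a numeric model output; the model, threshold, and example below are assumptions, not Accenture's published method):

    import statistics

    def flag_outlier_outputs(model, inputs, z_threshold=3.0):
        """Run the model over many inputs and flag outputs far from the population mean."""
        outputs = [model(x) for x in inputs]
        mean = statistics.mean(outputs)
        stdev = statistics.pstdev(outputs) or 1e-9    # guard against zero deviation
        return [(x, y) for x, y in zip(inputs, outputs)
                if abs(y - mean) / stdev > z_threshold]

    # Example: a scoring model that misbehaves on a single input
    suspects = flag_outlier_outputs(lambda x: 0.5 if x != 42 else 99.0, range(100))
    print(suspects)                               # -> [(42, 99.0)]

Outputs flagged this way point to candidate defects; fixing them feeds back into re-training the model.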

A first step for AI testing

Accenture formally introduced the Teach and Test methodology in Q1, and says it already has several AI testing projects in progress, an example being a virtual agent for the credit card processing activities of a financial services firm.

With Teach and Test, Accenture is providing a first step in the market for AI testing. And while the story of AI testing has yet to be written, two things are already clear:

First, manual testing will only be a marginal part of AI testing, as AI will require a scale of testing that manual approaches cannot provide.

Second, with AI use cases such as virtual agents, the deterministic approach of test cases and test scripts is no longer relevant. In all likelihood, current testing software tools used in functional testing will not find their way into AI testing. This implies that the role of testers will continue to evolve, from functional testing towards technical and specialized expertise.
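
To illustrate the shift away from deterministic test scripts (assuming a virtual agent that returns free-text answers; the agent, the judge function, and the thresholds below are hypothetical), compare a script-style exact-match assertion with a statistical acceptance check:

    # Deterministic, script-style check: only meaningful when the exact output is known in advance
    def test_exact_answer(agent):
        assert agent("What is my card limit?") == "Your limit is $5,000."

    # Statistical check: accept the model if answers meet a quality bar over a large sample,
    # without pinning any single exact response (judge() is a hypothetical quality scorer returning 0 or 1)
    def test_statistical_quality(agent, prompts, judge, pass_rate=0.95):
        scores = [judge(p, agent(p)) for p in prompts]
        assert sum(scores) / len(scores) >= pass_rate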
