Navigating AI-Based Quality Assurance Automation with Infosys

 

We recently talked to Infosys Validation Solutions (IVS), Infosys’ quality assurance unit, and discussed its continued investment in AI to automate testing and validate chatbots and AI models.

AI-Based Analytics from Test Case and Defect Data

The primary AI use cases in QA are around analytics: QA and development produce a vast amount of data that can be used to guide QA activities. For instance, enterprises have vast numbers of test cases that often overlap or are duplicates. AI, through NLP, goes through test cases, identifies keywords, and highlights those test cases that are highly similar and probably redundant. This activity is called test optimization and can help remove between 3% and 5% of test cases. This may not seem like much, but large enterprises have very significant repositories of test cases (Infosys has several clients with a hundred thousand test cases). Also, test cases are the basis for test scripts, which test execution software uses. More importantly, these test cases and test scripts need to be maintained, often manually. Reducing the number of test cases therefore has significant cost implications.
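
As a minimal sketch of this keyword-similarity idea (not a description of Infosys's tooling), test case descriptions can be vectorized with TF-IDF and compared pairwise; the test cases and the similarity threshold below are purely illustrative.

```python
# Minimal sketch of NLP-based test case de-duplication: vectorize test case
# descriptions with TF-IDF and flag highly similar pairs as review candidates.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

test_cases = [
    "Verify login succeeds with a valid username and password",
    "Verify that login succeeds when a valid username and password are entered",
    "Verify an error message is shown for an invalid password",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(test_cases)
similarity = cosine_similarity(vectors)

THRESHOLD = 0.6  # tuning this cut-off is where the real effort lies
for i in range(len(test_cases)):
    for j in range(i + 1, len(test_cases)):
        if similarity[i, j] >= THRESHOLD:
            print(f"Possible duplicates: case {i} and case {j} "
                  f"(similarity {similarity[i, j]:.2f})")
```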

The analysis of test cases and defects brings many other use cases. Correlating past test defects with code changes is also helpful: Infosys can predict where to test based on the code changes in a new software release.
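
A hedged illustration of this correlation idea: combine a defect history per module with the files changed in a new release to rank where to test first. The modules, counts, and scoring rule below are invented for the sketch and are not Infosys's model.

```python
# Illustrative risk-based test selection: rank modules by combining historical
# defect density with the size of the change in the new release.
from collections import Counter

# Hypothetical defect history: module -> number of past defects
defect_history = Counter({"payments": 42, "login": 7, "reporting": 3})

# Hypothetical diff summary for the new release: changed files per module
changed_files = {"payments": 12, "reporting": 1}

# Simple risk score: past defects weighted by the size of the current change
risk = {
    module: defect_history[module] * changes
    for module, changes in changed_files.items()
}

for module, score in sorted(risk.items(), key=lambda kv: kv[1], reverse=True):
    print(f"Test first: {module} (risk score {score})")
```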

AI Brings Quick Wins that Are Valuable for Agile Projects

There are many other data analysis quick-win opportunities, and Infosys continues to invest in better testing. One example of recent IP and services is test coverage: for websites and web applications, Infosys relies on URLs to identify the transaction paths that need to be tested and compares them with existing test cases. Another example, for a U.S. bank, is going through execution anomalies from the test execution tool and putting them into categories, providing an early step in root cause analysis. A rising use case is deriving test cases by comparing agile user stories with existing test cases.
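
The URL-based coverage check can be pictured as a simple set comparison between the transaction paths observed on the site and those referenced in test cases. The URLs below are placeholders; real tooling would crawl the application and parse the test repository.

```python
# Sketch of the URL-based coverage gap check: which observed transaction paths
# are not referenced by any test case?
crawled_urls = {
    "/checkout/cart",
    "/checkout/payment",
    "/account/settings",
}
urls_in_test_cases = {
    "/checkout/cart",
    "/account/settings",
}

uncovered = crawled_urls - urls_in_test_cases
for url in sorted(uncovered):
    print(f"No test case covers: {url}")
```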

We think the potential for AI-based analytics and resulting automation is without limits. NelsonHall expects a surge in such AI-based analytics and NLP, which will bring an incremental automation step.

Starting to Automate Human Tasks Outside of Test Execution

RPA also has a role to play in QA's incremental automation. Outside of test script execution, functional testing still involves manual tasks, and Infosys has developed a repository of ~500 testing-specific RPA bots to automate them; one example is a bot for setting up alerts on test execution monitoring dashboards, another is a bot for loading test cases into test management tools such as JIRA.
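
As a rough sketch of the "load test cases into a test management tool" idea, JIRA's public REST API accepts new issues via POST /rest/api/2/issue. The URL, credentials, project key, and issue type below are placeholders, not Infosys's setup or the contents of its bots.

```python
# Hedged sketch: push a list of test cases into JIRA as issues via its REST API.
import requests
from requests.auth import HTTPBasicAuth

JIRA_URL = "https://example.atlassian.net/rest/api/2/issue"  # placeholder instance
AUTH = HTTPBasicAuth("user@example.com", "api-token")        # placeholder credentials

test_cases = [
    {"summary": "Verify login with valid credentials",
     "description": "Steps: open login page, enter valid credentials, submit."},
]

for case in test_cases:
    payload = {
        "fields": {
            "project": {"key": "QA"},        # placeholder project key
            "issuetype": {"name": "Test"},   # assumes a 'Test' issue type exists
            "summary": case["summary"],
            "description": case["description"],
        }
    }
    response = requests.post(JIRA_URL, json=payload, auth=AUTH)
    response.raise_for_status()
    print("Created", response.json()["key"])
```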

With the predominance of agile projects, RPA can also be valuable for highly repeatable tasks. However, RPA raises another issue: the maintainability of RPA scripts and how frequently they need to be updated. We expect Infosys to share its experience in this important matter.

Automation Step Changes Now in Sight

AI is also expanding its use cases from incremental automation to significant step changes. An example is Infosys using object recognition to detect changes in a new release and automatically update the relevant test scripts. In other words, Infosys will identify whether an application release has a screen change, such as a field or button changing place, and will update the script accordingly.
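
Infosys's object-recognition IP is not public, but the underlying "did the screen change, and where?" signal that such tooling builds on can be sketched with a simple pixel diff of two release screenshots using Pillow; the file names below are placeholders.

```python
# Minimal screen-change detection between two releases using a pixel diff.
# This is a simplified stand-in, not Infosys's object-recognition approach.
from PIL import Image, ImageChops

# Assumes both screenshots exist and have the same size and mode.
baseline = Image.open("login_v1.png").convert("RGB")    # placeholder screenshot
candidate = Image.open("login_v2.png").convert("RGB")   # placeholder screenshot

diff = ImageChops.difference(baseline, candidate)
bbox = diff.getbbox()  # bounding box of the changed region, or None if identical

if bbox is None:
    print("No visual change detected; test scripts can stay as-is.")
else:
    print(f"Screen changed within region {bbox}; flag the related test scripts.")
```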

There is more to come, we think, with web crawlers and next-gen record and playback testing tools. So far, client adoption is only just emerging, but this space is inspiring. Potentially, QA vendors could remove the scripting phase through automated creation or update of test scripts.

Chatbots Are Increasingly Complicated to Test

QA departments are moving out of their comfort zone, using AI systems to test chatbots and AI models.

In principle, chatbots are deterministic systems and can be tested with the pass-or-fail approach that QA tools use. Ask a chatbot a simple question, such as the time or a store's opening hours, and the response is straightforward: it is either right or wrong.
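
A deterministic chatbot check of this kind can be written as an ordinary pass-or-fail test; `ask()` below is a hypothetical client for the chatbot under test, not a real API.

```python
# Pass-or-fail chatbot check in pytest style, assuming a deterministic answer.
def ask(question: str) -> str:
    """Placeholder: call the chatbot under test and return its reply."""
    raise NotImplementedError  # replace with the real chatbot client

def test_opening_hours():
    reply = ask("What are your opening hours?")
    # Deterministic expectation: the answer is either right or wrong.
    assert "9am" in reply and "6pm" in reply
```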

However, the complexity of chatbots has increased. Voice plays a role and drives a lot of utterance training and testing activity to address challenges with language, accents, and domain-specific jargon. Chatbots are also increasingly integrated with hyperscalers and rely on APIs for integration with back-end systems. In addition, Infosys points to the increasing integration of chatbot functionality within AR/VR, which brings another layer of QA complexity and performance discussions. Infosys is taking a systematic approach to chatbot testing and has built several accelerators around voice utterances.

Testing of AI Models Is the Next Step Change Through Synthetic Data

With AI models, QA is moving to another world of complexity. AI models can be non-deterministic, i.e., there is no single known correct answer to a given query; an example is identifying fraudulent insurance claims for an insurance firm.

The traditional QA approach, i.e., checking whether the answer is correct or not, needs reinvention, and Infosys is approaching AI-model QA from several angles. For training and testing purposes, data plays an essential role in the accuracy of data science models. Infosys is creating synthetic data for training models, taking patterns from production data. With this approach, it is addressing the challenge of insufficient data for training the AI model.
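
A minimal sketch of the synthetic-data idea (not Infosys's IP): fit simple distributions to a production-like sample and draw new records from them. The columns and values below are invented for illustration.

```python
# Generate synthetic records by sampling numeric columns from a fitted normal
# distribution and categorical columns from observed frequencies.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Stand-in for a small slice of production data (entirely made up)
production = pd.DataFrame({
    "claim_amount": [1200.0, 430.5, 8900.0, 260.0, 1500.0],
    "claim_type": ["auto", "home", "auto", "health", "auto"],
})

n_synthetic = 1000
type_freq = production["claim_type"].value_counts(normalize=True)

synthetic = pd.DataFrame({
    "claim_amount": rng.normal(production["claim_amount"].mean(),
                               production["claim_amount"].std(),
                               size=n_synthetic),
    "claim_type": rng.choice(type_freq.index.to_numpy(),
                             p=type_freq.to_numpy(),
                             size=n_synthetic),
})

print(synthetic.head())
```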

Another approach that Infosys is taking is statistical: it provides a series of statistical measures to data scientists, who can then decide on the accuracy of the data model.
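
As an illustration of the kind of statistical measures that could be handed to data scientists (the specific metrics are our example, not a list from Infosys), standard classification metrics can be computed with scikit-learn:

```python
# Example evaluation report for a binary classifier, e.g. a fraud-detection model.
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score

# Hypothetical labels: 1 = fraudulent claim, 0 = genuine claim
y_true = [0, 0, 1, 1, 0, 1, 0, 1]
y_pred = [0, 0, 1, 0, 0, 1, 1, 1]

print("Confusion matrix:\n", confusion_matrix(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall:   ", recall_score(y_true, y_pred))
print("F1 score: ", f1_score(y_true, y_pred))
```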

AI model testing is still a work in progress. For instance, training data bias remains a challenge. Also, with QA meeting AI and data science, test engineers are clearly outside their zone of expertise, and Infosys is investing heavily in its UI and in training to make its tools more accessible. The company points to further IP, such as using computer vision to check the quality of scanned documents.

There is much more to come: the potential benefits of AI are limitless.
