
Eviden’s Quality Engineering AI Journey


NelsonHall recently talked with Eviden, Atos’ consulting and application services business, about its QE practice, Digital Assurance.

Digital Assurance has 5k quality engineers, 65% of them offshore, reflecting high offshore leverage in North America (a legacy of its Syntel heritage), counterbalanced by Atos’ large European public sector client base. The practice has aligned its service portfolio around high-growth offerings such as testing applications for digital projects, cloud migration testing, testing Salesforce migration projects from Classic to Lightning Experience, and SAP.

Beyond these technologies, Digital Assurance has focused on AI: initially on traditional AI, with ~45 pilots underway, and then on GenAI in 2023-24.

AI/GenAI as priorities

Eviden currently has five primary GenAI use cases relevant to testing being deployed on its GenAI QE Platform:

  • Test strategy generation
  • Ambiguity assessment and scoring
  • Test case creation
  • Test data
  • Test script automation.

One of the demos we attended covered ambiguity assessment and scoring, in which Eviden evaluates the quality of a user story/requirement. Other demos, such as automated test case and test script generation, offered several insights into the current art of the possible.

GenAI quick wins

GenAI provides quick wins that do not require significant ML model training.

An example is assessing the quality of user stories. Commercial LLMs work out of the box and can be used as-is without further training. However, LLMs only perform well if the input data (user stories, in this example) follows best practices, e.g., is sufficiently detailed and has clear acceptance criteria. If the input does not meet these standards, the LLM will reject the user stories.
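As a minimal sketch of what such a quality gate might look like (an illustration, not Eviden's implementation; all function and field names here are assumptions), a rule-based pre-check can catch user stories that would fail LLM assessment, and a structured prompt can then ask the model for an ambiguity score:

```python
# Illustrative sketch of a user-story quality gate before LLM scoring.
# Names and thresholds are assumptions, not Eviden's actual design.

def precheck_user_story(story: dict) -> list[str]:
    """Return a list of issues; an empty list means the story passes."""
    issues = []
    if len(story.get("description", "").split()) < 10:
        issues.append("description too short to assess")
    if not story.get("acceptance_criteria"):
        issues.append("no acceptance criteria defined")
    return issues

def build_scoring_prompt(story: dict) -> str:
    """Assemble a structured prompt asking an LLM to score ambiguity 1-5."""
    criteria = "\n".join(f"- {c}" for c in story["acceptance_criteria"])
    return (
        "Rate the ambiguity of this user story on a 1 (clear) to 5 "
        "(ambiguous) scale, and list any vague terms.\n"
        f"Story: {story['description']}\n"
        f"Acceptance criteria:\n{criteria}"
    )

story = {
    "description": "As a customer, I want to reset my password via email "
                   "so that I can regain access to my account.",
    "acceptance_criteria": ["Reset link expires after 24 hours",
                            "Link is sent only to the registered address"],
}
assert precheck_user_story(story) == []  # story meets the basic bar
prompt = build_scoring_prompt(story)     # ready to send to an LLM
```

A story with a one-line description and no acceptance criteria would be rejected by the pre-check before any LLM call is made, mirroring the behavior described above.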

Prompt engineering rather than data fine-tuning

Eviden is finding that the pretraining provided by the hyperscalers is good enough for most use cases, and is not currently contemplating fine-tuning models on client data.

Eviden sees a need for structured prompt engineering, i.e., providing the LLM with the right instructions, and is building repositories of standard/proven prompts. In addition, Eviden adapts the prompts to the specificities of each application, e.g., table structures and user story patterns. Digital Assurance estimates that adapting prompts to a client's applications takes only a few weeks. This approach is fast and provides quick wins, for instance, around automated test script generation.
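The repository-plus-adaptation pattern could be sketched as follows (a hypothetical illustration; the template text, identifiers, and client-context fields are all assumptions): standard prompts live in a shared repository, and a specialization step injects each client application's table structures and user story patterns, leaving a slot for the individual story to be filled at call time.

```python
# Illustrative sketch of a prompt repository specialized per client
# application. All names and template wording are assumptions.

STANDARD_PROMPTS = {
    "test_case_generation": (
        "Generate test cases for the following user story.\n"
        "User story format: {story_pattern}\n"
        "Relevant data tables: {table_structures}\n"
        "Story: {story}"
    ),
}

def adapt_prompt(prompt_id: str, client_context: dict) -> str:
    """Specialize a standard prompt with client-application specifics,
    leaving the {story} slot to be filled per user story later."""
    template = STANDARD_PROMPTS[prompt_id]
    return template.format(
        story_pattern=client_context["story_pattern"],
        table_structures=", ".join(client_context["table_structures"]),
        story="{story}",  # deferred: filled in when a story is processed
    )

client_context = {
    "story_pattern": "As a <role>, I want <goal> so that <benefit>",
    "table_structures": ["ORDERS(id, customer_id, total)",
                         "CUSTOMERS(id, name)"],
}
adapted = adapt_prompt("test_case_generation", client_context)
```

Because the client-specific work is confined to filling in a small context dictionary, the few-weeks adaptation estimate is plausible: the standard prompts themselves are reused unchanged across engagements.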

Combining traditional AI and GenAI

Eviden is combining GenAI with more established impact analysis AI models (e.g., predicting the impact of code changes/epics on test cases), running GenAI processing after its predictive AI models have completed their analysis. This ecosystem approach extends beyond individual AI models: Eviden points out that it is deploying container-based delivery to execute GenAI models independently and shorten time-to-process.

The beginning of the GenAI journey for QE

This is just the start of the GenAI journey for Eviden’s Digital Assurance practice. The company is deploying early GenAI use cases and deriving its first best practices. Eviden also points out that human intervention is still required to assess GenAI’s output until GenAI reaches maturity. Even with GenAI, the testing industry is far from autonomous testing or even hyper-automation.

Eviden is working on other GenAI initiatives, including around Salesforce and SAP applications. For instance, Digital Assurance has used GenAI in SAP to generate a repository of ~250 T-codes (SAP transactions) with relevant test scenarios and cases.
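One way such a T-code repository might be structured (a hypothetical sketch; the helper names, the stand-in generator, and the example transactions are assumptions, not details Eviden disclosed) is to prompt the model once per transaction and cache the resulting scenarios keyed by T-code:

```python
# Illustrative sketch of building a T-code -> test scenario repository
# with a GenAI backend. Names and structure are assumptions; the
# generator below is a stand-in for a real LLM call.

def build_tcode_prompt(tcode: str, description: str) -> str:
    """Assemble a per-transaction prompt for scenario generation."""
    return (f"Generate regression test scenarios for SAP transaction "
            f"{tcode} ({description}). List one scenario per line.")

def build_repository(tcodes: dict, generate) -> dict:
    """Call the (LLM-backed) `generate` function for each T-code and
    collect the scenarios into a repository keyed by T-code."""
    return {code: {"description": desc,
                   "scenarios": generate(build_tcode_prompt(code, desc))}
            for code, desc in tcodes.items()}

def fake_generate(prompt: str) -> list[str]:
    """Stand-in for an LLM call, for demonstration only."""
    return ["happy path", "invalid input", "authorization failure"]

repo = build_repository({"VA01": "Create Sales Order",
                         "ME21N": "Create Purchase Order"}, fake_generate)
```

Scaling this loop to ~250 transactions is mechanical once the per-transaction prompt is proven, which is presumably what makes the approach attractive for building a repository of that size.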

Eviden is also exploring migrating clients from SAP-recommended COTS to open-source tools for their regression testing needs. The migration goes beyond swapping test execution tools and migrating test scripts. This is not the first time we have seen interest in moving away from commercial tools, but historically this interest has not materialized into massive migration projects. GenAI should ease the process.
