TechM Uses GenAI to Reshape Application Delivery

Tech Mahindra (TechM) recently briefed NelsonHall on TechM AppGinieZ, its GenAI solution for software engineering and the software development lifecycle (SDLC).

All major IT services providers have recently released GenAI-powered solutions targeting two broad scenarios: those that help developers build, test, and support applications more efficiently, and those that enable capabilities such as virtual assistants to help clients improve or even transform business processes.

Identifying GenAI opportunities internally

TechM’s AppGinieZ GenAI solution falls into the first category. AppGinieZ assists TechM’s teams in application services, including development, QE/testing, and support. AppGinieZ and other AI/GenAI investments are part of TechM’s strategic initiative, ‘Scale at Speed,’ under which TechM promises clients accelerated delivery. This gives AppGinieZ senior management sponsorship and investment focus.

For now, TechM has taken a measured approach with AppGinieZ. It has been built by TechM’s ADMSNXT (application development and maintenance services) COE, focusing first on the SDLC stages that provide opportunities for automation, then expanding into other use cases depending on client interest and GenAI’s evolving capabilities.

Broadly, TechM AppGinieZ has two sets of capabilities.

  • GenAI: generates text and code from inputs such as text, code, and images
  • Predictive AI: analyses data to perform activities such as defect triage, risk-based testing, and log analysis.

TechM AppGinieZ supports the following use cases in the software development lifecycle, with code snippet generation, log analysis, and unit test generation seeing higher adoption; a sketch of the unit test use case follows the list.

  • Requirements refinement: generates or refines stories based on requirements artifacts or simple text
  • Code snippets: generates code snippets from text and image prompts
  • Code documentation and commenting: generates comments for multiple languages and synopsis of the code functionality in text
  • Unit tests: generates unit tests for the input code
  • Log analysis: reviews logs and generates reports in multiple formats
  • Code conversion: converts code from one language to another, like Java to Python
  • YAML: generates YAML code for automation tools like Ansible.
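
To make the unit test use case concrete, here is a minimal Python sketch of what an LLM-backed unit test generation step typically looks like. It is illustrative only: the prompt wording, the generate_unit_tests function, and the llm callable are our assumptions, not details TechM has disclosed about AppGinieZ.

# Hypothetical sketch of an LLM-backed unit test generation step. AppGinieZ's
# actual prompts, models, and interfaces are not public; everything here is
# an assumption for illustration only.

def generate_unit_tests(source_code: str, llm) -> str:
    """Ask an LLM (any callable mapping prompt text to completion text)
    to produce unit tests for the given source code."""
    prompt = (
        "You are a senior software engineer. Write pytest unit tests, "
        "including edge cases, for the following code. Return only code.\n\n"
        + source_code
    )
    return llm(prompt)


if __name__ == "__main__":
    # Stand-in LLM so the sketch runs without any external service.
    def fake_llm(prompt: str) -> str:
        return "def test_add():\n    assert True\n"

    sample = "def add(a, b):\n    return a + b\n"
    print(generate_unit_tests(sample, fake_llm))

In a vendor setting, the generated tests would still pass through the training, oversight, and human-led review effort discussed later in this note.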

To date, TechM has trained around 25,000 employees in AI pair programming. It claims a 25% effort saving in some DevOps implementations using TechM AppGinieZ. NelsonHall believes the effort and cost savings will become easier to quantify, and subject to further improvement, once working with GenAI becomes institutionalised. Initial engagements also require additional effort for training, familiarisation, oversight, and human-led reviews, which will get faster with time for all vendors with a GenAI play.

Client case study

TechM highlights a North American client success story. Taking the traditional Three Amigos concept of business, development, and testing perspectives in Agile development further, TechM added AppGinieZ as a GenAI assistant, which it claims helped delivery teams perform story reviews and rewrites faster and generate test cases from the refined stories more efficiently. Encouraged by the engagement's success, the client and TechM have jointly filed for a patent on the solution.

QE/testing teams have been early adopters of automation and AI, and now GenAI, across the software testing lifecycle (STLC). TechM AppGinieZ is used in QE across:

  • Test strategy creation: converts requirement documents/user stories to a test strategy
  • User story refinement: takes rough user stories from Jira and other sources and generates detailed user stories
  • Test automation: generates test scripts from test cases
  • Test data generation: generates synthetic test data in multiple formats
  • Test case generation: generates scenarios and test cases based on inputs like requirement documents/user stories and images (see the sketch after this list).
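
As an illustration of the test case generation item above, the following Python sketch asks an LLM for test cases in a structured JSON format that could then be pushed into a test management tool. The prompt, schema, and generate_test_cases function are assumptions for illustration, not AppGinieZ internals.

# Hypothetical sketch of test case generation from a user story. The prompt
# format and JSON schema are assumptions, not AppGinieZ internals.
import json

def generate_test_cases(user_story: str, llm) -> list:
    """Ask an LLM for test cases as structured JSON so they can later be
    pushed into a test management tool."""
    prompt = (
        "Generate functional test cases for the user story below. Respond "
        'with a JSON array of objects with keys "title", "steps", and '
        '"expected_result".\n\n' + user_story
    )
    return json.loads(llm(prompt))


if __name__ == "__main__":
    # Stand-in LLM so the sketch runs without any external service.
    def fake_llm(prompt: str) -> str:
        return json.dumps([{
            "title": "Valid login",
            "steps": ["Open login page", "Enter valid credentials", "Submit"],
            "expected_result": "User lands on the dashboard",
        }])

    story = "As a registered user, I want to log in so that I can see my dashboard."
    for case in generate_test_cases(story, fake_llm):
        print(case["title"], "->", case["expected_result"])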

Test case and test script generation are currently the most popular QE use cases. In early deployments, TechM claims savings of 20-30% in the end-to-end test lifecycle when using AppGinieZ.

Overall, TechM feels that AppGinieZ and AI-driven development will have a positive and meaningful impact on margins in the future.

The road ahead

TechM showed us a demo of TechM AppGinieZ in action across QE and ADMS use cases. Depending on the scenario, it can be connected to LLMs such as Gemini, OpenAI, Llama, and others. Its ability to integrate with an increasing number of tools gives it flexibility and easier acceptance into existing client landscapes.
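
The kind of flexibility described above is usually achieved with a provider-agnostic abstraction in which each model sits behind a common interface. The sketch below is a minimal illustration of that pattern, assuming a simple registry of prompt-to-text callables; it does not reflect AppGinieZ's actual integration mechanism, which TechM has not published.

# Illustrative sketch of a provider-agnostic LLM layer of the kind that lets a
# tool switch models per scenario. The registry and function names here are
# assumptions; TechM has not published AppGinieZ's integration mechanism.
from typing import Callable, Dict

# Each backend is simply a function from prompt text to completion text.
LLMBackend = Callable[[str], str]

_BACKENDS: Dict[str, LLMBackend] = {}

def register_backend(name: str, backend: LLMBackend) -> None:
    """Register a model backend (e.g. a wrapper around Gemini, OpenAI, or Llama)."""
    _BACKENDS[name] = backend

def complete(backend_name: str, prompt: str) -> str:
    """Route a prompt to whichever model the scenario calls for."""
    return _BACKENDS[backend_name](prompt)


if __name__ == "__main__":
    # Stand-in backends so the sketch runs without external services.
    register_backend("stub-fast", lambda p: "[fast model] " + p[:40])
    register_backend("stub-accurate", lambda p: "[accurate model] " + p[:40])
    print(complete("stub-fast", "Summarise this defect log ..."))

Swapping models then becomes a configuration choice rather than a code change, which is what makes multi-LLM support attractive in existing client landscapes.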

Constant oversight and reviews are necessary when using GenAI, as the output can only be as good as the data quality and the LLMs involved. This makes it necessary to infuse client-specific rules as a contextual layer that improves the accuracy of the generated responses.
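
One common way to build such a contextual layer is to prepend client-specific rules to every prompt before it reaches the LLM. The sketch below illustrates this pattern; the example rules, the contextualize function, and the wording are hypothetical and not taken from AppGinieZ.

# Illustrative sketch of a "contextual layer": client-specific rules are
# prepended to every prompt so generated output follows local conventions.
# The example rules and wording are assumptions, not AppGinieZ internals.

CLIENT_RULES = [
    "Use the client's naming convention: snake_case for functions.",
    "Target Java 17 for any generated Java code.",
    "Never include credentials or real customer data in examples.",
]

def contextualize(prompt: str, rules=CLIENT_RULES) -> str:
    """Wrap a raw prompt with client-specific rules before it reaches the LLM."""
    header = "Follow these client rules strictly:\n" + "\n".join(
        "- " + rule for rule in rules
    )
    return header + "\n\n" + prompt


if __name__ == "__main__":
    print(contextualize("Generate a unit test for the attached service class."))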

NelsonHall believes that the TechM AppGinieZ roadmap is pragmatic and will see the addition of more predictive AI, compatibility with more LLMs, and increased granularity of use cases across the SDLC.
