We continue to assess the impact of AI, in its various forms (machine learning, NLP, deep learning), on software testing services, talking to the major suppliers in the industry. Vendors have been accelerating their investment in AI technologies to make sense of the wealth of data available in defect management tools, production logs, and ITSM software, creating use cases mostly around the vast number of test cases and their optimization and prioritization.
Sogeti, a subsidiary of Capgemini Group, recently briefed NelsonHall on its Cognitive QA offering for testing activities such as test project management and governance, test coverage, and test script prioritization and management.
Sogeti puts this Cognitive QA approach in the context of agile and DevOps, highlighting that with the adoption of agile, testing-related data is becoming less accessible and its quality is decreasing. For instance, with agile methodologies, developers are less inclined to enter user requirements into their systems, which makes understanding user requirements for testing purposes more difficult. Also, the increased adoption of open source software, and of COTS from small ISVs, away from the testing product suites of HPE and IBM, means data is now distributed across different software with different data structures, and data traceability is becoming more difficult.
Accordingly, Sogeti initiates its Cognitive QA projects by auditing the data quality and usage by the client across its applications, through a series of workshops. Sogeti argues that this data quality and maturity phase is key for deriving relevant analytics/insights.
Once the quality of data has been assessed, Sogeti proceeds to the first phase of its Cognitive QA projects, ‘Predictive QA Dashboards’, which is reporting-related and uses a dashboard with drill-down capabilities. The main AI use case is around defect prediction: analyzing data from previous releases and identifying, from the changes introduced in the new release, how many bugs are likely and where. This phase also includes effort estimation.
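Sogeti has not published the internals of its defect-prediction model, but the basic idea can be sketched. A minimal illustration, with entirely hypothetical module names, weights, and data: score each module of the new release by blending code churn (a proxy for change introduced) with defect density observed in prior releases, then rank modules by risk.

```python
# Hypothetical per-module data for a new release:
# (module, lines changed in this release, defects found in prior releases)
modules = [
    ("billing",   1200, 18),
    ("auth",       150,  2),
    ("reporting",  800,  9),
    ("search",      60, 14),
]

def risk_score(churn, past_defects, max_churn, max_defects,
               w_churn=0.6, w_history=0.4):
    # Normalize each signal to 0..1, then take a weighted blend.
    # The weights are illustrative, not tuned on real data.
    return (w_churn * churn / max_churn
            + w_history * past_defects / max_defects)

max_churn = max(m[1] for m in modules)
max_defects = max(m[2] for m in modules)

# Rank modules from highest to lowest predicted defect risk.
ranked = sorted(
    modules,
    key=lambda m: risk_score(m[1], m[2], max_churn, max_defects),
    reverse=True,
)
for name, churn, defects in ranked:
    score = risk_score(churn, defects, max_churn, max_defects)
    print(f"{name:10s} risk={score:.2f}")
```

A real implementation would train on the historical release data itself rather than use fixed weights, but the output is the same in kind: a ranked list telling testers where bugs are likely to cluster.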
In a second phase, called ‘Smart Analytics for QA’, Sogeti deploys its testing best practices, e.g. test strategies and risk analysis around test case selection, prioritization and coverage, into a machine-readable form. Sogeti currently uses IBM SPSS for structured data and is starting to use Watson for unstructured data.
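The briefing does not detail how those machine-readable test strategies are executed, so the following is only a sketch of the general technique of risk-based test case selection under a time budget, with invented test names and parameters: each test case carries a historical failure rate, the set of changed modules it covers, and a runtime, and a greedy pass picks the highest value-per-minute cases that fit the budget.

```python
# Hypothetical test inventory:
# (name, historical failure rate, changed modules covered, runtime in minutes)
tests = [
    ("t_login",   0.30, {"auth"},                 5),
    ("t_invoice", 0.10, {"billing", "reporting"}, 20),
    ("t_search",  0.05, {"search"},               3),
    ("t_smoke",   0.02, {"auth", "billing"},      2),
]

def value(t):
    _, fail_rate, covered, minutes = t
    # Risk contribution per minute: failure history plus breadth of
    # changed code exercised. The 0.2 coverage weight is illustrative.
    return (fail_rate + 0.2 * len(covered)) / minutes

budget = 10  # minutes available in this CI stage (assumed)
selected, used = [], 0
for t in sorted(tests, key=value, reverse=True):
    if used + t[3] <= budget:
        selected.append(t[0])
        used += t[3]

print(selected, f"({used} of {budget} minutes)")
```

The point of the sketch is the shape of the decision, not the particular scoring function: encoding selection and prioritization rules this way is what makes a test strategy machine-readable in the first place.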
The next two phases are more traditional.
The third phase, ‘Intelligent QA Automation’, uses IP and accelerators that Sogeti has developed, mostly around DevOps, focused on test execution and test support activities such as test data management, and test environment provisioning, as well as service virtualization.
In the fourth and final step, ‘Cognitive QA Platforms’, Sogeti’s consulting approach steps in again, looking at how AI will have a role in the future. Sogeti is envisioning instant testing and validation, self-adapting test suites, self-aware test environment provisioning, and automated risk management.
Sogeti has worked with several clients to date on Cognitive QA, across four industries: high-tech, financial services, public sector, and telecoms. One client example is Dell EMC, which Sogeti helped with test case prioritization: the client has 350 testers deployed on product engineering testing. Its challenge is that Dell EMC’s products all combine hardware and software, and it has to ensure they work with the many different middleware releases and patch combinations.
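The briefing does not say which method Sogeti applied at Dell EMC, but the combination explosion it describes is the classic target of combinatorial (pairwise) testing. A sketch with an invented configuration model: exhaustive testing requires one run per full combination, while a greedy pairwise cover keeps every two-factor interaction with far fewer configurations.

```python
from itertools import combinations, product

# Hypothetical factors for a hardware+software product line.
factors = {
    "hardware":   ["gen1", "gen2"],
    "os":         ["linux", "windows", "esx"],
    "middleware": ["mw_a", "mw_b", "mw_c"],
    "patch":      ["base", "latest"],
}

names = list(factors)
all_configs = [dict(zip(names, vals)) for vals in product(*factors.values())]

def pairs(cfg):
    # All 2-way factor-value interactions present in one configuration.
    return {frozenset({(a, cfg[a]), (b, cfg[b])})
            for a, b in combinations(names, 2)}

# Greedy cover: repeatedly pick the configuration that covers the most
# still-uncovered pairs until every 2-way interaction is exercised.
uncovered = set().union(*(pairs(c) for c in all_configs))
suite = []
while uncovered:
    best = max(all_configs, key=lambda c: len(pairs(c) & uncovered))
    suite.append(best)
    uncovered -= pairs(best)

print(len(all_configs), "exhaustive configs vs", len(suite), "pairwise configs")
```

Even on this toy model the reduction is substantial, which is why prioritization matters so much when every product ships as a hardware/software/middleware/patch matrix.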
Sogeti’s positioning of Cognitive QA is interesting. Agile and DevOps are bringing disruption and software tool fragmentation back into the SDLC after years of investment by enterprises in IBM’s and HPE/Micro Focus’ suites of testing products to reduce that fragmentation. With many enterprises still looking to become more agile-centric, we may be on the verge of a testing-data disruption that will reduce visibility into testing activities. And this is where Sogeti’s data audit approach comes into play.
Feb 17, 2018, by Chandrabose Thavakkani