Virtusa recently briefed NelsonHall about how it conducts GenAI testing. With the emergence of LLMs, Virtusa has seen a rising interest in understanding how to validate them. However, testing LLMs is not easy, as traditional testing approaches are not relevant to LLMs. It requires reinventing software testing and looking beyond the output of a transaction.
Non-Deterministic LLMs Challenge How Testing Is Conducted
Welcome to the world of leading-edge technology and complexity! LLM testing is not easy and differs from testing other AI models: LLMs are non-deterministic (i.e., for the same input, they may provide different responses); other AI models, such as ML, provide the same output for the same input.
The non-deterministic nature of LLMs raises several challenges for testing/QE. The broad principle of functional testing is to validate that a specific transaction on a web application or website provides the intended result, e.g., ordering a good on a website and validating that payment has been processed and completed. However, with GenAI, the output is dynamic and can only be broadly defined. For example, testing a generated response to a question, summary, or picture under a traditional approach is not working, as there is no right or wrong answer. Also, several answers can be correct.
As part of its efforts to deal with this complexity, Virtusa has organized its capabilities under the Helio Assure Framework, which covers LLM data, prompts, models, and output.
Data Complexity
Data validation is a starting point for any LLM project. Virtusa offers traditional data validation services, such as checks around data integrity, consistency, and schema/models.
Virtusa also conducts statistical assessments specific to data used for training AI models; for example:
Beyond data and distribution validation, both well-understood activities, Virtusa emphasizes two approaches:
Of these two, data bias detection is the most difficult, mainly because bias identification varies across cultures and contexts and is challenging to automate. Virtusa continues to work on data bias detection.
Prompt Validation
For prompt validation, Virtusa relies on several approaches, including bias checks, toxicity analysis (e.g., obscenity, threats), and conciseness assessments (e.g., redundant word identification, readability). Virtusa highlights that prompt templatization, through a shared repository of standard prompts, also mitigates security threats.
Virtusa also uses adversarial attacks to identify PII and security breaches. Adversarial attack is the equivalent of pen-testing in security, initially developed in ML. The approach is technical and rapidly evolving as LLM vendors finetune their LLMs to protect them from hackers. Nevertheless, it includes methods such as prompt injection and direct attacks/jailbreaks.
LLM Accuracy Evaluation
For evaluating AI models such as LLMs, which is particularly challenging, Virtusa relies on a model accuracy benchmarking approach, creating first a baseline model. The baseline is an LLM whose training is augmented by a vector database/RAG approach relying on 100% reliable data (‘ground truth data’). It will evaluate the accuracy of LLMs vs. this baseline model.
The Roadmap is Creative Content Validation and LLMOps
Virtusa has worked on GenAI creative output/content validation, looking at three elements: content toxicity, its flow (e.g., readability, complexity, and legibility), and IP infringement (e.g., plagiarism or trademark infringement). Virtusa uses computer vision to identify content patterns present in an image or a video, classifying them into themes (clarity and coherence vs. the intent, blur detection, and sometimes assessing the relevancy of the images/video vs. its objectives. We think the relevancy of this offering for social media, education, marketing, and content moderation is enormous.
We think that GenAI is the next cloud computing and will have significant adoption: enterprises are still enthusiastic about what GenAI can bring, though recognizing they need to pay much closer attention to costs, IP violation, data bias, and toxicity. Governance and FinOps, to keep cost and usage under control, are becoming increasing priorities. GenAI vendors and other stakeholders are eager to move from T&M to a usage-based consumption model and want to monetize their investments.
]]>
Wipro recently briefed NelsonHall on its GenAI investments for quality engineering, discussing the creation of use cases and sharing the thinking behind some of its decision-making.
Wipro’s GenAI investments for QE are part of the company’s ai360 program, a $1bn investment that includes activities in developing use cases, training, and GTM. The launch of its QET GenAI Platform is part of this initiative.
It has identified ‘quick win’ use cases, including:
Like its peers, Wipro highlights the benefits of standard prompts, e.g., LLM’s output accuracy, lesser output variability, and capturing the client’s application and testing context. Wipro has created libraries of standard prompts, classified by role (UX designer, developer, tester, architect, BA, and application support) across the software development lifecycle.
RAG and Prompt Engineering
Beyond prompt engineering, Wipro wants to improve the accuracy of the LLMs. Rather than fine-tuning LLMs (through training the models on additional training data sets), it has chosen the retrieval-augmented generation (RAG) approach, which essentially relies on creating vector databases of the client’s testing artifacts. With the RAG approach, Wipro believes it takes a more relevant method to include the specific context of the client’s applications. To that extent, the company has created a tool that goes through various document formats (e.g., .doc, .pdf, .ppt) and creates a data set in a vector database.
For several use cases (e.g., test script generation, test data), Wipro wants to be LLM-agnostic and will connect with GenAI COTS (e.g., ChatGPT 3 and 4 and Azure OpenAI). It supports most test execution tools and languages (e.g., Selenium, Eggplant, Appium, and Playwright).
Looking to the future
The company is developing several GenAI use cases targeting specific tasks. Examples of these include locating an error in a Selenium script or writing a VB macro to migrate data from ALM to JIRA. Wipro is building a repository of use cases covering testing activities, taking a bottom-up approach.
There is a clear focus on helping clients beyond the interest stage and consulting engagements to PoCs and deployment. To facilitate client adoption, Wipro is looking to make its GenAI services enterprise-grade with assured data privacy and security. Options offered by the company include hosting on the client’s premises or its own.
Investments in GenAI will continue to be a priority in the foreseeable future. The company recently invested in data transformation and validation. Wipro plans to bring further depth to its user story analysis; it is exploring how to make user stories more standard and consistent within an enterprise. Current writers of user stories tend to have their own style. Wipro believes that GenAI can bring some standardization while increasing overall user story quality. The company also wants to go into more depth regarding automated root cause analysis beyond traditional defect classification.
Bringing an enterprise-grade service
Beyond LLM use case depth and standardization, Wipro believes that it will differentiate its value proposition by offering an enterprise-grade service. The company highlights it has taken several steps in this direction.
Wipro provides access to LLMs through its ai360. ai360 wants to ‘guardrail’ LLMs and systematically monitors and controls LLM usage. It ensures:
Wipro has also worked on decreasing the time to create test scripts from an initial 15 minutes to one minute, relying on proprietary Python test script libraries.
The company highlights it has also progressed well in LLM output consistency. It finds the LLM output/responses to English language prompts can be unreliable. To overcome the challenge, Wipro created a library of UML models for specific processes (e.g., completing an online purchase transaction). It will update the UML libraries for each client and subsequently create test cases and scripts. With this approach, the company believes it can also increase test coverage.
Wipro points out that clients hesitate to move from demos and PoCs to deployment. The company believes its enterprise-grade approach will help organizations make the move and will continue to invest in it.
]]>
NelsonHall recently talked with Eviden, Atos’ consulting and application services business, about its QE practice, Digital Assurance.
Digital Assurance has 5k quality engineers, 65% offshore, reflecting a high leverage in North America (due to its Syntel heritage) counterbalanced by Atos’ large European public sector client base. The practice has aligned its service portfolio around high-growth offerings such as testing applications for digital projects, migration to the cloud, testing Salesforce migration projects from Classic to Lightning Experience, and SAP.
Beyond these technologies, Digital Assurance has focused on AI, initially traditional AI with ~45 pilots underway, and then around GenAI in 2023-24.
AI/GenAI as priorities
Eviden currently has five primary GenAI use cases relevant to testing being deployed on its GenAI QE Platform:
One of the demos we attended was ambiguity assessment and scoring, where Eviden evaluates the quality of a user story/requirement. Other demos such as automated test case and test script generation provide several insights regarding the current art of the possible.
GenAI quick wins
GenAI provides quick wins that do not require significant ML model training.
An example is assessing the quality of user stories. Commercial LLMs will work out of the box and can be used as-is without further training. But LLMs only work if the input data (user stories in this example) follow best practices, e.g., are detailed enough and have clear acceptance criteria. If those fail, the LLM will reject the user stories.
Prompt engineering rather than data finetuning
Eviden is finding that the pretraining provided by the hyperscalers is good enough for most use cases, and is not currently contemplating conducting clients’ data training.
Eviden sees a need for structured prompt engineering, i.e., providing the LLM model with the right instructions. It is building repositories of standard/proven prompts. In addition, Eviden will adapt the prompts to the specificities of each application, e.g., table structures and user story patterns. Digital Assurance estimates that adapting prompts to the client’s applications will only last a few weeks. This approach is time-sensitive and provides quick wins, for instance, around automated test script generation.
Combining traditional AI and GenAI
Eviden is combining GenAI with more established impact analysis AI models (e.g., predicting the impact of code change/epics on test cases) and is conducting GenAI processing once it has done so with predictive AI model investments. The ecosystem approach goes beyond other AI models, and Eviden points out that it is deploying container-based delivery to execute GenAI models independently to shorten time-to-process.
The beginning of the GenAI journey for QE
This is just the start of the GenAI journey for Eviden’s Digital Assurance practice. The company is deploying early GenAI use cases and deriving its first best practices. Eviden also points out that human intervention is still required to assess GenAI’s output until GenAI reaches maturity. Even with GenAI, the testing industry is far from autonomous testing or even hyper-automation.
Eviden is working on other GenAI initiatives, including around Salesforce and SAP applications. For instance, Digital Assurance has used GenAI in SAP to generate a repository of ~250 T-codes (SAP transactions) with relevant test scenarios and cases.
Eviden is also exploring migrating to open-source tools away from SAP-recommended COTS for their regression testing needs. The migration goes beyond changing text execution tools and migrating test scripts. This is not the first time we have seen interest in moving away from commercial tools, but historically this has not materialized in massive migration projects. GenAI will ease the process.
]]>
We recently talked to Qualitest about its latest acquisition, Q Analysts, its sixth since 2019. Qualitest has been on an accelerated transformation journey under the ownership of PE BridgePoint. Q Analysts further strengthens Qualitest’s capabilities in next-gen digital QA, with expertise in testing AI-based devices such as AR/VR/MR headsets and generating data for training AI models.
Qualitest Has Accelerated its Transformation
Qualitest has bold growth ambitions targeting $1bn in revenues by 2026, and in support of this, it has further shifted its delivery network to India to gain scale. The QA Infotech and ZenQ acquisitions helped significantly in this respect. NelsonHall estimates that Qualitest has ~45% of its headcount in India, or ~3,400 FTEs. We expect this India-centricity to increase further.
Qualitest has also further verticalized its GTM, its salesforce now being organized around the following industry groups: technology, BFSI, healthcare & life sciences, telecoms, utilities, retail, and media & entertainment. In parallel, Qualitest has expanded its focus from its core technology clients to BFSI (now 30% of revenues, on par with technology). It recently strengthened its healthcare and telecom expertise with the Comply and Telexiom transactions.
The company is specializing its service portfolio and, at the same time, investing in automation. Continuous testing remains a priority. The 2019 acquisition of data science company AlgoTrace jumpstarted Qualitest’s expertise in AI-based testing. NelsonHall believes that AI-based automation will disrupt the QA industry by automating the generation of test scripts and breaking the lengthy requirement-test case-test script cycle by removing the test case phase.
Q Analysts Brings Specialized Testing Services for AI-based Connected Devices
Qualitest has developed its digital services portfolio beyond traditional mobile app testing, introducing next-gen offerings. The acquisition of Hyderabad-based ZenQ brought in capabilities around blockchain and testing connected devices such as smart homes, pet care, fetal health, and drones.
Now, the acquisition of Q Analysts further increases Qualitest’s investment in digital testing offerings, looking at products described as ‘AI-infused devices,’ i.e., AR/VR devices and virtual assistants.
Q Analysts currently services tier-one technology firms engaged in AR/VR/MR, wearables, and virtual assistant devices. The company has ~600 employees, is headquartered in Kirkland, WA, and has offices in Santa Clara, CA. Q Analysts has testing labs in Kirkland, Santa Clara, and Antananarivo, Madagascar. It has structured its portfolio around two activities: testing of ‘AI-infused devices’ (60% of revenues); and generating training data for these devices (40% of revenues).
The company has worked on AR/VR testing activities, often at the prototyping stage. It offers a full range of services, from devices to mobile apps, web applications, usability testing, and back-office integration testing. As with mobile devices, AR/VR devices bring specific QA activities, such as assessing the performance of an application on a device and estimating the impact of running this application on the device’s battery.
Q Analysts highlights its expertise goes beyond device testing. The company’s sweet spot is assessing image and video rendering on the device. The company has invested in its workforce to identify rendering issues such as image refresh rate or pixelization, a capability only a trained human eye can spot.
The company continues to invest in visual testing in the AR/VR/MR space. For example, the company tests new technologies such as foveated rendering (i.e., the devices have in-built inward-facing cameras to track a user’s eye movement and render images of higher resolution where the eye is focused) to minimize energy consumption and make device batteries last longer. The company considers visual testing to be key and requires advanced visual and technical skills.
Q Analysts’ second activity is generating training data or ‘ground truth data services’, a term borrowed from the meteorology industry. The company will generate training data in its labs and capture images and movements required using cameras and LiDAR scanners. Q Analysts’ know-how comes into play by generating datasets based on its client’s demographics and providing several real-world simulated set-ups, such as living rooms and offices and other variances (such as furniture and interior decoration). Q Analysts also provides related specialized services such as manual 2D and 3D image tagging to help train AI models.
High Potential Ahead
Qualitest has big ambitions for Q Analysts based on the expectation that demand for connected ‘AI-infused devices’ will expand from its product engineering niche. The use of AI-infused devices will become increasingly common across industries; for example, retail (virtual try-on), healthcare (physical therapy and 3D models), and energy & utilities (digital twin-based training). Longer term, Q Analysts targets the metaverse, expanding from its AR/VR and other AI device niche to the larger virtual world opportunity.
Complementing Q Analysts’ specialized capabilities, Qualitest brings increasing expertise in AI-based automation, including computer vision testing and connected device test automation. Client demand looks limitless, and Qualitest is building its next-gen testing expertise to address that demand year after year.
]]>
Software testing continues to be an industry of contrasts: the primary activity, functional testing, remains a human-intensive activity, despite the accelerated adoption of continuous testing (i.e., bringing functional automation as part of DevOps).
But testing has also grown in a highly specialized set of activities, earning the name of Quality Engineering (QE), ranging from support activities, test data and test environment management, shifting both left (early in the software lifecycle) and right (to application monitoring and now site reliability engineering).
Nevertheless, the most exciting event in QE remains the usage of AI to automate the creation and maintenance of test scripts. We think that despite somewhat limited adoption, automated script creation has the potential to redefine the QE industry.
Test Script Maintenance Will Become Easier
We talked to Cigniti about its recent investment in its iNSta IP to automate test script creation and maintenance. Cigniti repositioned iNSta two years ago, from a testing framework, as its primary automation point, aggregating all automation and specialized services. The company promotes it as an 'AI-enabled low code scriptless test automation platform'.
Now the company has enriched iNSta with its core Intelligent Recorder.
iNSta's Intelligent Recorder will create test scripts on the fly when a user goes through a transaction in an enterprise application. It will identify UI objects and build an object library to maintain test scripts. Intelligent Recorder will scan the UI for each release and identify changes in the UI. The maintenance of such test scripts is, we think, of high importance. Cigniti finds that 5% of test cases are outdated or not in sync with the current release and will lead to false positives or testing failures. The company continues to add incremental enhancements: should Intelligent Recorder fail to recognize that an object has changed, it will use computer vision to compare screen images of two different releases, identify the objects that have changed, and amend its object library.
Cigniti also accelerated the speed of execution of iNSta, relying on conducting test script execution in parallel across several VMs and containers. The company will add VMs/containers automatically through a scaling-out approach. With this offering, Cigniti wants to address development organizations operating in agile/DevOps with requirements for short testing timelines. It also targets applications that require extensive use of AI, which typically slows down test execution.
Cigniti also complements iNSta with automated test script creation, using NLP technology to translate English-written test cases from Excel and ALM into test scripts. Cigniti has created a dictionary and will custom dictionaries for its clients. The company finds that its English language translation AI model brings more benefits than the Gherkin language, as BDD requirements are done by testing specialists and not by business users. Nevertheless, Cigniti is also integrating its BDD framework in iNSta.
A strength of iNSta is that the Intelligent Recorder and the NLP translation are interoperable, and users can go back and forth between the two approaches. This maximizes, we think, the possibility of automation and helps with test script democratization.
New Opportunities with E2E Testing
AI is also opening QE to new testing opportunities. To a large extent, functional testing tools such as HPE/Micro Focus UFT and open-source Selenium have focused on one application technology. Still, they cannot operate across mobile apps, web, client-server, and mainframe applications.
Cigniti has expanded iNSta's Intelligent Recorder from web applications to mobile apps and client-server applications. This opens more automated test script opportunities. It also opens up business process/E2E automation opportunities. Several industries, including telecom, banking, retail, and government, have processes operating across different application technologies. Until now, E2E testing had to be manual or relied on RPA tools/bots.
Cigniti also intends to host iNSta on the cloud and sell it as a PaaS tool to favor its adoption. In the meantime, it will expand the Intelligent Recorder to SaaS applications (e.g., Salesforce), mainframe applications, and APIs.
We think the QE industry now has the technology to challenge the requirement-test case-test script model, and now is the time to focus on organization adoption. Cigniti highlights that initially, clients are hesitant to adopt iNSta due to organization and skill changes. We expect Cigniti to spend time with its clients evangelizing the market, relying on its QA consulting unit, and helping clients on OCM. More than ever, testing (and IT) is also about people's buy-in. We think tools like iNSta will help testers focus on more gratifying tasks such as analysis and remediation. This is good news for the industry.
]]>
We recently talked to ValueMomentum about its QE approach to product-centric development and testing. The company is helping its insurance clients improve the quality of their applications using agile best practices and DevOps tools. In support of this, ValueMomentum has refreshed its automation approach and created a continuous testing platform articulated around design (shift-left), execution (through automation), and monitoring (shift-right).
Most tier-one QA vendors today have their own continuous testing platforms, and these have become the backbone of test automation. Indeed, such platforms currently aggregate most of the existing automation and IP, running automation as part of each release cycle. These continuous testing platforms differentiate by adding new automation features to core automation.
Making Continuous Testing the Aggregation for Test Automation
ValueMomentum is investing in continuous testing through methodologies and adding new automation features. The company uses BDD, for instance, as it believes the Gerkhin language remains the best alternative for business users to write test cases that are automatically converted into test scripts, thereby reducing ambiguity in requirements. The company complements its BDD centricity with pre-defined business process diagrams for the insurance industry using MBT (Mantiz).
During the shift-left phase, to promote quality in the development phase of the project lifecycle, ValueMomentum has integrated code-related services into its continuous testing platform (e.g., unit testing and code review); test support services (e.g., test data management and service virtualization; AI-based analytics (such as code coverage and test impact analysis, and static code analysis); and non-functional (e.g., testing, and automated vulnerability assessment). As an example of its investment in AI, ValueMomentum is fine-tuning its defect prediction AI model by increasing data sources, from past defects to code changes in the release and developers’ coding quality.
In the testing environments, once the application release is completed, ValueMomentum uses a mix of full functional test automation (E2E testing) complemented by exploratory testing to maximize the chances of catching bugs before production.
Shift-Right Is the New Frontier. AI Will Help
Shift-right continues to be one of the open frontiers in the QE industry. Feeding back production information to the dev and test teams in an automated manner is still a challenge. AI is increasingly being deployed, but there remains considerable growth potential for its use.
ValueMomentum is accordingly investing in shift-right. Beyond APM tools for application monitoring, the company uses AWS tools for cloud applications, e.g., Canaries (monitoring of the performance of end-user devices), A/B testing (usability research), Game Day (simulating a failure or an event to test applications, processes, and team responses, similar to chaos testing), and rollbacks (redeploy a previous application release using CodeDeploy).
And indeed, ValueMomentum is gradually making its way to Site Reliability Engineering (SRE), where production specialists monitor applications and work with developers to remediate application issues quickly. For now, ValueMomentum is taking an AWS approach, relying on point solutions tools that AWS provides. It is fine-tuning AI use cases such as test case recommendations, defect triaging, defect-to-test case mapping, test case optimization, system comparison, and test case search. This is just the beginning of AI in shift-right for QE.
]]>
TCS recently briefed NelsonHall on its approach to site reliability engineering (SRE) in the context of quality engineering (QE).
SRE emerged almost a decade ago as part of the shift-right move, targeting production environments beyond traditional IT infrastructure activities such as services desk and monitoring activities. While no definition of SRE has fully emerged, TCS points out that SRE focuses on two topics: resiliency and reliability, through with observability and AIOps, automation, and chaos engineering as key services.
TCS prioritizes cloud-hosted applications for its SRE services, as cloud hosting increases the likelihood of application outage since applications that have been migrated were not initially designed and configured for cloud or multi-cloud hosting.
Generally, there has been very little SRE in QE activity, even though the industry has emphasized shift-right for several years. The shift-right notion in QE refers to feeding back production information to dev and test teams, breaking down the traditional silos between build and run activities. And in activities such as application monitoring (relying on the APM tools) and associated AI use cases (to make sense of APM-triggered events), the classification of defects found in production, and in sentiment analysis, have become common.
We think shift-right activities can still be improved, building on monitoring activities. Chaos engineering is a good example of a developing proactive service. More importantly, the feedback from production to dev and test needs to be improved, and we think SRE will help here.
Observability/Monitoring, AIOps, and Chaos Engineering
TCS' approach to SRE relies on application monitoring, AIOps, automation, and chaos engineering. Application monitoring ('observability') remains at the core of TCS' portfolio. For this, the company will deploy APM tools, collect logs and traces, and provide reporting. One of the challenges in application monitoring is data dissemination across different applications and databases. Accordingly, data centralization is a priority for TCS.
Once it has collected monitoring data, TCS deploys AI models (AIOps) to automate event detection and correlation and eventually move to a prediction phase. TCS' main AI use cases are predictive alerts, root cause analysis, event prioritization, and outage likelihood. The company will use third-party tools such as Dynatrace (combined with application monitoring) or deploy its own IP, depending on the client's tool usage.
For deployment and recoverability, its next step after AIOps, TCS will complement application deployment with automated rollbacks and ticket creation. At this stage, when facing application defects, the SRE team will also involve the development teams to conduct RCA and fix application defects.
TCS will also conduct chaos engineering. Chaos engineering complements performance engineering and testing in that it evaluates applications' behavior under more strenuous conditions. With chaos engineering, TCS will conduct attacks such as instance shutdown, increased CPU usage, and black holes to assess how the applications being tested behave. TCS has integrated tools such as Gremlins and Azure Chaos Studio in its DevOps portfolio to embed chaos engineering as part of continuous testing.
Demand Is Still Nascent
TCS typically deploys SRE teams of six engineers for monitoring applications. It highlights that SRE adoption is still nascent, and it will lead such programs with marquee clients initially.
In broad terms, the future of SRE lies in DevOps and becoming part of continuous testing, where all activities are scheduled and automated, for new build/release execution. TCS is an early mover in this area and is currently honing its tools and consulting capabilities. Platforms combining tools and targeting comprehensive services as part of continuous testing are the company's next step.
]]>
We recently talked to Testbirds, the largest Europe-headquartered crowdtesting firm, founded in 2012. We found Testbirds upbeat after the pandemic. The company had an excellent year in 2020, achieving revenue growth of 30% as organizations, challenged by closed offices, turned to Testbirds to conduct crowdtesting of their digital initiatives. This was followed by another excellent year in 2021, with revenue growth reaching 40%, led by digital projects, and Testbirds is expecting similar growth for this year. In parallel to this continuing sales momentum, Testbirds has reached operational breakeven and is currently funding its expansion organically. The company continues to recruit and now has ~600k crowdtesters in its community.
Expansion in Europe, Now Targeting U.S.
Expansion remains a priority for the company, which is increasing its office locations in Europe with new facilities in Leipzig and London, complementing its existing presence in Germany and the Netherlands, and to a smaller extent in France and Italy. Testbirds is structured into regional hubs, Leipzig and London being sales and project management centers serving clients in the German and English languages respectively. London is also a hub for project management, delivery, and sales and marketing activities to the U.S.
In addition to its direct sales activity, Testbirds wants to grow its indirect channel, increasing the level of work with partners. The company recruited a channel head in 2020 and expects its indirect channel to contribute revenues in 2022.
More Consulting and Specialized Services
Testbirds highlights that its indirect channel strategy will somewhat change its value proposition as partners will deliver the crowdtesting project management and analysis work themselves. Consequently, Testbirds has already changed its portfolio. In addition to offering crowdtesting project management and execution, the company is also now highlighting capabilities such as consulting and methodologies for advising clients on their crowdtesting goals and approaches. With this consulting-led service, Testbirds looks to accompany clients across their digital product journey. It has aligned its service portfolio around consulting, from defining a digital product concept to prototyping, development, testing, and release.
Beyond its consulting approach, Testbirds has expanded its offering beyond quality assurance and usability testing to online surveys, market research, and customer feedback. While QA remains core to its value proposition, the company is expanding in usability research and testing.
Testbirds highlights the specialized offerings of its Testbirds Exclusives brand. It recently launched its payment testing service, addressing online, offline, and in-store PoS. The company has set up a dedicated offering that can be provided on a standalone basis, focusing on European regulations on authentication or, more broadly, covering the customer journey, from product order to payment and returns management.
Alongside payment, Testbirds is promoting its offering verticalization, usually in the field. Examples include connected home equipment testing or EV charging station testing. Usability testing plays a key role in such verticalization.
Incremental Automation
Testbirds continues to invest in the Nest, its platform used by crowdtesters, its project managers, and clients. A recent example of incremental functionality is its companion app, which allows crowdtesters to log defects and screenshots directly from their mobile devices. The companion app simplifies crowdtesters’ work by avoiding going through a PC to log defect screenshots.
The company continues to invest in AI, using ML for mining defect comments and classifying defects into categories. It continues its work around defect prediction and automatically transcribing video voice into transcripts. While we initially expected AI to bring automation and efficiencies to crowdtesting, Testbirds finds that deploying AI use cases has been slower than expected.
So what’s next for Testbirds?
The company believes it has reached the inflection point where demand will move to hypergrowth. It has hired sales executives and counts on its indirect channel to grab this rising demand. The company has reorganized its service portfolio, driving specialized services. In parallel, Testbirds believes it has structured its execution to make its service repeatable. The company also pushes defect analysis work to its community through the Bird+ program to drive efficiencies. Finally, Testbirds is now opening again to further private equity funding. The company believes it will enter a hypergrowth cycle and external funding will help scale up.
]]>
We recently talked to Cigniti about its digital ambitions and its acquisition of RoundSqr.
While remaining focused on quality engineering, Cigniti has quietly expanded its capabilities to RPA over the past three years. This extension is logical: RPA shares much with testing, relying on creating and maintaining bots or test scripts. This is the start: Cigniti has broad ‘digital engineering’ ambitions, and RPA was the first step.
With its recent acquisition of RoundSqr, Cigniti has taken another step in its digital strategy. RoundSqr has ~100 employees and revenues of ~$2.8m in its FY22. The company has ~30 clients, most of which are in the U.S., U.K., Australia, and India.
RoundSqr started as a digital company and currently offers data, analytics, and AI services. The company is also active in web and mobile application development services, including architecture design and APIs.
RoundSqr strategically invested in AI, particularly in AI model validation and computer vision. The company brings in a methodology and expertise to model validation. RoundSqr has also developed an IP called Zastra that helps with computer vision-related annotation services.
AI Is Strategic to QE
RoundSqr highlights that testing of AI models is primarily restricted to evaluating their accuracy and relies on separating data into training and testing sets; it looks to take a more comprehensive approach across the model itself and its data.
The company evaluates AI models across six parameters, namely Stability (conducting testing several times on the same data); Sensitivity (mitigating the impact of noise and extreme scenarios on the output); Data leakage (using non-training data when building the model), Performance (the model will have the same outcome even if the data is changed), Bias, and Predictability.
Beyond the AI model, we think RoundSqr’s AI capabilities will be instrumental to Cigniti’s QE activities. Organizations have started using AI to conduct focused testing to identify areas where they expect bugs. But AI is also relevant for automating test script creation and maintenance. The offerings are getting ready, and client adoption is now starting. We think AI has the potential to revolutionize the QE industry if it removes human intervention around test scripts.
RoundSqr Brings Computer Vision Annotation IP
Zastra, the IP that RoundSqr has built over the past 18 months, is a computer vision product that targets image tagging and annotation, the action of identifying objects, people, or living organisms in a picture. Zastra can provide the necessary steps for identifying objects, including image classification, object detection, and semantic and instance segmentation. RoundSqr targets several sectors with Zastra, primarily manufacturing, medtech, and utilities. Its use cases include defect detection, track and trace, CT and MRI scans, and satellite images.
Zastra links nicely, we think, with QE in the UX testing area. The role of testing has primarily revolved around testing the functionalities of an application. However, testing image rendering, e.g., on a website, has been far more limited, mostly around pixel-to-pixel comparison. We think AI models open new use cases for websites and digital technologies such as AR/VR and quality control in manufacturing plants.
RoundSqr’s product roadmap for Zastra includes synthetic data generation and audio annotation. The company will also expand its hosting options beyond AWS to Google Cloud Platform and Oracle Cloud.
Revenues of $1bn by FY28
This is the beginning of the journey. The priority for Cigniti and RoundSqr is now cross-selling and accelerating further organic growth.
However, RoundSqr alone is not sufficient for Cigniti to reach its $1bn revenue target by FY28, up from $167m in FY22. To achieve this objective, the company will rely on both organic growth and M&A.
Future organic growth will come from further expansion of its service portfolio to digital offerings such as data, AI and ML, blockchain, cloud computing and IoT. The company also plans to grow within engineering and R&D services, both industry 4.0/digital manufacturing and product/platform engineering services. Cigniti targets connected devices, taking an AI-based approach.
Cigniti’s client base includes BFSI, healthcare, medtech, travel and hospitality, and retail. The RoundSqr acquisition further strengthens Cigniti in BFSI. It also brings further focus on ISVs, and the supply chain and manufacturing functions, which Cigniti sees as having great growth potential.
To support its portfolio expansion, Cigniti will need to continue to acquire. Acquisitions such as RoundSqr will bring further specialization and are precious. Cigniti will, however, need a transformational transaction. Watch this space.
]]>
We recently talked to Qualitest regarding its acquisition of ZenQ.
ZenQ is the latest in a series of recent acquisitions by Qualitest, under the ownership of PE BridgePoint. The company acquired four firms in 2021:
The latest addition, the Dallas-headquartered ZenQ, aligns with Qualitest’s objectives to build digital transformation capabilities. It strengthens Qualitest in DevOps/continuous testing consulting and brings specialized digital expertise such as AI and blockchain. Finally, it opens Qualitest to the world of product engineering QE, around high-growth areas such as connected devices/IoT, including AI-intensive equipment such as drones.
Continued Expansion in Digital
ZenQ brings capabilities in digital, including blockchain testing. The company has worked primarily for ISVs across various verticals and use cases. Blockchain QE adds a niche high-growth area of expertise to Qualitest’s expanding digital testing portfolio. The company has already expanded to RPA/bot testing and application migration to the cloud testing. Also, its December 2019 acquisition of Israeli start-up AlgoTrace helped kickstart its AI offerings, focusing initially on data science. Since then, Qualitest has expanded its AI analytics and automation portfolio in visual testing and test case optimization areas.
Qualitest Enters Connected Device Testing
Importantly, ZenQ adds expertise around connected devices across various products, including drones, petcare and medtech devices, smart home and logistics products, and solar panels. The company is active in product engineering, in specialized services such as communication protocol QE and interoperability. This brings Qualitest to a new world of bundled hardware and software, where software (e.g., embedded software, mobile apps) plays an increasing role and where Qualitest has its roots. With ZenQ, Qualitest expands to hardware testing, where lab-based automation emerged only a few years ago.
Importantly, connected product testing also brings AI, notably computer vision, e.g., for use cases such as inspecting the quality of goods produced in a manufacturing plant, monitoring the health of forests and crops, or animal geo-fencing. Qualitest has experience in this space and has developed its AI-based IP Test.Validator for image recognition.
Further Scale
In addition to its portfolio expansion toward digital QE, ZenQ reinforces Qualitest’s capabilities in three countries:
In total, ZenQ has ~700 employees.
The integration journey for ZenQ and Qualitest is in its early stages. Cross-selling is a priority. From a portfolio perspective, expect Qualitest to bring further quality engineering and AI capabilities to ZenQ’s projects. For Qualitest, assuring product engineering is a new field with tremendous growth potential, and we expect the company to invest in QE automation in this space.
Meanwhile, Qualitest still has bold growth ambitions. The company has aggressive plans to reach $1bn in revenue in the next two years. Further acquisitions to gain scale both onshore and offshore and expand the portfolio to digital are likely.
]]>
There is a big divide between IT sustainability and quality engineering (QE). In IT, sustainability is emerging from a carbon emission niche, expanding from a consulting to an execution phase. In QE, the focus remains primarily on functional automation with continuous testing/DevOps and AI as primary drivers. In short, the two have little in common.
As such, we had not anticipated that QE could soon become part of sustainability initiatives. However, Sogeti, part of Capgemini, recently briefed NelsonHall on how it is adapting its QE offering to sustainability with QE for Sustainable IT.
Measuring carbon footprint at the transaction level
Sogeti has designed QE for Sustainable IT, targeting the environmental side of sustainability (which also includes economic and social aspects). The company promotes a stepped transformation of IT rather than through big bang approaches. It highlights that once a client has started measuring its carbon footprint, it implements its strategy primarily by reducing its application estate and migrating its applications to the cloud.
Sogeti wants to offer a different approach to transformation, looking at the transaction level. The company will initially conduct its Green quality assessment, relying on its VOICE model, to understand the client’s sustainability objectives. Sogeti will then identify the most used ones in the production environment. It will then estimate how each transaction impacts the usage of hardware and networks (e.g., CPU, storage). Once done, the company will calculate the carbon footprint of each ERP transaction in production environments in the past 12 months. Once the applications have been transformed, Sogeti will reculcate the carbon emissions and measure its progress.
Where does QE fit within IT sustainability?
With its test case/test script approach, Sogeti highlights that QE already has the required experience and tools. The company will conduct the transaction, using functional test execution tools to measure the usage of hardware and networks. It will then capture each transaction’s hardware and network usage using APM tools.
Sogeti has worked with its development peers on the transformation side. The development teams will work on the code related to the ERP transaction, streamline the code, and remove dead code.
Sogeti looks to extend beyond this transformation phase and become a “sustainability quality gate”, mirroring the traditional role of testing in deciding if an application in development can be deployed in production environments. To do so, the company is currently working with a partner to build accelerators, e.g., a sustainable static code analysis to measure the “sustainability technical debt” of an application. The tool relies on checking if developers used sustainable development best practices.
This is just the beginning of Capgemini’s QE journey into sustainability. It sees increasing traction, thanks to regulatory pressure and consumer expectations, to reduce the carbon footprint of enterprises.
Capgemini’s roadmap for QE for Sustainability goes beyond ERP applications. The company wants to expand to other COTS and custom applications. With Capgemini’s CEO driving the company’s sustainability effort both internally and to external clients, expect to see more of these offerings in the next few months.
]]>
The world of quality assurance (QA) is continually evolving, alternating between cycles of centralization and decentralization. QA became part of testing CoEs in the 2000s, driving process standardization, test coverage and automation. More recently, in its latest organizational model, it has become part of the agile development structure and is spread across agile projects. Quality engineers work alongside developers in agile teams of three to seven specialists, focusing on test automation and targeting the holy grail of QA: in-sprint automation.
We recently spoke with Amdocs’ Quality Engineering (AQE) organization about how the unit is embracing this trend. While Amdocs is well known for its software products for communication service providers (CSPs), the company now primarily operates under an IT service model, with AQE enjoying rapid growth. For example, AQE recently won a significant standalone testing contract from a tier-one CSP. The scope is large and involves ~200 applications, including new builds and applications in maintenance. The company will scale up to several hundred quality engineers at peak time. AQE is approaching the project by implementing a new organizational model based on agile and continuous testing principles.
Amdocs adapts function points for agile QA
For this project, AQE reinvented the function point estimation model for QA that is common in software development. The unit uses certification points to estimate the time and effort required to complete a QA activity. Beyond functional testing, the pricing model also covers non-functional and other areas such as test environment provisioning.
The function point-like approach is not new (a few vendors already took that route back in the mid-2010s) and has both advantages and disadvantages. On the positive, it has helped CSPs and vendors move past a T&M model to mitigate risks in fixed-price projects. Yet function points had drawbacks, e.g., counting function points took time and were manual, with experts sometimes diverging on their function point estimates. AQE aims to resolve this challenge by automating most counting of new functional features using agile program increment (PI). Also, AQE provides its estimate two months before the PI gets to development, giving clients visibility of upcoming costs to refine the scope of PIs.
Redefining agile QA teams
In the organization space, AQE is promoting a different approach, incorporating both centralized and decentralized aspects. The idea is that rather than embedding QA into an agile development team, AQE relies on a separate team of functional and technical experts, independent from the agile development unit.
For example, for the abovementioned project, AQE created standalone atomic QA teams to provide a broad spectrum of quality engineering activities, from functional to non-functional and quality engineering. In addition, AQE employed its automation IP and accelerator portfolio to increase the level of test automation.
By covering processes and analysis, AQE’s organizational approach goes beyond just setting up standalone QA expert teams. The organization highlighted that, as part of this project, it discovered that the client had focused most of its QA activities on integration testing.
AQE took a broader pespective on the project, helping the client shift from integration testing to E2E testing. In addition, AQE introduced unit testing among developers, thereby detecting defects earlier in the lifecycle.
AQE’s targets for the client include improving velocity by 80%, achieving cost savings of up to 50%, moving from quarterly to monthly releases, increasing resiliency, and improving customer satisfaction rankings. They demonstrate that QA is having an increasing and quantifiable impact on business outcomes.
]]>
We recently talked to Erik van Veenendaal, the head of the TMMi Foundation, which promotes the TMMi QA process improvement methodologies.
TMMi Has Become A Widely-Accepted Methodology for Test Process Improvement
Founded in 2005, the TMMi Foundation is a not-for-profit organization focused on improving corporate test processes. It launched its TMMi methodologies at a time when organizations were beginning to formally structure their QA units and introduce best practices to increase productivity and quality in testing.
The Foundation decided not to address each tester's training and certification needs; it has an alliance for this with ISTQB, which remains the worldwide reference for QA training.
TMMi quickly became one of the two best-known testing process improvement methodologies. Its sphere of influence has gone beyond the number of certified organizations (250 globally). Many organizations have downloaded TMMi methodologies or purchased the books without formal certification. TMMi thus has gained an influence over QA that exceeds its client base.
Despite its share of mind success, the TMMi Foundation has faced challenges. One is the fast adoption of Agile methods; another is internal to the TMMi Foundation managing growth.
TMMi Foundations Updates its Methodologies & Books
The TMMi Foundation updated its methodologies and books to Agile. With the adoption of Agile, many organizations moved away from a process approach to transforming their QA. The TMMi Foundation continues to educate clients about the benefits of bringing a structured QA approach to agile development. Also, it launched an Agile version of its process methodology in 2015.
The Foundation is now developing a unified Agile and waterfall method. And the new methodology planned for 2024 will go beyond merging Agile and waterfall, with TMMi looking at including best practices and roadmaps around automation and AI.
Measuring TMMi's Benefits
Beyond refreshing its books and methodologies, the TMMi Foundation started to measure the impact of deploying its methodologies among certified organizations. The Foundation worked with the Universities of Belfast and Innsbruck, sending its questionnaire to organizations in its client database. The response rate of 64% provided a good level of accuracy.
The survey's findings show the effectiveness of TMMi. Approximately 90% of respondents expressed their satisfaction. Nearly three quarters (73%) of respondents reported that TMMi drove software quality improvement. However, TMMi does not lead to QA project reduction.
The survey also sheds some light on the TMMi corporate user population. Financial services, the largest spender on QA globally, is also the primary user (37%). Second is QA services/IT services vendors (30%). The remaining 33% span industries. Beyond improving the test process, QA organizations also use TMMi to demonstrate their capabilities, internally or to third parties, for regulatory compliance. Client organizations use, therefore, certification to showcase their QA transformation too.
Defining Clear Roles While Pushing its Service Ecosystem
In its expansion effort, the TMMi Foundation has also redefined roles and relationships with the TMMi ecosystem of partners. The Foundation plays a central role in methodologies and syllabi (for training). It is also the certification entity for client organizations undergoing TMMi assessment (through a sample approach). The TMMi Foundation also provides accreditations ifor training and assessment service providers and certifies individuals, e.g., as TMMi (Lead) Assessor.
The Foundation believes that partners will play a crucial role in such expansion, starting with local partners, i.e., the 'chapters'. These chapters drive TMMi's localization and marketing. They address testers at the individual level, and ensure training such as TMMi Professional training and certification (for testers who want to learn about TMMi methodologies) is available locally. The chapters also make sure testers at the corporate level who conduct consulting (Test Process Improver) or assess QA organizations (Accredited Assessor, Accredited Lead Assessor) are trained. They also advise QA consultancies that want to become training or assessment partners.
Currently, the Foundation has 23 chapters operating in 51 countries. Its local partners have a widespread presence across the continents. The TMMi Foundation realizes it still needs to strengthen its geographical footprint. It will announce a partner in Germany soon.
To sustain its local partner expansion, the Foundation shares half of its certification and accreditation fees back to the local chapters, intending to grow their marketing initiatives. Beyond recruiting new chapters, the TMMi Foundation wants to increase its activity level in each geography where it is present. The growth potential is significant.
The Need for Structured QA Remains
We find that TMMi's renewed expansion and international effort come at an opportune time. Agile is driving functional testing and beyond as part of continuous testing. Organizations are only transitioning and require help and consulting services for this journey.
AI is the next paradigm shift. AI-based analytics provide the Foundation with better-informed QA decisions and more focused testing. AI-based automation will drive the self-generation of test scripts. With technology evolving so far, QA organizations will need to resume a disciplined approach to QA while coping with Agile's decentralized QA needs.
]]>
The software testing/QA industry relies on three steps for functional testing:
Broadly, these three steps have remained the foundation of how functional testing operates. This is true even for agile projects, for which organizations are accelerating their automation efforts. However, these three steps have their limitations, mainly in terms of time and effort to create and maintain test scripts.
BDD, MBT, and record-and-playback automate test case & script creation
Of the various approaches to challenge this three-tiered foundation, Behavior-Driven Development (BDD) has been the most widely adopted. BDD relies on creating a standardized English test case language, Gherkin. Because it is standardized, BDD helps to automatically create scripts immensely. Yet, the adoption of BDD to date has not been as spectacular as expected.
Model-Based Testing (MBT) had a promising value proposition. It aimed to create business process flows/diagrams representing a business transaction on an application. Once defined, the business process flows are standard and can automatically be transcribed into test cases or scripts. However, MBT’s adoption was limited, possibly because MBT relies on adding another level of test artifacts, which in turn need to be maintained. However, MBT has had some success for applications such as COTS, with standard business processes. Industries such as communication services providers and financial services have also found MBT helpful.
And then, there is AI. AI has helped modernize record-and-playback tools. These tools mark down all the steps performed by a user when completing a transaction on an application. They then repeat the transaction and execute it in a functional execution tool. However, records are rigid and will fail when developers make minor changes such as a field name or location adjustment. AI helps deal with such minor changes and has improved the effectiveness of record-and-playback tools. The adoption of such tools is not widespread, but their value proposition is enticing.
With OAE, TCS brings it all together
We recently talked with TCS about its new automation initiative, TCS CX Assurance – One Automation Ecosystem (OAE). With OAE, TCS has aggregated its next-gen record-and-playback (TCS’ ScriptBot), MBT, and testing framework capabilities into one central tool. OAE brings together several approaches for automating the creation of test artifacts.
The beauty of OAE is that three tools are integrated and interoperable: a change in one immediately impacts the other two. OAE engineers can change views between the three tools and verify/edit conditions or edit the business process flow/test case/test script. For instance, a test engineer may modify recently recorded test cases and add new conditions in the test framework view. The tool interoperability also means that different personas can use OAE: test engineers, of course, and business analysis for creating business processes and power-users for recording transactions. This is a step toward test script creation democratization, one of the QA industry’s priorities to decrease costs and spread tool usage.
There is another benefit: with the three tools, OAE focuses on test artifact creation before the test script level, at the business process, or test case level. TCS can then use these artifacts to create the test scripts in its technology of choice, e.g., Selenium for web applications, Appium for mobile apps, a TCS IP for APIs, and Micro Focus for ERPs, mainframe and client-server applications. The approach minimizes the level of test script maintenance and pushes it back earlier in the automation process. TCS highlights that the conversion of test cases into scripts is instantaneous: it has not witnessed any performance issues in the conversion.
OAE also helps test transactions involving several applications running on different technologies. Typically, a transaction may start on a mobile or a web application/website and include testing APIs (for shipment) and even mainframe (payments). In short, OAE makes end-to-end testing much more accessible.
OAE requires the same discipline as for any testing framework. For instance, users still need to componentize test artifacts. An example is application login, which OAE users must set up as a test component shared across all tests. Also, to help with the discipline, OAE uses NLP: users creating a test artifact will be notified by the system when their artifact in creation already exists.
OAE integrates with other TCS IPs and benefits from some of these. One example is UX testing, where TCS can include accessibility and compatibility testing scripts in its functional ones. Another UX testing example is usability testing, which is the pixel-by-pixel and AI-based comparison of web pages to identify browser rendering differences.
Looking ahead, TCS has several development priorities for OAE, including accessibility on mobile app, integrating with functional automation tools such as Test Café/Cypress.io as an alternative to Selenium. TCS will also use its IP, SmartQE AI Studio, to collect application data during the SDLC and assess its quality. AI remains a priority.
OAE is a new IP, and TCS recently started promoting it among clients. NelsonHall welcomes TCS CX Assurance – TCS’ One Automation Ecosystem initiative for automating the creation of test cases and scripts. This is the future of functional testing, and it is AI-based. TCS is at the vanguard here.
]]>
DevSecOps Emerging
Application security testing has been part of functional testing for many years without being a significant investment topic. Organizations have typically favored functional testing automation while moving to agile/continuous testing; they have considered application security testing as an afterthought.
With the increased emphasis on cybersecurity, application security has become part of DevOps to create DevSecOps. DevSecOps promotes the democratization of application security testing. It also brings a shift-left focus, conducting application security at the development level rather than after functional testing.
Application security as part of DevOps and continuous testing requires automation. And this is where the challenge lies. Application security testing currently requires as much human expertise as software tool usage. Most testing services providers and their clients limit themselves to running scan tools such as source composition analysis (SCA) software and vulnerability detection software such as static and dynamic application security testing (SAST and DAST) tools.
However, running vulnerability detection software is not enough: these tools require going through the output and separating defects from false positives. Processing the tool output is time-consuming, tedious, and requires high application security expertise. Expect this analysis to slow down the continuous testing process.
Expleo Uses AI to Accelerate Vulnerability Analysis…
We recently talked to Expleo to understand how it is conducting and promoting application security testing within the context of continuous testing. The company is pushing application security test automation, and it has its own Xesa and Intelligent Vulnerability Assessment and Penetration Testing (iVAPT) IPs supporting this effort.
With Xesa, Expleo has pre-integrated several tools for integrating SAST and DAST (Portswigger BurpSuite) as part of continuous testing. Xesa also includes open-source ZAP Proxy for tool orchestration, and Defect Dojo (security defect management).
However, Expleo’s value-add relies on its automated defect analysis. iVAPT uses AI models to categorize defects by nature and severity, helping security experts shorten their analysis time. It uses ANN to process vulnerabilities based on past defect history. Manual testers will then verify the false positives allocation. This is the first test in the application security automation journey.
…And its On-Demand Digital Model for Shortening Provisioning and Delivery
Expleo has deployed its on-demand digital model and offering for application security to complement its automated vulnerability capabilities, still aiming to shorten time-to-market. The company relies on a shared delivery model and its X-Platform.
The company promotes a shared delivery center model for quickly ramping up its application security experts. Experts provide security across the application lifecycle, from the requirement level (e.g., security requirement reviews), to the design phase (threat modeling and design review), development and testing (SCA), and production (DAST and pen-testing).
The company highlights that it can mobilize experts through its shared service centers within 48 hours. Expleo has ~200 application security testers globally across multiple locations: in India, France, Ireland, the U.K., Germany, and soon Egypt and Romania. Expleo relies on its preferred tools, mostly open-source software, to provide the service and shorten delivery time.
Expleo recently launched its X-Platform. On the X-Platform, clients define their requirements, order their services, and follow the project’s progress and KPIs. X -Platform goes beyond service selection and includes project technology support, monitoring and analytics/reporting.
AI Will Play a Significant Role in DevSecOps
This is not the first time we have seen QA offerings that combine shared delivery, reliance on a service catalog to promote standard services, and a portal. Despite their value proposition, such offerings have had niche success.
In our view, such offerings have the potential for short-term activities such as threat modeling, pen-testing, and design review that regularly require services for up to three weeks. In these instances, the business case for clients to have a dedicated team can be difficult.
We see Expleo addressing the need for speed in continuous testing/DevSecOps from several angles. This is excellent news. AI, in particular, has the potential to bring many use cases. We think false positive identification is the first step in an AI journey to create intelligence out of vulnerability scanning.
]]>
Since its creation, the software testing services industry has focused on regression testing, i.e., ensuring that previously developed software still runs after a new build. Regression testing is the core of all QA activities and has been widely adopted as part of multi-year managed services contracts. The financial services industry, with its extensive application estates updated by one or two releases per year, has been the largest spender.
For many years, the testing industry has left out progression testing, i.e., testing new applications and new features (rather than enhanced ones, as found in multi-year contracts). There is a reason for this, as new-build projects are short-term in nature: they do not easily accommodate the longer-term view and costs of building automation over time.
With the widespread adoption of agile, the situation has changed to some extent: agile projects, focusing on iteration and speed, have required functional automation to support accelerated development.
COTS functional automation suffers from lack of time
Then there is systems integration/COTS testing. COTS testing has largely remained outside of test automation. Organizations are challenged by a lack of time and budget to drive functional automation for their ERP/COTS projects.
Technology plays a more significant role: several specialized ISVs have emerged, including Worksoft, Panaya, and Provar (Salesforce). Their tools have focused on handling the technology specificities of these COTS.
In addition, testing services providers have complemented specialist tools with repositories of test cases aligned by a transaction. The more advanced services providers have also used model-based testing (MBT) for modeling transactions. However, the test case repository approach has its flaws, as clients will need to customize test cases. Acceptance of MBT has been somewhat limited, as it requires creating another layer of testing artifacts.
AI helps to redefine COTS functional testing
The real gamechanger for COTS testing has come from AI. For instance, Atos released its Syntbots xBRID to address Salesforce, SAP, and Pega projects.
xBRID is a next-gen record-and-playback tool. While testers will complete a transaction in Salesforce xBRID will capture the activities performed by the user. It will generate test scripts automatically and avoid the human-intensive scripting phase.
xBRID is also a testing framework whose execution relies on the Eclipse Integrated Development Environment (IDE). Testers will thus need to apply the usual discipline and componentize test cases, e.g., log in or application launch, to provide common sub-activities across transactions. xBRID works for web applications and with mobile apps (Appium). It also integrates with BDD test frameworks.
Perhaps more important, xBRID helps with test script maintenance. Atos will identify all the objects in a UI/screen and create libraries of such objects with their location and other characteristics. xBRID will scan each UI/screen during each sprint, identify changes, and update objects. As a result, test scripts will handle UI changes such as field name or position change and go through execution. This is an important step: test scripts can be fragile, and their maintenance has in the past been heavily human labor-intensive.
Handling complexities of SaaS applications and Salesforce
Atos continues to develop xBRID. Each COTS has its specificities; for instance, Salesforce has two UIs, Classic and Lightning, that complicate test automation. Also, Salesforce, with its three annual releases, brings continuous changes in HTML, CSS, and APIs.
A feature of SaaS applications and Salesforce is that they force clients into several upgrades per year. With on-premise applications, organizations can decide when to update and upgrade, but with Salesforce and SaaS applications, clients need to test whenever the ISV deploys a new release. Having these regular releases provides a case for investing in functional automation. Tools like xBRID will help.
Atos estimates that with xBRID, it can save up to 90% in testing time. What appears to be a massive reduction is because xBRID replaces mostly manual test activities. It is essential that the industry increasingly targets automated test script generation, something Atos calls “Automation of Automation”. Automated test script creation and maintenance is a paradigm shift for the industry.
]]>
We recently spoke with Qualitest, the world’s largest QA pure-play. The company is in investment mode to accelerate its growth, backed by its majority owner, PE firm Bridgepoint. The company has added Cognizant leaders to its executive team (CEO, CMO, and India MD positions) with the intention of reaching $1bn in revenues in the next five years.
In support of this drive to accelerate growth, Qualitest has moved from a decentralized, country-led business to an integrated organization, and has embarked on several initiatives focusing on process standardization and automation, sales, and HR.
With its sales function, Qualitest is introducing an account management approach, land and expand strategies, and team-based selling, involving delivery teams in its bids. The company has maintained its focus on multi-year managed testing deals and has expanded its GTM target, building on its strengths in Israel, U.S., and the U.K, to include South America, Continental Europe, the larger Middle East, and India. It now targets five broad sectors: technology, BFSI, healthcare & life science, telecom, retail, media, & entertainment. The introduction of a systematic vertical sales approach is a significant change from a country-led GTM approach.
With its HR function, Qualitest has taken a comprehensive look across the employee lifecycle (from recruitment to upskilling, internal mobility, and succession planning) and matching the needs of projects. The program is vast, with Qualitest focusing initially on analytics to measure its HR effectiveness and then deploying intelligent automation.
The transformation of Qualitest also includes its value proposition. It has reshaped its portfolio significantly. Functional automation with agile/continuous testing has been a priority, along with digital and application migration to the cloud. Data and analytics, enterprise COTS are also priorities, along with AI. Its December 2019 acquisition of Israeli start-up AlgoTrace helped kickstart its AI offerings, focusing initially on AI-based analytics. Since then, the company has expanded its AI analytics and automation portfolio, with chatbot and data models as the new frontier. Qualitest has also rolled out several internal AI use cases in its sales and business support organizations. Examples include next best offer/action, fraud detection, and task allocation. The portfolio transformation continues, with AI and continuous testing as priorities.
Bridgepoint taking a majority stake in October 2019 has helped Qualitest accelerate its inorganic growth. The company has acquired four firms in 2021 so far: QA InfoTech (QAIT) in India, Olenick in the U.S., Comply in Israel, and an unnamed QA specialist in Germany. The first three acquisitions reinforced Qualitest’s presence in its core markets. The German specialist brings a footprint in a new geography in Continental Europe.
QAIT doubled the presence of Qualitest in India to a NelsonHall-estimated 2k FTEs, representing ~45% of its headcount. In terms of delivery, Qualitest’s value proposition was much more onshore-centric than its competitors. QAIT significantly changes the delivery profile, increasing its scale in India and giving it more recruitment visibility in Bangalore. Qualitest now plans to expand to Chennai and Hyderabad.
QAIT also somewhat expands the capabilities of Qualitest outside of testing services. The company has been active in agile software development, notably for front-end applications. While Qualitest’s primary focus is on QA, the company has also expanded to RPA. With clients awarding bundled development and test deals, Qualitest will gain from these development skills.
Chicago-headquartered Olenick Associates brings in 250 experts and a U.S. mid-west presence that complements Qualitest’s existing east and west coast footprint. Olenick brings in a client base in the financial services, legal, and utility sectors, with specific expertise in the electricity industry. The company provides performance testing across front-office applications (web and mobile apps, IVR, and text messaging), an offering that has increased in popularity after the Derecho storm in 2020. Qualitest has also gained through Olenick's capabilities around project management, DevOps.
Comply is a smaller acquisition, with 83 personnel. The company operates in the regulatory compliance space for the pharmaceutical and medical device industries, which are enjoying more vigorous growth than many other sectors. Comply works beyond QA and has a specialized software product, Skyline, for process analysis. With Comply, Qualitest gains further specialized and vertical capabilities. It will need to continue to invest in developing the acquired business.
Finally, the German pure-play brings an onshore presence with a client base in telecom and insurance. The company is sizeable with 250 employees and opens up a new territory for Qualitest. This is the first step for Qualitest into Continental Europe. No doubt Qualitest will deploy its model in the country and leverage its expertise in telecom and insurance in its significant geographies. The journey continues.
]]>
Capgemini is aiming for low or no maintenance and support fees as part of its ADM offering. Depending on its level of responsibility for development activities, the company commits to reducing maintenance and support activities and is, therefore, making a bold statement unheard of in the IT industry.
To achieve such high aspirations, the company believes Sogeti's application testing – or quality engineering (QE) – plays a central role. Such a ‘quality gate’ is hardly new, but its role has evolved with the adoption of agile development methodologies. Agile, with its frequent releases to production, has accelerated the demand for functional test automation. Clients are spending more on automation and targeting in-sprint automation, where the features of a new release are already functionally automated, limiting the level of manual testing activity.
Capgemini highlights it has significantly invested around continuous testing and AI.
With continuous testing, Capgemini has integrated the DevOps tools with test execution software. The company continues to expand the scope of such continuous testing platforms to include support activities such as test data management with synthetic data, test environment provision, and service virtualization. Capgemini has gone beyond functional testing to non-functional (with application security playing an increased role) and static code analysis tools. The expansion continues.
Capgemini’s Sogeti also brings its investments in AI. Currently, most AI use cases focus on ‘better’ testing, with test defect prediction and test case optimization as quick wins. NelsonHall sees an increase in the number of AI use cases quarter after quarter, e.g., matching agile user stories with test cases. We think the creativity of firms like Capgemini to identify better ways of testing is limitless, provided clients have enough quality data internally.
AI use cases in testing expanded a couple of years ago from better testing to test automation. Recent use cases enable users to automate the test script creation phase and sometimes the test case stage. A particularly promising technology is next-gen record-and-playback AI-based tools. Capgemini’s tool will record the transaction and translate it into a test script. It will also scan the application under test, identify changes in objects, and update scripts accordingly. This is the beginning of automated test script maintenance, the QE industry’s most significant challenge.
Unsurprisingly, Capgemini’s QE automation approach has several requirements. For example, the company targets multi-year mid-sized to large deals, whose size will help Capgemini recoup its test automation investments. The company also looks for build-test-run contracts to control QE and build activities, whether application development or systems integration, e.g., SAP programs.
Capgemini aims to bring digital and cloud capabilities to its application development activities. The company targets application resilience, scalability, and security, with application migration to the cloud as a central element. Again, QE plays a crucial role in testing these attributes.
NelsonHall believes that Capgemini has made a bold move with its low/no maintenance fees value proposition. This offering comes at the right time. With the pandemic, clients have reignited large application services deals, with offshoring and automation as fundamental principles. Clients have cost savings on the agenda if only to leave more budget for digital projects.
]]>
We recently talked to Infosys Validation Solutions (IVS), Infosys’ quality assurance unit, and discussed its continued investment in AI to automate testing and validate chatbots and AI models.
AI-Based Analytics from Test Case and Defect Data
The primary AI use cases in QA are around analytics: QA and development produce a vast amount of data that can be used for guiding QA activities. For instance, enterprises have vast amounts of test cases that often overlap or are duplicates. AI, through NLP, will go through test cases, identify keywords and highlight those test cases that are highly similar and probably redundant. This activity is called test optimization and can help remove between 3 and 5% of test cases. This may not seem a lot, but large enterprises have very significant repositories of test cases (Infosys has several clients with a hundred thousand test cases). Also, test cases are the basis for test scripts, which test execution software uses. More importantly, these test cases and test scripts need to be maintained, often manually. Reducing the number of test cases, therefore, has significant cost implications.
The analysis of test cases and defects brings many other use cases. The analysis of past test defects and their correlation with code changes is also helpful. Infosys can predict where to test based on the code changes in a new software release.
AI Brings Quick Wins that Are Precious for Agile Projects
There are many other data analysis quick-win opportunities. Infosys continues to invest in better testing. An example of a recent IP and service is test coverage. For websites and web applications, Infosys relies on URLs to identify transaction paths that need to be tested and compares them with the test cases. Another example is Infosys, for a U.S. bank, going through execution anomalies from the test execution tool and putting them into categories, providing an early step in root cause analysis. A rising use case is detecting test cases based on the comparison of user stories within agile and existing test cases.
We think the potential for AI-based analytics and resulting automation is without limits. NelsonHall expects a surge in such AI-based analytics and NLP, which will bring an incremental automation step.
Starting to Automate Human Tasks Outside of Test Execution
RPA also has a role to play in QA incremental automation steps. Outside of test script execution, functional testing still involves manual tasks. Infosys has developed a repository of ~500 testing-specific RPA bots to automate them; an example is a bot for setting up alerts on test execution monitoring dashboards, and another is loading test cases to test management tools such as JIRA.
With the predominance of agile projects, RPA can also be precious for highly repeatable tasks. However, RPA raises another issue: the maintainability of RPA scripts and how frequently they need to be updated. We expect Infosys to share its experience in this important matter.
Automation Step Changes Now in Sight
AI is also expanding its use cases from incremental automation to significant step changes. An example is Infosys using object recognition to detect changes in a release code and automatically update the relevant test scripts. In other words, Infosys will identify if an application release has a screen change such as a field or button changing place and will update the script accordingly.
There is more to come, we think, with web crawlers and next-gen record and playback testing tools. So far, client adoption is only just emerging, but this space is inspiring. Potentially, QA vendors could remove the scripting phase through automated creation or update of test scripts.
Chatbots Are Increasingly Complicated to Test
QA departments are moving out of their comfort zone with AI systems to test chatbots and AI models with AI systems.
In principle, chatbots are deterministic systems and rely on the pass-or-fail approach that QA tools use. Ask a chatbot simple questions such as the time or opening hours of a store. The response is straightforward and is either right or wrong.
However, the complexity of chatbots has increased. Voice plays a role and drives a lot of utterance training and test activity to solve language, accents, and domain-specific jargon challenges. Also, chatbots are increasingly integrated with hyper-scalers and rely on APIs for integration with back-end systems. Also, Infosys points to the increasing integration of chatbot functionality within AR/VR. This integration is bringing another layer of QA complexity and performance discussions. Infosys is taking a systematic approach to chatbot testing and has built several accelerators around voice utterances.
Testing of AI Models Is the Next Step Change Through Synthetic Data
With AI models, QA is moving to another world of complexity. AI models can be non-deterministic, i.e., not knowing the answer to a specific query; for example, identifying fake insurance claims for an insurance firm.
The traditional approach of QA, i.e., check the answer is correct or not, needs reinvention. Infosys is approaching the AI-model QA from several angles. For training and testing purposes, data plays an essential role in the accuracy of data science models. Infosys is creating synthetic data for training models, taking patterns from production data. With this approach, it is solving the challenge of the lack of sufficient data for training the AI model.
Another approach that Infosys is taking is a statistical method. It provides a series of statistical measures to data scientists, who can then decide on the accuracy of the data model.
AI model testing is still a work-in-progress. For instance, training data bias remains a challenge. Also, with QA meeting AI and data science, test engineers are clearly out of their expertise zone, and Infosys heavily invests in its UI and training to make its tools more accessible. The company points to further IP, such as using computer vision to check the quality of scanned documents.
There is much more to come: the potential benefits of AI are limitless.
]]>
We recently talked to passbrains, the crowdtesting pure-play with dual headquarters in Hamburg and Zurich that has just been acquired by German IT service and VAR vendor, msg. The acquisition is timely for passbrains. Its founder died in the summer of 2020 at a critical time, as the firm was developing the 2.0 release of its crowdtesting platform and diversifying its client base.
msg will help passbrains accelerate the development of its 2.0 platform
The name of msg may not be familiar outside Germany, although the company is sizeable: it generated revenues of €1.03bn in 2019 and has around 8,500 employees. msg is headquartered in Munich and has a dual activity as VAR and IT service vendor.
Its profile is unusual in that it operates as a federation of IT companies in 28 countries, providing its members with operational flexibility while providing them with shared services. For instance, msg’s internal development teams based in Germany will take over passbrains’ platform 2.0 development activities.
The ongoing development of platforms is critical for the success of crowdtesting vendors: the crowdtesting industry heavily relies on its proprietary platforms for managing projects, selecting and inviting crowdtesters, and identifying log and QC-ing defects, especially for their agile testing needs. Crowdtesting platforms have become a significant barrier to entry; we think this partly explains why more IT service vendors have not entered crowdtesting and crowdsourcing, except through acquisitions.
psassbrains will also benefit from msg’s ideation software product, which it will incorporate in its crowdtesting platform. This is an important module that msg uses to generate and rate ideas, given passbrains’ positioning on UX testing.
msg brings test automation expertise
The msg acquisition also solves one strategic challenge for passbrains: many competitors have aligned their crowdtesting capabilities around agile projects, putting less emphasis on exploratory testing and UX testing. They intend to transition clients from manual activities to test automation. This makes sense in the context of agile projects to bring functional automation. However, the challenge for crowdtesting pure-plays is to invest in test automation and compete with IT service vendors that have developed very significant test automation capabilities, IP, and accelerators.
Currently, passbrains has maintained a balanced portfolio, unlike competitors, with UX testing still representing ~60% of its revenues. msg’s expertise will help passbrains accelerate in agile testing. msg has a testing unit with ~300 FTEs located in Germany. It brings some test automation scale across functional (with a specialization around Tricentis’ Tosca) and non-functional.
msg brings cross-selling opportunities to passbrains
In the short-term, a priority for passbrains is joint GTM with msg, addressing testing opportunities with a full range of testing services. msg brings in a client base in SAP in the insurance, public sector, automotive, and healthcare sectors in the German market and will help passbrains expand from its telecom client base. passbrains will spend the next six months educating msg’s sales force about its crowdtesting capabilities.
passbrains’ mid-term priority is to deploy further AI use cases. The company is implementing msg.ProfileMap to match projects and skills requirements and identify the right crowdtesters.
In the longer-term, we think msg and passbrains can further expand into AI-powered testing. AI is dramatically transforming how functional automation operates and is a real paradigm shift. We believe there is a window of opportunity for passbrains/msg there.
]]>
We recently talked with CSS Corp about its investments in QA, in particular its new Digital Assurance offering. Digital Assurance targets UX QA, an area that has not had the attention it deserves.
Clients have prioritized investment in continuous testing & automation
While many large organizations have devoted considerable time and effort to continuous testing and functional automation, they have invested only selectively on UX testing. Indeed, demand for UX testing has been limited. Regulation has driven some UX activity, particularly around accessibility testing. The heterogeneity of devices and screen sizes also means organizations spend on compatibility testing. Also, performance testing has become somewhat more UX-centric by tracking performance using end-user metrics.
To a large extent, this is it. QA and UX are two different worlds and do not feed on each other. UX would gain from the automation expertise of QA, while QA would benefit from opening up to new challenges. The world of UX is vast and growing, especially in the space of usability research and testing. But so far, it is not really automated.
CSS Corp has created a UX testing platform with Digital Assurance
So NelsonHall welcomes the investment by CSS Corp to expand the boundaries of UX testing, relying on automation. With Digital Assurance, CSS Corp has aggregated opensource software tools with proprietary accelerators. The scope of Digital Assurance is considerable, ranging from performance testing to usability testing. Within usability testing, CSS Corp has focused on several dimensions, including “appeal,” “navigation,” “search,” and “content & information.”
An example of an engagement where CSS Corp developed an “appeal” feature is for a tier-one cosmetics company, verifying that the color displayed on a screen for a lip pencil was consistent with the company’s color palette. The challenge was one of scale, the client having 3k URLs and color palettes to validate. CSS Corp used computer vision technology to compare images, thereby removing manual comparison for the client’s 78 brands.
Another example is around “search”. CSS Corp has integrated SEO as part of the metrics it tracks. For instance, it will analyze the structure of a website and identify the number of steps end-users need to go through to complete a transaction.
A third example of a feature available in Digital Assurance is in “content & information”. CSS Corp is automating localization testing with spell checks and bundling it with grammar validation and readability analysis.
Alongside these features, CSS Corp has added more common functionality such as performance testing (with end-user KPIs) and sentiment analysis (the classification into different sentiments of opinions gathered on app stores and social networks such as Twitter), along with accessibility testing.
CSS Corp pushes the boundaries of UX testing
We think Digital Assurance has two main benefits. Firstly, CSS Corp is systematically extending the boundaries of UX testing automation. UX research and testing remain labor-intensive activities, and we think the potential for further UX automation is immense (see below). Secondly, CSS Corp is steadily aggregating tools around Digital Assurance and provides an increasingly comprehensive UX testing service.
Potential for UX automation
The potential for automation in UX testing is immense. While QA has focused on test automation, UX research also has the potential for automation.
An example is around videos: during the research phase, digital agencies record videos of end-user interviews. Going through these videos is time-consuming. AI bears the promise of focusing on parts of the video through sentiment detection, for instance.
Once a website or a mobile app is in production, clients are increasingly correlating its technical performance with its business performance (e.g. through integration with such tools as Google Analytics). Currently, understanding the correlation of multiple events takes time: using intelligent automation will help.
]]>
QA pure-play Qualitest recently briefed NelsonHall on a new AI-based offering the company launched in 2019.
The testing services industry began to deploy AI-based offerings about three years ago, starting with AI-based analytics use cases, looking at where and what to test better. Since then, the focus has shifted to AI-based automation, looking initially at the automated creation and maintenance of test scripts.
New to the table is the use of AI in consulting offerings. Qualitest is one of the early vendors to do this with its Knowledge and Innovation Assessment (KIA) offering. The company has bold ambitions and wants to create the new TMAP or TMMi (test process assessment and improvement methodologies), ones that will be based on data and automated, rather than purely relying on expertise.
Assessing Automation Efforts
The primary emphasis of KIA is bringing an automated approach to quality assessments. It relies on AI-based analytics use cases that exist in the market already, with Qualitest adding further use cases.
KIA’s approach relies on the collection and analysis of data from different sources; for example, project and defect management tools (e.g. Micro Focus ALM, and JIRA), ITSM tools (e.g. ServiceNow), and agile development tools (GitHub).
Qualitest highlights that this step requires mature clients, ones that have implemented tools and use them, and that can provide excellent data quality in abundance.
Once Qualitest has the required data, the next step is analytics, taking a two-step approach. It will identify KPIs such as:
The KPIs help to provide a high-level view of the client’s automation effort. Qualitest will then build a roadmap of where the client should deploy test automation, around several items; for example, governance and reporting, automation, NFT, knowledge management, defect management, and test environment and data. It will take a quick-win approach to its roadmap, and bring incremental improvements, focusing on 30-, 60-, and 90-day milestones.
Along with this automation assessment, Qualitest will proceed to automated root cause analysis of defects, logs, and incidents using ML technology. The company will categorize data in categories such as code, configuration, data, and security. The approach helps to identify where the priorities are. And Qualitest complements the root cause analysis by benchmarking it.
AI is Reshaping the QA Industry
The QA industry has become one where scale dominates, with large vendors that have significant R&D budgets being able to invest in new IP and accelerators. The deployment of AI use cases in QA is rapidly expanding and is reshaping the industry. We have seen smaller QA vendors taking a focused approach on AI, competing, sometimes effectively, with vendors with deeper pockets. Qualitest is one of these vendors reshaping the industry. Last year, the company acquired Algotrace, an AI technology firm, and it is rapidly changing its portfolio around AI, also continuous testing and specialized services. This is excellent news: the QA industry continues to reinvent itself at high speed.
]]>
We recently talked to crowdtesting specialist firm Testbirds about how the firm has adapted to COVID-19 and the macroeconomic conditions.
In the past two years, the crowdtesting industry has realigned its capabilities around agile testing, primarily providing services for mobile apps and websites along with chatbots and connected devices/wearables. The realignment has been successful. Nevertheless, the crowdtesting industry has remained a niche industry.
Impact of COVID-19 on buyers’ use of testing specialists
Significant industries for crowdtesting such as retail, travel & hospitality, and automotive that are being particularly badly impacted by the COVID-19 pandemic are also among the largest spenders on crowdtesting and would be expected to have cut their spending abruptly. This is true to some extent, with several clients putting their projects on hold. However, other industries such as energy & utiltiies, telecoms, media & broadcasting, and financial services have maintained their spending.
Testbirds has witnessed increased activity from both existing and new clients that were not prepared to conduct testing from home and are using Testbirds to complement their QA operations.
One example where a client has increased its use of Testbirds is a large automotive OEM, which put its plant workers on furlough but continued its investment in its digital projects. Testbirds has conducted UI testing for several mobile apps, including ones for its motorcycle and passenger car businesses (one acting as a remote control for vehicle connectivity and entertainment).
In parallel, Testbirds hired a channel director for its activities 12 months ago and initiated a partnership program at the beginning of 2020. The company highlights it has signed eight formal partnerships, including five tier-one QA vendors, and started joint projects. In contrast, 12 months ago, Testbirds had only tactical agreements for one-off projects.
Testbirds is accommodating its services for the indirect channel, letting the QA vendor manage the project and access all of the Testbirds IP, community and project manager. Testbirds continues, however, to supervise its community and its automation investment, even if it is white labeled.
Will COVID-19 reduce client concern around trust & security?
In the past, clients have expressed two primary concerns around crowdtesting: trust (will crowdtesters tell my competitors about my new app?) and security (will they hack my system?).
Testbirds believes that clients that have experienced work-from-home and enjoyed benefits such as higher productivity and work quality will now increase the use of crowdtesting. It emphasizes all of its testers are vetted and go through a thourough onboarding and qualification process, and its systematic NDA policy. Also, Testbirds is mitigating the IT security risk through secure connections and access to staging environments (i.e. a replica of testing or production environments).
Crowdtesting becomes more automated
With crowdtesting now emerging out of its niche, Testbirds is getting ready for changes. The company is increasing its investment in automation, focusing initially on crowdtesting member identification and notification (using AI and its companion mobile app) and automating communications and defect login.
Testbirds highlights that clients typically start their journey with exploratory testing and usability studies, combining functional and usability testing, and then expanding to its deeper usability research and functional testing portfolio. The company wants to further improve its UX capabilities (which already represent 50% of its business) and is soon to release an ML-based algorithm to go through to open question responses and categorize them.
Longer term, Testbirds wants to push waterfall testing to its client base, expanding from agile crowdtesting. This presents new opportunities for the company, given that most of the mobile apps/websites it tests are connected with back-end applications that rely on waterfall development. The company highlights it has already started this journey and developed relevant methodologies.
The sweet spot of crowdtesting is ‘out of the lab’
What is striking about the adoption of crowdtesting is that clients are using crowdtesting as an alternative to traditional functional QA manual activity. Clients turn to crowdtesting for their functional needs and then only gradually expand to usability testing. Testbirds highlights its clients use equally functional and UX testing and is now converting them to adopt further services.
We think the crowdtesting sweet spot also lies in providing QA ‘out of the lab’, whether at home with real network conditions, in the street, or the store. Accordingly, we expect clients will, over time, increase their focus on omnichannel testing and give more attention to usability testing. So far, demand has been relatively limited. COVID-19, by accelerating the transition to digital, is changing this. Now is the time for the crowdtesting industry.
]]>
Capgemini’s Sogeti recently introduced a new TMAP book, Quality for DevOps Teams., which is a direct successor to its TMAP NEXT book published initially in 2016. TMAP NEXT has remained one of the methodology bibles that guide QA practitioners in structuring their testing projects.
Sogeti has added regular updates around agile/scrum development, IoT, and digital testing that complemented TMAP NEXT. Now, with Quality for DevOps Teams, Sogeti highlights it has a complete revamp of TMAP in the context of agile and DevOps projects.
Moving to cross-functional teams
In writing the new book with agile and DevOps projects in mind, Sogeti has introduced a significant change in targeting the entire software development team and not just QA professionals. The company argues that in the context of DevOps, development teams need to go beyond having diverse skills (BAs, developers, testers, and operation specialists). The individual team members must be able to perform other team members’ tasks if required (which Sogeti calls cross-functional teams).
The impact of this approach is significant; it goes beyond tools and training and also includes change management. As part of this culture shift, team members have overall responsibility for their projects and are also required to learn new tools, and this might be outside of their comfort zone. To support this cultural change, program managers need to support team members and provide continuous testing tools and frameworks.
With this cross-functional team approach, Sogeti points to new practices in agile projects. Clients are currently implementing continuous testing strategies, and re-skilling their manual testers toward technical QA activities.
Despite its popularity, the adoption of SDET has remained a vision more than a reality: SDETs have remained focused on their activities to QA and are not able to swap jobs with other roles such as product owners, scrum masters, developers or business analysts.
Sogeti, therefore, points to an entirely new approach to agile and DevOps that will require further delivery transformation and investment among clients. The benefit of the Quality for DevOps Teams book, therefore, is in providing guidelines on how to structure delivery in the far future.
Aiming to make reporting easier
Another guiding principle of Quality for DevOps Teams is the VOICE model, which defines what the client wants to achieve with a project (value and objectives) and measures it through qualitative and quantitative indicators.
Sogeti’s approach goes beyond the traditional go/no-go to release an application to production based on UAT and improvement in KPIs, such as the number of defects found. VOICE also closes the loop with end-user feedback and experience and operations by incorporating their feedback.
Training at scale
Sogeti’s efforts around DevOps and continuous testing do not stop with the new book; it has relaunched its tmap.net website which it wants to turn into a testing community site providing resources and knowledge for agile and DevOps projects, along with more traditional approaches such as waterfall and hybrid agile.
Alongside this effort, Sogeti has refreshed its training capabilities and designed three new training and certification initiatives, working with specialized examination and certification provider iSQI. The company has created a one-day training class for testing professionals already familiar with TMAP, and three other three-day specialized courses.
Sogeti is rolling out the training, targeting teams beyond QA, including business analysts, developers, and operations specialists. Sogeti is also rolling out the program internally with the larger Capgemini group, targeting organizations involved in agile projects. The initiative is of scale since Capgemini has a NelsonHall estimated 150k personnel engaged in application services.
Sogeti’s new book provides a view of what agile will look like next
Sogeti’s Quality for DevOps Teams book provides a long-term view of what agile teams and continuous testing will be like in the future. Its benefit is that the book offers a structured approach. It is reassuring to see Sogeti deploying it through certifications so that it shows that the transformation to cross-functional teams can be a reality. NelsonHall is expecting the rollout to bring feedback and fine-tuning of Sogeti’s approach. We will continue to report on how Sogeti implements Quality for DevOps Teams.
]]>
We recently talked to Shishank Gupta, the practice head of Infosys Validation Services (IVS), about how the practice is adapting to the COVID-19 pandemic and ensuing economic crisis. Of course, the initial focus has been on employee health, helping clients, and enabling its employees to work from home, getting access to tools, applications, and connectivity.
The QA practice is gradually moving on from this phase: Mr. Gupta highlights that clients are starting now to reconsider the contracts they have in place, discussing the scope and prioritizing activities across run-the-business and change-the-business. Unsurprisingly, new deals are on hold as clients lack visibility of the short-term future.
COVID-19 Accelerates the Shift to Digital
Infosys Validation Services is also busy preparing for the post-COVID-19 world and what that will mean in terms of clients shifting their QA needs. On the delivery side, the practice is expecting that client acceptance for home working and distributed agile will increase. This will drive usage of cloud computing, collaboration tools, and virtual desktops, along with increased telecom connectivity.
The pandemic will accelerate the shift in IT budgets to digital, particularly in retail, government services, and healthcare, the latter with renewed spending in health systems, telemedicine, collecting health data, and clinical trials (resulting from increased drug discovery activity). Shishank Gupta also expects that demand for UX testing will grow alongside the growth in digital projects. He also anticipates a continued acceleration of digital learning adoption that will create further testing opportunities across applications and IT infrastructures.
Infosys is Changing its Go-To-Market Priorities
Infosys’ IVS practice is realigning its go-to-market priorities and emphasizing existing offerings that had previously generated only moderate client appetite but have potential for growth in a post COVID-19 world. One example of such an offering is in the area of data masking that IVS had created several years ago for financial services clients for anonymizing their production data for usage as test data. IVS expects new delivery models to drive demand around capabilities such as data security and privacy, risk, and compliance audits.
IVS also expects accelerated adoption of cloud computing both in terms of testing applications to the cloud and SaaS adoption.
Finally, Infosys IVS is increasing its go-to-market effort around crowdtesting. The practice highlights that security concerns were a barrier to crowdtesting’s commercial development. Mr. Gupta now expects clients will adopt crowdtesting as a service and require fewer background checks on the crowdtesters.
And, of course, Infosys knows the world post-COVID-19 will also require leaner operations and lower costs: IVS is expanding its commercial focus on testing open-source software, test process re-engineering combined with RPA. Mr. Gupta highlights an ongoing project with an APAC investment firm where it is deploying RPA tools to automate the monitoring of applications in production and feedback to QA and business users.
QA Becomes Less Internally Focused and More Digital
NelsonHall expects that the role of testing will become less focused on internal transformation (e.g. test process standardization and TCoE setup) and become more integrated within digital transformation programs, where testing is part of the required services.
Currently, clients are continuing to focus on the immediate imperative of business continuity. NelsonHall expects that, in a post-COVID-19 world, clients will make strategic decisions, including accelerating their cost savings programs, driving offshore adoption and distributed agile, and also renegotiating their existing multi-year managed testing services contracts. In parallel, they will redirect some of their savings to the digital-led QA activities that Shishank Gupta has described.
In this new world, enterprises will need a QA partner that offers both onshore advisory capabilities to shift their QA spending to change-the-business, and further offshoring and automation to reduce their run-the-business spending.
]]>
In a recent blog, we highlighted how Cognizant approaches the testing of connected devices. Testing connected devices brings new challenges to QA at two levels: conducting hardware testing and automation. Cognizant’s TEBOT IP is based on a combination of traditional test automation (mostly based on Selenium test scripts) and hardware powered by a Raspberry Pi, triggering physical actions/movements.
The PoS Ecosystem is Becoming More Diverse
Cognizant has identified a new use case for TEBOT, targeting point of sale terminals (PoS). The nature of PoS has changed over the years, with the rise of self-checkout terminals and a greater diversity of hardware, particularly in peripherals (e.g. barcode scanners) and a growing number of authentication methods (e.g. e-signature, PIN code) and payment methods (NFC, insert, or swipe).
The proliferation of hardware peripherals is challenging QA teams, with few vendors providing simulation software for their peripherals, and with a rising number of PoS/peripheral combinations.
These developments make PoS testing a good candidate for test automation.
In looking to automate human activities through TEBOT, Cognizant has focused on customer touch points; for example insert or swipe a card, enter PIN code or sign electronically. It has developed test scripts based on Selenium and conducted tests in its labs in Chennai.
Cognizant has conducted PoS testing for several clients, including:
The PoS Industry Continues its UX Transformation
Cognizant believes that the PoS industry will continue to invest in new equipment and peripherals: AR/VR and mobile PoS will become more prominent and drive further focus on UX. The number of installed PoS is expected to increase by 10% each year and this will require further investment in test automation.
Combining Robot-based Test Automation & Crowdtesting
We continue to explore how to best automate connected devices in various forms. The market is large and also quickly expanding from its IoT products niche to all devices and equipment that combine hardware and embedded software/firmware and are connected. In short, the testing market potential is huge and goes across industrial and consumer devices and equipment. Cognizant’s approach with TEBOT focuses on functional testing. Looking ahead, we think that Cognizant approach should be combined with crowdtesting and UX testing. The company has a crowdtesting value proposition with its fastest offering, which can also provide UX testing, complementing the functional test capabilities that TEBOT brings.
]]>
We recently talked to test IO, the crowdtesting vendor that was acquired by EPAM Systems last April. We wanted to understand the crowdtesting positioning of the company, learn more about the User Story crowdtesting offering launched in December 2019, and understand how test IO fits within the larger EPAM organization.
Founded in 2011, test IO today has 200 clients, many in the retail, media, and travel industries, with a sweet spot around customer-facing applications.
test IO has positioned its crowdtesting portfolio in the context of agile
test IO has focused on functional testing (along with usability testing), targeting agile projects in recent years. Crowdtesting in the context of agile development projects (i.e. continuous testing) remains a priority and test IO recently launched an offering named User Story testing.
The crowdtesting industry has to date relied on two main offerings: exploratory testing and test case-based testing. Under the exploratory testing model, a crowdtester gets few instructions on what to test and goes through the application-in-test by intuition. With a test case-based approach, the crowdtester relies on detailed instructions on how to complete a task or a business process.
With its User Story approach, test IO is promoting a method that lies somewhere between exploratory testing and test case-based testing. In agile development methodologies, user stories are the instructions given to software developers in the form of “as a [user/admin/content owner/], I want/need [goal] so that [reason]”. The challenge from a testing perspective is to test these user stories by setting up acceptance criteria that are specific enough to be tested but still meet the spirit of agile that relies on loose requirements and iteration.
Speed is of the essence, as with agile projects. test IO argues it can achieve User Story testing in approximately two hours, from the moment it sends a mobilization campaign to its members to the moment it receives defects back from crowdtesters. The company highlights that working with a population of crowdtesters with the right level of testing skills helps in achieving speed. test IO believes it has enough members to react quickly to any project invite while providing the right defect coverage.
Exploring portfolio synergies with EPAM
test IO has also expanded its delivery model from public crowds and private crowds (e.g. employees of the client) to the EPAM internal crowd. test IO can rely on EPAM’s 6k testing practice members to expand its reach and bring to the client career testers with vertical experience while reassuring a client that its application-in-test will be exposed to EPAM personnel only.
User Story testing and internal crowds are just the beginning: test IO and EPAM intend, over time, to expand their crowdtesting capabilities to specialized services: performance testing will be the first of these.
AI is also on the agenda. One of its first AI use cases has been around crowdtester selection, based on the technologies and experience needed by the client. A current priority is defect review. For each crowdtesting project, test IO will review the defects logged by crowdtesters and remove duplicates. The company wants to automate most of its test duplicate activity to free up time and focus on data analysis.
In a later phase, test IO wants to run through the defect data it has built during its nine years of existence and identify crowdtesting best practices and application defect patterns.
test IO hints that EPAM is exploring how to best use its experience in crowdsourcing and expand it to other software services activities. We will be monitoring with interest how EPAM develops the crowdsourcing model of test IO. Despite claims of attracting young talent through different employment models, most vendors still rely on permanent or freelancer positions. With test IO, EPAM may be able to invent a new employment model that will expand from testing to other application service activities.
]]>
In the world of testing services/quality assurance, data testing has in the past been somewhat overlooked, still largely relying on spreadsheets and manual tasks.
While much of the current attention has been on agile/continuous testing, data testing remains an important element of IT projects, and gained further interest a few years ago in the context of big data, with migration of data from databases to data lakes. This renewed interest continues with focus on the quality of data for ML projects.
We recently talked to TCS about its activities for automating data testing. The company recently launched its Big Data and Analytics Test Automation Platform Solution (BITS) IP, targeting use cases, including the validation of:
Testing Data at Scale through Automation
The principle of testing data is straightforward and involves comparing target data with source data. However, TCS highlights that big data projects bring new challenges to data testing, such as:
To cope with these three challenges, TCS has developed (in BITS) automation related to:
An example client is a large mining firm, which is using BITS for validating the quality of its analytics reports and dashboards. The client is using these reports and dashboard to monitor its business and requires reliable data that is refreshed daily. TCS highlights that BITS can achieve up to 100% coverage and improve tester productivity by 30% to 60%.
Overall, TCS sees good traction for BITS in BFSI globally, as the banking industry moves from EDWs and proprietary databases to data lakes. Other promising industries include retail, healthcare, resources and communications.
TCS believes BITS has great potential and wants to create additional plug-ins that can connect with more data sources, taking a project-led approach.
Validating the Data Used by ML
Along with data validation, TCS has positioned BITS in the context of ML through testing of ML-based algorithms.
The company started on this journey of ML-based algorithms, initially focusing on linear regression. Linear regression is one of the most common statistical techniques, often used to predict output (“dependent variables”) out of existing data (“independent variables”). Currently, TCS is in its early steps, and focuses on assessing the data that was used for creating the algorithm, identifying invalid data such as blanks, duplicate data or non-compliant data. BITS will automatically remove invalid data and run the analytical model and assess how the clean data affects the accuracy of the algorithm.
Alongside analytical model validation, TCS also works on linear regression-based model simulation, looking at how to best use training and testing data. One of the challenges of ML lies in the relative scarcity of data, and how to make best use of data across training the algorithm (i.e. improve its accuracy) and testing it (once the algorithm has been finalized). Overall, the more data is used on training the algorithm, the better its accuracy. However, testing an algorithm requires fresh data that has not been used for training purposes.
While the industry uses a training to testing ratio of 80:20, TCS helps in fine-tuning the right mix by simulating ten possibilities and selecting the mix that optimizes the algorithm.
TCS sells its data and algorithm testing services, using BITS, though pricing models including T&M and fixed price, and subscription.
Roadmap: Expanding from Linear Regression to Other Statistical Models
TCS will continue to invest in the ML validation capabilities of BITS and intends to expand to other statistical models such as decision trees and clustering models. The accelerating adoption of ML and also of other digital technologies is a strategic opportunity for TCS’ services across its data testing and analytical model portfolio.
]]>
We recently caught up with Tech Mahindra’s QA practice, Digital Assurance Services, to assess recent progress with their IP strategy.
Digital Assurance Services’ test automation strategy is based on IP and accelerators in its LitmusT platform. The company has been aggregating and integrating automation artifacts within LitmusT and intends to automate the full testing lifecycle, currently, across test execution (MAGiX), code quality, test support services with test environment, and test data management, analytics and AI, model-based testing, and non-functional testing.
Unlike some of its peers, Tech Mahindra is not looking to monetize LitmusT but is using the tools and accelerators it contains within its managed services engagements. The company is also relying mainly on open source software, using COTS only when no open source alternative is available. Just three of LitmusT’s modules rely on COTS; all the others use open-source software.
Tech Mahindra Continues to Invest in Functional Test Automation
MAGiX, which focuses on test execution, is a core module in LitmusT. Launched earlier this year as its next-gen test execution platform, Tech Mahindra was initially targeting SAP applications (ECC, S/4 HANA, and Fiori), rapidly expanding it to web-based applications, Java and .NET applications.
MAGiX aims to combine the ease of record-and-play first-generation test execution software tools with keyword-driven tools. In this approach, test scripts are created automatically during the process and software objects identified through its Object Spy feature. As a result, when the next release of the application comes, the test scripts are likely to still work. Test script maintenance activity is reduced. MAGiX also handles test data and integrates with test execution software such as Micro Focus UFT and Selenium, and with DevOps tools such as Jenkins.
Digital Assurance Services continues to invest in MAGiX, expanding it to API testing and database testing, and recently launching its Visual Testing offering.
Integrating Functional and UX Testing
Visual Testing expands the functional test execution approach of MAGiX to UX testing, focusing on automating image layout comparison, colors, and fonts across browsers and screen sizes. A major use case is apps for mobile devices where small screen sizes impact the layout. Potential buyers of Visual Testing go beyond the B2C industry and Tech Mahindra highlights that companies in regulated industries such as pharmaceuticals are interested in the offering for their data entry needs.
Visual Testing’s approach relies on having a screen baseline to compare with the screens of future releases. It highlights areas on the screen that have deviated from the initial screen baseline and identifies where the change came from in an icon (e.g. a change in the code). Tech Mahindra will go through all changes and decide if the bugs that Visual Testing identified are acceptable.
Visual Testing relies on scripts that are embedded in the main functional test scripts. The development of Visual Testing test scripts takes one to two weeks. Meanwhile, Tech Mahindra requires screenshots of all the different screen sizes.
Visual Testing uses several technologies, including for pixel-by-pixel comparison, open source software Sikuli and a COTS Applitools. Tech Mahindra has an agreement with Applitools and highlights that Visual Testing can be activated on MAGiX by pressing one button.
Adoption of UX Testing Will Accelerate
Quality Assurance continues to be an area of ongoing innovation in IT services. Tech Mahindra’s approach is attractive in that it converges functional and UX testing and allows simultaneous execution.
Despite all the discussions about UX in the last few years, we have not seen a broad adoption of UX testing, except in regulatory-led accessibility testing. By integrating Visual Testing in the DevOps testing tools, Tech Mahindra is making usability testing more automated and almost invisible. This automated approach is key to increasing the usage of UX testing.
]]>
We recently chatted with Amdocs about how the company has been progressing with its automation framework, ‘Ginger by Amdocs’. Amdocs launched Ginger four years ago, initially as a test automation framework designed for use by non-automation engineers. Since then, Amdocs has aggregated some of its test IP around Ginger and has become its main test automation platform, with functionality ranging from test automation to mobile testing.
In the world of functional testing, automation remains centered around the creation of test cases in natural language, and then its transformation into automated test scripts that are run by a test execution engine. The test case-to-test script process is well-defined and has become an industry standard. A challenge is that it requires test automation specialists for both the creation of artefacts (e.g. test scripts) and their maintenance an ongoing basis.
In the past two years, Amdocs has been working on two initiatives to make the test case-to-script process easier.
Easing the Creation of Test Scripts
Amdocs has worked on making the creation of test scripts from test cases accessible to non-automation engineers. Its approach has relied on decomposing test scripts into smaller elements that Amdocs calls ‘automation nuggets’ (e.g. UI objects for handling APIs or database validation) and that are stored in a repository. Test case designers can then use these nuggets, via a drag-and-drop approach, to create scripts.
A key element of this approach is the creation of nuggets repositories specific to each client’s application landscape. Amdocs relies on its Ginger Auto-Pilot feature to automate its creation. Auto-Pilot goes through the pages of a website (or a java-based UI) and identifies objects and their properties and values, creating a model of each page and corresponding objects using a POM approach. Also, Auto-Pilot employs a similar approach in modelling REST API-based applications by creating a model of the APIs and their input and output parameters.
Helping to Maintain Test Scripts
Another benefit of Auto-Pilot is that it is also useful in the maintenance of test scripts. Amdocs will run Auto-Pilot on a regular basis and discover changes in applications from the model, and identify which test scripts may fail as a result of the application changes.
Once Auto-Pilot has identified the changes from the model, it has several options, including the ability to automatically fix scripts that were impacted by the identified changes; for example:
Looking ahead, Amdocs continue to invest in the maintenance of tests scripts, focusing on issues faced during execution. With Auto-Pilot Self-Healing capability, the company is focusing on automatically fixing issues during the execution phase, along with sending alerts to automation engineers on what scripts were changed and how. Amdocs plans to introduce this new capability in early 2020.
Amdocs continues to invest in Auto-Pilot and plans to introduce some level of AI to the tool to help it recognize changes in objects or fields. The company is training the software using ML technology; for instance, in identifying field names that may have changed (e.g. a field being changed from ‘customer’ to ‘client’).
While Amdocs has positioned Auto-Pilot in the context of test script maintenance, its relevance for agile projects also comes to mind. With agile projects based on two-week incremental changes/sprints, Auto-Pilot provides a starting point for maintaining test scripts.
Amdocs Releases Ginger as Open Source
Most testing service vendors tend to consider IP such as Ginger as a differentiator to their service offering, whether provided as part of the service or sold under a license fee agreement. Amdocs has done the opposite to this and made a bold move in releasing Ginger as open source under an Apache 2.0 license.
Amdocs emphasizes that the release to open source will not stop it from making further investments in Ginger. An example of a recent investment is its end-to-end (E2E) testing capability, where Ginger provides an orchestration engine for test execution tools across most OS (e.g. Windows, Unix and Linux and z/OS), programming languages (Java and .NET, web-based applications, mainframe applications), and other tools (e.g., SoapUI, REST Assured and Cucumber). Ginger’s E2E capability is particularly relevant to industries that operate on standard business processes (such as telecom service providers that still represent Amdocs’ core market) and retail banks.
Looking ahead, Amdocs believes that by releasing Ginger as open source software, it will gain further visibility of its automation capabilities, attract new talent, and derive revenues from adapting Ginger to specific client requests, along with driving interest from open source community developers in complementing Ginger’s capabilities.
While testing services rely on an ecosystem of open source tools, from Selenium and Appium to DevOps tools such as Jenkins and Bamboo, we have not previously seen any significant firm such as Amdocs giving back to the community their central IP. We welcome this bold move.
]]>
NelsonHall has commented several times about the role of platforms in quality assurance (QA) and how these are playing a central role in functional testing in the world of agile methodologies and continuous testing. Platforms take a best-of-breed approach to software components and rely, by default, on open source software, sometimes including expert functionality from COTS.
In this blog, I look specifically at Ballista, a performance testing platform launched by Tech Mahindra.
The rise of functional testing platforms
The initial purpose of QA platforms was really to integrate the myriad of software tools required in continuous testing/DevOps with Jenkins at its core. This has evolved: IT service vendors have been aggregating their IP and accelerators around their continuous testing platforms, which become central automation points. The value proposition of platforms still centers on automation, but it is expanding to other benefits such as reducing testing software licenses.
Software license and maintenance fees represent approximately 35% of testing budgets, with the rest being external testing services (~40%) and internal resources (~25%). While software-related expenses represent an essential source of potential savings, few IT service vendors have yet invested in creating full alternatives to COTS in the form of platforms.
Tech Mahindra launches Ballista performance testing platform
Tech Mahindra has started with its performance testing and engineering activities. This makes sense: performance testing and engineering is more reliant on testing software tools than functional testing is: ~70% of performance budgets are spent on software license fees.
Tech Mahindra designed Ballista, relying on open source software, with JMeter (performance testing) at its core, along with Java-related frameworks and reporting tools such as DynamicReports and jqPlot, along with Jenkins (CI), in the context of DevOps.
The value of Ballista comes from integrating the different tools and avoiding tool fragmentation and silos, and from its functionality, which ranges from performance testing to monitoring of production environments (e.g. systems, application servers, and databases), bridging the worlds of testing and monitoring.
Ballista also has stubbing/service virtualization capabilities to recreate responses from a website (HTTP).
Tech Mahindra has created dashboards across the different tools, creating reports from tools that did not previously communicate. It has also worked on improving the UX for accessing the various open source tools.
Tech Mahindra’s approach to performance QA differs somewhat from functional testing platforms mainly in two areas:
Tech Mahindra will continue to enhance Ballista. Features on the horizon include:
Platforms will automate the full SDLC
The traditional boundary between software tools and IT services has become more porous thanks to IT service vendors and QA pure-plays investing in testing platforms. The functionality of platforms has been mostly functional, expanding into UX testing selectively, and now into performance testing. This is good news, as platforms are helping traditional test automation, expanding from its test script silo right across the SDLC. We will be monitoring the client response to Tech Mahindra’s no-cost performance platform along with market reaction to this “as-a-service” innovation.
]]>
In the world of testing services, crowdtesting stands out. While efforts towards automation are accelerating in testing services/quality assurance (QA), the perception that crowdtesting is labor intensive and relies on communities of tens of thousands of testers seems at odds with this. This perception is no longer valid: the crowdtesting industry has changed.
Managing crowdtesting communities
A core element of the activity of crowdtesting vendors is around managing their communities. In this industry, one-off events (e.g. where participants join for an event like a hackathon) are uncommon. Quite the contrary, crowdtesting vendors provide their activities on an ongoing basis, increasingly as part of one-year contacts.
Crowdtesting vendors have focused their effort for the past two years on getting a detailed understanding of their communities, on the skills and capabilities of crowdtesters, and helping them enhance skills through online training.
Crowdtesting firms continue to invest in their communities, focusing on two priorities: to accelerate the identification of relevant crowdtesters for a given project’s requirements, and to increase the activity level of communities to make sure crowdtesters with specific skills will be available when needed. Speed is now of the essence in the crowdtesting industry, primarily because of agile development.
Agile & crowdtesting
Crowdtesting vendors have largely repositioned their offerings to accommodate fast testing turnaround times. This makes sense: most of the technology tested by crowdtesters is mobile apps and responsive websites, which have driven the adoption of agile methodologies.
Crowdtesting vendors are now often providing agile testing during the weekends, making use of the availability and distributed location of crowdtesters.
Automation increasingly takes a primary role in crowdtesting.
Automation is changing
Initially, the crowdtesting industry relied on the comprehensiveness of its software platforms, focusing on three personas: the crowdtester for reporting defects, the crowdtesting vendor for managing errors and providing analysis and recommendations, and the customer for accessing the results and analysis.
Crowdtesting vendors continue to invest in their platforms, but that investment is not enough anymore. AI and enhanced analytics have made their way into crowdtesting. An example of an AI use case is defect redundancy identification: cleaning the raw defect data and eliminating, for instance, duplicate defects. In the past, this defect analysis was done manually; increasingly, it is done using ML technology to identify bugs that have similar characteristics.
Another example is emotion analysis where, in addition to their defect reports, crowdtesters also provide videos of their activities. In the past, emotion analysis required human video analysis; in the future, AI will help identify the emotions, negative or positive, of the crowdtester. This will help both the crowdtesting vendor and the client in knowing where to look within a given video.
The crowdtesting industry is also pioneering other AI use cases. AI use cases have expanded from enhanced analytics to automation. The most advanced crowdtesting vendors are looking to create test scripts automatically, based on the gestures of the tester, or use web crawlers. Over time, the crowdtesting industry will combine manual activities and automation.
IP protection
This is for the future. For now, crowdtesting is still a niche activity and needs to gain scale. A significant challenge for scaling up is IP and privacy protection. Clients still fear IP may leak to competitors or consumers during the testing cycle, and the crowdtesting industry is trying to address this fear by relying on ‘private crowds’.
We think that the crowdtesting industry will become more successful when clients recognize that not all their mobile apps or websites are strategic differentiators and that the impact of an incremental new feature in a mobile app being leaked in the open will be limited. And clearly, agile is promoting this incremental approach, which makes crowdtesting more acceptable over time.
]]>
NelsonHall has been commenting recently on the future of testing, looking at how AI algorithms and RPA tools fit in the context of QA. This blog takes a different perspective by looking at how one of the tier-one software testing service vendors is approaching its testing of bots and connected devices. With the fast adoption of connected devices, automating testing of consumer or industrial IoT-based products has become a new challenge for QA.
We recently talked with Cognizant to understand how the company is addressing this challenge, and learned that the company already has several projects in process. One example is its work for a telecom service provider that sells home security services to consumers based on devices on sensors that trigger an alarm if someone attempts to effect an entry. The client has outsourced the design and manufacturing of the devices, working with around ten suppliers, focusing itself on developing algorithms running on the firmware/embedded systems of the devices. Cognizant has been helping the client on regression testing for each new release of the client’s embedded software on the security devices. Cognizant’s approach includes two fundamental principles: to conduct testing with a lab-based approach, and to leverage automation in the testing.
Cognizant is using its TEBOT robotic testing solution to test for interoperability between firmware, devices, and OS. TEBOT automates human-to-machine and machine-to-machine interactions at the physical-digital level for IoT. It takes a UI approach, using software tools such as Selenum and Appium for its test execution needs and for invoking API calls for Raspberry Pi, triggering physical actions/movements. For those readers not familiar with a Raspberry Pi, it is a nano-computer developed by the University of Cambridge’s Raspberry Pi Foundation to help spread IT skills. The main benefits of a Raspberry Pi are that they are very inexpensive, with prices starting as low as $35, and small (the size of a credit card). Cognizant has been able to test several scenarios for the client around detection of flood, smoke sensors, door opening, motion in the house, light, and temperature change. TEBOT also has reporting capabilities with a dashboard that displays results of tests.
Cognizant has also been using TEBOT to test how the client’s home security product reacts to human instructions. The company also uses TEBOT for recreating human voice and provide instructions to virtual assistants (e.g. Amazon’s Alexa) and get results.
Cognizant continues to invest in TEBOT. Looking ahead, a priority is to put TEBOT into the agile process, with continuous testing. Another key priority is to keep the price of TEBOT at affordable levels while being able to replicate it in other sites. The company is currently conducting TEBOT-based testing in its Bangalore lab for one of its clients and highlights that it can replicate the lab anywhere, given the low level of investment required.
With its TEBOT IP, Cognizant is providing a lab-based approach to connected device testing. Cognizant claims this automation-based approach can deliver 30%+ reduction in test cycle times compared with manual testing and 40% reduction in cost of quality around smart device certification. Cognizant also offers real-life testing for connected devices, here using its internal crowdsourcing capabilities with its fastest offering.
]]>
NelsonHall has commented several times on how vendors have been introducing AI into their QA/software testing activities; for example, to enhance defect analysis and prediction.
We have talked less about the use of RPA because it did not seem to bring much innovation on top of what testing software products already offer. Testing software products have been around for over 20 years and are gradually expanding their automation capabilities from test script-based automation to new areas including service virtualization and test data management. In this context, RPA tools, which also tend to work at the same UI level as testing software does, seemed too generic and not specific enough for testing.
But the adoption of RPA in the context of testing is changing: we see clients experimenting with RPA workflow tools to complement or even replace test execution software. We recently talked with HCL Technologies about its RPA initiatives in the context of testing.
Using RPA workflows in the context of testing services has several prerequisites
HCL Technologies posits that there are prerequisites for using RPA in testing:
Also, it helps if the client already has license rights and can incorporate new bot usage in its existing license agreement.
Automating labor-intensive test activities
An obvious use case for RPA in testing is automating labor-intensive tasks such as test environment provisioning, job scheduling, test data generation or data migration. As HCL Tech highlights, data migration from one format to another, and subsequent testing, is a very good candidate for RPA-based automation, largely for volume reasons.
Some of these labor-intensive tasks can be automated using non-RPA workflows, and RPA is only one of the options for automating them. What matters is that the client can use its existing RPA license agreement and therefore automate these labor-intensive tasks at a limited extra cost.
End-to-end testing
A second use case is around end-to-end testing (E2E), also called business process testing. HCL Tech highlights that E2E testing often requires testing different applications based on different technologies (web, mainframe, client-server) and for which popular test execution tools such as Selenium won’t work. In this case, an important element of the HCL Tech’s automation strategy is around RPA software, initially looking at workflow use cases.
In one example of supporting a client that has business processes involving websites, client-server, and mainframe applications, the testing activities use two different test execution tools (Selenium, and Micro Focus UFT) and manual testing. HCL Tech implemented Automation Anywhere, taking a UI approach, for conducting business process test execution.
An added benefit of E2E testing is that UAT is another use case for RPA scripts.
Another example is for specific technologies, such as Citrix and virtual desktops.
HCL Tech will deploy RPA across testing clients
Looking ahead, HCL Tech wants to deploy RPA across its testing clients; it currently has about seven clients that have adopted RPA for their testing needs.
HCL Tech expects to develop further RPA use cases for test automation. A recent example is HCL Tech’s Zero Touch Testing (ZTT), which combines an ML and MBT approach. ZTT helps to convert UML diagrams into manual test cases and then test scripts using ML and RPA to capture objects from the applications.
Will RPA replace testing software products in the long-run? Probably not. Beyond the cost of license, what matters is the investment made in developing test scripts. Clients will need a very strong business case to scrap their test scripts and redevelop RPA scripts, unless vendors create an IP for automating the migration of test scripts into RPA scripts. The positioning of RPA tools therefore is in orchestrating different test execution tools across multiple applications in different technologies.
The tool ecosystem is also changing, with several RPA ISVs moving into the test automation space and test automation ISVs expanding in the high-growth RPA software market. The nature of tools in the testing space is likely to change and NelsonHall will be monitoring this space.
]]>
NelsonHall continues to examine the AI activities of major testing service vendors. In the past 18 months, many testing service vendors have expanded their AI capabilities around analytics, making sense of the wealth of data in production logs, defect-related applications, development tools, and in streamed social data.
We recently talked with Expleo with regard to its AI-related initiatives. Expleo is the new company that has resulted from the acquisition of SQS, a QA specialist, by Assystem Technologies, an engineering and technology service organization.
Expleo highlights that its expertise lies around rule-based systems, an area that is now considered part of ML, and for the past 12 years it has created several rule-based systems use cases (e.g. defect prediction and code impact analysis). It now has around ten AI use cases, several of which are now in widespread use in QA (e.g. for sentiment analysis, defect prediction, and code impact analysis).
Other use cases remain specific to Expleo. One example of an Expleo-specific use case is related to false positives, identifying test failures that are not due to the application under test but caused by external factors such as a test environment not being available or a network suffering from latency, during test runs. Expleo has developed an IP, automatic error analysis and recovery (EARI), that relies on error classification and a rules engine. EARI will launch a remedy to the false positive by the applying a ‘last-known good action’.
Expleo continues to invest in developing AI use cases. The testing industry is mature in automation once test scripts have been created, but remains a manual activity before the creation of scripts. Expleo is currently working on creating test scripts from test cases written in plain English, using NLP technology. Another AI use case is a real-time assessment of software quality or script auto-generation based on user activity and behavior.
AI in QA is still in its infancy phase and many issues remain to be solved. Expleo is taking a consultative approach to AI and testing, educating clients about the ‘dos and don’ts’ of AI in the context of QA. The company has compiled its best practices and wants to help its clients redefine their test processes and technologies, taking account of the impact of cognitive technologies on organizations.
Data relevancy is another priority. Expleo points out that clients tend to place too little emphasis on the data being used for AI training purposes. Data may be biased, not relevant from one location to another, or just not large enough in volume for training purposes. Expleo has been working on assessing the data bias, based on data sampling and a statistical approach to AI outputs. Once complete, Expleo can identify the training data causing the output exceptions and remove it from the AI training data.
Expleo is also working on bringing visibility to how ML systems work, with its ‘Explainable AI’ offering. The company highlights that understanding why an ML system came to a specific outcome or decision remains a challenge for the industry. Yet, understanding an AI’s decision process will soon become a priority for compliance or security reasons. An example is around autonomous vehicles – to understand why and how vehicles will make decisions. Another example is for compliance reasons, being able to prove to authorities that an AI meets regulatory requirements.
With its new scope and size (15k personnel), Expleo is expanding its QA capabilities towards the engineering world, around embedded software, production systems, IoT and PLM, which will require further investment in AI. This is just the beginning of Expleo’s AI and testing story.
]]>
NelsonHall continues to explore how cognitive is reshaping software testing services, and here I look at how Amdocs is automating chatbot testing.
The market is shifting from continuous testing to cognitive
Over the last year, for many testing services providers, the focus evolved from creating continuous testing platforms (in the context of agile and DevOps adoption), to incorporate AI with the intent of automating testing beyond test execution.
The next priority of the testing industry is going beyond the use AI to automate testing to testing AI systems and bots – which brings new levels of complexity. Some of this comes from the fact that AI and bot testing is new, with methodologies to be created and automation strategies to be formulated.
Chatbot testing is one new area where the industry is getting organized, with initiatives such as making sure the training data is not biased and creating text alternatives (“utterances”) to the same question to a chatbot: whatever way the question was asked, the response must consistenly be the same.
Amdocs’ approach to chatbot testing
To most readers, the name Amdocs is probably reminiscent of OSS/BSS and other communication service provider-specific software products, and rightly so. But Amdocs has also become an IT service company, providing services around its own products, non-Amdocs products, and custom applications.
Amdocs recently briefed NelsonHall on the work it does in chatbot training and testing. Its approach to chatbot training and testing relies on several priorities:
Automating chatbot training and testing
Amdocs uses several technologies to help automate chatbot training and testing, for example NLP for word clustering, and ML for classification to help in understanding the intent of a customer’s interaction with a chatbot. Amdocs highlights that it can achieve an accuracy level of ~96%.
Amdocs relies on integration with other applications and APIs for its context needs.
For its chat flow needs, Amdocs is using the BDD approach. Under BDD, testers or business analysts will write test cases (named feature files) in English (and then in the Gherkin language) and translate them into test scripts. With this approach, Amdocs creates series of scenarios guiding the bot step-by-step on how to react to the customer interactions.
Amdocs also uses open source software Botium, which relies on the same principles as BDD. The integration of Botium helps it testing chatbot technology from vendors, including Facebook, Slack, Skype, Telegram, WeChat, and WhatsApp.
Amdocs has integrated the BDD approach in its main testing framework, Ginger. Ginger integrates with CI tool Jenkins, which means the BDD scripts can be run in a continuous testing mode as part of an iterative approach to training and testing. The integration with Ginger also provides back-end integration testing, including API testing, notably for its context needs.
Testing AI systems
Amdocs’ approach brings some priority on what to test in chatbots, and some level of automation. This is the beginning: chatbots are currently relatively simple and, as their sophistication grows, so will their testing needs.
This, of course, raises further questions:
NelsonHall will be publishing its latest report on software testing services soon, focusing on next-gen services, including the role of AI and RPA in testing, along with mobile and UX testing.
]]>We recently had a briefing with Topcoder, an entity acquired by Wipro in 2016. Topcoder is known for its crowdsourcing capabilities and its extensive community network and recognized for the wide range of services it offers across several areas of technology services, with a focus on agile development, digital design, and algorithms/analytics.
Topcoder’s four main operational principles
Topcoder has based its operations on four broad principles, which it adapts to its various activities. They are:
Synergies between Wipro and Topcoder
Commercial expansion remains a priority and Topcoder continues to evangelize about the benefits of crowdsourcing. In response to one area of resistance to client adoption around security, Topcoder points to the work it has done for NASA as an example of its security capabilities.
Cross-selling into Wipro’s clients should help Topcoder in its commercial expansion. It is targeting the large BFSI client base of Wipro, with the intent of overcoming the traditional resistance to change in the sector. The alignment with Wipro goes beyond sales: Topcoder is using Wipro personnel for its crowdsourcing activities, targeting the 75k or so Wipro employees that are relevant to its activities. The company estimates it derives 45% of its revenues from Wipro’s clients, with Topcoder complementing Wipro for reaching out to talent, for instance. Also, Topcoder and Wipro have aligned their offerings in certain areas, notability in crowdtesting/QA as a service (QaaS). See this blog to learn more.
Managing the community
Topcoder feels it has the scale and is finding it easy to grow its community, currently by ~50k members per quarter. It is looking to promote the rise of crowd members to specialized tasks, such as co-pilot (i.e. project manager), for helping to break down projects into pieces or setting up the success criteria for a given project. Topcoder is creating a career path to help.
Further investment in IP and enabling tools
Topcoder’s most significant IP is its marketplace, which connects clients with the community and is used for managing projects, clients, and members. It currently spends ~30% of its revenues on R&D, and highlights that it needs to maintain this level of investment in R&D to stay abreast with technology innovation. Examples include application containerization to distribute applications across members, the use of AR/VR, and even quantum computing.
In the mid-term, Topcoder is looking at ML: the company has 15 years of data and 4m software assets it intends to analyze and start creating algorithms to help automate parts of the software development lifecycle. This should bring Topcoder to a whole different business model and bring IP to its human-intensive service.
We couldn’t agree more with Topcoder’s vision. The future of crowdsourcing vendors lies in bringing automation to their service activities. Automation is already there for activities such as project management, crowd member sourcing, and work analysis. Looking ahead, the future lies with ML to analyze the mass of data collected through years of work. This is an exciting time for crowdsourcing in IT services.
]]>
Software testing services continue to show vitality in their adoption of tools to increase automation. One of the areas in which vendors are investing heavily is AI, not just leveraging AI to increase automation through an analytics approach, but also in testing of AI systems. Back in November 2017, we reported on how Infosys was expanding its testing service portfolio in the areas of AI, chatbots, and blockchain. Infosys had created several use cases for ML, focusing on test case optimization and defect prediction. It was also exploring how to test chatbots and validate the responses of a chatbot.
We recently talked to Infosys about the progress it has been making in this area over the last year.
Adding more use cases for ML-based automation
Infosys continues to invest in developing additional AI use cases, and has expanded from use cases around test suite optimization, defect prediction, and sentiment analysis into three new areas:
Infosys is now systematically introducing these ML use cases with clients, and is currently at the PoC and implementation stage with 25 clients.
Infosys continues to work on additional AI use cases such as:
Testing ML systems
Testing ML systems is also a priority. Infosys is initially focusing on several use cases across deterministic MLs (i.e. that will always produce the same output) such as bots, and on non-deterministic systems (e.g. defect prediction).
For bot testing, Infosys has been working on making sure a chatbot can provide the same response to the many different ways a question can be asked. For one client, it has created an algorithm for generating text alternatives around a question. It then validates that the chatbot’s response is consistent for all question alternatives, using Selenium.
In addition, Infosys is working on voice bots, and image and video recognition. For image recognition, it creates alternatives to an image and validates that the ML system recognizes items on the image.
Testing MLs is only just beginning; vendors such as Infosys are working on the challenge, and in use case after use case, are creating comprehensive methodologies for ML testing.
Some autonomous testing is on the horizon
Infosys is working on several pilots around autonomous testing. This is a bold ambition. Infosys has based its first autonomous testing approach on a web crawler. The web crawler has several goals. It scans each page of a website to pick up defects and failures such as 404 errors, broken links, HTML related errors. More importantly, the web crawler will create paths/transactions across one or several screens/webpages, and then create Selenium-based test scripts for these paths/transactions. This is the beginning, of course, and the first test use cases are simple transactions such as user login or order-to-pay in an online store.
NelsonHall will be monitoring the development of autonomous testing with interest.
]]>
We recently caught up with Sogeti, a subsidiary of Capgemini, to discuss the use of AI and RPA in software testing.
In testing, AI use cases are focusing on making sense of the data generated by testing and from ITSM and production tools. For RPA, adoption of RPA workflows and chatbots in automating testing services has to date been minimal.
Continued investment in Cognitive QA through AI use cases and UX
Earlier this year, we commented on the 2017 launch by Sogeti of its Cognitive QA IP. Sogeti had developed several AI use cases in areas including test case optimization and prioritization, and defect prediction. Sogeti continues to invest in Cognitive QA to gain further visibility around test execution. Recent use cases include:
Sogeti, like most of its peers, also continues to invest in sentiment analysis capabilities. The principle of sentiment analysis tools in the context of testing is to cluster data across several keywords, e.g. functional defect, UX defect. Sogeti is working on translating its sentiment analysis into actionable feedback to developers.
The company is finding that these AI use cases are a good door-opener with clients and open new broader discussions on test & development and data quality: with the increased adoption of agile methodologies and DevOps open source tools, the fragmentation of tools used in the SDLC is impacting the quality and comprehensiveness of data.
Bringing UX to AI use cases
While we were expecting Sogeti to maintain its focus on AI use cases, we had not expected that Sogeti is also focusing on the UX. The first step in this journey was straightforward, with Cognitive QA being accessible on mobile devices and going beyond a responsive website approach, e.g. creating automated alerts for not meeting SLAs, and automatically setting up emergency meetings.
Sogeti is also bringing virtual agents into Cognitive QA. It offers access to the IP through both voice and chatbot interfaces. With this feature, test managers can access information, e.g. number of cases to be tested for the next release of an application, which one, and what level of prioritization. The solution handles interaction through the virtual agents of Microsoft (Cortana) and Skype, IBM (Watson Virtual Agent), AWS (Alexa), and Google Home. Sogeti has deployed this virtual agent approach with two clients, with implementation time taking between two to three months.
Another aspect of Sogeti’s investment outside of a pure AI use case approach is its project health approach. Cognitive QA integrates with financial applications/ERPs. The intention is to provide a view on the financial performance of a given testing project and integrate with HR systems to help source testing personnel across units.
Deploying RPA workflows to automate testing services
The other side of automation is RPA. We have mentioned several times the similarities between RPA tools and test execution tools, and the fact that they share the same UI approach (see the blog RPA Can Benefit from Testing Services’ Best Practices & Experience for more information). The world of testing execution software and RPA workflow tools is converging with several testing ISVs now launching their RPA software products. Several of Sogeti’s clients are using their RPA licenses to automate testing. The frontier between testing execution and RPA is about to become porous.
We have not historically seen extensive use of RPA tools to automate manual testing activities. To a large extent, the wealth of testing software tools has been comprehensive enough to avoid further investment in software product licenses. Sogeti is indicating that this is now changing: with several clients it is using RPA to automate activities related to test data management, test environment provisioning, or real-time reporting. This is all about a business case: these clients are making use of their existing RPA software licenses rather than buying additional specialized software from the likes of CA. To that end, Sogeti has been building its RPA testing-focused capabilities internally, and has ~100 test automation engineers now certified on UiPath and Blue Prism.
AI and RPA are only one of the current priorities for testing services
The future of testing services goes beyond the use of AI and RPA: there is much more. One major push currently is adopting agile methodologies and DevOps tools, and re-purposing TCoEs to become more automated and more technical. And there is also UX testing, which itself is a massive topic, and requiring investment in automation. The reinvention of testing services continues.
]]>
In our last testing blog on TCS in July 2018, we discussed the work TCS has conducted around UX testing and the introduction of its CX Assurance Platform (CXAP). In this blog, I look at the IP that TCS launched in mid-2018 addressing another feature of the digital world: DevOps and AI.
TCS’ Smart QE Platform is built on existing TCS IP, including:
The Smart QE Platform integrates NETA and 360 Degree Assurance, and this integration is an acknowledgement of the fragmentation of software tools in testing. The goal of Smart QE Platform is to drive automation and bring AI capabilities to continuous testing.
New AI use cases for automating testing
AI and analytics are a priority for TCS; it has complemented its ML use cases for test suite optimization and defect prediction, and added:
Enhancements to existing functionality
Along with its investment in AI use cases, TCS continues to enhance the functionality of Smart QE Platform; e.g. static code analysis, continuous deployment, dashboards, and a self-service portal. These enhancements and new functionality are incremental. Examples include the impact of code change in testing data format and requirements, and monitoring the availability of test environments.
To a large extent, TCS is driving its testing IP towards more specialization and it aims to automate an ecosystem of services that fell outside of testing activities in the past. The good news is that by increasing the availability of test environments, for instance, TCS is removing several bottlenecks that were impacting testing activities.
Incremental features in future
TCS will maintain its investment in the Smart QE Platform:
There has been a significant change in testing IP over the last few years, with vendors aggregating IP and accelerators into platforms around a theme: DevOps or digital testing.
What is new is that, with leading providers such as TCS, AI is now becoming a reality in the automation of test services across the testing lifecycle. With developments such as these by TCS, testing is moving closer to genuinely reflecting its new name of Quality Engineering.
]]>
We recently caught up with TestingXperts (Tx), a software testing/QA specialist. Tx was set up in 2013 and has presence in Harrisburg, PA, London, and Chandigarh and Hyderabad in India. Revenues of Tx in 2017 were $15m, and its current headcount is 500.
The model of the company is based on Indian delivery: currently, around 80% of its personnel are located in India, primarily in Hyderabad and Chandigarh, with the remaining staff mostly located in:
Portfolio specialization around IP-based offerings is TestingXperts’ priority
To drive its differentiation, Tx continues to expand its service portfolio to specialized and next-gen services such as mobile testing, UX testing, DevOps/continuous testing, and data migration testing. Newer offerings include testing of AI, blockchain, chatbots, and infrastructure-as-Code (IaC). Tx dedicates a high percentage of its revenues, ~5%, to internal R&D, and has developed eight IPs in support of these offerings.
IaC is unique to Tx; it tests the scripts that define servers, set up environments, balance loads, open ports, based on tools from ISVs such as Hashicorp’s Terraform, Ansible and puppet. Tx has created a testing framework, Tx-IaCT, for writing the python scripts that are used for validating that the right cloud infrastructure has been provisioned, conforms to specific benchmarks such as CIS standards (around server configuration, for security purposes) for AWS and internal corporate rules. Tx-IaCT is a differentiator for Tx. It continues to invest in it, expanding from AWS and Azure to Google Cloud. Tx is also expanding its test suite to industry-specific standards such as banking’s PCI and U.S. healthcare’s HIPAA.
The IP that Tx currently uses the most is Tx-Automate. It is a continuous testing platform that pre-integrates DevOps software tools such as CI, test management, defect management, and static analysis tools and test support activities, such as test data management, and web services/API testing. Tx-Automate integrates with Selenium for web-based applications and Appium for mobile applications, as well as with more traditional test execution tools such as Micro Focus UFT and Microsoft’s Visual Studio.
Along with Tx-Automate, TestingXperts has created mobile device labs in Chandigarh and in Hyderabad, with a smaller one in its Harrisburg facility. TestingXperts maintains its own lab, despite the abundance of cloud-based mobile labs, for several reasons. It provides access to real devices, rather than a mix of devices and emulations. The company also highlights that owning its own labs with 300 devices allows it to offer more competitive services to its clients and brings it flexibility. Along with the test labs, Tx has developed a set of core scripts based on Appium and Selendroid.
Conclusion
To some extent, the arrival of digital testing and other next-gen testing offerings (such as UX testing, crowdtesting, AI automation and testing, and RPA/chat bots) is redefining what ‘state of the art’ means for software testing services. With testing becoming much more technical, NelsonHall is finding that expertise-based offerings are no longer sufficient, and more comprehensive IP-based offerings are becoming the new norm.
With this in mind, it is refreshing to see that a small testing firm can bring specialized offerings (such as IaC testing) to market that few other vendors have.
]]>
We recently talked with Performance Engineering, a horizontal unit within Tech Mahindra and a growth story within the firm’s testing practice: the unit’s headcount has nearly tripled in the last four years to 1.2k.
Part of this success relates to changes that Tech Mahindra has made to its service portfolio, which has expanded from a specialized testing offering (performance testing) to include project activity (performance engineering); automation (with the introduction of the equivalent of a functional testing framework called SPARTA); shift left consulting (introducing performance engineering earlier in the project lifecycle); and also shift right (monitoring of applications in production, using APM tools).
And, importantly, Tech Mahindra has made its performance testing services portfolio relevant to digital (e.g. cloud, mobile apps, and IT projects) and DevOps initiatives. While many of its recent offerings reflected an expansion towards consulting and engineering, with DevOps, Tech Mahindra has been focusing on testing and technical skills. The Performance Engineering unit introduced its CONPASS IP with the intention of providing testing services as part of the DevOps process, from continuous integration to application performance monitoring (APM).
AI-based bug and event identification along with root cause analysis
The most recent offering from the unit is around AI use cases. Earlier this year, Performance Engineering launched a new IP, IMPACTS, which identifies events and incidents and proceeds to root cause analysis. It relies on Dynatrace’s APM software technology, and is integrated with most DevOps tools to make it relevant to agile projects. IMPACTS helps to identify the topology of systems (including applications, services, processes, hosts, and data centers), and detect where anomalies created by incidents lie.
Performance Engineering is using IMPACTS beyond production, the core activity of APM: the unit is deploying it across development, testing, and production environments to find bugs in development and test environments, and the resulting incidents in production. IMPACTS has different sets of KPIs depending on the environment. At the development stage, it looks for issues such as memory leaks, garbage collection, or database threat counts. KPIs for the testing phase cover auto-scaling failures or inadequate server resources. In production, the focus of the KPIs is on items such as network bandwidth, availability, latency, user experience, and conversion factors.
Tech Mahindra highlights the need to use AI tools for handling large applications, and for correlating bugs and events across the different environments. One of the challenges of DevOps testing is that the development and testing environments are a scaled-down or incomplete replica of production environments. AI helps to extrapolate patterns found in development and testing to production.
Performance Engineering is targeting several use cases for IMPACTS, including:
To date, Tech Mahindra has five clients using IMPACTS, including three communications service providers (U.S., U.K., and Europe), and a large bank in India. Implementation time takes up to six weeks and requires two releases of the software. Tech Mahindra Performance Engineering resells the Dynatrace application licenses but does not charge for its IP.
Priorities include more AI
Tech Mahindra’s immediate priority with IMPACTS is getting more of its clients using the platform.
The roadmap for IMPACTS includes more AI use cases. At the moment, Tech Mahindra is using the technology provided by Dynatrace as a core element. In the next few months, the unit intends to develop its own AI use cases based on open source software.
IoT will be a strong driver for developing AI use cases. Tech Mahindra is now being asked to perform machine-to-machine performance engineering work, where volumes of data and transactions far exceed what it has done in the past. AI will be a requirement for handling such complex projects. The world of performance engineering is on the verge of becoming even more complex.
]]>
In the past five years, NelsonHall has observed software testing services vendors adapting their portfolio around digital testing, focusing initially on agile and DevOps, and with a sense of urgency given the accelerating adoption of agile development methodologies. The transformation towards DevOps/continuous testing is ongoing, with most vendors now having their DevOps testing building blocks in place.
Another aspect of the digital testing journey has been around UX testing. Digital testing is no longer restricted to mobile apps/responsive websites and dealing with the multitude of device/OS/browser combinations; now, the focus has shifted to UX testing activities. This brings new challenges to the way IT departments conduct testing: testing tools are different from those used in functional and non-functional testing, the tool landscape is very fragmented, and the automation level is much lower.
In this blog, I look at what TCS’ Quality Engineering & Transformation Group (QET) is doing in the digital space with its newly-launched CX Assurance Platform (CXAP), which is combining a focus on digital testing along with security and performance testing. CXAP is focusing on a web application’s five key attributes: compatibility, usability, security, accessibility, and performance (CUSAP).
TCS has structured its CXAP offering into four components, focusing on CUSAP:
Dipstick assessment
With dipstick assessment, TCS QET assesses the five CUSAP attributes, provides a quantitative score, and recommendations for removing technical issues that were identified:
The assessment is essentially based on a sample approach, covering 10%-20% of a web application’s pages or mobile application’s screens.
KPI-based assessment
The KPI-based assessment relies on the analysis of data captured by several web analytics and application performance monitoring software tools, e.g. Adobe Analytics, Google Analytics, AppDynamics, and Dynatrace.
A KPI assessment provides a short analysis identifying any potential issues with a web application and recommending next steps. An example of this approach is for a merchant site: understanding how many customers engaged in a transaction, how many completed the transaction, comparing this number against the projected number of transactions, and understanding the difference based on a CUSAP analysis.
Sentiment analysis
With its sentiment analysis, TCS QET is expanding its sources of data further to social media and forums, still with its five CUSAP attributes in mind. QET points to causes other than IT, e.g. lack of product availability, or uncompetitive pricing that influences KPIs such as conversion rate. This service relies on the traditional analysis of Twitter, Facebook, Google +, and user forum data.
Execution services
Execution services are essentially an extension of the ‘dipstick’ assessment explained earlier. While dipstick is based on samples, execution services aim to test the entire application. An implication is the level of automation required: TCS QET is investing in automation, planning to achieve ~50% of testing automation in execution services in its next release this year, eventually reaching 100% automation.
Towards a non-linear business?
Three evident features of CXAP are:
TCS says it has about twenty-five clients for CXAP, most of whom are opting for bundled dipstick, KPI assessment, and execution services.
The ambitions of QET with CXAP go beyond having a central UX testing IP aggregation point that is subscription-based. In the short-term, TCS wants to develop a self-service, where prospects and existing clients will register, select the service of their choice (based on a service catalog approach), request a quote, and get the service provided. Some of the underlying technology for this self-service portal is already in place. Also, CXAP will expand to functional testing.
And QET’s ambitions do not stop there. QET also wants to add AI/cognitive capabilities to CXAP, conduct an automated root cause on defects, and predict defects, based on past release data. This is a bold ambition and we will monitor developments with interest.
]]>
Tech Mahindra recently briefed NelsonHall on its new model-based testing (MBT) offering, Automated Test Assurance (ATA). To date, enterprises have shown interest in MBT technology, but take up has been low, partly because they had already invested in creating test cases and test scripts and were reluctant to make a further investment in creating a new a set of process models/diagrams. Also, the expertise and time needed to create these models make it an expensive activity.
However, Tech Mahindra argues that with ATA it has an offering that overcomes these challenges through easier creation of models/diagrams and some reuse of existing test artefacts.
Easier creation of models/diagrams
ATA relies on software products by ISVs such as Conformiq and CA (Agile Requirement Designer). Tech Mahindra argues that creating models based on these software tools is easier and faster than in the past, and therefore reduces the initial investment. The company estimates that the creation of basic models can be done in just three to four days, with more complicated ones taking up to two weeks.
Once the model is created, ATA relies on a standard MBT approach, with automated creation of test cases, and test scripts. Test execution can be done using a range of tools, including HP/Micro Focus UFT and Selenium, or manually, through the creation of test cases. A benefit of this approach is traceability, with the testing lifecycle now automated and documented.
Furthermore, the impact of MBT, and ATA, goes beyond creating a new type of artefact: the model/diagram becomes the primary test artefact. Any changes in client requirements need to be created at the diagram level, rather than in resulting test cases and scripts. The impact in terms of skills required for functional testing is therefore considerable, with less need for manual testing capabilities, also for test execution automation engineers.
Reuse of existing test cases & scripts
ATA also aims to address another factor that has inhibited enterprises’ adoption of MBT in that it reuses existing test case and test script assets. With ATA, Tech Mahindra believes it can reverse engineer the majority ofclients’ test cases and scripts and create models/diagrams. The approach works well, as long as the source artifact relied on standard approaches and tools. Test case reverse engineering is also possible for test cases based on standard languages, e.g. BDD’s Gherkin. Nevertheless, the reverse engineering approach should be considered as an accelerator rather than a 100% reliable automation tool.
Early client adoption is promising
Tech Mahindra has seen immediate client interest in ATA: it already has 20 accounts considering ATA, plus four projects underway. Sectors showing interest include communications, banking, and automotive.
One example is a test script migration for a large communication service provider looking to move away from HP QTP/UFT and adopt Selenium. This is a new trend: in the past, there was little major migration activity from HP/Micro Focus in functional testing. With its ATA offering, Tech Mahindra is targeting large estates of test cases and test scripts, where the high volumes of artefacts create a strong business case.
To facilitate the adoption of ATA, Tech Mahindra has created a repository of business models relevant to the banking industry for their core banking applications based on Temenos T 24. This repository of models is a starting point for adapting to the specificity of the client’s business processes.
Tech Mahindra highlights that it will be considering all industries with standard business processes. With this in mind, we think the telecom service provider industry (a sector that accounts for almost 50% of Tech Mahindra’s revenues) is a next step. SAP and Oracle are probably in line too, including Oracle’s Flexcube.
]]>
When NelsonHall published the first crowdtesting vendor evaluation in the industry in 2017, it was striking to see how the main crowdtesting players had different strategies and were developing their services in different directions. And crowdtesting continues to surprise by bringing novelty to the testing service industry – an industry essentially based on process and, increasingly, on automation. Here I take a quick look at recent developments at Testbirds.
Going beyond the lab
Testbirds’ positioning is around technology and automation of functional and unit testing. But the company has taken the unusual step of going beyond crowdtesting to offer access to virtual computers (operating on Windows, Mac OS, and Linux), as well as iOS- and Android-based devices. Along with the emulations/virtual computers, Testbirds also offers test automation services based on Selenium, Tricentis, Appium, Sikuli, and on Jenkins (CI), and Maven (unit testing). This offering is called TestChameleon.
In 2017, Testbirds also launched its GRDN offering, for accessing the mobile phones of its crowdtesters, competing with the offerings of mobile lab vendors such as Perfecto Mobile and Device Anywhere. However, rather than building a lab of devices, Testbirds developed an app that provides access to the smartphones of its crowdtesters. Crowdtesters download the app and then give permission to Testbirds to use their smartphone to run automated or manual functional testing during convenient time slots.
The result of GRDN is at scale, and well beyond what most mobile lab vendors can achieve. Testbirds estimates that, if all crowdtesters were to participate, it could provide access to ~450k devices globally in real-life conditions rather than in a lab.
GRDN’s progress
The past year has been one of commercial take-off for Testbirds, and the company has continued to invest in its offerings. It is finding that the complementary nature of TestChameleon and GRDN is working well, and is using GRDN for major releases and TestChameleon for clients’ minor release/daily build needs.
One major client of this offering is Deutsche Telekom, which uses GRDN for its news and email portal, T-Online in Germany. Testbirds conducted regression testing for four features added by T-Online, providing automated testing, exploratory testing, and usability testing. This is a recurring project, providing testing services during two days for every new sprint.
Testbirds has expanded GRDN’s capabilities, going from Android to iOS devices. The company highlights the expansion to Apple devices was a significant technical investment, involving as it does Apple’s proprietary approach.
And what lies ahead?
The immediate priority is now a commercial push. Testbirds believes that the GRDN offering has solid growth potential, especially now that it supports iOS devices. Enhancing the iOS offering is a priority, with Testbirds aiming to connect iPhones wirelessly, as the company does with Android devices, rather than via a cable.
In the longer-term, the future of GRDN will be around connected devices, with Testbirds exploring how to test a multitude of connected devices in real-life conditions. The company wants to avoid having to recreate an app for each new connected device product, and is looking to create an app that can be used across several different connected products. Testbirds is currently working with a major German automotive OEM on this, for its connected car testing needs, and NelsonHall will report on the progress of this new offering in due course.
To find out more about NelsonHall’s crowdtesting NEAT vendor evaluation, contact Guy Saunders.
In the past three years, testing service vendors have been looking at how to adopt more automation and apply AI to testing. Accenture, meanwhile, has been approaching things from a different angle, looking at how to test AI software itself rather than applying AI to the delivery of testing services.
Here I look briefly at Accenture's approach to AI testing, which used a two-phase methodology called Teach and Test.
Teach phase: Identifying biased data
The broad principle of the Teach phase is to make the AI learn, iteratively, to produce correct output, based on a ML approach. Accenture highlights that before proceeding to this phase, a key priority is around data validation, in particular making sure the data used for teaching is not biased, as this would, in turn, introduce bias to the AI outcomes. One example is text transcripts from contact centers, that may well include emotional elements or preconceived ideas. Another is a claim processing operation, where there may be bias in terms of gender and age.
To prevent potential bias, Accenture uses a ML approach for clustering terms and words in the training data, and identifying which sets of data are biased based on the word clusters.
Curating the data also has the benefit of bringing a process approach to ML – processes that can then be communicated to regulatory authorities to prove how data is cleaned, how, and why.
One challenge in de-biasing data is that it reduces the level of data that can be used for training. So, Accenture differentiates between which parts of a data set need to be removed and which parts can be neutralized (i.e. by removing some of the bias). Accenture agrees with the client on which bias elements need to be removed and which ones can remain.
Test phase: Identifying defects
During the Test phase, Accenture’s priority is AI model scoring and evaluation. Here, the idea is that, even if Accenture does not know what the AI output of a single test input will be, it can still check the outputs of multiple inputs, taking a statistical approach.
This approach helps to identify the AI outputs that are materially different from the majority. If the AI model has defects that need to be fixed, Accenture will enhance the model and re-train it.
A first step for AI testing
Accenture formally introduced the Teach and Test methodology in Q1, and says it already has several AI testing projects in progress, an example being a virtual agent for the credit card processing activities of a financial services firm.
With Teach and Test, Accenture is providing a first step in the market for AI testing. And while the story of AI testing has yet to be written, two things are already clear:
First, manual testing will only be a marginal part of AI testing, as AI will require scale that manual testing cannot provide.
Second, with AI use cases such as virtual agents, the deterministic approach of test cases and test scripts is no longer relevant. In all likelihood, current testing software tools used in functional testing will not find their way into AI testing. This implies that the role of testers will continue to evolve, from functional testing towards technical and specialized expertise.
]]>
I have touched several times on how AI is being used in the context of software testing for reducing the number of test cases, optimizing coverage, and estimating the number of defects in an upcoming release of an application. And, as AI technology is becoming pervasive, we are expecting more use cases to emerge for software testing soon.
We recently had a discussion with Santa Clara-based Infostretch about how it is using AI in the context of test case migration, and how it is helping a large U.S. financial institution in optimizing the number of test cases, and porting test scripts from HP/Micro Focus UFT to Selenium.
This surprised us, as NelsonHall has not detected any major client appetite for test script migration in the past, with enterprises taking a bimodal approach, maintaining existing estates of test scripts in their current formats and using open source testing software or less expensive testing software products (than from mainstream testing software) for their digital projects.
Infostretch believes that with the emergence of AI, clients will reconsider their estates of test scripts and start porting them to open source software tools such as Selenium, the de facto standard in web application testing.
Infostretch in brief
Founded in 2004, Infostretch initially provided project testing and certification services around mobile apps, with a large U.S. communication service provider its first client. The company expanded its capabilities to testing commerce applications (Adobe, SAP hybris, Salesforce), and to DevOps/agile. QA/testing remains the core capability, although the company now also provides support in the development of websites, mobile apps, and software products, using an agile/DevOps approach. Infostretch today has 900 employees and services ~60 clients.
Helping a large U.S. financial institution to optimize its 150k test case estate
Infostretch’s’ AI testing activity began with a U.S. financial institution with a large estate of 150k test cases. Under a new CIO, the company decided to reduce its script estate while maintaining test coverage; in support of this, Infostretch used ML to identify redundant test cases, based on test case semantic similarities. The approach was complemented by a statistical combinatorial-based testing (CBT) engagement. Infostretch identified that 17% of the estate was duplicate test cases.
Infostretch then worked with the client on porting test scripts from a variety of test execution COTS (including HP/Micro Focus UFT) to Selenium, and also on automated manual test cases. Infostretch’s approach relied on identifying existing testing objects within UFT through a ML approach, and converting them into Selenium and Java objects. This initial effort then provided preliminary Selenium scripts, which test automation specialists complemented using QMetry.
The level of test conversion effectiveness varies as a function of the complexity of the test script. For simple ones, Infostretch argues that it can automate up to 90% of the test script migration. The proportion decreases with customized applications, and when objects are no longer standard.
In total, for this large U.S. financial services organization, Infostretch estimates that in the eight months this migration project has been going, it has helped the client increase its automation level from 45% pre-project to currently ~65%. The company has:
ASTUTE offering
Infostretch launched its AI-based Agility, Quality and Optimization in Software Testing (ASTUTE) offering in March 2018. With ASTUTE, Infostretch has grouped all its capabilities, from QA consulting, model-based testing, to test case optimization, to test case execution, test data management, and environment provisioning. ASTUTE includes several AI tools/bots:
Changes in testing industry dynamics
It is encouraging to see an expansion of the use of AI tools in testing, particularly to niche firms such as Infostretch. There is ample room for innovation in the testing industry, and firms that specialize around AI testing are likely to be at the vanguard of innovation.
Infostretch claims that client demand for test script migration is accelerating, and is involved in a number of large projects, including one payment processing client with a 15k test case estate. The use of AI and automation tools, and of open source software in migration activity, is triggering a significant change in the testing industry, and this will also have a significant impact on tester skills.
]]>
We recently spoke with QA specialist SQS about how its acquisition of Moorhouse Consulting and its own acquisition by Assystem Technologies (AT) are each supporting its management consulting (MC) services ambitions.
SQS continues to deploy its MC units across geographies
Firstly, Moorhouse Consulting. This U.K.-based company brings in ~140 consultants and £22m in annual revenue (2017). As well as bringing in scale, Moorhouse complements SQS’ program management capabilities, an area of strategic importance to the company. Following the arrival of Diederik Vos in 2012 as CEO, SQS identified its MC offerings as a priority for investment. With its MC arm, SQS offers change and program management services for large business and IT projects and programs, providing help in ensuring the success and quality of these projects. MC means that SQS becomes involved early in the project life cycle, and can thus position its QA/software testing capabilities at a more strategic level than just pure test execution.
This is where Moorhouse fits in: it delivers transformation projects in areas including change management, customer journey improvement, strategic design, digital and technology transformations, post-merger and acquisition integration, program management, and performance improvement.
SQS continues to execute on growing its MC strategy, something it has accelerated in the past three years with the acquisitions in the U.S. of Trissential (2015, $32m revenues in 2014) and more recently in Italy with Double Consulting (€7m in 2016 revenues). SQS believes it now has its major countries (U.S., Germany/Austria, Italy, and, with Moorhouse, the U.K.) well covered.
A priority is sharing country-based capabilities across MC units. For example, Trissential in the U.S. is deploying its agile development approach and lean methodologies in other MC units. And so is Italian unit Double Consulting, around its financial expertise. A next step is Moorhouse Consulting sharing its CX and post-merger integration capabilities.
Being part of AT means new opportunities
So, what does SQS’ acquisition by AT, an engineering and R&D (ER&D) services vendor headquartered in France, mean for its MC ambitions? SQS is now part of a €1bn group with 14k employees, triple its former size.
In the short term, MC will continue to expand its capabilities in its largest market, Germany. Short term growth there will be mostly organic, though it will be opportunistic with M&A.
In the longer term, AT (which itself is now owned by PE Ardian) presents new opportunities for SQS and MC both in terms of geography (expansion in France) and also in capability. SQS highlights that, though the nature of work in ER&D services is different from that in IT projects, the required program management skills are broadly similar in both areas. Hence, AT’s client base potentially represents a brand-new target market for MC. The obvious opportunities are in managing IoT programs, currently a high growth market. And of course, IoT sits nicely at the intersection of IT and ER&D, in the physical to digital convergence space, and will thus help MC in developing ER&D program management expertise.
]]>
We continue to assess the impact of AI on software testing services in its various forms (ML, NLP, deep learning), talking to the major suppliers in the industry. Vendors have been accelerating their investment in AI technologies to make sense of the wealth of data available in defect management tools, production logs, and ITSM software tools, creating use cases – mostly around the vast number of test cases, their optimization, and prioritization.
Sogeti, a subsidiary of Capgemini Group, recently briefed NelsonHall on its Cognitive QA offering for testing activities such as testing project management and governance, looking at test coverage, test script prioritization and management.
Sogeti puts this Cognitive QA approach in the context of agile and DevOps, highlighting that with the adoption of agile, testing-related data is becoming less accessible, and the quality of data is decreasing. For instance, with agile methodologies, developers are less inclined to enter user requirements into their systems: this makes understanding of user requirements, for testing purposes, less easy. Also, the increased adoption of open source software, and of COTS from small ISVs, away from the testing product suites of HPE and of IBM, means data is now distributed across different software with different data structures. Data traceability is becoming more difficult.
Accordingly, Sogeti initiates its Cognitive QA projects by auditing the data quality and usage by the client across its applications, through a series of workshops. Sogeti argues that this data quality and maturity phase is key for deriving relevant analytics/insights.
Once the quality of data has been assessed, Sogeti proceeds to the first phase of its Cognitive QA projects, ‘Predictive QA Dashboards’, which is reporting-related and uses a dashboard with drill-down capabilities. The main AI use case is around defect prediction, from analyzing data in previous releases and identifying (from changes introduced in the new release) how many bugs are likely, and where. This phase also includes an effort estimation.
In a second phase, called ‘Smart Analytics for QA’, Sogeti deploys its testing best practices, e.g. test strategies and risk analysis around test case selection, prioritization and coverage, into a machine-readable form. Sogeti currently uses IBM SPSS for structured data and is starting to use Watson for unstructured data.
The next two phases are more traditional.
The third phase, ‘Intelligent QA Automation’, uses IP and accelerators that Sogeti has developed, mostly around DevOps, focused on test execution and test support activities such as test data management, and test environment provisioning, as well as service virtualization.
In the fourth and final step, ‘Cognitive QA Platforms’, Sogeti’s consulting approach steps in again, looking at how AI will have a role in the future. Sogeti is envisioning instant testing and validation, self-adapting test suites, self-aware test environment provisioning, and automated risk management.
Sogeti has worked with several clients to date on Cognitive QA, across four industries: high-tech, financial services, public sector, and telecoms. In one client example, Dell EMC, Sogeti helped in test case prioritization: the client has 350 testers deployed on product engineering testing. Its challenge is that Dell EMC’s products all combine hardware and software, and it has to ensure they will work with the many different middleware releases and patch combinations.
Sogeti’s positioning of QA Cognitive is interesting. Agile and DevOps is bringing back disruption and software tool fragmentation into the SDLC after years of investment by enterprises in IBM and HPE/Micro Focus’ suites of testing products to reduce that fragmentation. With many enterprises still looking to become more Agile-centric, we may be on the verge of a data testing disruption that will reduce visibility into testing activities. And this is where Sogeti’s data audit function comes into play.
]]>
In something of a surprise move, Assystem Technologies (AT) recently announced its intention to acquire Germany-headquartered but LSE-listed SQS. AT is offering 825 pence per share, valuing SQS at £281m, and has secured (on an irrevocable basis) 31.4% of the shares of SQS from founders, executive management, and board members. This is a generous offer - 56% over SQS’ share price on December 14, 2017, and 31.5% over SQS’s highest ever share price - and demonstrates AT’s eagerness to acquire SQS and its appetite to become a “digital transformation” leader.
SQS Is Close to Completing its Portfolio Transition
SQS is the largest software testing pure-play globally - in H1 2017, it had revenues of €160m and an adjusted EBIT margin of 7.5%.
SQS is at an inflection point where Managed Services, its revenue engine over the past decade, is gradually slowing down, in line with the market, and impacted also by a few contract terminations. Meanwhile, SQS has been reducing its level of staff augmentation work: this transition is almost over. With the rise of digital testing and DevOps/continuous testing, SQS is also shifting its service portfolio to what it calls Management Consultancy, which includes both specialized testing services (non-functional testing, and other technical services such as continuous testing) and program/project services.
In pursuit of this, SQS has been looking at inorganic growth, firstly acquiring Thinksoft Global, which doubled its presence in India (to currently ~1.8k), and expanded its presence in Italy, and the U.S., while keeping its net debt under control (€32m in H1 2017).
Assystem Technologies Has Passed Recently Under PE Ownership
AT is a different type of specialist: it provides engineering and R&D (ER&D) services to the automotive and aerospace sector, in France, primarily, and Germany. AT was born in September 2017 from a 61% acquisition by French PE fund Ardian of engineering group Assystem units. Ardian recruited the former CFO of Altran, the largest ER&D vendor globally, to become AT’s CEO.
AT mostly provides two main types of engineering services: embedded system development, and mechanical engineering. The company also offers specialty services such as plastic engineering and PMO. The company had 2016 revenues of €578m, and a headcount of 10k. It has been in high growth mode in 2017 thanks to a buoyant automotive ER&D market in France; also in aerospace, from a successful transition from services around product design and engineering, to process manufacturing engineering, primarily with client Airbus, helping the client on ramping up its manufacturing of aircrafts.
At the time of the acquisition, Ardian mentioned its plan to scale up AT quickly and reach €1bn in revenues, expanding in the key German ER&D services market. With this announcement just four months later, the goal will be achieved, and AT will have a significant presence in the DACH region, U.K., and U.S.
IoT as an Immediate Priority
One of the big questions with this acquisition relates to industrial synergies: the activities of SQS and Assystem Technologies have little in common, although SQS does have some presence in embedded software testing.
We talked with SQS’ executive management, which will also be part of the senior management of the enlarged group. The immediate priorities for the enlarged group are to focus on:
In the short-term, SQS and AT will continue to operate as independent companies.
So, what will this mean for SQS? It will continue its M&A strategy of tuck-in geographical expansions, with Management Consultancy the primary focus; it also wants to continue expanding in the U.S., already its largest international geography.
In the longer-term, SQS intends to expand its service portfolio towards software design, in the context of agile projects, where demand for bundled software development and testing has been rising. This will significantly impact the positioning of SQS, which is known for its QA capabilities.
Another long-term priority for SQS is to support AT’s adoption of offshoring. While AT has some nearshoring in Romania servicing the French automotive sector, the company wants to accelerate its transition towards offshore. The experience of SQS will help, with its 1.8k personnel in India, and its ability to drive high margins on offshored projects.
Does this Make the Combination of AT and SQS a Global Digital Transformation Leader?
Not quite: the businesses of the two firms remain very different. Nevertheless, we are reassured that there are more synergies than we had initially thought. We will be monitoring their expansion in IoT testing services with interest.
]]>