NelsonHall: IT Services blog feed

NelsonHall's IT Services program is a research service dedicated to helping organizations understand, adopt, and optimize adaptive approaches to IT services that underpin and enable digital transformation within the enterprise.

<![CDATA[Cognizant Acquires to Scale Up & Specialize Salesforce Services]]>


We recently talked to Cognizant about two planned Salesforce-related acquisitions: Code Zero and EI Technologies, which the company announced in Q1 2020.

Cognizant Strengthens Specialized Billing Capabilities with Code Zero

Code Zero is a U.S. Salesforce CPQ and billing specialist with experience in providing services to manufacturing firms expanding their business from selling products to commercializing subscriptions. It has developed several accelerators in the form of SAP and Oracle E-Business Suite connectors. Code Zero brings in an estimated 60 personnel, most of whom are based in one of two primary locations: Atlanta, GA and Charlotte, NC.

For Cognizant, Code Zero expands capabilities it acquired in 2018 with ATG, a larger CPQ and billing specialist. Created in 2000, ATG initially serviced the communication service provider industry, providing complex services involving integration with multiple applications. In 2010, ATG expanded its targets to the manufacturing and high-tech sectors, helping clients transition to subscription-based and recurring revenues. ATG had developed connectors with ERP applications, a recent addition being a NetSuite integration tool for high-tech clients. The company also offers services around migrating data to CPQ and billing, and post-implementation managed services such as application enhancements, training, and new Salesforce feature adoption.

The addition of Code Zero, with its Atlanta and Charlotte offices, brings a complementary geographical presence to ATG, which has offices in Kansas City and St. Louis, MO; Missoula, MT; and Cincinnati, OH. NelsonHall estimates the combined headcount at around 375.

Looking ahead, ATG wants to deploy at Code Zero its methodology and dashboard for monitoring the health of client engagements, and to align Code Zero with its project discipline. ATG is Cognizant’s CPQ and billing CoE, and Cognizant intends to leverage ATG to expand its capabilities organically in India and also in Barcelona, Spain.

With EI Tech, Cognizant Increases its Salesforce European Onshore Presence by 50%

Cognizant also announced earlier this year its intended acquisition of the domestic operations of French Salesforce service vendor EI Technologies. Where Code Zero has brought in specialized skills, EI Technologies will very significantly strengthen the European Salesforce services presence of Cognizant, particularly in France. NelsonHall estimates the pending acquisition will increase Cognizant’s onshore Salesforce services presence by over 50%, adding around 350 personnel based across France’s three largest cities: Paris, Lyon, and Marseille.

EI Technologies has capabilities in Sales, Service, and Community Clouds, taking an agile and iterative approach to projects. Its client base is primarily drawn from the manufacturing/CPG, retail, and insurance sectors. The company has developed several IPs, including accelerators for deploying Sales and Community Clouds, and templates for the insurance and retail industries. Cognizant also highlights the AppExchange expertise that EI Technologies has developed, navigating across Salesforce’s ISV partners to select the right solutions for clients.

EI Technologies has two other assets:

Cognizant Continues to Acquire With Lev

Cognizant continues to acquire Salesforce capabilities, announcing last month its acquisition of Lev, an Indianapolis-headquartered firm that brings in specializations in Marketing Cloud. Lev is a significant firm, with ~200 employees.

In total, Cognizant has acquired ~650 Salesforce services employees, showing confidence in the resiliency of the Salesforce ecosystem despite the impending global economic recession.

NelsonHall is seeing, as one of the impacts of the COVID-19 pandemic, a short-term delay in Salesforce projects. Nevertheless, Salesforce is at the center of digital retail and marketing programs, and our global survey of over 1,000 organizations indicates that in the mid-term these will be less impacted than other types of program. The pandemic is also clearly accelerating the move to cloud. Cognizant’s strategic rationale for scaling up and specializing its Salesforce expertise remains valid.

<![CDATA[Infosys Helps Utilities Navigate the Impact of COVID-19]]>


As with all industries, utility companies are responding to the ever-changing requirements related to COVID-19. In this short blog, we take a look at the macro and value chain impacts of COVID-19 on the utilities industry, how utilities are adjusting their business and IT priorities, and how Infosys is supporting utilities in their ongoing response.

Macro-level and value chain observations & impacts on utilities

Utilities are seeing noticeable demand reduction, led by the commercial & industrial sector, which has mostly gone into lockdown, resulting in load reductions of ~3-11% across most of the U.S. (half of which is due to COVID-19) and ~2-20% in Europe. This is impacting grid operations, as the usual seasonal load shapes of dips and surges in demand are changing (e.g. in the residential sector due to increased homeworking), all of which impacts revenues and marginal costs.

There are also further impacts in the rate case and regulatory space, with a number of hearings and energy legislation being delayed or postponed, in particular across the U.S., resulting in capital spending tied to these being delayed. The pandemic is further impacting the 2020 smart meter installation target set by Ofgem in the U.K. and contingency planning around RIIO-2 price controls. However, major capital programs already approved across the U.S. and Europe are moving ahead.

Across renewables (solar, wind, and storage), the sector is facing supply chain disruption on goods for new installations and maintenance, and parts availability for grid components is further impacting networks. Global electric vehicle sales are expected to drop by ~40% in 2020, further limiting growth in power demand.

These supply chain and logistics issues are impacting transmission and distribution across field services, outage restoration, and preventive maintenance. Discretionary non-critical construction projects are being delayed, although RFPs across IT services are continuing. A major area of concern for utilities is workforce availability, where ~40-50% of employees are field workers supporting critical infrastructure. The Edison Electric Institute (EEI) has advised utilities to plan for up to ~40% absenteeism due to the COVID-19 pandemic.

Utilities are adjusting their priorities in response to COVID-19

To ensure business continuity, utilities are ramping up technologies, infrastructure, and processes to enable work from home; according to Infosys, utilities have enabled ~50% of employees to work from home (the remainder are field resources and employees in critical roles across their facilities). They are mobilizing and fine-tuning BCP actions, ensuring rapid logistics support in the supply of PPE for the field force, and looking to enable the field force with automated processes and technologies, including AR/VR. In addition, they are enhancing monitoring and alerting capabilities in response to cybersecurity threats. Utilities also need to provide support for customers, including suspending disconnections; providing self-serve facilities, bots, and web chat capabilities; and deploying analytics to track contact center and employee performance while WFH.

Utilities are further deferring discretionary non-critical projects and enabling rapid changes to systems to support COVID-19 response and assessment, with routine inspections and non-critical work assigned a lower priority.

Utilities will accelerate investment in digital technologies to be more resilient

Infosys sees utilities increasing investments in a number of key priority areas, including Digital Workplace, Cloud Computing, Cyber Security, Digital Workforce, Hyper Automation, and Smart Asset Management.

Focus will increase in particular on the digital workplace: supporting WFH across multiple types of devices, enabling productivity and collaboration tools (e.g. Microsoft Teams, Zoom, Cisco Webex, Skype for Business), and supporting virtual call centers. Utilities are also using gamification methods to drive employee engagement and enhancing virtual training platforms (e-learning/virtual assistants).

This is further driving cloud requirements to support VDI, IT infrastructure, and training platforms. Other key focus areas include secure connectivity of all devices and assets through unified endpoint management (UEM), and increased focus on data masking and data management.

Focus will also increase on enabling the field force digitally with advanced technologies such as AR/VR, and on remote operations, including drones. Companies will also invest in cross-skilling staff across job functions, to be able to do more with fewer staff.

Utilities need to further streamline operations, including automation of non-decision-making operations, enabling more self-service, and industrial automation for daily operations support. In addition, they need to expedite the transition to next-generation asset performance management with IoT integration, remote sensing, and AI/ML-based predictive maintenance.
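The AI/ML-based predictive maintenance mentioned above can be reduced to a simple illustration: flag an asset for inspection when its latest sensor reading deviates sharply from its own history. The sketch below is hypothetical (invented asset IDs and readings); a real deployment would use trained models and IoT telemetry rather than a z-score threshold:

```python
from statistics import mean, stdev

def flag_assets(readings, z_threshold=3.0):
    """Flag assets whose latest reading deviates sharply from
    their own history (a stand-in for AI/ML-based scoring)."""
    flagged = []
    for asset_id, series in readings.items():
        history, latest = series[:-1], series[-1]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(latest - mu) / sigma > z_threshold:
            flagged.append(asset_id)
    return flagged

# Hypothetical transformer temperature readings (degrees C)
readings = {
    "TX-101": [61, 62, 60, 61, 63, 62, 61, 95],   # sudden spike
    "TX-102": [58, 59, 57, 58, 59, 58, 57, 58],   # steady
}
print(flag_assets(readings))  # → ['TX-101']
```

The point of the sketch is the workflow, not the statistics: an anomaly score per asset feeds a prioritized maintenance queue instead of a fixed inspection calendar.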

Infosys is working collaboratively with utilities to support COVID-19 initiatives

Infosys helps utilities Navigate the Future of Energy by transforming customer experience, modernizing infrastructure (grid infra and IT assets), and enabling future-ready business models. This is particularly key in its current initiatives to support utilities in their response to COVID-19.

Immediate priorities for utilities

While Infosys has done a very detailed analysis of the challenges and opportunities arising from COVID-19 across the utility value chain, it is providing utility clients with a prioritized view of what they should focus on now and what can be planned for the future. The immediate priorities span six key areas:

  • Making employee safety and technology a top priority (e.g. IoT sensors to monitor the health and geo-locations of field workers), along with return-to-work protocols and mitigating workforce shortages
  • Enabling collaboration tools and best practices and remote working infrastructure for all employees and vendors
  • Ramping cybersecurity management for all IT systems accessed remotely
  • Personalized communication to all impacted utility end customers through smart video, digital billing, etc.
  • Enhancing mobile workforce management through GIS-based zoning and tracking (overlaying COVID data) for field services for essential and priority services
  • Deploying RPA and virtual assistants for repetitive tasks across call center and back office.

Infosys offerings & client examples

Infosys is supporting a number of global utility clients in their response to COVID-19 across a number of areas:

  • Digital Workplace: enabling utility workforces to work from home and remotely through Infosys’ Workplace Suite (collaboration), Modern Workspaces (VDI), Workplace Operations (automation, self-heal, analytics, virtual agents). In addition, enabling traders to access trading applications via VPN for day-to-day trading
  • Mobility & Field Force: created COVID geofencing solution and zones for field crew safety and protection, and COVID-19 impact GIS dashboard and real-time crew tracking, and work allocation and prioritization based on zoning
  • Customer Service: implementing COVID-19 energy support program and promoting digital billing, canceling service disconnects and waiving late payment charges.

It also sees further traction for its Wingspan open-source cloud-based IT skills training platform, where utilities are currently looking at creating digital CoEs in collaboration with Infosys. The platform now has multiple training courses, including technology, domain, and utility products developed by Infosys, that utility client accounts can use for digital capability build-out and cross-skilling. In support of hyper-automation, it is deploying its LEAP (Live Enterprise Application Management Platform).

Utilities smart bot and AI/ML use cases relevant to the grid, energy supply, and plant operations include:

  • Vegetation management and safety, with the ability to quickly perform visual analytics through the use of drones for asset and field inspections, and to use AI/ML, in partnership with third parties, for predictive analysis identifying where vegetation management needs to be done (e.g. where trees are close to transmission lines and need to be trimmed). The data is processed through the NIA IP layer
  • Infosys NIA-based chatbot for anomaly root cause and resolution
  • Grid analytics, using Grid 360 from Nexant, taking insights into the NIA common platform, and bringing out insights both at the planning end and operations end of the grid; DERM and urban grid offerings
  • AI-based RAMS leveraging KRTI 4.0. Infosys is integrating the Reliability, Availability, Maintainability, and Safety (RAMS) lifecycle services models from Pöyry with NIA predictive modeling and insights capability and tying this back to the asset management and work management systems to identify whether a specific process should be automated or kick-off a maintenance process.


Infosys sees continued traction in cloud adoption across utility enterprises. A key IP is its Polycloud (hybrid cloud orchestration) platform, part of the Infosys Live Enterprise Suite, which enables a utility to develop a new digital services platform and quickly launch new products and services. It effectively enables users to build vendor-agnostic solutions across cloud providers, and includes a vendor selection support framework and smart brokerage; self-service tools for server provisioning and deployment; and a governance framework.
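The vendor-agnostic idea behind a platform like Polycloud can be illustrated with a common provider-abstraction pattern. This is not Infosys code: the class and method names below are hypothetical, and real smart brokerage would weigh far more than a single hourly price:

```python
from abc import ABC, abstractmethod

class CloudProvider(ABC):
    """Minimal provider contract; an orchestration platform wraps
    each vendor's SDK behind an interface like this."""
    @abstractmethod
    def provision_server(self, name: str, size: str) -> str: ...

class AwsProvider(CloudProvider):
    def provision_server(self, name, size):
        return f"aws:{name}:{size}"          # would call the AWS SDK

class AzureProvider(CloudProvider):
    def provision_server(self, name, size):
        return f"azure:{name}:{size}"        # would call the Azure SDK

def cheapest(providers, price_per_hour):
    """Toy 'smart brokerage': pick the lowest-priced provider."""
    return min(providers, key=lambda p: price_per_hour[type(p).__name__])

providers = [AwsProvider(), AzureProvider()]
chosen = cheapest(providers, {"AwsProvider": 0.12, "AzureProvider": 0.10})
print(chosen.provision_server("app-01", "medium"))  # → azure:app-01:medium
```

Because calling code only sees the `CloudProvider` interface, workloads can be placed or moved across vendors without rewriting the application layer, which is the essence of a vendor-agnostic solution.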

Infosys expects further traction for remote surveillance (drone and robotics) for power infrastructure monitoring to reduce field visits, and AR/VR and smart glasses to enable remote field support backed by a central command team to mitigate potential staff shortages in the field.

Further traction in support of COVID-19 (and post-COVID-19) includes remote sensing technologies enabling touchless substations for device management and load control, and IoT, AI/ML-based analytics for planning and asset management. Infosys expects to see further traction for its Wingspan training platform as utilities seek to adapt to the ‘new normal.’

<![CDATA[everis: Digital Transformation Pivots from Discretionary to Fundamental]]>


Digital experience consulting has been a critical focus area for IT services clients over the last several years. NelsonHall estimates that digital experience consulting services revenues grew by 15.8% globally in 2019, and projects a CAGR of around 12% from 2021 through 2024, after the COVID-19 pandemic eases.

However, with COVID-19 still spreading, digital transformation projects with lengthy delivery timescales and indeterminate business value are currently vulnerable to deferment or cancellation. In a NelsonHall survey of over 1,000 CFOs globally, conducted in the early stages of the COVID-19 outbreak, they projected an average decline of ~2% in their companies’ spending on digital transformation initiatives in 2021.

Nevertheless, in an environment where their customers, employees and partners are forced to remain socially distant, connected to the outside world via digital channels, it is imperative for companies to continue to make investments in digital.

We recently spoke with everis, an NTT DATA consultancy focused on Europe and Latin America, to discuss its structured offerings for addressing customer relationships and internal service operations both during the lock-down and in the new world as we come out of it.

Customer Interactions Become Digital-First

everis highlights four offerings leveraging digital technologies where it can help clients quickly adapt their customer-facing operations in the current changed environment:

  • Customer support: B2C organizations in many sectors are challenged with increasing inbound calls in their customer service operations; to help clients expand their use of virtual assistants in these operations, everis is offering a conversational platform called eVA that can boost the adoption of this channel and simplify its management
  • Collections process: with the economic disruption being caused by COVID-19, there are going to be customers unable to pay monies owed. For a company, the cost and effort of chasing after collections that will never be paid is a sunk cost. The everis offering here is based on applying analytics and AI to collections data to help clients prioritize their collections effort
  • New business sales platform: offerings that use COTS products include Sales Commercial Cycle Optimizer solutions based on Salesforce and Microsoft, leveraging pre-built tools, including CPQ configurations and templates
  • E-commerce: everis has a number of offerings, including ones leveraging products from the likes of Adobe and Salesforce to support the expansion and modernization of clients’ e-commerce offerings, including ones to optimize the conversion rate.
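The analytics-driven collections prioritization described above can be sketched as ranking accounts by expected recoverable value. This is an illustration, not the everis offering; the account data is invented, and in practice the payment-propensity score would come from a trained model rather than being supplied by hand:

```python
def prioritize_collections(accounts):
    """Rank overdue accounts by expected recoverable value:
    amount owed x an (assumed) model-derived payment propensity."""
    ranked = sorted(accounts,
                    key=lambda a: a["amount"] * a["propensity"],
                    reverse=True)
    return [a["id"] for a in ranked]

# Hypothetical accounts: chasing A-1's large balance is likely
# wasted effort given its low propensity to pay.
accounts = [
    {"id": "A-1", "amount": 5000, "propensity": 0.05},
    {"id": "A-2", "amount": 1200, "propensity": 0.80},
    {"id": "A-3", "amount": 3000, "propensity": 0.40},
]
print(prioritize_collections(accounts))  # → ['A-3', 'A-2', 'A-1']
```

The design choice worth noting is that effort is allocated by expected value rather than by balance size, which is exactly where an AI-derived propensity changes the ordering.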

Adapting Service Delivery for a Fluid Workplace

COVID-19 is also having a major impact on organizations’ internal operations; in the great shift to work-from-home, employee experience is evolving even faster than customer experience. everis has two tools to support employees operating in a work-from-home environment:

  • everis Knowler, which can be implemented with Office 365 and Azure, helps employees access the institutional knowledge required for them to do their job
  • TOGO, a digital workplace offering to boost Microsoft Office 365 and Teams adoption and improve collaboration and communication.

everis is also looking to help clients transform their service delivery in response to COVID-19 disruptions by:

  • Reducing the workload of remote workers through the expanded use of automation in business processes, leveraging its Clonika RPA PaaS offering and commercial RPA solutions such as Automation Anywhere that can be implemented based on the highest value use cases defined as part of a consulting and assessment phase
  • Working with clients to re-assess their digital properties and identify opportunities to transform these properties through standardization and integration on to a common digital platform. Developing a broader, cohesive approach across digital properties improves the ability to rapidly roll out new products in response to changing customer requirements. everis has developed a proprietary analysis tool, everis winder, that evaluates a digital platform to identify opportunities for transformation.

Digital Transformation as Response to a Crisis, Not a Victim of It

COVID-19 has caused global disruption. One of the ancillary impacts on enterprises has been a widespread re-assessment of investments to determine what is mission-critical in a time of economic uncertainty and what can be delayed until later. Digital transformation initiatives are often viewed as a discretionary budget item, to be undertaken only as time and money allow. However, with lock-downs temporarily pausing nearly all in-person and physical transactions, digital transformation initiatives should be viewed as a response to these challenges rather than budgetary line items that are victims of those challenges.  

everis appreciates that to help its clients weather this disruption, it needs to focus on offerings that address their most pressing needs, and that by tailoring digital transformation initiatives, it can enable clients to improve how they meet customer needs and how they perform their internal operations.

<![CDATA[Sogeti Updates TMAP: Going Beyond SDETs with Cross-Functional Teams]]>


Capgemini’s Sogeti recently introduced a new TMAP book, Quality for DevOps Teams, which is a direct successor to its TMAP NEXT book published initially in 2016. TMAP NEXT has remained one of the methodology bibles that guide QA practitioners in structuring their testing projects.

Sogeti has added regular updates around agile/scrum development, IoT, and digital testing that complemented TMAP NEXT. Now, with Quality for DevOps Teams, Sogeti highlights that it has completely revamped TMAP for the context of agile and DevOps projects.

Moving to cross-functional teams

In writing the new book with agile and DevOps projects in mind, Sogeti has introduced a significant change in targeting the entire software development team and not just QA professionals. The company argues that in the context of DevOps, development teams need to go beyond having diverse skills (BAs, developers, testers, and operations specialists): the individual team members must be able to perform other team members’ tasks if required (which Sogeti calls cross-functional teams).

The impact of this approach is significant; it goes beyond tools and training to include change management. As part of this culture shift, team members take overall responsibility for their projects and are also required to learn new tools, which might be outside their comfort zone. To support this cultural change, program managers need to support team members and provide continuous testing tools and frameworks.

With this cross-functional team approach, Sogeti points to new practices in agile projects. Clients are currently implementing continuous testing strategies, and re-skilling their manual testers toward technical QA activities.

Despite its popularity, the SDET (software development engineer in test) role has remained a vision more than a reality: SDETs have stayed focused on QA activities and are not able to swap jobs with other roles such as product owners, scrum masters, developers, or business analysts.

Sogeti, therefore, points to an entirely new approach to agile and DevOps that will require further delivery transformation and investment among clients. The benefit of the Quality for DevOps Teams book, therefore, is in providing guidelines on how to structure delivery over the long term.

Aiming to make reporting easier

Another guiding principle of Quality for DevOps Teams is the VOICE model, which defines what the client wants to achieve with a project (value and objectives) and measures it through qualitative and quantitative indicators.

Sogeti’s approach goes beyond the traditional go/no-go decision to release an application to production based on UAT and improvement in KPIs, such as the number of defects found. VOICE also closes the loop by incorporating feedback from end-users (experience) and from operations.
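As a toy illustration of combining quantitative KPIs with end-user feedback in a release decision, a gate might check every indicator against its target. This is a hedged sketch in the spirit of measuring value and objectives, not the VOICE model itself; the indicator names and targets are invented:

```python
def release_gate(kpis):
    """Each KPI maps to (measured value, target, higher_is_better).
    Pass only if every indicator, including user-feedback scores,
    meets its target; otherwise report which indicators failed."""
    failures = []
    for name, (value, target, higher_is_better) in kpis.items():
        ok = value >= target if higher_is_better else value <= target
        if not ok:
            failures.append(name)
    return ("go", []) if not failures else ("no-go", failures)

# Hypothetical indicators: a defect count (quantitative) alongside
# a surveyed end-user satisfaction score (qualitative, looped back).
kpis = {
    "open_defects":       (3, 5, False),
    "user_feedback_csat": (4.1, 4.0, True),
}
print(release_gate(kpis))  # → ('go', [])
```

The structural point is that user and operations feedback sit in the same gate as defect counts, rather than being reviewed after the release decision has already been made.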

Training at scale

Sogeti’s efforts around DevOps and continuous testing do not stop with the new book; it has relaunched its website, which it wants to turn into a testing community site providing resources and knowledge for agile and DevOps projects, along with more traditional approaches such as waterfall and hybrid agile.

Alongside this effort, Sogeti has refreshed its training capabilities and designed three new training and certification initiatives, working with specialized examination and certification provider iSQI. The company has created a one-day training class for testing professionals already familiar with TMAP, and three other three-day specialized courses.

Sogeti is rolling out the training, targeting teams beyond QA, including business analysts, developers, and operations specialists. Sogeti is also rolling out the program internally across the larger Capgemini group, targeting organizations involved in agile projects. The initiative is at scale: NelsonHall estimates that Capgemini has 150k personnel engaged in application services.

Sogeti’s new book provides a view of what agile will look like next

Sogeti’s Quality for DevOps Teams book provides a long-term view of what agile teams and continuous testing will look like in the future. Its benefit is in offering a structured approach, and it is reassuring to see Sogeti deploying it through certifications, showing that the transformation to cross-functional teams can become a reality. NelsonHall expects the rollout to bring feedback and fine-tuning of Sogeti’s approach. We will continue to report on how Sogeti implements Quality for DevOps Teams.

<![CDATA[Infosys Shifts its QA Portfolio in a Post COVID-19 World]]>


We recently talked to Shishank Gupta, the practice head of Infosys Validation Services (IVS), about how the practice is adapting to the COVID-19 pandemic and the ensuing economic crisis. Of course, the initial focus has been on employee health, helping clients, and enabling its employees to work from home, with access to tools, applications, and connectivity.

The QA practice is gradually moving on from this phase: Mr. Gupta highlights that clients are now starting to reconsider the contracts they have in place, discussing scope and prioritizing activities across run-the-business and change-the-business. Unsurprisingly, new deals are on hold as clients lack visibility of the short-term future.

COVID-19 Accelerates the Shift to Digital

Infosys Validation Services is also busy preparing for the post-COVID-19 world and what that will mean in terms of clients shifting their QA needs. On the delivery side, the practice is expecting that client acceptance for home working and distributed agile will increase. This will drive usage of cloud computing, collaboration tools, and virtual desktops, along with increased telecom connectivity.

The pandemic will accelerate the shift in IT budgets to digital, particularly in retail, government services, and healthcare, the latter with renewed spending in health systems, telemedicine, collecting health data, and clinical trials (resulting from increased drug discovery activity). Shishank Gupta also expects that demand for UX testing will grow alongside the growth in digital projects. He also anticipates a continued acceleration of digital learning adoption that will create further testing opportunities across applications and IT infrastructures.

Infosys is Changing its Go-To-Market Priorities

Infosys’ IVS practice is realigning its go-to-market priorities, emphasizing existing offerings that had previously generated only moderate client appetite but have potential for growth in a post-COVID-19 world. One example is a data masking offering that IVS created several years ago for financial services clients, anonymizing production data for use as test data. IVS expects new delivery models to drive demand around capabilities such as data security and privacy, risk, and compliance audits.

IVS also expects accelerated adoption of cloud computing, both in terms of testing applications migrating to the cloud and SaaS adoption.

Finally, Infosys IVS is increasing its go-to-market effort around crowdtesting. The practice highlights that security concerns were a barrier to crowdtesting’s commercial development. Mr. Gupta now expects clients will adopt crowdtesting as a service and require fewer background checks on the crowdtesters.

And, of course, Infosys knows the world post-COVID-19 will also require leaner operations and lower costs: IVS is expanding its commercial focus on testing open-source software and on test process re-engineering combined with RPA. Mr. Gupta highlights an ongoing project with an APAC investment firm where it is deploying RPA tools to automate the monitoring of applications in production and feed findings back to QA and business users.

QA Becomes Less Internally Focused and More Digital

NelsonHall expects that the role of testing will become less focused on internal transformation (e.g. test process standardization and TCoE setup) and become more integrated within digital transformation programs, where testing is part of the required services.

Currently, clients are continuing to focus on the immediate imperative of business continuity. NelsonHall expects that, in a post-COVID-19 world, clients will make strategic decisions, including accelerating their cost savings programs, driving offshore adoption and distributed agile, and also renegotiating their existing multi-year managed testing services contracts. In parallel, they will redirect some of their savings to the digital-led QA activities that Shishank Gupta has described.

In this new world, enterprises will need a QA partner that offers both onshore advisory capabilities to shift their QA spending to change-the-business, and further offshoring and automation to reduce their run-the-business spending.

<![CDATA[Indian-Delivered IT Services in 2020: Innovation in Many Forms]]>


NelsonHall was recently invited to present at NASSCOM Technology & Leadership Forum 2020, one of my last foreign trips for some time, I suspect. While in India, in addition to presenting, participating on panels and conducting interviews, we had an opportunity to sit down with leaders from across the IT services landscape as well as venture beyond the Grand Hyatt Mumbai to delivery locations in Pune and Bangalore. And in all our interactions, innovation was a key theme.

In this blog, we look at several examples of ways in which leading IT services providers are enhancing their offerings through innovation.

Expanding the remote delivery services offered

In its 50-building campus in Bengaluru, Infosys has stood up its Experience Design Studio to support the delivery of experience design services globally. In addition to its acquisitions of WONGDOODY and Brilliant Basics to expand its client-proximate creative design capabilities, Infosys has created this dedicated studio, housing ~160 designers, to deliver design services (other locations that house the Infosys XD studios, WONGDOODY and Brilliant Basics bring the total Experience Design team globally to ~600). Capturing all of the necessities of a design space, including open meeting spaces, client collaboration areas, and plentiful whiteboards covered in the most artistic notes you will ever see, this dedicated space is co-located with the broader Infosys global delivery capability while also being set apart as a standalone space. The studio houses multi-disciplinary teams which aim to apply the skills of traditional design to broad systemic challenges: questioning, reframing, and addressing issues through the combination of design, technology, and industry skills.

Significantly, the Infosys XD studio does not necessarily play a supporting role to designers located at client sites. It has its own client relationships, in particular for long-term engagements, such as with a U.S.-based logistics company that engaged Infosys in 2017 to help reimagine its business model. Another example is the work done for a tennis governing body, designing and shaping a new understanding of the sport that benefits fans, players, and the media. The group is also working with the Indian Income Tax Department with the aim of simplifying the filing of tax returns.

These design capabilities enable Infosys to deliver end-to-end services to clients rather than ceding up-front strategic and creative services to consultancies and agencies, particularly at clients where it already possesses strong relationships.

Narrowing the focus

Wipro, with several years of S/4HANA services under its belt, has centered its SAP offerings around enabling an enterprise’s digital transformation journey, at the same time prioritizing SAP services in industries where it can be a top-two provider. These include:

  • Leveraging its recent digital/design acquisitions, process and business consulting capabilities to facilitate clients on design-led transformation projects
  • SAP products: S/4HANA, in particular hosted on the hyperscale clouds (AWS, Azure, and GCP), and SAP SaaS products SuccessFactors, Ariba, and Cloud for Customer (C4C)
  • Industries: energy & natural resources, manufacturing, consumer industries, and retail. These are areas where it already possesses strong capabilities: it acted as SAP’s solution partner on SAP Model Company for Utilities and is now partnering with SAP on its food and beverage model company offering based, in part, on Wipro’s work with a large U.S. food manufacturer. Wipro is also a co-development partner for fashion & retail with SAP, and the newly developed solutions will offer a range of functionality from fashion manufacturing to in-store merchandising
  • Geographic markets: Wipro is narrowing its focus to mature markets where large historic SAP clients have yet to migrate. In addition to the broader North American and continental European markets, primary focus areas include Germany (where Wipro acquisition Cellent provides a strong local capability), Japan and Saudi Arabia. 

This allows Wipro to continue to focus on large transformation engagements while targeting its innovation, partnership ecosystem, and offering development where it feels it is best positioned competitively.

Developing assets to enhance service delivery

Another approach, and one demonstrated by two other services providers we visited, is to apply innovative technical assets to enhance well-established service offerings.

LTI has developed a platform called METIS to improve the efficiency and effectiveness of the software development and testing process. METIS connects with underlying SDLC tools to automatically create a knowledge fabric across the IT landscape. METIS then applies machine learning to create relationships between business functions, actions taken, the calls to associated applications and APIs, defects, production tickets and performance logs.

This mapping gives better visibility into how changes to business functions impact the IT and business landscape. That improved visibility increases the efficiency of test execution and the coverage available for automated testing, while reducing defects by surfacing related ones. METIS also improves developer productivity by providing architects and developers with analytics on running code. In addition to using METIS to improve its own application development services, LTI is offering it as licensed software to provide a parallel revenue stream.
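As a simplified illustration of this kind of relationship mapping (our sketch, not METIS’s actual implementation; all names are hypothetical), a knowledge fabric can be modeled as a graph linking business functions to the applications and defects observed against them, so that a change to one function surfaces everything it may impact:

```python
from collections import defaultdict

# Hypothetical knowledge fabric: edges link business functions to the
# applications, APIs, and defects recorded against them.
edges = [
    ("checkout", "payments-api"),
    ("checkout", "cart-service"),
    ("payments-api", "DEF-101"),
    ("cart-service", "DEF-207"),
    ("search", "catalog-service"),
]

graph = defaultdict(set)
for src, dst in edges:
    graph[src].add(dst)

def impacted(node, graph):
    """Return everything reachable from a changed business function."""
    seen, stack = set(), [node]
    while stack:
        current = stack.pop()
        for neighbor in graph.get(current, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                stack.append(neighbor)
    return seen

# A change to 'checkout' flags both downstream services and their defects.
print(sorted(impacted("checkout", graph)))
```

In a real platform the edges would be inferred by machine learning from SDLC tooling rather than hand-coded, but the impact query works the same way.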

A similar approach is being taken by CSS Corp. Its largest business segment focuses on customer experience and enterprise support, and it is applying intelligence to enhance these labor-intensive services. Its position as the first touchpoint with customers has enabled it to capture significant data, which it is looking to leverage to better align IT and business and improve its service delivery. As an example, CSS Corp is creating an integrated digital service management and support ecosystem for a networking company where it delivers customer support services. For this company’s healthcare clients, CSS Corp was able to use automated tools to streamline the process of assessing and replacing failed equipment, accelerating delivery and reducing ticket resolution times.

Further aligning IT offerings with business value

Leading services providers are also looking to be innovative in directly aligning their IT services with the client achieving its business objectives.

As part of its ADMNext offerings, Capgemini has developed a model which aligns the delivery of its application development and maintenance services to specific technology transformation and business objectives. The approach places these services as the foundation of a client’s transformation, freeing up client resources (budget and employee time) through the application of automation to allow for greater focus on transformation initiatives that evolve the client landscape. This transformation is positioned across three levels of internal change: technological transformation from modernizing the IT landscape; business transformation through transforming service delivery processes; and disruptive services by applying new and innovative technologies to fundamentally change business models.

Innovative offerings

TCS has a dedicated innovation facility in Pune, housing ~300 dedicated researchers separate from its broad service delivery campuses (further details on TCS’ research and innovation function are described here). This center looks both at building assets directly applicable to service delivery and at broader research topics. Examples of current research topics include:

  • Developing a digital twin for an enterprise that leverages machine learning to model how to optimize enterprise operations
  • Machine learning targeting workforce management, fraud and money laundering detection, and information and knowledge extraction
  • Developing digital twins of human organs to eliminate animal testing
  • Real-time recipe analysis of steel content to ensure optimal strength
  • Cybersecurity for both cloud workloads and IoT
  • Applying cognitive capabilities to tailor media to customer needs.


In our visits to the India campuses of these IT services firms, we were impressed with the level of investment they are all placing on innovation, both in developing new offerings and in transforming their core services.

Leading IT services providers are taking a range of approaches; all have a clear sense of direction as to how they are building differentiation and also gaining credibility in positioning as thought leaders with their clients.

<![CDATA[The Changing Focus of User Experience Services (vlog)]]>


User experience and user interface (UX/UI) consulting and design has traditionally been focused primarily on external, customer-facing web properties. However, the scope and focus of experience design services has expanded as companies realize design thinking and experience-centric design has greater applicability than solely in interfacing with users. In this vlog, David McIntire presents at NASSCOM 2020 on the changing focus of user experience services.

<![CDATA[NTT DATA: Focusing on Scaling Up Salesforce Capabilities while Expanding Portfolio]]>


We recently had an update from NTT DATA regarding its Salesforce services activities across the different countries it operates in. At a global level, the company’s priority is to scale up its Salesforce services capabilities, with recruitment as a priority.

Salesforce keeps making horizontal M&As at a fast pace while verticalizing its software products. NTT DATA is mirroring this approach by strengthening its portfolio around Commerce and Marketing Clouds along with MuleSoft, while creating verticalized offerings. Finally, NTT DATA is working on better coordinating its various units and driving synergies between its Salesforce units and its other digital consulting divisions.

Scaling up Salesforce capabilities

NTT DATA has 1,000 Salesforce certified professionals currently, and is planning further expansion, prioritizing organic growth. Accordingly, the company is retraining some employees (for example, Siebel and Microsoft Dynamics CRM consultants and J2EE developers) in Salesforce skills. NTT DATA is also recruiting, targeting both developers with five years of experience and new graduates.

While NTT DATA has been one of the most acquisitive firms in IT services, in the field of Salesforce services it is taking a cautious approach. The company is no longer targeting Salesforce pure-plays, now considering these to be often expensive as well as niche. Instead, NTT DATA favors buying larger firms that bring a wider range of capabilities, such as Canada’s Sierra Systems in 2018, which brought in around 700 employees offering IT consulting, systems integration, and application management services, including Salesforce capabilities.

Expanding horizontal capabilities, including around MuleSoft

Just as Salesforce is expanding its capabilities both horizontally and vertically, so is NTT DATA. Horizontally, NTT DATA is investigating broadening its expertise around Marketing and Commerce Clouds along with MuleSoft. Application integration and MuleSoft are a priority; NTT DATA acknowledges that enterprises now often use Salesforce’s Cloud as a platform for building larger systems and integrating with other applications. NTT DATA has set up Integration CoEs in the U.S. and India that group integration software capabilities from MuleSoft, Boomi, and Informatica. Japan and Europe are next: each will have its own Integration CoE in 2020.

Verticalizing the Salesforce portfolio

NTT DATA continues verticalizing its Salesforce portfolio. It has developed several industry-specific accelerators, such as Telecom Lab, Field Service Lightning and CPQ for Manufacturing, Manufacturing in a Box, and Logistics in a Box, as well as accelerators for the automotive industry.

One verticalization example is NTT DATA’s Digital Insurance Platform (DIP), which it deployed for a U.S. insurance client that wanted to launch a new life and annuity insurance product in just 45 days. DIP combines a reference architecture that integrates the Service and Sales Cloud, along with Vlocity, and relies on MuleSoft for integration with the client’s claims management and policy administration systems. Along with its reference architecture, NTT DATA brought its repository of business processes, and its investments in RPA, chatbots and AI.

NTT DATA highlights that DIP has high potential, and the company is targeting its 60 insurance and healthcare clients in the U.S. As part of its expansion plans for DIP, NTT DATA has so far created DIP versions for the life insurance, annuity/retirement, healthcare payer/insurance, and P&C segments. Looking ahead, the roadmap includes public unemployment insurance and automotive insurance.

With its verticalized Salesforce services, NTT DATA is targeting multi-year contracts: the one noted above with the U.S. insurance client is a five-year BPS deal where NTT DATA also operates the contact center and back-office services. The company is also looking at potential SaaS deals.

Internal coordination is a priority

With NTT DATA having a federated structure with geographies as key business units, the company’s challenge is to further coordinate its different activities across geographies and across service lines. An example is the 2019 investment in Star Global Consulting, a U.S. firm with 750 personnel that has brought in onshore strategy consulting, design consulting, mobile development, and marketing skills. NTT DATA highlights that Star Global brings skills such as digital consulting and digital marketing that are adjacent to its Salesforce services capabilities. The challenge will be to drive coordination between its Salesforce and digital units across geographies. This is a priority for NTT DATA.

<![CDATA[Cognizant Automates Testing of Point of Sale Terminals]]>


In a recent blog, we highlighted how Cognizant approaches the testing of connected devices. Testing connected devices brings new challenges to QA at two levels: conducting hardware testing and automation. Cognizant’s TEBOT IP is based on a combination of traditional test automation (mostly based on Selenium test scripts) and hardware powered by a Raspberry Pi, triggering physical actions/movements.

The PoS Ecosystem is Becoming More Diverse

Cognizant has identified a new use case for TEBOT, targeting point of sale (PoS) terminals. The nature of PoS has changed over the years, with the rise of self-checkout terminals, a greater diversity of hardware (particularly in peripherals, e.g. barcode scanners), a growing number of authentication methods (e.g. e-signature, PIN code), and more payment methods (NFC, insert, or swipe).

The proliferation of hardware peripherals is challenging QA teams, with few vendors providing simulation software for their peripherals, and with a rising number of PoS/peripheral combinations.

These developments make PoS testing a good candidate for test automation.

In looking to automate human activities through TEBOT, Cognizant has focused on customer touchpoints, for example inserting or swiping a card, entering a PIN code, or signing electronically. It has developed test scripts based on Selenium and conducted tests in its labs in Chennai.

Cognizant has conducted PoS testing for several clients, including:

  • For a Canadian retailer, Cognizant ruled out conducting test automation using simulation software because of latency issues due to integration with third-party systems. The company used TEBOT, taking a payment transaction-based approach for PoS terminals, and identified 2,200 defects in its labs
  • With a large Australian supermarket chain, where testing was previously performed manually, the challenge was the need to conduct PoS testing for both its owned stores and those of franchisees. In total, the client faced combinations based on 40 PoS types and 350 peripherals. In addition to deploying TEBOT, Cognizant deployed AI to analyze defect logs in the 25 sprints of the past year and predict where defects were likely in upcoming releases.

The PoS Industry Continues its UX Transformation

Cognizant believes that the PoS industry will continue to invest in new equipment and peripherals: AR/VR and mobile PoS will become more prominent and drive further focus on UX. The number of installed PoS terminals is expected to increase by 10% each year, and this will require further investment in test automation.

Combining Robot-based Test Automation & Crowdtesting

We continue to explore how best to automate the testing of connected devices in their various forms. The market is large and is expanding quickly from its IoT product niche to all connected devices and equipment that combine hardware with embedded software/firmware. In short, the testing market potential is huge and spans industrial and consumer devices and equipment. Cognizant’s approach with TEBOT focuses on functional testing. Looking ahead, we think Cognizant’s approach should be combined with crowdtesting and UX testing. The company has a crowdtesting value proposition with its fastest offering, which can also provide UX testing, complementing the functional test capabilities that TEBOT brings.

<![CDATA[test IO: Strategy Update on EPAM’s Crowdtesting Business]]>


We recently talked to test IO, the crowdtesting vendor that was acquired by EPAM Systems last April. We wanted to understand the crowdtesting positioning of the company, learn more about the User Story crowdtesting offering launched in December 2019, and understand how test IO fits within the larger EPAM organization.

Founded in 2011, test IO today has 200 clients, many in the retail, media, and travel industries, with a sweet spot around customer-facing applications.

test IO has positioned its crowdtesting portfolio in the context of agile

test IO has focused on functional testing (along with usability testing), targeting agile projects in recent years. Crowdtesting in the context of agile development projects (i.e. continuous testing) remains a priority and test IO recently launched an offering named User Story testing.

The crowdtesting industry has to date relied on two main offerings: exploratory testing and test case-based testing. Under the exploratory testing model, a crowdtester gets few instructions on what to test and goes through the application-in-test by intuition. With a test case-based approach, the crowdtester relies on detailed instructions on how to complete a task or a business process.

With its User Story approach, test IO is promoting a method that lies somewhere between exploratory testing and test case-based testing. In agile development methodologies, user stories are the instructions given to software developers in the form of “as a [user/admin/content owner], I want/need [goal] so that [reason]”. The challenge from a testing perspective is to test these user stories by setting up acceptance criteria that are specific enough to be tested but still meet the spirit of agile that relies on loose requirements and iteration.

As with all agile projects, speed is of the essence. test IO argues it can complete User Story testing in approximately two hours, from the moment it sends a mobilization campaign to its members to the moment it receives defects back from crowdtesters. The company highlights that working with a population of crowdtesters with the right level of testing skills helps in achieving speed. test IO believes it has enough members to react quickly to any project invite while providing the right defect coverage.

Exploring portfolio synergies with EPAM

test IO has also expanded its delivery model from public crowds and private crowds (e.g. employees of the client) to the EPAM internal crowd. test IO can rely on EPAM’s 6k testing practice members to expand its reach and bring to the client career testers with vertical experience while reassuring a client that its application-in-test will be exposed to EPAM personnel only.

User Story testing and internal crowds are just the beginning: test IO and EPAM intend, over time, to expand their crowdtesting capabilities to specialized services: performance testing will be the first of these.

AI is also on the agenda. One of test IO’s first AI use cases has been crowdtester selection, based on the technologies and experience needed by the client. A current priority is defect review: for each crowdtesting project, test IO reviews the defects logged by crowdtesters and removes duplicates. The company wants to automate most of this deduplication activity to free up time and focus on data analysis.
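A minimal sketch of automated duplicate screening (our illustration, not test IO’s actual approach; titles and threshold are hypothetical) could compare each new defect title against previously logged ones by string similarity and flag likely duplicates:

```python
from difflib import SequenceMatcher

def is_duplicate(new_title, existing_titles, threshold=0.8):
    """Flag a defect as a likely duplicate if its title closely
    matches any previously logged title."""
    return any(
        SequenceMatcher(None, new_title.lower(), title.lower()).ratio() >= threshold
        for title in existing_titles
    )

# Hypothetical defect log from earlier crowdtesters.
logged = [
    "Login button unresponsive on iOS",
    "Checkout page crashes on submit",
]

print(is_duplicate("Login button unresponsive on iOS 13", logged))  # near-match
print(is_duplicate("Search returns no results", logged))            # novel defect
```

Production systems would likely use semantic similarity over titles, descriptions, and screenshots rather than plain string matching, but the triage logic is the same: auto-clear the obvious duplicates so reviewers only see borderline cases.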

In a later phase, test IO wants to run through the defect data it has built during its nine years of existence and identify crowdtesting best practices and application defect patterns.

test IO hints that EPAM is exploring how to best use its experience in crowdsourcing and expand it to other software services activities. We will be monitoring with interest how EPAM develops the crowdsourcing model of test IO. Despite claims of attracting young talent through different employment models, most vendors still rely on permanent or freelancer positions. With test IO, EPAM may be able to invent a new employment model that will expand from testing to other application service activities.

<![CDATA[Digital Experience Consulting in 2020: From Fishing to Teaching Clients to Fish]]>


In a recent NelsonHall survey of IT services buyers, the most highly-sought benefit of digital transformation engagements was improving customer experience and customer satisfaction (highly important to ~68% of buyers globally). This focus has driven IT services vendors to invest heavily over the last several years in expanding their experience consulting and design capabilities. In the same survey, ~75% of buyers identified an ability to provide UX consulting and design as a highly important trait sought in vendors.

A major shift in client attitudes

These findings underpin a major shift in IT service client attitudes toward experience consulting and design: it is no longer an optional or standalone activity; it is now a core component of IT. Historically, digital experience consulting and design services were focused on a sub-set of a client’s IT landscape, primarily customer-facing digital properties such as e-commerce sites and client portals. Experience design projects became one-off initiatives to drive the redesign and development of these external-facing applications.

Now, experience is a factor in all application development work. Clients are as likely to seek experience consulting and design services for employee applications as they are for external applications. It is increasingly recognized that employee satisfaction correlates to the applications utilized in their day-to-day jobs; employees don’t forget the ease-of-use of the Uber app or Amazon website just because they are sitting at their work desk.

Experience focus is expanding

The focus of experience is also expanding. Experience is no longer limited to the interface of an application; it now spans the entire service delivery lifecycle. A customer’s experience is as much defined by how quickly they receive an e-commerce order as by the interface used to place that order.

Clients are also recognizing the iterative, ongoing nature of experience design. Developing a user-centric application is not a one-time process; it is ongoing and must constantly evolve as customer demands evolve. Building processes and tools that allow for the capture of user feedback to drive iterative application enhancements is as important as, or more important than, the research required at the onset of a digital experience design program.

All of these factors make experience pervasive in nearly all IT-focused work, and this is resulting in a change in the services clients are looking for from IT service vendors. Rather than solely looking to vendors for consulting, clients will increasingly look for vendors to inculcate these experience design skills in the client organization itself. The expectations of vendors will not just be to deliver a design thinking session and quickly develop wireframes and lo-fi prototypes to be tested, but also build the capabilities to do this within the client organization itself.

Vendor experience capabilities need to evolve

For vendors, this means that the digital experience capabilities they develop must also evolve. Having designers and design thinking specialists capable of driving client engagements is only half of the requisite offering. Vendors must also be able to build these capabilities in their clients while helping them understand the level of transformation required to maximize the value of building an internal experience design capability. The use of design thinking and rapid prototyping across the organization requires fundamental changes, including greater cross-functional collaboration, changing roles both inside and outside of IT, and expanded change management.

To facilitate this, vendors must be able to provide broad consulting that spans several components, including:

  • Training client personnel to conduct design thinking sessions and design experiences
  • Defining and implementing design libraries for use within the client
  • Delivering organization, culture change, and change management services.

In parallel, there will continue to be specific activities where clients won’t realize the value of building in-house capabilities. Conducting dedicated user research or assessing the applicability of emerging technologies (such as chatbots and AR/VR) may remain specialized functions that clients look for vendors to provide. But with the ongoing, iterative nature of digital experience design, vendors will be tasked with evolving offerings to more cost-effective delivery; for example, offering these through an as-a-service capability.

Experience design is evolving from a niche function to becoming a foundational aspect of nearly all application work. Client demands are evolving in parallel, and successful vendors will expand their capabilities and transform their offerings to focus on enabling their clients in addition to delivering outcomes.

<![CDATA[Digital Workplace Services is Enhancing Collaboration Across the Enterprise]]>

NelsonHall completed an in-depth analysis of advanced digital workplace services (DWS) in 2019. This blog looks at some of the key findings from this research, in which we spoke both to leading IT services vendors and clients of their services. We will also take a look at some of the drivers and trends we expect to see as we move into 2020 and beyond.

DWS is enabling the future-ready workplace

Organizations are placing greater emphasis on overall employee experience through the deployment of digital workplace services. In addition, the role of central IT is changing, adopting the role of a service broker to enable end-users to provision the services they need, when they want, and how they want. This is increasing the need for more personalized engagement models, including self-service (mobile support apps, virtual agents, chatbots, and knowledge articles). DWS is also driving the use of proactive and predictive engagements, including self-healing, AI and automation, and specialist onsite support through Tech Cafes and smart lockers, while utilizing AR/VR in the field for remote services.

A key development is the use of DWS tools and techniques across the entire organization, with examples including the use of chatbots and virtual agents in HR for onboarding and off-boarding activities. Gamification methods are being deployed across marketing and communications departments to drive engagement and adoption of services. In addition, there is greater integration with facilities management through the use of IoT-enabled devices and wayfinding solutions to drive smart office concepts.

Intelligent collaboration services & design thinking take personalization further

Vendors are developing social and collaboration platforms that integrate multiple platforms (including Microsoft Teams, WhatsApp, Workplace by Facebook, G-Suite, Skype for Business, and Yammer) into one. This is driven by organizational requirements to enable employees to collaborate more effectively on projects through the platform of their choice, improving overall UX. It also enables targeted communications to specific user groups or personas. We expect activity to ramp up in this area, in particular as vendors partner more with disruptors in the market, including Google and AWS.

Many vendors are further utilizing consulting and advisory services to drive a collaborative design thinking approach to client engagements, to develop the digital workplace user experience. They are further investing in and developing dedicated design and digital studios in support of DWS initiatives. This also includes the use of immersive technologies, including AR/VR, to showcase ‘smart office’ capabilities.

Analytics is playing an even more critical role across DWS

Vendors are increasingly looking to use advanced data analytics, NLP, and ML tools to manage and analyze data, including Hadoop and Kafka for data management and DataRobot to evaluate different ML algorithms.

They are seeking to better understand the big data generated in the end-user environment and act on it to stop issues occurring in the first place, working out what to automate to drive the best outcome. This also includes the creation of automation scripts or bots to improve service quality pre-emptively.

Another key focus area is the use of end-user analytics tools, including Nexthink and SysTrack, to improve end-user monitoring and overall UX. Vendors are collecting data from log files across the different devices deployed in the workplace and aggregating it to identify patterns. This is then used to trigger preventative measures to improve configuration, and to predict, prevent, detect, and fix potential issues before they reach the service desk.
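The aggregation step described above can be sketched as follows (a hypothetical illustration, not any specific vendor tool; device names, event types, and the threshold are invented): roll up event logs per device and flag devices whose error count crosses a threshold as candidates for proactive remediation:

```python
from collections import Counter

# Hypothetical per-device event log entries: (device_id, event_type)
events = [
    ("laptop-014", "app_crash"),
    ("laptop-014", "app_crash"),
    ("laptop-014", "slow_boot"),
    ("laptop-022", "app_crash"),
    ("desktop-007", "ok"),
]

def flag_devices(events, error_types=frozenset({"app_crash", "slow_boot"}), threshold=2):
    """Count error events per device; devices at or above the threshold
    are surfaced for preventative action before users raise tickets."""
    errors = Counter(device for device, event in events if event in error_types)
    return sorted(device for device, count in errors.items() if count >= threshold)

print(flag_devices(events))
```

Real deployments aggregate far richer telemetry (boot times, crash dumps, resource usage) and feed it to ML models, but the principle of turning device-level log patterns into pre-emptive actions is the same.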

AI-led service desk initiatives are increasing

Many vendors are expanding capabilities in support of the AI-led service desk to facilitate the move to a fully automated, ‘zero-touch’ service desk. This includes automation and self-serve capabilities (IVR, RPA, chatbots, auto-scripts, and biometric password reset, including fingerprint and face recognition).

A key focus includes the development of AI-based virtual agents, using NLP and acting as an L1 agent, learning from past data, and improving through ML. These are invariably a mix of IP and third-party solutions. If the virtual agent is unable to rectify the issue, it may log a ticket on behalf of the end-user (whether incident or request), passing the data and intelligence collected to a specific L2/L3 resolver group. Vendors are also integrating common AI interfaces into VAs, including Siri, Cortana, and Skype for Business, to improve UX.

Self-healing ecosystems will enhance predictive capabilities further

As vendors gain more insights across the end-user environment through analytics and AI, they are enabling greater adoption of self-healing technologies and auto-remediation capabilities. Typical toolsets deployed include Nanoheal and Nexthink, enabling self-heal frameworks that run interactively to help end-users fix their own issues, provide agent-assisted services (for example, through ServiceNow to remotely fix issues), or run silently to address issues proactively. Vendors are building libraries of self-heal scripts and self-help content, including one-click automated solutions and knowledgebase articles, invariably targeting self-healing at L0, L1, and L1.5 incidents.

Future developments

The DWS market will continue to evolve with demand for even deeper personalization of services driven by increasing workforce expectations across the enterprise. It will also be key to attracting and retaining new talent.

AI-led service desk will expand

The propensity to adopt AI, ML, analytics, and self-healing technologies will increase to facilitate the transition to an AI-led, zero-touch service desk with greater predictive and preventative capabilities to further improve both the end-user experience and employee experience across the entire enterprise. This also includes AI-enabled virtual agents utilizing ML and semantic analytics and enhancing use cases to deal with more complex support issues (L3 and above), and expanding VA capability across the enterprise.

In addition, we expect to see further development in areas including proactive mass healing (L2/3), with super-users within the service desk resolving data corrections or data validation errors with site reliability engineers (SREs) approving solutions offered by self-healing, although we anticipate this will be across a more protracted timeframe.

Microsoft MMD will gain traction

Although end-of-life support for Windows 7 kicked in on January 14, 2020, we expect there will still be considerable migration activity for the foreseeable future, with laggards moving to Windows 10, which provides added security along with device flexibility and improved UX.

We also foresee more traction with Microsoft Managed Desktop (MMD), enabling organizations to allow Microsoft to manage their Windows 10 devices, providing the latest versions of Windows 10 Enterprise edition, Office 365 ProPlus, and Microsoft security services. We also expect to see more uptake for Windows Virtual Desktop on Azure, enabling Windows 10 virtual desktops to run on the Azure platform; these will also provide a real alternative to Citrix.

Other developments will include increased provision of ‘aaS’ offerings for Windows and devices, Evergreen services for Windows 10, and EUC as a Service (providing Win10, O365, DaaS, and unified endpoint management) on a per-user price basis.

IoT-enabled smart buildings will increase

We expect vendors will further enhance their capabilities in support of workplace IoT across the smart office (utilizing beacons, sensors and wayfinding solutions) for smart meeting rooms, reservations, facilities, space management; and expanding field services through AR/VR for asset tracking and worker safety, and remote technical support – in addition to using AR/VR for immersive learning, training, and development.

Greater focus on XLAs and business outcomes

It is likely we will also see greater adoption of business outcome-focused XLAs, which include end-user journey quality, zero-time-to-fix (where incidents are avoided), and measures of digital adoption (end-user satisfaction, engagement, omnichannel usage, number of liked and shared knowledge articles).

We anticipate vendors will focus on developing dedicated digital transformation centers and CoEs in areas including AI, ML, automation, data science, cognitive virtual agents, and NLP bots/chatbots – in addition to creating joint R&D capabilities and go-to-market initiatives with key ecosystem partners.

Market disruption

Finally, we expect Amazon and Google to emerge as increasingly major disruptors in the DWS market, as already evidenced by a number of recent collaboration initiatives with vendors.

<![CDATA[TCS Positions Data Testing Capabilities Around Big Data & AI with BITS]]>


In the world of testing services/quality assurance, data testing has in the past been somewhat overlooked, still largely relying on spreadsheets and manual tasks.

While much of the current attention has been on agile/continuous testing, data testing remains an important element of IT projects, and gained further interest a few years ago in the context of big data, with migration of data from databases to data lakes. This renewed interest continues with focus on the quality of data for ML projects.

We recently talked to TCS about its activities for automating data testing. The company recently launched its Big Data and Analytics Test Automation Platform Solution (BITS) IP, targeting use cases including the validation of:

  • Data across technologies such as Enterprise Data warehouse (EDW) and data lakes
  • Migrations from on-premises to the cloud (e.g. AWS, Microsoft Azure, and Google Cloud Platform)
  • Report data
  • Analytical/AI models.

Testing Data at Scale through Automation

The principle of testing data is straightforward and involves comparing target data with source data. However, TCS highlights that big data projects bring new challenges to data testing, such as:

  • The diversity of sources, e.g. databases such as RDBMS, NoSQL, mainframe applications and files, along with EDWs and Apache Hadoop-related software (HDFS, Hive, Impala, and HBase)
  • The volume of data. With the fast adoption of big data, clients are now transitioning their key databases, encompassing large amounts of data
  • The technologies at stake: in the past, one would have written SQL queries, but such queries no longer suffice in the world of big data.

To cope with these three challenges, TCS has developed (in BITS) automation related to:

  • Determining non-standard data by identifying data that is incorrect or non-compliant with industry regulations and proprietary business rules
  • Validating data quality, once the data transformation is completed, through identifying duplicates in the source, duplicates in the target, missing data in the target, and extra data in the target
  • Identifying source-target mismatches at the individual data level.
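The three validation steps above follow a common pattern for migration testing. As a minimal sketch (this is an illustration of the general approach, not BITS's actual logic, and the record shapes are assumptions), a source-to-target check might look like:

```python
from collections import Counter

def validate_migration(source, target, key):
    """Compare target rows against source rows on a key column,
    reporting duplicates, missing/extra rows, and field mismatches."""
    src_keys = Counter(row[key] for row in source)
    tgt_keys = Counter(row[key] for row in target)
    report = {
        "duplicates_in_source": [k for k, n in src_keys.items() if n > 1],
        "duplicates_in_target": [k for k, n in tgt_keys.items() if n > 1],
        "missing_in_target": sorted(src_keys.keys() - tgt_keys.keys()),
        "extra_in_target": sorted(tgt_keys.keys() - src_keys.keys()),
        "field_mismatches": [],
    }
    src_by_key = {row[key]: row for row in source}
    for row in target:
        src_row = src_by_key.get(row[key])
        if src_row is not None and src_row != row:
            # Same key exists on both sides but the values differ
            report["field_mismatches"].append(row[key])
    return report
```

The point of automating this is scale: the same comparison logic runs unchanged whether the source is an RDBMS table or a data lake extract, which is what makes 100% coverage feasible.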

An example client is a large mining firm, which is using BITS for validating the quality of its analytics reports and dashboards. The client is using these reports and dashboards to monitor its business and requires reliable data that is refreshed daily. TCS highlights that BITS can achieve up to 100% coverage and improve tester productivity by 30% to 60%.

Overall, TCS sees good traction for BITS in BFSI globally, as the banking industry moves from EDWs and proprietary databases to data lakes. Other promising industries include retail, healthcare, resources and communications.

TCS believes BITS has great potential and wants to create additional plug-ins that can connect with more data sources, taking a project-led approach.

Validating the Data Used by ML

Along with data validation, TCS has positioned BITS in the context of ML through testing of ML-based algorithms.

The company started this journey focusing initially on linear regression, one of the most common statistical techniques, often used to predict an output (the “dependent variable”) from existing data (“independent variables”). TCS is currently at an early stage, focusing on assessing the data used to create the algorithm and identifying invalid data such as blanks, duplicates, or non-compliant data. BITS automatically removes the invalid data, re-runs the analytical model, and assesses how the clean data affects the accuracy of the algorithm.

Alongside analytical model validation, TCS also works on linear regression-based model simulation, looking at how to best use training and testing data. One of the challenges of ML lies in the relative scarcity of data, and how to make best use of it across training the algorithm (i.e. improving its accuracy) and testing it (once the algorithm has been finalized). Overall, the more data used for training the algorithm, the better its accuracy. However, testing an algorithm requires fresh data that has not been used for training purposes.

While the industry standard training-to-testing ratio is 80:20, TCS helps fine-tune the right mix by simulating ten possibilities and selecting the mix that optimizes the algorithm.
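The split-simulation idea can be sketched in a few lines. The following is an illustrative toy version under stated assumptions (a pure-Python least-squares fit, MSE as the quality metric, and a handful of candidate ratios); TCS's actual BITS implementation is not public:

```python
import random

def fit_line(xs, ys):
    # Ordinary least-squares fit of y = a*x + b.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def simulate_splits(data, ratios, seed=0):
    """Try several training/testing ratios on (x, y) pairs and
    return the (ratio, mse) pair with the lowest held-out error."""
    rng = random.Random(seed)
    best = None
    for ratio in ratios:
        rows = data[:]
        rng.shuffle(rows)
        cut = int(len(rows) * ratio)
        train, test = rows[:cut], rows[cut:]
        a, b = fit_line([x for x, _ in train], [y for _, y in train])
        mse = sum((a * x + b - y) ** 2 for x, y in test) / len(test)
        if best is None or mse < best[1]:
            best = (ratio, mse)
    return best
```

A production version would repeat each split many times (cross-validation) rather than rely on a single shuffle, but the trade-off being simulated is the same.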

TCS sells its data and algorithm testing services, using BITS, through pricing models including T&M, fixed price, and subscription.

Roadmap: Expanding from Linear Regression to Other Statistical Models

TCS will continue to invest in the ML validation capabilities of BITS and intends to expand to other statistical models such as decision trees and clustering models. The accelerating adoption of ML and also of other digital technologies is a strategic opportunity for TCS’ services across its data testing and analytical model portfolio.

<![CDATA[Accenture’s Industry X.0 Expands its Focus to Digital Manufacturing Use Cases]]>


Back in early 2018, we discussed with Accenture how the company had created its Industry X.0 unit to address client demand for product engineering services along with Industry 4.0 and emerging technologies such as AR/VR and digital twins. In Industry X.0, Accenture grouped its capabilities around digital manufacturing, embedded systems, PLM services, MES services, and other units.

Scale-Up Strategy

Accenture’s Industry X.0 unit has grown considerably in both size and capability, with several acquisitions and the set-up of Industry X.0 Innovation Centers.

In the past year, Industry X.0 has been strengthening its capabilities through various acquisitions. Most of the acquisitions have been around product engineering and design, with a focus on the life sciences, automotive, CPG, and high-tech sectors; for example, London-based Happen brought in product design capabilities, Nytec in the U.S. strengthened Accenture's IoT product capabilities, and Zielpuls in Germany brought in highly specialized skills in areas such as automated car parking and ADAS architectures.

Accenture Industry X.0 has set up a network of Innovation Centers in key locations in major countries: to date, Detroit, U.S.; Sophia Antipolis, France; Bilbao, Spain; Essen, Germany; and Bangalore, India (one of its major global delivery hubs), along with Shenzhen, China; Tokyo, Japan; and Singapore. Some of these dedicated Industry X.0 centers are co-located with large Accenture Innovation Hubs; for example, its Houston Innovation Hub, which focuses on asset-intensive industries such as oil & gas, has specialist capabilities from Accenture Applied Intelligence (its analytics and AI unit), Industry X.0, and Accenture Interactive.

Digital Manufacturing Is Next Growth Area

Accenture Industry X.0 has been expanding its service portfolio beyond product engineering to industrial IT/digital manufacturing. The company is finding that client demand is shifting from brownfield to greenfield opportunities thanks to a resurgence in the opening of new plants in South East Asia. The phenomenon has accelerated, driven by the U.S.-China trade war leading some client organizations to invest in new plants outside of China. Accenture is also accompanying clients in setting up local factories close to customers, complementing mega-factories in South East Asia.

Accenture Industry X.0 is targeting large, transformational contracts where savings from operations (for example, by reducing energy costs from data center operations) are reinvested to fund digital manufacturing projects.

New Offerings and Change Management

Accenture Industry X.0 is going to market with offerings such as digital twins, automated visual inspections, and robotics. It is seeing increased client interest in the use of digital twins, and the number of use cases has expanded; for example, for the provision of training instructions and standard operating procedures to workers, as well as simulating the performance of plants. In parallel, Accenture Industry X.0 is investing in its visual inspection capabilities, driven by demand from discrete manufacturing industries in South East Asia.

Beyond the use of new and emerging technologies, Accenture Industry X.0 highlights that effective digital manufacturing requires a focus on organizational change. Accordingly, it highlights the change management capabilities in Accenture Consulting.

Accenture Industry X.0 is also increasingly helping clients discover innovations and best practices occurring in other industries. One example is the risk-averse chemicals industry, where some clients are looking to accelerate their digital transformation and are looking at use cases coming out of fast-changing sectors such as automotive.

Looking ahead, we expect Accenture to drive the coordination between Accenture Industry X.0 and other Accenture practice areas: notably the analytics and AI capabilities of Accenture Applied Intelligence, which will be key in making sense of the vast amount of available manufacturing data; the product design and UX capabilities of Accenture Interactive, which will drive worker adoption of digital manufacturing; and Accenture Security, to help secure OT systems and equipment, some of which are of pre-Internet age. These other capabilities in ‘The New’ will be an asset for Accenture Industry X.0.

<![CDATA[Tech Mahindra Refreshes Functional Test Execution with AI & UX Capabilities]]>


We recently caught up with Tech Mahindra’s QA practice, Digital Assurance Services, to assess recent progress with their IP strategy.

Digital Assurance Services’ test automation strategy is based on IP and accelerators in its LitmusT platform. The company has been aggregating and integrating automation artifacts within LitmusT and intends to automate the full testing lifecycle, currently spanning test execution (MAGiX), code quality, test support services (test environment and test data management), analytics and AI, model-based testing, and non-functional testing.

Unlike some of its peers, Tech Mahindra is not looking to monetize LitmusT but is using the tools and accelerators it contains within its managed services engagements. The company is also relying mainly on open source software, using COTS only when no open source alternative is available. Just three of LitmusT’s modules rely on COTS; all the others use open-source software.

Tech Mahindra Continues to Invest in Functional Test Automation

MAGiX, which focuses on test execution, is a core module in LitmusT. Launched earlier this year as Tech Mahindra's next-gen test execution platform, MAGiX initially targeted SAP applications (ECC, S/4HANA, and Fiori) and was rapidly expanded to web-based, Java, and .NET applications.

MAGiX aims to combine the ease of record-and-play first-generation test execution software with the robustness of keyword-driven tools. In this approach, test scripts are created automatically during the process, with software objects identified through its Object Spy feature. As a result, when the next release of the application arrives, the test scripts are likely to still work, reducing test script maintenance. MAGiX also handles test data and integrates with test execution software such as Micro Focus UFT and Selenium, and with DevOps tools such as Jenkins.
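The keyword-driven idea that MAGiX builds on is worth illustrating: test steps are expressed as data (a keyword plus arguments), decoupled from the automation code that executes them, which is what makes scripts more resilient than raw record-and-play output. The sketch below is generic and the keyword names are our own, not MAGiX's:

```python
# Map each keyword to the action that executes it. In a real framework
# these actions would drive a browser (e.g. via Selenium); here they
# just record what happened in a state dict.
ACTIONS = {
    "open":  lambda state, url: state.update(page=url),
    "type":  lambda state, field, text:
        state.setdefault("fields", {}).update({field: text}),
    "click": lambda state, target: state.update(last_click=target),
}

def run_script(steps):
    """Execute a keyword-driven script: a list of (keyword, *args) tuples."""
    state = {}
    for keyword, *args in steps:
        ACTIONS[keyword](state, *args)  # dispatch keyword to its action
    return state
```

Because the script is plain data, a UI change usually means updating one action implementation rather than every recorded script, which is the maintenance saving the article describes.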

Digital Assurance Services continues to invest in MAGiX, expanding it to API testing and database testing, and recently launching its Visual Testing offering.

Integrating Functional and UX Testing

Visual Testing expands the functional test execution approach of MAGiX to UX testing, focusing on automating image layout comparison, colors, and fonts across browsers and screen sizes. A major use case is apps for mobile devices where small screen sizes impact the layout. Potential buyers of Visual Testing go beyond the B2C industry and Tech Mahindra highlights that companies in regulated industries such as pharmaceuticals are interested in the offering for their data entry needs.

Visual Testing’s approach relies on having a screen baseline to compare with the screens of future releases. It highlights areas of the screen that have deviated from the initial baseline and identifies where each change came from (e.g. a change in the code). Tech Mahindra then reviews all flagged changes and decides whether the deviations Visual Testing identified are acceptable.

Visual Testing relies on scripts that are embedded in the main functional test scripts. The development of Visual Testing test scripts takes one to two weeks. To build the baseline, Tech Mahindra requires screenshots for all the different screen sizes.

Visual Testing uses several technologies, including, for pixel-by-pixel comparison, the open source tool Sikuli and the COTS product Applitools. Tech Mahindra has an agreement with Applitools and highlights that Visual Testing can be activated on MAGiX by pressing one button.
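The core of pixel-by-pixel comparison is simple to illustrate. The following toy sketch (our own illustration, not how Sikuli or Applitools actually work internally) models screens as 2D grids of RGB tuples and flags every pixel that deviates from the baseline beyond a tolerance:

```python
def diff_screens(baseline, candidate, tolerance=0):
    """Return the (x, y) coordinates of pixels in `candidate` that
    deviate from `baseline` by more than `tolerance` in any channel.
    Screens are 2D lists of (r, g, b) tuples of identical dimensions."""
    deviations = []
    for y, (brow, crow) in enumerate(zip(baseline, candidate)):
        for x, (bpx, cpx) in enumerate(zip(brow, crow)):
            if max(abs(b - c) for b, c in zip(bpx, cpx)) > tolerance:
                deviations.append((x, y))
    return deviations
```

Real tools add perceptual comparison (ignoring anti-aliasing noise, clustering deviations into regions) on top of this raw diff, which is why a tolerance parameter and human review of flagged areas both matter.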

Adoption of UX Testing Will Accelerate

Quality Assurance continues to be an area of ongoing innovation in IT services. Tech Mahindra’s approach is attractive in that it converges functional and UX testing and allows simultaneous execution.

Despite all the discussions about UX in the last few years, we have not seen a broad adoption of UX testing, except in regulatory-led accessibility testing. By integrating Visual Testing in the DevOps testing tools, Tech Mahindra is making usability testing more automated and almost invisible. This automated approach is key to increasing the usage of UX testing.

<![CDATA[TCS Pace: Integrating Capabilities to Drive Innovation]]>


In NelsonHall’s 2019 survey of IT service buyers, when asked about the key capabilities sought in vendors, a significant majority cited a range of digital consulting capabilities. More than 65% of respondents placed high priority on capabilities such as the ability to take a business perspective to apply digital, provide a roadmap for adoption of digital, and undertake a digital maturity assessment. In parallel, clients are looking for vendors that understand the specific needs of their industry, including sector-specific applications or their digital platforms tailored to industry needs.

For IT service vendors delivering increasingly commoditized services using standard delivery tools, demonstrating these capabilities to clients can be a challenge.

In late 2018, TCS introduced a new brand and integrated service capability, TCS Pace, as part of a corporate thrust to differentiate its digital innovation capabilities from its core legacy services. To support the delivery of these capabilities, TCS has also begun rolling out specialized hubs called Pace Ports.

NelsonHall recently had the opportunity to visit TCS’ Pace Port New York and discuss the vision for the Pace organization and the Pace Port network.

TCS Pace

The TCS Pace brand was introduced in November 2018 as a consolidator of disparate capabilities: a brand identity encompassing its research, innovation and digital transformation capabilities, applied within a business framework. Beyond the brand, TCS is also integrating its delivery capabilities to offer a more seamless service for digital transformation engagements that span multiple offering areas. In a full-service play, TCS is also building close integration for capabilities that have not formally been brought under the Pace umbrella. And there are specialized CoEs dedicated to specific emerging technologies such as blockchain and IoT as well as TCS Interactive, where TCS’ core experience and design capabilities reside.

To ensure that it integrates a full suite of capabilities, stakeholders across TCS were engaged, including its CMO and CTO units, Business & Transformation Services delivery units, geography leadership teams, and vertical business units.

With a broad set of stakeholders, the performance measurement framework for Pace is primarily focused on outputs rather than financial results.

TCS Pace Ports

The Pace Port network has evolved from TCS’ innovation and co-creation location strategy. TCS’ first purpose-built space for client innovation discussions was its Executive Briefing Center in Mumbai, opened over a decade ago, followed by a customer collaboration center in Santa Clara, CA in 2012 which has since evolved into the TCS Digital Reimagination Studio.

TCS opened its first Pace Port in Tokyo in the fall of 2018. Why Tokyo? While Japan is relatively small in terms of revenue contribution, it is a priority country for TCS from a growth perspective, and one with unique market demands. The second Pace Port, opened earlier this year in New York, is clearly a major location.

Pace Ports are purpose-built facilities. These are dedicated spaces, co-located with one of two complementary functions:

  • With an existing TCS office to enable collaboration between the Pace Port employees and the broader organization. The Tokyo Pace Port is housed in the same building as the TCS Tokyo office, on a new floor
  • With academic partners to facilitate close collaboration with both academics and students in pursuit of specifically defined objectives. Pace Port New York is housed on the Cornell Tech campus on Roosevelt Island. While the Pace Port occupies a portion of a floor in the Tata Innovation Center, the remainder of the floor is populated with Cornell Tech graduate students, Ph.D. candidates, and teachers.

Pace Ports are the result of a collaboration across TCS. The local geographic unit acts as the owner of each location but partners with a lead vertical (manufacturing in Tokyo, retail in New York); however, Pace Ports are not exclusive to a single vertical. On the day of our visit to the Pace Port New York, the team was preparing for a life sciences client event. The facility has an interactive display for identifying specific solutions based on business needs. Innovations it showcases for the retail industry include ones for managing inventory and improving the shopping experience through the integration of digital and in-store capabilities.

In addition to the geography and vertical organizations, Pace Ports also incorporate other capabilities. For example, they will have a rotating team from TCS Interactive for conducting design thinking sessions and delivering experience design services.

Pace Ports are designed to be modular with movable walls that can be used to cordon off work areas, be opened up for a collaborative design thinking session, or removed for presenting to clients. There is also an informal area with couches, and in New York, chairs facing floor-to-ceiling windows overlooking the Manhattan skyline.

TCS has identified seven components that comprise the Pace Port network, with each location housing at least four. These are:

  • TCS Digital Library: an interactive digital display that enables accessing knowledge captured globally
  • TCS Rapid Labs: an innovation factory for quick turnaround proofs of concept and MVPs
  • TCS Think Space: design thinking space aligned to TCS Interactive
  • TCS Agile workspace: modular, agile development working space
  • TCS COIN Accelerator: processes and tools for collaborating with start-ups and partners
  • TCS academic research lab: collaboration with local academics
  • TCS innovation showcase: space for demonstrating TCS IP to clients.

Each Pace Port will have a mix of these features tailored to the local market; not many will have all. Features in the New York Pace Port include the innovation showcase, agile workspace, academic research lab, COIN accelerator, and conference space.

The Pace Port Tokyo was designed to be a physical manifestation of client innovation processes. Clients begin in an executive briefing room to enable problem definition, then move into the innovation showcase and IoT lab to see TCS offerings. This is followed by a TCS Think Space for design thinking sessions to develop solution ideas. The outcomes of these sessions are addressed by the development of PoCs leveraging the COIN accelerator before being handed over for MVP development in the agile workspace.

With Tokyo and New York up and running, TCS will continue to expand the network in 2020. Openings will include:

  • Amsterdam, TCS’ first European Pace Port will be based on a new floor in the TCS offices and is envisioned to become the Digital Innovation hub for the region
  • Toronto will be the largest Pace Port opened to date and will include all seven components, including collaborations with the Universities of Toronto and Waterloo
  • Pittsburgh, located on the campus of Carnegie Mellon University in TCS Hall, a building built with an endowment from the Tata organization. The primary focus is research.

Other future locations include London, Sydney, Paris and others. Ultimately, we expect TCS to open Pace Ports in all its major markets.

Rapid Labs in Pace Ports act as innovation factories that deliver PoCs or initial MVPs using emerging technologies in an accelerated timeframe. A unique feature of the labs is their focus on leveraging new joiners on a rotational basis, enabling them to gain practical experience. Pace Ports’ collaboration with academic partners provides a strong pipeline of talent: for example, the ~65 Ph.D. candidates and ~300 graduate students located literally across the hall from the TCS Pace Port New York at Cornell Tech, who can be tapped for expertise in addressing client needs. Rapid Labs at Pace Ports follow the TCS Incubation Rapid process, which has been used to produce innovations for several years.


IT service vendors with a large legacy services footprint have been investing heavily in recent years to develop their capabilities in digital offerings and demonstrate their innovation capabilities. Accordingly, many have been opening facilities that often have the words ‘innovation’ or ‘digital’ in the nomenclature, and most have used small acquisitions in this drive. In a few cases, the approach has looked somewhat piecemeal. TCS’ approach has been slightly different: it has taken a considered and organic approach in developing a full-service play, including strengthening its positioning around innovation with Pace and in setting up Pace Ports: 2020 will see an acceleration in the opening of these facilities. With the Pace initiative and its concept of ‘Business 4.0’, TCS has clear ambitions to be seen by its major clients as a full-service partner capable of supporting them in their digital transformation journeys.

<![CDATA[Amdocs Simplifies the Test Case-to-Test Script Process with Ginger]]>


We recently chatted with Amdocs about how the company has been progressing with its automation framework, ‘Ginger by Amdocs’. Amdocs launched Ginger four years ago, initially as a test automation framework designed for use by non-automation engineers. Since then, Amdocs has aggregated some of its test IP around Ginger, which has become its main test automation platform, with functionality ranging from test automation to mobile testing.

In the world of functional testing, automation remains centered around the creation of test cases in natural language and their transformation into automated test scripts that are run by a test execution engine. The test case-to-test script process is well-defined and has become an industry standard. A challenge is that it requires test automation specialists for both the creation of artefacts (e.g. test scripts) and their maintenance on an ongoing basis.

In the past two years, Amdocs has been working on two initiatives to make the test case-to-script process easier.

Easing the Creation of Test Scripts

Amdocs has worked on making the creation of test scripts from test cases accessible to non-automation engineers. Its approach has relied on decomposing test scripts into smaller elements that Amdocs calls ‘automation nuggets’ (e.g. UI objects for handling APIs or database validation) and that are stored in a repository. Test case designers can then use these nuggets, via a drag-and-drop approach, to create scripts.

A key element of this approach is the creation of nuggets repositories specific to each client’s application landscape. Amdocs relies on its Ginger Auto-Pilot feature to automate its creation. Auto-Pilot goes through the pages of a website (or a java-based UI) and identifies objects and their properties and values, creating a model of each page and corresponding objects using a POM approach. Also, Auto-Pilot employs a similar approach in modelling REST API-based applications by creating a model of the APIs and their input and output parameters.
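The page-modelling step that Auto-Pilot performs can be sketched with a standard HTML parser. This is our own illustration of the POM idea described above (crawl a page, record interactive objects with their locator properties); the real Auto-Pilot internals and repository format are not public:

```python
from html.parser import HTMLParser

class PageModeler(HTMLParser):
    """Build a page-object-model style record of the interactive
    elements on a page: tag name plus locator attributes."""
    INTERACTIVE = {"input", "button", "select", "a", "textarea"}

    def __init__(self):
        super().__init__()
        self.objects = []

    def handle_starttag(self, tag, attrs):
        if tag in self.INTERACTIVE:
            props = dict(attrs)
            self.objects.append({
                "tag": tag,
                "id": props.get("id"),
                "name": props.get("name"),
                "type": props.get("type"),
            })

def model_page(html):
    parser = PageModeler()
    parser.feed(html)
    return parser.objects
```

A nugget repository built this way gives test case designers named, ready-made objects to drag and drop, rather than requiring them to write locators by hand.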

Helping to Maintain Test Scripts

Another benefit of Auto-Pilot is that it is also useful in the maintenance of test scripts. Amdocs runs Auto-Pilot on a regular basis to discover changes in applications relative to the model and identify which test scripts may fail as a result of those changes.

Once Auto-Pilot has identified the changes from the model, it has several options, including the ability to automatically fix scripts that were impacted by the identified changes; for example:

  • Before execution: correct the test script automatically or alert an automation engineer before execution of test scripts (Auto-Pilot Preventive)
  • During Execution: re-run the test script a few seconds later or at a later time (using Ginger Flow Control). With this functionality, Ginger aims to mimic the behavior of human beings in fixing issues.
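The model-diff step behind this can be sketched simply. The shapes below (models as dicts of object properties, scripts as lists of referenced object IDs) are our own illustration, not Ginger's actual format:

```python
def impacted_scripts(old_model, new_model, script_refs):
    """Compare two snapshots of a page's object model and return the
    test scripts that reference objects which changed or disappeared.

    old_model / new_model: {object_id: {property: value}}
    script_refs: {script_name: [object_id, ...]}
    """
    changed = set()
    for obj_id, props in old_model.items():
        # An object counts as changed if its properties differ or it
        # is no longer present in the new model snapshot.
        if new_model.get(obj_id) != props:
            changed.add(obj_id)
    return sorted({script for script, refs in script_refs.items()
                   if changed & set(refs)})
```

Flagging impacted scripts before execution (rather than letting them fail at run time) is what enables the preventive option described above.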

Looking ahead, Amdocs continues to invest in the maintenance of test scripts, focusing on issues faced during execution. With its Auto-Pilot Self-Healing capability, the company is focusing on automatically fixing issues during the execution phase, along with sending alerts to automation engineers on which scripts were changed and how. Amdocs plans to introduce this new capability in early 2020.

Amdocs continues to invest in Auto-Pilot and plans to introduce some level of AI to the tool to help it recognize changes in objects or fields. The company is training the software using ML technology; for instance, in identifying field names that may have changed (e.g. a field being changed from ‘customer’ to ‘client’).

While Amdocs has positioned Auto-Pilot in the context of test script maintenance, its relevance for agile projects also comes to mind. With agile projects based on two-week incremental changes/sprints, Auto-Pilot provides a starting point for maintaining test scripts.

Amdocs Releases Ginger as Open Source

Most testing service vendors tend to treat IP such as Ginger as a differentiator for their service offering, whether provided as part of the service or sold under a license fee agreement. Amdocs has done the opposite, making a bold move in releasing Ginger as open source under an Apache 2.0 license.

Amdocs emphasizes that the release to open source will not stop it from making further investments in Ginger. An example of a recent investment is its end-to-end (E2E) testing capability, where Ginger provides an orchestration engine for test execution tools across most operating systems (e.g. Windows, Unix/Linux, and z/OS), programming languages and application types (Java and .NET, web-based applications, mainframe applications), and other tools (e.g. SoapUI, REST Assured, and Cucumber). Ginger’s E2E capability is particularly relevant to industries that operate on standard business processes, such as telecom service providers (which still represent Amdocs’ core market) and retail banks.

Looking ahead, Amdocs believes that by releasing Ginger as open source software, it will gain further visibility of its automation capabilities, attract new talent, and derive revenues from adapting Ginger to specific client requests, along with driving interest from open source community developers in complementing Ginger’s capabilities.

While testing services rely on an ecosystem of open source tools, from Selenium and Appium to DevOps tools such as Jenkins and Bamboo, we have not previously seen a significant firm such as Amdocs give its central IP back to the community. We welcome this bold move.

<![CDATA[Infosys’ Analytics Practice Aims to Bridge Business Needs & AI-Related Technical Services]]>


We recently talked to Infosys about its analytics and big data capabilities in its Data and Analytics (DNA) practice.

DNA is a significant practice within Infosys, representing an estimated ~8% of Infosys’ headcount. It continues to enhance its portfolio, expanding from technical services to business services, and segments its portfolio around three themes: Modernize, Monetize, and Network.

Shifting the portfolio from technology services to new business models

Modernize offerings cover the IT side of analytics and big data, with most activities centering around big data, data lakes, EDWs, and cloud (for both cloud hosting and PaaS), taking a brownfield approach and making use of existing client investments. DNA highlights that client demand is driven by use cases.

Under the Monetize umbrella, DNA is using analytics to help clients grow revenues, drive operational efficiency, and meet GDPR compliance. Most Monetize projects are based on more widespread use of analytics.

With its Network offerings, DNA supports clients in launching new services based on data, from both internal and external sources.

As part of this portfolio, DNA has launched its Infosys Data Marketplace (IDM) IP and services and is helping a medical device firm develop new health services using IoT-based data such as nutritional and fitness data.

With IDM, DNA highlights that it wants to democratize business models based on data.

Continued push in portfolio verticalization

In parallel, DNA continues to specialize its services and has structured its specialization effort around two areas: verticalization in the form of solutions & blueprints and AI.

Verticalization continues to be a priority, with DNA having created multiple software-based solutions for the different sectors it covers. DNA drives its verticalization effort with Infosys’ vertical units and produces either standalone accelerators or solutions (e.g. inventory optimization, supply chain early warnings) or embeds analytics within a larger Infosys platform (e.g. within the Finacle Universal Banking Solution).

A recent example of a standalone analytics solution is its Healthcare Analytics Platform, targeting the needs of service providers and bringing analytics to hospitals around members – e.g. in the field of precision medicine, disease analysis and utilization rates.

Investing in AI use cases

AI continues to be a significant driving force behind Infosys overall and DNA in particular.

An example of recent investment is Digital Brain, built for Infosys’ own HR needs: it matches projects’ requirements for specific skill profiles with corresponding employees, and selects training sessions to help Infosys personnel acquire the digital skills needed for upcoming projects.

Infosys has positioned Digital Brain as part of its Live Enterprise initiative; it is one of the cornerstones of how Infosys is becoming a digital enterprise.

In parallel, DNA is systematically mapping uses for AI. A core element of DNA’s AI portfolio addresses use cases such as fraud management, product recommendation engines, and chatbots. Increasingly, DNA has worked with clients on image- and video-based recognition and has developed accelerators that include driver drowsiness detection (automotive), damage detection (insurance), and customer queue/wait line counting (retail).

To encourage the spread of AI across clients, DNA is pushing the notion of AI pods, with the intent of making its clients more aware of AI possibilities. The company has structured AI pods in several forms, whether client-dedicated or shared across several clients, and focused on AI technologies such as video analytics or on specific use cases.

Analytics becoming more business-oriented

Looking ahead, outside of the IT department, clients are asking for further analytics across their operations, pushing their IT functions to democratize analytics tools. Next-gen dashboards such as data visualization tools help here but are only the beginning of the answer. We expect vendors to invest further in this area, with more vertically-aligned dashboards that are friendly to non-technical users.

IT departments are still facing challenges of complex technologies and data migration to data lakes and big data infrastructures. Vendors are reducing this complexity by building platforms to collect and clean data and run analytics. Their next challenge is around AI: AI brings a new level of complexity that is further constrained by the small number of AI specialists globally. We are starting to see vendors investing in making the creation of AI-based algorithms more accessible to non-specialists.

Expect DNA to invest more in making big data and AI technology usage more accessible to non-specialists, while continuing to work with its business groups to make analytics more relevant to business needs.

<![CDATA[Atos North America: Back to Growth]]>

What a difference a year makes.

Atos recently held its second North America business event in Dallas, just under a year since completing its acquisition of Syntel. Last year’s event focused on the newly formed Atos-Syntel organization in North America, and on how, with a new CEO in place, the region was starting to address some legacy problems and manage a situation where a chunk of business had gone away through contract non-renewals. We felt that Atos North America was on a more positive trajectory than it had been in 2017, and that the Syntel integration was being handled less hastily than some of Atos’ previous large-scale IT services acquisitions (see our blog on the 2018 event here). This year, some major strides appear to have been made in North America in GTM, account management, portfolio and positioning – and there are some areas of good practice that Atos North America could export to other regions in time.

One priority has been to improve service delivery; if a very considerable improvement in NPS is anything to go by, this has been addressed. And issues in one problematic contract (going back to the acquisition of the IT services business of Xerox) have now been resolved.

In terms of portfolio, there has been a significant hiring of new talent, including a North America Digital Transformation Officer and a new head for its SAP practice, both with a mandate for offering development and innovation.

The new Digital Transformation Office is working on simplifying and packaging offerings from across the portfolio so that these resonate more closely with clients’ digitalization priorities, which it categorizes as:

  • Being cloud ready
  • Enhancing CX
  • Improving innovation and agility
  • Securing the business
  • Using all their data
  • Scaling their business
  • Automating business processes.

That these are typically major priorities for enterprises today is undeniable. Overall, the messaging has come a long way from the product centricity of the Digital Transformation Factory; it is becoming more centered around use cases and on potential client benefits, though in some areas it remains a work in progress. Overall, there is an increasing emphasis on more flexible modular solutions from across the portfolio, and on flexible consumption models (the latter very different from some legacy infrastructure deals).

With SAP, Atos’ capabilities in North America have not historically been anywhere near as extensive as they are in geographies such as Germany and even the U.K., and they have focused on SAP BASIS ops. Initiatives in the last six months include setting up a small SAP consulting team as part of an ambition to target SAP transformation opportunities such as S/4HANA implementation/migration services; also, offering SAP HEC as a managed service on GCP. There are clearly strong ambitions here.

The fact that the global head of B&PS, Sean Narayanan, is based in New York indicates the importance being attached to Syntel. Benefits from the offshore delivery capabilities, the intelligent automation tools (the Syntbots platform) and agile delivery capabilities that Syntel has brought in should become apparent fairly quickly. Atos has completed the reverse integration of its larger B&PS contracts in other English-speaking geographies in a timely manner. Taking certain Syntel portfolio capabilities and exporting and expanding these across the group will take longer, as will developing more industry-specific offers for sectors such as healthcare payer and financial services.

While the Business & Platforms Solutions (B&PS) division in North America has been transformed with the addition of Syntel, the region’s legacy Atos Information & Data Management (IDM) division has also been busy, including working on adapting its GTM strategy. There has been a shift from the former pursuit of large managed services deals: the focus now is getting in front of clients earlier in their cloud journey and targeting smaller deal sizes such as cloud assessment engagements through which it can develop the relationship with the client to become a partner of choice for cloud design, migration and operations services.

Atos has said all year that North America would be back to organic growth by the end of 2019; in fact, it achieved this in Q3. Something that perhaps would not have been expected a year ago is that the growth has come from IDM, which has done well to recover last year’s lost business so quickly, rather than from B&PS, which saw negative growth in its two priority sectors of healthcare and financial services.

So, what next for North America? Of course, the ambition is to cross-sell the portfolio: this is likely to take time for a number of reasons, including little sector overlap in the region between legacy Atos and Syntel and lack of brand awareness. As we noted last year, in the short to mid-term, Atos North America is more likely to win broad-scope (infrastructure plus applications services) deals with mid-sized enterprises.

We expect to see an increasing focus in 2020 on vertical-specific offerings, most obviously healthcare, financial services and insurance.

And Atos North America might be exporting messaging about certain areas of the portfolio to the broader group; for example, the concept of ‘Singular IT’ mentioned a few times at the event – watch this space.


NelsonHall recently published an updated Key Vendor Assessment on Atos that includes its Q3 2019 results: for details, please contact

<![CDATA[Cognizant Focuses on Growing its Salesforce Practice while Verticalizing Capabilities]]>


We recently talked with Cognizant’s Salesforce Consulting & Solutions Group (CSG), a unit newly set up in Europe. The unit reflects Cognizant’s ongoing investment in its Salesforce capabilities, with a more vertical focus, accommodating Salesforce’s growing product portfolio.

CSG complements the capabilities of Cognizant Interactive and Cognizant Consulting by bringing vertical knowledge and consulting capabilities relevant to Salesforce. CSG has been in hiring mode, recruiting business consultants with experience in banking, insurance, pharma, retail and CPG.

Pushing towards verticalized offerings

Along with this vertical recruitment, CSG is formalizing its vertical knowledge through the creation of Salesforce-related, vertical-specific blueprints. The unit is systematically identifying areas within each vertical that are ripe for digital disruption; e.g. in retail and CPG, processes that used to be customer high-touch (providing in-store cross-selling opportunities) are now occurring over the internet, and CSG aims to help retailers find new ways of maximizing cross-selling. In total, CSG now has around 25 blueprints that help it rapidly engage in discussions with clients.

CSG is helping Cognizant’s Salesforce practice further sharpen its vertical focus through the creation of solutions that fill functional gaps in Salesforce’s Cloud products, building on four existing solutions in retail banking, wealth management, insurance and life sciences. One example is a solution for collections, aligning Service Cloud with different geography-based regulations. The creation of additional solutions is a work in progress.

CSG is expecting to provide these solutions as part of its service portfolio and is confident this investment will help it differentiate its offerings and align with clients’ expectations in bringing a vertical-ready capability. In due course, CSG will consider if it needs to turn several of the solutions into software products with license and maintenance subscriptions.

Making the most of current implementations

Along with its vertical and consulting push, CSG is also helping Cognizant’s Salesforce practice around aftermarket services. CSG recently launched its Good-to-Great assessment service. During a two-week engagement, Cognizant assesses how Salesforce Clouds have been implemented from a process, technical and functional point of view, looking to maximize usage of the client’s investment in Salesforce’s Clouds. Good-to-Great relies on the traditional approach of checklists, its outcome being a report deliverable that includes suggestions for improvement.

Matching Salesforce in its investments

The company continues to focus on Salesforce Sales (with CPQ), Service and Community Cloud, and B2B (CloudCraze) and is targeting two growth markets: Marketing Cloud and Commerce Cloud among its large corporate clients, focusing on a 360-degree customer view. Looking ahead, Cognizant wants to invest in its capabilities around Salesforce’s September 2019-launched CPG Cloud and Manufacturing Cloud.

Several acquisitions have helped Cognizant grow its Salesforce portfolio and footprint. In late 2018, the company acquired two Salesforce service partners: ATG, a U.S. vendor specializing in CPQ and quote-to-cash processes, and SaaSfocus, an Australian vendor of significant size (~350 personnel at the time of the purchase) with a significant footprint in India.

In parallel, Cognizant has also adapted the structure of its Salesforce practice to include its MuleSoft practice (Salesforce acquired MuleSoft in 2018), adding 1.7k consultants.

With Tableau Software now part of Salesforce, Cognizant will have to consider if it should merge the two practices or keep its Tableau capabilities separate. Like other vendors, Cognizant is likely to face more similar challenges: Salesforce has given guidance that it will be a $16.9bn firm by the end of FY20 (ending January 31, 2020) and it continues to have appetite for M&A, even after its recent $15.7bn Tableau acquisition. This signals that Cognizant will have to further adapt its capabilities in the year to come.

<![CDATA[Infosys Innovation Centers: Localizing Innovation & Talent]]>

Infosys is undergoing an internal transformation to become what it calls a Live Enterprise with the objective of accelerating its service delivery and adaptability to changing client needs. The core of Live Enterprise entails the expanded use of data and automation to support an evolving workforce. Infosys has introduced a number of tools and accelerators to orchestrate its services and facilitate these changes both internally and for clients, while in parallel, it has placed equal focus on transforming its workforce and workspaces.

In May 2017, Infosys announced plans to hire 10k American workers by the end of 2020 through the establishment of U.S. development centers/innovation hubs. Since then, Infosys has moved fast, opening five centers and announcing a sixth in Phoenix, Arizona. The centers (in Indianapolis, IN; Raleigh, NC; Hartford, CT; Providence, RI; and Richardson, TX) are located outside the largest U.S. tech hubs (e.g. New York, Silicon Valley) in moderate-cost states with strong educational institutions. And last month, Infosys announced that it had achieved its 10k hiring target, well ahead of schedule.

These facilities reflect growing demand for IT service vendors to have showcase facilities within their clients’ geographies. For Infosys, these centers serve a dual function: providing a rural-shore delivery option and acting as innovation hubs where Infosys can conduct collaborative sessions with clients. This does not change the 70%/30% offshore/onshore delivery mix (the publicly reported onshore-offshore effort mix has remained flat over the last year), but it changes the onshore delivery model from being entirely on-site to 50% on-site/50% in local hubs (resulting in a 15% on-site/15% local hub/70% offshore mix).
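The delivery-mix arithmetic above can be sketched in a few lines; this is purely an illustration using the percentages quoted in the text:

```python
# Illustration of the reported delivery-effort mix shift (percentages from the text).
offshore = 0.70          # offshore share of total effort, unchanged
onshore = 1 - offshore   # 30% of effort delivered onshore

# Previously, onshore effort was delivered entirely at client sites.
old_mix = {"client_site": onshore, "local_hub": 0.0, "offshore": offshore}

# With the innovation hubs, onshore effort splits 50/50 between client sites
# and local hubs, yielding the 15% / 15% / 70% mix.
new_mix = {
    "client_site": onshore * 0.5,
    "local_hub": onshore * 0.5,
    "offshore": offshore,
}

assert abs(sum(new_mix.values()) - 1.0) < 1e-9
print(new_mix)
```

The point of the calculation is that total onshore effort stays at 30%; only its split between client sites and local hubs changes.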

NelsonHall visited two of these locations, in Indianapolis and Raleigh, to gain a better understanding of how these centers fit into Infosys’ strategy, specifically:

  • Providing U.S. hubs to grow local service delivery talent while maintaining cost competitiveness and offsetting any visa restrictions
  • Bringing innovation thinking and delivery to client-proximate locations to extend the ability to collaborate directly with clients on shaping digital transformation initiatives.

Localized Talent

Having achieved its milestone of 10k hires, Infosys is continuing to hire locally. It is taking three paths to do so: lateral hires, new graduates, and rebadging of client employees.

To build a talent pipeline of new grads, Infosys is partnering with academia to develop curricula for relevant skillsets. For example, the Indianapolis innovation center is partnering with Purdue University to develop 8 to 12 week cybersecurity courses with the goal of producing 400 cybersecurity skilled resources by the end of the 2020 fiscal year. Infosys has similar programs with Trinity College, to develop business analysis skills for Liberal Arts majors; with North Carolina State, for building data science skills; and with Cornell for building IoT skills.

Once hired, Infosys is also localizing the initial training. Rather than initially sending all employees to its training center in Mysore, India, Infosys is partnering with Udacity to develop online courses that can augment centralized training for new hires. It is also looking to replicate its centralized training facility in the U.S., building a campus outside Indianapolis, to be opened by the end of 2020.

In addition to working with universities, Infosys is partnering with local community colleges, which it views as a relatively untapped resource, recognizing that a four-year degree isn’t necessarily required for every role. It is also an area in need of investment, as just 1% of total endowment dollars in the U.S. are for community colleges, despite ~45% of college students attending those colleges. The roles Infosys envisages being filled by community college graduates include CX and creative; BI and data support; BPM and helpdesk; network administration; and application support. Rather than a specific capability, the overall objective is to develop ‘Z-shaped skills’, where employees have the ability to learn one skill and then pivot to another skill area.

These initiatives are bearing fruit, attracting local clients as well as talent: for example, Infosys claims that the Indianapolis center is serving 50 local clients.

Localized Plus Globalized Innovation

These centers act as hubs in Infosys’ global innovation network, and a subset of them house Living Labs: facilities for direct client interaction, including collaborative brainstorming sessions, prototyping, and showcases for innovations already built. These modular spaces can be adjusted into pod structures or open spaces for design thinking; they have 3D printers and can house mock-ups of a client environment (e.g. a retail space or bank). The Living Lab teams focus solely on prototypes and non-production products rather than managing a product all the way through production. By leveraging this core group, Infosys is able to better industrialize the innovation process from idea to prototype.

For client collaboration, Infosys provides structured innovation models, including governance and templatized assets to help clients drive innovation through idea generation sessions. Infosys offers innovation sessions with clients as a value-add to broader engagements. It can also mirror its living lab set-ups at client sites. It has engaged in innovation programs with clients including a large aerospace manufacturer where the companies have jointly developed 40 PoCs with twelve of these going into production.

Infosys takes a dual approach to identify potential innovations to pursue: using Living Labs to jointly develop innovations with its client base and the Infosys Center for Emerging Technology Solutions (iCETS), an internal R&D function based in India focused on identifying applications of broader, longer-term technical advancements.  

iCETS identifies Horizon Three innovations where it should be investing. As an example, blockchain was identified as a key area for innovation exploration in 2017. Since then, Infosys has developed around 50 PoCs and use cases and conducted around 65 client workshops to explore how blockchain could be applied to their business problems. Infosys estimates its innovation function has to date produced around 25 IPs that it offers commercially.

The innovation practice across both internal R&D and client-facing Living Labs has around 500 employees globally, of whom around 400 are based in India and 50 in the U.S. This group can work with other internal groups as necessary for specific skill needs it may not possess. For example, the Living Labs team in the Providence Innovation Center worked with WongDoody, a design agency acquired by Infosys in 2018, to innovate with a Rhode Island bank on a Bank of the Future concept.

Infosys also has a $500m ventures fund for working with start-ups, and to date has invested in nearly 15 start-ups. Beyond financial support for start-ups, Infosys can provide scale to more quickly produce PoCs and can provide access to clients looking for specific emerging technologies.

Infosys has made a significant investment in its innovation center network, and the initiative is expanding from the U.S. hubs to other key markets, including Bucharest (Romania), Berlin, and London.

Innovation centers have been a key focus area of investment for Infosys. Over the last two years, it has been able to achieve its local hiring targets while building broad relationships across government, education, and industry in its chosen locations. These hubs are key to Infosys’ ongoing evolution, helping in local resourcing and client proximity, and in demonstrating capabilities as a technology and innovation thought leader, both critical attributes in positioning Infosys as a partner across the spectrum of digital transformation initiatives being pursued by its clients.

<![CDATA[Deutsche Telekom Continues to Realign the Scope of T-Systems]]>


Deutsche Telekom (DTAG) recently announced it is moving T-Systems’ communication services activities with enterprise clients (TC division) and government clients (Classified division) to Telekom Germany. Accordingly, T-Systems’ TC and Classified divisions will be folded into Telekom Germany’s B2B Communications division, which currently services small businesses and mid-sized enterprises in its domestic market.

DTAG posits that the new structure will help it better coordinate activities and eliminate internal charging mechanisms along with simplifying portfolio management, sales, and operations. It will also set up T-Systems’ IoT and security activities as standalone companies, still reporting under T-Systems.

T-Systems’ CEO, Adel Al-Saleh, continues to reshape the company: last year, he introduced a significant three-year restructuring program, impacting 10k positions (including 6k in Germany), targeting ~€60m in savings, with around half of this to be reinvested in the business, prioritizing digital, security, cloud, SAP, toll collect, and ICT services to the public sector.

We estimate that TC and Classified generated ~€2.5bn in revenue in 2018, around 35% of T-Systems’ overall revenues. The news of their transfer to Telekom Germany is thus a significant move, and one with several implications.

Further complexity at the sales level

We expect dis-synergies at the sales level. Telekom Germany’s B2B division and T-Systems will target similar clients in Germany and in international markets. The new structure may simplify the organization of Deutsche Telekom at the delivery level but complicates it at the sales level.

With this new sales organization, DTAG is setting up a structure that differs from those of its competitors. Competitors such as BT and Orange target enterprise clients through dedicated enterprise units (BT Global Services and Orange Business Services, respectively), and it is not clear why this model, with large enterprises serviced in a separate unit, does not work at DTAG.

Rationalizing the classic IT portfolio will take time

The new organization leaves T-Systems with an eroding business – ‘classic IT’ – that includes application and IT infrastructure services (including private cloud hosting), and its ‘growth portfolio’, now without Classified. NelsonHall estimates that classic IT represents approximately €2.1bn in revenues, while the growth portfolio is a €1.7bn business. T-Systems is moving to divest parts of its classic IT business.

T-Systems’ classic IT business includes mainframe and desktop services, and recent efforts to divest these activities have been thwarted. The German cartel office refused permission for IBM’s acquisition of T-Systems’ mainframe services business (400 personnel); IBM had offered to buy the unit for €860m, a significant amount for T-Systems. T-Systems is now back at the negotiating table with another vendor for its mainframe business. And German media has reported that T-Systems and Atos could not agree on a price for T-Systems’ end-user computing activities.

Scaling the growth portfolio is the priority

Al-Saleh is taking actions to scale the growth portfolio. However, IoT and security, both of which are high-growth potential businesses, are still small units within T-Systems.

IoT and security sit at the intersection of telecom & network services and IT services: IoT requires investment in NB-IoT and 5G networks and helps to sell M2M SIM cards, while security, with its network security element, is critical for both DTAG’s telecom and IT service units.

Setting up IoT and security as independent organizations to accelerate their growth and benefit from commercial opportunities across the entire DTAG group thus makes sense.

There remain two significant units within T-Systems’ growth portfolio, Digital Solutions and SAP services (with NelsonHall estimated revenues of ~€0.5bn for each unit).

In 2018, T-Systems grouped its digital activities into a formal unit, Digital Solutions, combining consulting units (mostly Detecon), its web agency (T-Systems Multimedia Solutions), and several systems integration and agile development units. Digital Solutions has a headcount of ~5k and is therefore of significant scale. T-Systems’ challenge with SAP services is to accelerate its portfolio transition to SAP S/4HANA and new SAP cloud offerings.

The transformation of T-Systems will take time

Al-Saleh has a three-year horizon, with DTAG and T-Systems keen on making T-Systems’ transformation socially acceptable to its personnel and client base. We expect T-Systems to make acquisitions to scale up its Digital Solutions business, provided T-Systems gradually improves its financial performance. The good news is that with the acquisition of Sprint now proceeding, DTAG has, with T-Mobile U.S., a significant profit driver, which should provide some relief.

<![CDATA[Tech Mahindra Introduces Performance Testing Platform Based on Open Source Software]]>


NelsonHall has commented several times about the role of platforms in quality assurance (QA) and how these are playing a central role in functional testing in the world of agile methodologies and continuous testing. Platforms take a best-of-breed approach to software components and rely, by default, on open source software, sometimes including expert functionality from COTS.

In this blog, I look specifically at Ballista, a performance testing platform launched by Tech Mahindra.

The rise of functional testing platforms

The initial purpose of QA platforms was really to integrate the myriad of software tools required in continuous testing/DevOps with Jenkins at its core. This has evolved: IT service vendors have been aggregating their IP and accelerators around their continuous testing platforms, which become central automation points. The value proposition of platforms still centers on automation, but it is expanding to other benefits such as reducing testing software licenses.

Software license and maintenance fees represent approximately 35% of testing budgets, with the rest being external testing services (~40%) and internal resources (~25%). While software-related expenses represent an essential source of potential savings, few IT service vendors have yet invested in creating full alternatives to COTS in the form of platforms.

Tech Mahindra launches Ballista performance testing platform

Tech Mahindra has started with its performance testing and engineering activities. This makes sense: performance testing and engineering is more reliant on testing software tools than functional testing is: ~70% of performance budgets are spent on software license fees.

Tech Mahindra designed Ballista around open source software, with JMeter (performance testing) at its core, complemented by Java-related frameworks, reporting tools such as DynamicReports and jqPlot, and Jenkins (CI), in the context of DevOps.

The value of Ballista comes from integrating the different tools and avoiding tool fragmentation and silos, and from its functionality, which ranges from performance testing to monitoring of production environments (e.g. systems, application servers, and databases), bridging the worlds of testing and monitoring.

Ballista also has stubbing/service virtualization capabilities to recreate responses from a website (HTTP).
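To illustrate what HTTP stubbing/service virtualization does in a performance-testing context, here is a minimal sketch in Python. This is a generic example of the technique, not Ballista’s actual implementation; the endpoint and payload are hypothetical:

```python
# Minimal HTTP stub server: returns canned responses in place of a real
# downstream website, so load tests can run without the actual dependency.
# Generic illustration of service virtualization; not Ballista's implementation.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical endpoint and canned payload, for illustration only.
CANNED = {"/api/price": {"sku": "A-100", "price": 9.99}}

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        known = self.path in CANNED
        body = json.dumps(CANNED.get(self.path, {"error": "no stub"})).encode()
        self.send_response(200 if known else 404)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging during load tests

def start_stub(port=0):
    """Start the stub on a background thread; port=0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), StubHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

A load generator (JMeter, for instance) would then be pointed at the stub’s address instead of the real service, isolating the system under test from its dependencies.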

Tech Mahindra has created dashboards across the different tools, creating reports from tools that did not previously communicate. It has also worked on improving the UX for accessing the various open source tools.

Tech Mahindra’s approach to performance QA differs somewhat from functional testing platforms mainly in two areas:

  • Ballista uses open source software, and Tech Mahindra does not charge license fees for its IP. Thus, Ballista has no testing software license or maintenance fees attached to it. The IP helps Tech Mahindra differentiate its performance testing and engineering capabilities
  • When functional continuous testing platforms are implemented, the existing software investment of an enterprise is taken into account. By contrast, Ballista is offered as a complete solution, thus replacing some of the tools already present at the client.

Tech Mahindra will continue to enhance Ballista. Features on the horizon include:

  • More comprehensive stubbing capabilities, e.g. middleware (IBM MQ Series)
  • Production data masking for performance testing
  • Further investment in UX, for instance, installing an agent on users’ browser, and collecting insights about topics such as e-commerce website conversion rates, along with performance issues.

Platforms will automate the full SDLC

The traditional boundary between software tools and IT services has become more porous, thanks to IT service vendors and QA pure-plays investing in testing platforms. Platform functionality has mostly centered on functional testing, expanding selectively into UX testing, and now into performance testing. This is good news, as platforms are helping traditional test automation expand from its test-script silo right across the SDLC. We will be monitoring client response to Tech Mahindra’s no-cost performance platform, along with market reaction to this “as-a-service” innovation.

<![CDATA[Tieto Acquiring EVRY: Both Local Champions in the Nordics, but Very Different Firms]]>


Tieto recently announced its intention to acquire EVRY. The company is offering 0.12 Tieto shares and NOK 5.28 in cash per EVRY share. EVRY shareholders will receive €0.2bn in cash and 37.5% of Tieto’s capital, valuing EVRY at ~€1.4bn ($1.6bn). Tieto is also taking over EVRY’s net debt (NOK 6bn/$0.7bn).

The acquisition will give birth to the largest Nordics-headquartered IT services vendor, with ~24k employees and pro forma 2018 revenues of €2.9bn, and an adjusted EBIT margin of 11.1%.

Tieto and EVRY management have highlighted how complementary the two firms are: EVRY has its background in Norway and has very little presence in Finland (~100 personnel), where Tieto has its headquarters, while Tieto has a limited presence in Norway (10% of its revenues).

Where the acquisition will make an impact is in Sweden, the Nordics’ largest IT services market. TietoEVRY will derive €1bn in revenues from Sweden, which will be its largest geography, slightly ahead of Norway (€0.95bn) and Finland (€0.7bn) (all NelsonHall estimates).

Tieto and EVRY have focused for years on consolidating the Nordics IT services market, aiming to gain scale in their domestic market, and with IT giants such as IBM, Accenture, and CSC as key competitors. The Nordics IT services market has almost completed its consolidation, largely thanks to Tieto and EVRY, but also to CGI/Logica (WM-data) and NEC (KMD).

The competitive landscape has changed in the Nordics, with Indian service providers such as TCS and HCL Tech having become key competitors, and with increasing presence from Infosys and Wipro.

Tieto and EVRY have made different choices

Tieto and EVRY have very different profiles. Tieto has a service line-based structure, while EVRY has a geographical approach (Norway and Sweden) co-existing with its Financial Services solutions business. EVRY has very much focused on onshore delivery, while Tieto has embraced offshore and nearshore delivery. Approximately 50% of Tieto’s headcount is now in its global delivery network, compared with 30% for EVRY (in India, Ukraine, Poland, and Latvia). The potential for further offshoring at EVRY is, therefore, very significant.

There have also been differences in strategy with regard to the two companies’ management of their IT infrastructure: whereas Tieto decided to keep its data center services business, standardize its offerings and focus on private cloud, EVRY outsourced all its infrastructure management to IBM SoftLayer in 2015, in a ten-year deal worth $1bn. Over the past three years, EVRY has migrated 5k servers and 3k network devices to IBM SoftLayer’s data center in Oslo and now sells standard services across data center and workspace services.

Tieto and EVRY both have a solutions business. Tieto’s Industry Solutions software product business, which caters to multiple sectors (including payment, utilities, healthcare & welfare, hydrocarbon management and pulp & paper) accounts for 30% of revenues. EVRY has focused on the Financial Services sector, on open banking, cards, and ATM services.

Tieto and EVRY are, therefore, significantly different businesses. However, with digital, the two companies have announced strategy changes (with digital services at their core) and are converging in their geographical approaches.

During its Q1 2019 earnings, Tieto unveiled its new strategic plan. The plan combines cost savings (€30m) and shifting from its service line structure to become more geography-centric. Tieto wants to provide more power to the geos and accelerate the shift to digital services in each country, with a focus on UX, analytics, and AI.

EVRY, as part of its strategy unveiled in late 2018, wants to become more centralized. The company has restructured its consulting and SI business in Norway around digital experience and business consulting, AI & analytics, business platforms, and ADM, security, and cloud. The company is rolling out the new structure in Sweden.

With their software products businesses:

  • Tieto is rejuvenating its Industry Solutions unit, which has not been in growth mode for some years, to focus on the financial services, public, and industrial sectors, targeting synergies in R&D, architectures, and program management
  • EVRY continues to focus on financial services. However, the company wants to verticalize more offerings, targeting initially public & health and SMEs. It is looking at providing personnel with industry skills, such as nurses and doctors in the healthcare sector. EVRY also targets solutions by supplying bundled services. We think that Tieto, with the broad portfolio of the Industry Solutions unit, especially in lifecare and welfare, local government, and case management, will help here.

A complex integration

The integration by Tieto of EVRY will be complicated, mainly because the two companies have made materially different choices in their operations in terms of structure, offshoring, and IT infrastructure. For this reason, the integration will take time.

Also, Tieto and EVRY need to transform the Swedish operations, where EVRY has struggled for the past two years with several contract losses, high attrition, recruitment difficulties, and reliance on subcontractors.

Increasing emphasis on digital offerings

TietoEVRY will focus on digital, security, and cloud as its growth engines. The company will have 5k digital personnel, representing ~20% of the total headcount, and will need to accelerate its portfolio transformation towards digital and towards consulting, leveraging its onshore background.

NelsonHall also expects TietoEVRY to accelerate its offshore movement in its Application Services units. This move is a must if TietoEVRY wants to effectively compete against Indian vendors and accelerate its growth.

Will TietoEVRY now expand its presence outside of the Nordics? In the short term this is not likely, apart from its products business and its ER&D services – the company still needs to grow in Denmark, where NNIT, Novo Nordisk’s former captive, operates.

Not Just About the Crowd: Crowdtesting Embraces Automation


In the world of testing services, crowdtesting stands out. While efforts towards automation are accelerating in testing services/quality assurance (QA), the perception that crowdtesting is labor intensive and relies on communities of tens of thousands of testers seems at odds with this. This perception is no longer valid: the crowdtesting industry has changed.

Managing crowdtesting communities

A core element of the activity of crowdtesting vendors is managing their communities. In this industry, one-off events (e.g. where participants join for an event like a hackathon) are uncommon. Quite the contrary: crowdtesting vendors provide their services on an ongoing basis, increasingly as part of one-year contracts.

Crowdtesting vendors have focused their efforts over the past two years on gaining a detailed understanding of their communities and of the skills and capabilities of individual crowdtesters, and on helping them enhance those skills through online training.

Crowdtesting firms continue to invest in their communities, focusing on two priorities: to accelerate the identification of relevant crowdtesters for a given project’s requirements, and to increase the activity level of communities to make sure crowdtesters with specific skills will be available when needed. Speed is now of the essence in the crowdtesting industry, primarily because of agile development.

Agile & crowdtesting

Crowdtesting vendors have largely repositioned their offerings to accommodate fast testing turnaround times. This makes sense: most of the technology tested by crowdtesters is mobile apps and responsive websites, which have driven the adoption of agile methodologies.

Crowdtesting vendors are now often providing agile testing during the weekends, making use of the availability and distributed location of crowdtesters.

Automation increasingly takes a primary role in crowdtesting.

Automation is changing

Initially, the crowdtesting industry relied on the comprehensiveness of its software platforms, focusing on three personas: the crowdtester for reporting defects, the crowdtesting vendor for managing errors and providing analysis and recommendations, and the customer for accessing the results and analysis.

Crowdtesting vendors continue to invest in their platforms, but that investment is not enough anymore. AI and enhanced analytics have made their way into crowdtesting. An example of an AI use case is defect redundancy identification: cleaning the raw defect data and eliminating, for instance, duplicate defects. In the past, this defect analysis was done manually; increasingly, it is done using ML technology to identify bugs that have similar characteristics.
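To illustrate the principle, here is a minimal sketch of similarity-based defect de-duplication. It is a hypothetical bag-of-words approach, far simpler than the ML models vendors actually deploy, and not any specific vendor’s implementation: reports whose textual similarity to an already-kept report exceeds a threshold are treated as duplicates and dropped.

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase and split into alphanumeric tokens.
    return re.findall(r"[a-z0-9]+", text.lower())

def cosine_similarity(a, b):
    # Cosine similarity between the bag-of-words vectors of two reports.
    va, vb = Counter(tokenize(a)), Counter(tokenize(b))
    dot = sum(va[t] * vb[t] for t in set(va) & set(vb))
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def deduplicate(reports, threshold=0.6):
    # Keep a report only if it is not too similar to any already-kept one.
    kept = []
    for report in reports:
        if all(cosine_similarity(report, k) < threshold for k in kept):
            kept.append(report)
    return kept

reports = [
    "App crashes when tapping the checkout button on iOS",
    "Crash on iOS when the checkout button is tapped",
    "Profile photo upload fails on slow networks",
]
# The first two reports describe the same bug in different words;
# only the first is kept, alongside the unrelated third report.
print(deduplicate(reports))
```

Production systems would typically add stemming, TF-IDF weighting, or learned embeddings rather than raw word counts, but the underlying idea – cluster or drop reports whose similarity exceeds a threshold – is the same.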

Another example is emotion analysis where, in addition to their defect reports, crowdtesters also provide videos of their activities. In the past, emotion analysis required human video analysis; in the future, AI will help identify the emotions, negative or positive, of the crowdtester. This will help both the crowdtesting vendor and the client in knowing where to look within a given video.
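As a simple sketch of how such AI output might be consumed downstream – hypothetical, and assuming an emotion-recognition model has already produced a per-second sentiment score for the video (not any specific vendor’s tooling) – the reviewer can be pointed at the segments where sentiment stays strongly negative:

```python
def flag_segments(scores, threshold=-0.5, min_len=2):
    """Return (start, end) second ranges where sentiment stays below threshold.

    scores: list of (second, sentiment) pairs with consecutive seconds,
    sentiment in [-1.0, 1.0] as assumed output of an emotion model.
    """
    segments, start = [], None
    for second, sentiment in scores:
        if sentiment < threshold:
            if start is None:
                start = second  # a negative run begins
        else:
            if start is not None and second - start >= min_len:
                segments.append((start, second - 1))  # close the run
            start = None
    # Handle a run that extends to the end of the clip.
    if start is not None and scores and scores[-1][0] - start + 1 >= min_len:
        segments.append((start, scores[-1][0]))
    return segments

# One score per second of a 10-second clip: frustration around seconds 3-6.
scores = [(0, 0.2), (1, 0.1), (2, -0.2), (3, -0.7), (4, -0.8),
          (5, -0.9), (6, -0.6), (7, 0.0), (8, 0.3), (9, 0.4)]
print(flag_segments(scores))  # seconds 3 through 6 are flagged for review
```

Instead of watching the whole video, the vendor or client jumps straight to the flagged ranges.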

The crowdtesting industry is also pioneering other AI use cases. AI use cases have expanded from enhanced analytics to automation. The most advanced crowdtesting vendors are looking to create test scripts automatically, based on the gestures of the tester, or use web crawlers. Over time, the crowdtesting industry will combine manual activities and automation.

IP protection

This is for the future. For now, crowdtesting is still a niche activity and needs to gain scale. A significant challenge for scaling up is IP and privacy protection. Clients still fear IP may leak to competitors or consumers during the testing cycle, and the crowdtesting industry is trying to address this fear by relying on ‘private crowds’.

We think that the crowdtesting industry will become more successful when clients recognize that not all their mobile apps or websites are strategic differentiators and that the impact of an incremental new feature in a mobile app being leaked in the open will be limited. And clearly, agile is promoting this incremental approach, which makes crowdtesting more acceptable over time.

DXC Supports BMW in its Autonomous Vehicle Journey


We recently talked to DXC Technology’s Analytics business line about its work with German premium car OEM BMW, its positioning in autonomous vehicles, and its differentiation strategy.

BMW recently shed some light on the dynamics behind its Vision iNEXT autonomous car, due to launch in late 2021. Vision iNEXT, BMW’s first autonomous vehicle, will offer Level 3 autonomous driving as an option. Drivers of Vision iNEXT will be able to surrender control of their vehicles for “long periods” of time and at speeds of up to 80 mph.

To support its Vision iNEXT program, BMW started work on its High Performance D3 (Data-Driven Development) program two years ago. The D3 name reflects that the program is based on the collection of vast amounts of data. BMW is gathering data from its fleet of 80 test 7 Series cars operating on the U.S. West Coast and in Germany, Israel, and China. BMW is planning to ramp up the test fleet to ~140 vehicles by the end of 2019, looking to capture vast amounts of data covering a wide range of traffic scenarios.

The project is about scale. Through its fleet of test vehicles, BMW believes it will have in-car and out-of-car sensor data collected across 3.7m miles. However, not all data is relevant, and BMW expects to extract 1.5m miles’ worth of useful data from the 3.7m. BMW will also need to complement in-field data with simulation-based synthetic data, representing the data autonomous vehicles would collect across 150m miles. The IT infrastructure reflects the scale of the project, requiring a storage capacity of 320 petabytes. Over time, BMW will need to reprocess the data, and this will demand massive computing power.

This is where DXC Analytics is helping: the unit has been involved at the IT infrastructure and data lake level and designed and deployed the big data solution close to the headquarters of BMW in Munich. To support the project, DXC has used Robotic Drive, a big data platform it has customized to the specific needs of the automotive industry. Robotic Drive combines open source software (e.g. the Apache Hadoop ecosystem of software tools) and several accelerators (e.g. analysis of data in vehicle native format, reference architectures relying on clusters for AI training purposes).

From DXC’s perspective, Robotic Drive, which the company provides as part of its services, is important in differentiating its autonomous vehicle service portfolio. DXC wants to address the high demand for data analytics. As a result, DXC Analytics is focusing on commercial expansion, with Robotic Drive having several clients in Europe, and DXC Analytics is now looking to expand in the U.S. through the creation of a CoE. The unit is also investing in its sales force, recruiting pre-sales consultants, and ramping up its factory-based delivery presence in India and Poland, as well as exploring the Philippines and Bulgaria.

Internal collaboration will also play a role: DXC Analytics is increasingly working with other DXC units, notably around IoT and DXC’s Enterprise Cloud Applications units. An example of joint work is around SAP HANA. Another key event that should accelerate the growth and expand the capabilities of DXC Analytics is the acquisition of Luxoft, which has just been finalized.

Luxoft will help expand DXC Technology’s automotive-specific offerings towards digital cockpits, autonomous driving and ADAS. With Luxoft, DXC Analytics gains technical and business expertise. This should help DXC Analytics expand from its big data positioning and gain a stronger footprint on the data science and algorithm side.

Capgemini Jumpstarts its ER&D Activities for €5bn!


This week Capgemini announced its intention to acquire Altran in an all-cash transaction of €14 per share. The €5.0bn transaction, which includes €3.6bn in cash and Altran’s net debt of €1.4bn, represents a 30% premium over the share price of the last month. Capgemini has secured an 11% stake from Apax Partners and expects to close the acquisition by the end of 2019.

The acquisition provides significant scale to Capgemini. The combined organizations will have (2018 data) revenues of €16.1bn, an adjusted EBIT margin of 12.1%, and a headcount of 258k.

Altran’s troubled Aricent acquisition

Altran is the largest ER&D service vendor globally. Its 2018 revenues of €2.9bn (~$3.3bn) were ahead of Alten (~$2.6bn) and HCL Tech ($2.1bn).

Altran has undergone a lot of change since Dominique Cerutti became its CEO in June 2015. Under his leadership, Altran has moved away from a business model based on a strong agency presence in each country and has:

  • Developed its presence in India and nearshore countries
  • Expanded its presence in the U.S.
  • Shifted its portfolio towards specialized capabilities, mostly in digital.

A key development in Altran’s history was its March 2018 acquisition of Santa Clara HQ’d, India-centric Aricent. Aricent was a significant acquisition that brought in $0.7bn in revenues (mostly in the U.S.), strong capabilities in telecoms sector R&D, and a large delivery network in India (8.5k engineers).

Aricent had been suffering from heavy client concentration in the telecom sector and, over the years, had diversified its telecom ER&D client base into the semiconductor, technology, and ISV industries, as well as striking an IP partnership with IBM (not dissimilar to that of HCL Tech).

However, the integration of Aricent was not as smooth as Altran had expected: the company had a bumpy year in 2018, its share price impacted by the discovery of forgery in Aricent’s accounts representing $10m in revenue, investor concern about Altran’s net debt, and short sellers. Altran’s stock, which was trading at €14 per share before the Aricent acquisition announcement, went as low as €6 at the worst of the crisis. However, Altran had an excellent operational performance and finished 2018 with 8.0% CC/CS revenue growth and an adjusted EBIT margin of 12.1%. Its financial performance positioned Altran as the best performer among ER&D vendors with an onshore background.

Capgemini gets scale and several jewels

Capgemini has indicated for several years its strong interest in industrial and engineering services, having gained some modest presence in this market through acquisitions such as Transiciel (mostly aerospace), Euriware (nuclear plants), and IGATE (medical devices) but remaining relatively small in this space, deriving around €0.3bn from ER&D revenues in 2018.

To a large extent, the acquisition of Altran will be a reverse takeover, with Altran being nine times larger than Capgemini’s Digital Engineering and Manufacturing Services (DEMS). The ER&D activities of the two firms represent €3.4bn in 2019 combined revenues and a headcount of 54k.

Altran has developed a significant offshore and nearshore presence, with 17.5k engineers. In total, after the Altran acquisition, Capgemini has a global delivery ratio of 39% in its ER&D activities, which is a good starting point.

Altran doubles the telecom, media and technology revenues of Capgemini and brings capabilities around 5G and next-gen telecom offerings. This is good timing, since telecom service providers in the U.S. and also in Europe are accelerating their 5G investment.

Altran also brings a software engineering capability servicing internet, cloud and collaboration giants in the U.S., along with large ISVs. Capgemini CEO Paul Hermelin highlights how attractive this capability and client base are to the company. Indeed, the FANGs and other internet/cloud firms dominate worldwide R&D spending, ranking ahead of the pharmaceutical sector, IT, aerospace and automotive.

Also, Altran brings two jewel organizations within its portfolio: with the acquisition of Aricent, Altran gained a product design firm, frog, which is known for having designed Apple products in the 70s-80s and has market recognition. Frog, with a NelsonHall estimated headcount of ~800, is similar to Capgemini’s Idean product design subsidiary, whose headcount we estimate at ~750.

In addition to frog, Altran brings in Cambridge Consultants, an innovation consultancy based in the U.K. with 800 personnel, and one of the main firms operating in this space. Cambridge Consultants is good at identifying technology opportunities and has spun off several technology firms, some of which were listed on the U.K. stock exchange.

Capgemini wants to lead in digital manufacturing

The ambitions of Capgemini go beyond the leadership in ER&D services that Altran brings. The company is targeting the increasing adoption of technology by manufacturing operations, initially IoT and digital twins, and then the refresh of manufacturing systems such as MES and SCADA, some of which have been operating for 30 years and will need to go through an upgrade cycle. Hermelin believes Capgemini is well equipped to lead this market; indeed, it is the largest by revenues and by portfolio, and has the right delivery network to balance its clients’ needs for offshore and onshore delivery.

All in all, with Altran, Capgemini is making a bold move that positions the company ahead of competitors in the digital manufacturing space. The downside is that Capgemini is adding €5bn in net debt to its €1.8bn net debt at the end of 2018. As a result, Capgemini will refrain from making acquisitions for the next two to three years.


NelsonHall is currently working on a major Digital Manufacturing Services project as part of its IT Services research program. To find out more, contact Guy Saunders.

IT Service Buyers in 2019: Four Key Trends


NelsonHall has recently completed compiling and analyzing data from a survey of over 1k IT services buyers globally. This research is being published in a series of sector reports (all are available to subscribers here), spanning the 18 sectors analyzed, with geographic breakdowns incorporated within each.

While completing these individual reports we noted several themes that were fairly consistent regardless of sector or geography represented. In this blog, I look at four key trends pulled from this data that IT service vendors can use to better position their offerings to these buyers.

Digital Initiatives Get Specific

Finding: Sixty-nine percent of respondents globally identified digital initiatives as highly important to their future IT strategies. As a data point this may not be surprising, given the ubiquity of ‘digital’ and ‘digital transformation’ in business today, but in the previous round of this research, completed ~18 months ago, that proportion was 82%. Is digital really less important today to a significant proportion of IT service buyers?

NelsonHall’s perspective is that the overall aims and initiatives falling under the digital transformation umbrella have only increased in importance to buyers, but that there are two key take-aways from this change:

  • Buyers have frequently not seen the intended business value from early digital transformation initiatives. Broadly defined initiatives with indirect business cases may not have achieved the desired outcomes, particularly given the frequent investments required in foundational activities (such as data clean-up and digital core migration), thus leading to doubt as to the value of these initiatives
  • Buyers are more sophisticated and are focusing on specific digital initiatives and technologies rather than broadly defined ‘digital transformation’. As an example, manufacturing companies may not place as much priority on ‘digital transformation’ but they do place high priority on implementing IoT on the shop floor to enable predictive maintenance.

Vendor response: When communicating with clients and potential clients about digital initiatives, vendors should focus on specifics. This includes clearly defining the roadmap of initiatives to be pursued and the quantifiable value that can be achieved.

Clouds Are Still Gathering

Finding: While ‘cloud’ is nearly as ubiquitous as ‘digital’ in today’s business lexicon, we estimate that only ~30% of large enterprise workloads reside in cloud environments. Despite the rapid growth of leading public cloud providers such as AWS, Azure, and Google, we further estimate that only 12% of large enterprise workloads reside in public cloud environments. When it comes to large enterprises, cloud adoption still has a long way to go.

This low adoption to date also translates into significant plans for investments in cloud adoption going forward. By 2020, companies globally are looking to increase the proportion of their workloads in clouds to ~35%. This will be primarily accomplished through greater public cloud adoption, projected to rise to ~17% of workloads in parallel. To fuel this rise, IT service buyers project their spending on cloud infrastructure to rise by more than 6% on average globally.

Vendor response: Vendors should not only recognize that their client base may be slower on the cloud adoption journey than expected (and tailor messaging to reflect this) but should also focus on helping clients understand the breadth of options for expanding adoption of cloud. This includes focusing on accelerators to simplify and de-risk cloud migration as well as building capabilities to support alternative paths to expanded cloud footprints: a majority of buyers place high priority on a vendor’s cloud-native development capabilities, and increasing use of SaaS was the most commonly identified sourcing change planned.

The Need for Speed

Finding: While digital transformation initiatives bring a breadth of benefits that are highly important to companies, including reducing service delivery cost, increasing revenues, improving customer experience, and improving competitiveness, the most commonly cited high-priority benefit is accelerating the delivery of services. For the business side of companies this includes launching new products and services, a high priority for more than 80% of companies, and improving straight-through processing rates and turnaround times, a high priority for ~60% of companies.

IT departments are also looking to accelerate services, with reducing new application time to market identified as a high priority by more than 90% of companies globally. To achieve this, companies are also prioritizing increased digitalization of operations and adoption or increased use of DevOps.

Vendor response: When defining the business case benefits for new initiatives, vendors should focus on speed, and how it fuels other benefits such as improved customer experience and competitiveness to drive incremental revenues. Initiatives and investments should also reflect this priority, such as focusing on end-to-end service design and expanding use of agile development and DevOps for both customer and internal-facing digital initiatives.

Know Me – or No You

Finding: When IT service buyers were asked what characteristics they most prioritize in vendors with whom they work, the answer was industry knowledge, a high priority to more than 75% of companies. Given the benefits being sought from digital initiatives, including improved customer experience and increased competitiveness, an understanding of the key service features unique to that industry is imperative. This prioritized knowledge extends to an understanding of sector-specific applications that can be applied to address particular needs.

Secondarily, buyers are looking to work with vendors who not only understand them but can also help them to understand their own customers. The next most commonly prioritized characteristic was UX consulting and design, reflecting the priority companies are placing on tailoring their offerings to the unique and changing demands of their customer base.

Vendor response: Vendors need to demonstrate their deep understanding of a client’s industry, including their business challenges and potential applications designed to address its unique requirements. Having a workforce that understands and can speak to client needs is important. Additionally, vendors need a dedicated UX capability to drive the process of understanding customer expectations and experiences and then tailoring offerings to those needs.


The priorities of IT service buyers are evolving, and vendors need to ensure their offerings and messaging align with these priorities. The clear take-away from our research is that vendors need to focus client messaging on the specific digital initiatives to be pursued; help clients in adopting cloud through a variety of avenues; focus on speed as a key benefit of digital initiatives; and ensure that they possess an understanding of industry imperatives.

Digital Workplace Services Driving the Employee Experience


NelsonHall recently completed an in-depth analysis of advanced digital workplace services (DWS). This blog looks at some of the key findings from this research, in which we spoke both to leading IT services vendors and clients of their services.

Changing workforce expectations are driving DWS transformation

Organizations are deploying digital workplace services to improve productivity and efficiency while also improving the overall employee experience with more self-service tools, more personalized support, and gamification methods. Offering a digital workplace is also key to attracting new talent.

Employees’ experiences in their personal lives in using mobile devices and AI assistants such as Alexa and Cortana are driving similar expectations at the workplace. There are three engagement models for offering personalized support:

  • Proactive & predictive (remote monitoring, self-healing, RPA, AI and intelligent automation, predictive analytics)
  • Self-service (portal-based, mobile support apps, virtual agents, knowledge items)
  • Specialist on-site support (e.g. Tech Cafes, smart lockers, IT vending machines, video support, smart FM).

There is also an increasing focus in contract agreements on business-aligned ‘XLAs,’ or experience-level agreements (i.e. on user journey quality including zero-time-fix, user hours saved and marginal gain methodology).

Maximizing value from DWS requires collaboration across the enterprise

The buying profile of organizations is evolving as they endeavor to enable more collaborative working by their employees. Where traditionally central IT would drive services as a means to reduce cost, IT is now adopting the role of a service broker, offering self-serve capabilities that let end-users provision the services they need, when and how they want.

Recent developments include engaging with marketing and communications departments, using gamification as a means to improve adoption of self-service tools, and working with HR for more efficient on-boarding and off-boarding of employees. IT departments are also collaborating with facilities management to drive the adoption of smart offices (smart conference and booking facilities) and of intelligent space management and wayfinding solutions through beacons and sensors.

Design thinking takes personalization even further

Many vendors are now engaging with their clients through collaborative design thinking workshops to generate ideas to improve the end-user experience. This includes the use of ethnographers to understand the profile of target clients and their needs and priorities, including self-serve portal creation. We expect vendors will ramp up their design thinking capabilities over the next 12 months.

Social collaboration tools are an important part of UX

There is also growing demand to enable workers to collaborate more effectively across projects through tools such as Yammer, Workplace by Facebook, Slack, Hangouts, G Suite, Skype for Business, SharePoint, WhatsApp, and Microsoft Teams. Vendors are developing platforms that integrate various social collaboration tools into one, and partnering with disruptors in the market, such as Google. It is evident some vendors are further ahead of the curve than others in this area, with some having already implemented dedicated social collaboration platforms to improve UX.

Windows 10 migration services will continue to ramp

Windows 10 migration has been high on the agenda for some time now, with the end of Windows 7 support in January 2020 forcing the move to Windows 10. Windows 10 provides added security, along with device flexibility and improved UX. There will be a significant uptick in migration rates for the laggards.

Recent developments have also seen the introduction of Microsoft Managed Desktop (MMD), which enables organizations to allow Microsoft to manage their Windows 10 devices. Microsoft also introduced Windows Virtual Desktop on Azure, allowing organizations to run Windows 10 virtual desktops on the Azure platform.

Field services will play an important role in targeting IoT-enabled workplace opportunities

The role of field services is evolving, with an increasing deployment of field engineers on servicing IoT-enabled solutions such as wayfinding solutions for smart offices and smart facilities. Their activities include installation, management, and maintenance of sensors and beacons in support of these initiatives. We anticipate vendors will also develop further use cases for AR/VR services in the field for remote technical support and training.

Future developments

The DWS market has evolved considerably in recent years, with changing workforce expectations driving greater personalization of services and a higher propensity to adopt AI, cognitive, ML, and analytics technologies, all through a collaborative approach to improving the employee experience.

This evolution continues, with greater use of models including zero-touch service desk enabled through AI, smart offices, and increased use of IoT-enabled devices, AI and AR/VR in the field.

Also expect to see new technologies such as Microsoft Managed Desktop and Windows Virtual Desktop on Azure gain substantial traction, and Amazon and Google become major disruptors in DWS.

TCS Research & Innovation: Balancing Horizons to Drive Client Value


At the recent TCS Innovation Day held in New York, NelsonHall had an opportunity to talk to TCS about how it approaches its research and innovation function, balancing longer-term open-ended research and more immediately applicable innovations to evolve how it delivers services. (For more on the event itself, see the blog by Andy Efstathiou here).

TCS has developed an overall approach that balances research into cutting edge areas, innovations to its services, and applying learnings from its innovation network of academics and start-ups.

Research: Focusing on Broad Themes & Long Term

TCS has been conducting its own internal research for ~38 years. However, over time, its focus has evolved. Initially, the research was primarily focused on technologies directly applicable to the delivery of application services, for example, code generation; TCS used its research capabilities to play an advisory role to the application development and testing tool maker Rational throughout the ’90s.

Today, however, the focus of TCS research has broadened into nine areas that impact both its own business and the core businesses of its clients. These research areas are:

  • Physical sciences
  • Software systems and services
  • Life sciences
  • Embedded systems & robotics
  • Cybersecurity & privacy computing systems
  • Behavioral, social & business sciences
  • Data & decision sciences
  • Deep learning & AI
  • Media & advertising, computing foundations.

This work is conducted by a dedicated research team of ~1k employees, including 200 employees with PhDs and 400 with advanced degrees. This research is funded through a dedicated annual spend of ~1.5% of revenues. In 2019, that totaled ~$300m.

As an example of one of these areas of research, TCS has a dedicated team of ~100 employees working on genomics across several labs in India. The primary focus of this team is currently next generation diagnostic techniques. Areas of focus recently have been identifying early indicators for babies at risk of premature birth (which kills 1 baby every 30 seconds globally), as well as early screening for other rare genetic disorders.

These research areas may not have a direct impact on TCS offerings, but they could be applicable to client offerings; genomics is not only applicable for the pharmaceutical, life sciences and healthcare provider sectors, but even insurance. TCS is developing non-invasive diagnostic procedures for diseases such as diabetes, Parkinson’s and certain cancers as well as measures for wellness to enable early risk detection and better alignment of premiums with individual lifestyles.

In addition to being applicable to client business, this type of research helps position TCS as a thought leader in these industries, and NelsonHall research shows that 78% of IT service buyers globally place high importance on industry knowledge when selecting a vendor to work with.

For this research, TCS takes a three- to four-year time horizon on projects and prioritizes outputs such as patents and published research rather than the monetization of outputs. Any monetization is accomplished when research is moved into the innovation pipeline.

Innovation: Focusing on Short-Term Service Evolution

Whereas research takes the long-term view, TCS’s innovation practice seeks to convert research into innovations that can be applied directly to the services offered to its clients.

To identify the opportunities with the highest value potential, TCS has focused on building out a governance process to provide a detailed structure for determining which research can be converted into innovation programs. The innovation governance process comprises nine stages to develop, review, and approve new innovations. Within this process, there are three separate review stage gates to determine whether an innovation initiative should continue. This review process includes a stage in which new innovations are pitched to senior corporate leadership, including a roadmap to production readiness and a projected ROI.

TCS estimates that, of the ~200 innovation ideas that enter the funnel annually, 10-20% will make it through all nine stages. TCS is focusing its innovation on assets to support the delivery of services rather than on products to be sold to clients, such as its BaNCS line of banking products. Given the amount of ongoing maintenance and handling required to keep software products relevant, the return on these is viewed as lower.

Assets that have been produced by TCS innovation programs include ignio, its cognitive platform, and Mastercraft, its delivery automation platform supporting agile development, DevOps, data quality, and application modernization.

COIN: Tapping External Innovation

In addition to its internal research and innovation efforts, TCS taps into its Co-Innovation Network (COIN), consisting of academics, venture capitalists and start-ups, to identify new opportunities. Academic institutions with which TCS partners include:

  • University of California, Berkeley
  • MIT Media Lab
  • University of Toronto
  • Carnegie Mellon
  • Cornell Tech     
  • The University of Tokyo
  • Indian Institute of Science.

To facilitate collaboration, TCS is even co-locating portions of its research within academic spaces. As an example, the soon-to-be-opened PACE Port innovation center in New York is located in Cornell Tech space on Roosevelt Island.

TCS has a dedicated team working with venture capitalists and start-ups to identify new capabilities, though it doesn’t take equity positions in the start-ups with which it partners. TCS maintains the objective of always working with best-of-breed partners, so it doesn’t want to be invested in one company if it feels another product is superior.

TCS works with start-ups in three ways. First, it supports start-ups in developing a broader path to market, partnering on engagements and helping with go-to-market. Second, it uses start-ups within the delivery of its services, filling gaps or expanding its own capabilities. Finally, it uses start-ups to build out its own offerings: TCS identifies sector-specific processes for which to build offerings and then acts as an orchestrator, stitching individual start-up products into a cohesive end-to-end offering.

As an example, TCS was engaged by a client to build a mobile banking process in 12 weeks. It took four weeks to identify the component start-up products and eight weeks to integrate them into a cohesive, end-to-end offering.


TCS places a high priority on research and innovation, with a dedicated budget and a team of specialists exploring both long-term research and innovation directly applicable to its services.

Understanding that a research function can be applied to both its own offerings and client situations provides TCS with a broader perspective and positions it with clients as thought leaders in their industries in addition to evolving its services.

In late 2018, TCS launched its TCS Pace brand as a way to differentiate its innovative offerings and capabilities from some of its legacy core offerings for which it has long been known (such as application and infrastructure management). TCS has long focused internally on building a research and innovation engine, and now Pace gives those innovations a name.

<![CDATA[Accenture’s Zoran Tackles Digital Identity Failings]]>


NelsonHall recently visited Accenture at its Cyber Fusion Center in Washington D.C. to discuss innovations in its cyber resiliency offerings and the recent launch of its new digital identity tool, Zoran.

Failings of existing role-based access (RBA)

Typical identity and access management (IAM) systems control users’ access to data based on their role, i.e. position, competency, authority and responsibility within the enterprise. It’s a standard best practice to keep access to systems/information at a minimum, segmenting access to prevent one user, even a C-level user, from having carte blanche to traverse the organization's operations. Not only does this reduce the risk from a single user being compromised, it also reduces the potential insider threat posed by that user.

While these IAM solutions can match user provisioning requests to a directory of employee job titles, automating much of the process, the setup of RBA IAM tools can break down when roles are defined too widely as a catch-all, which in turn reduces the segmentation of access. For example, if a member of your team works in the R&D department developing widget A, should they receive access to data related to widget B?

Likewise, another issue with these solutions is privilege creep, where an employee who has held several roles or responsibilities retains previous permission sets after moving role. These and other issues render RBA systems ineffective, as they are implemented as a static picture of the organization’s employees at a single point in time. In addition, recertification is a time-consuming and wasteful exercise.

Enter Zoran

Accenture developed Zoran in The Dock in Dublin, a multidisciplinary research and incubation hub. It brought in five companies to discuss the problem of identity management, two of which stayed on for the full development, handing over data that Accenture used in building Zoran.

Zoran analyses user access and privileges across the organization, performing data analytics to look for patterns in access, entitlements, and assignments. The patterns identified by this patent-pending analytics algorithm are used to generate confidence scores indicating whether users should have those privileges. These confidence scores can then be used to perform automatic operations such as recertification, for example when a user’s details change after a specified period of time.
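
Zoran’s scoring algorithm is proprietary, but the underlying peer-group idea can be illustrated. The sketch below is a minimal, hypothetical Python analogue (all names and thresholds are our own assumptions, not Accenture’s): an entitlement held by most of a user’s role peers earns a high confidence score and can be auto-recertified, while outliers are flagged for analyst review.

```python
from collections import Counter

def confidence_scores(users, target):
    """Score each of target's entitlements by how common they are
    among peers holding the same role (a simple peer-group heuristic)."""
    peers = [u for u in users if u["role"] == target["role"] and u is not target]
    counts = Counter(e for u in peers for e in u["entitlements"])
    return {e: counts[e] / len(peers) for e in target["entitlements"]}

def triage(scores, auto_threshold=0.8):
    """Auto-recertify high-confidence entitlements; flag the rest for review."""
    auto = [e for e, s in scores.items() if s >= auto_threshold]
    review = [e for e, s in scores.items() if s < auto_threshold]
    return auto, review

users = [
    {"role": "analyst", "entitlements": {"crm_read", "reports"}},
    {"role": "analyst", "entitlements": {"crm_read", "reports"}},
    {"role": "analyst", "entitlements": {"crm_read"}},
    {"role": "analyst", "entitlements": {"crm_read", "reports", "payroll_admin"}},
]
scores = confidence_scores(users, users[3])
auto, review = triage(scores)
# crm_read is held by every peer (score 1.0) and auto-recertifies;
# payroll_admin is held by no peer (score 0.0) and is flagged for review
```

A production IAM tool would, of course, mine far richer signals (assignments, access history, organizational structure) than a single role attribute.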

Zoran is not using machine learning to continuously improve confidence scores – i.e. if, for a group of users, an entitlement is always recertified, the confidence scoring algorithm is not updated to increase the confidence score. Accenture’s reason for this is that it runs the risk of being self-perpetuating, with digital identity analysts being more likely to recertify users because the confidence score has risen.

Currently, Zoran does not store which security analyst approved which certification for which user, although Accenture is in the process of adding this feature.

Will Zoran be the silver bullet for IAM?

IAM tools have been relatively slow to develop from simple automation to an ML/AI state, and this is certainly a step in the right direction. However, there will have to be some reskilling and change management around the recertification process.

While Zoran aims to reduce the uncertainty in recertifying permissions for a user, there remains a limited risk of ‘false positive’ confidence scores that could automatically recertify a user, or of a security analyst certifying a user in something akin to a box-ticking exercise out of trust in the confidence score provided.

Accenture also needs to improve the integration of Zoran with its other technologies; for example, its work with Ripjar’s Labyrinth security data analytics platform could yield some interesting results.

NelsonHall believes tools such as Zoran, combined with more traditional IAM solutions, are likely to be the current trajectory of the IAM market, with ML further segmenting groups/roles and providing increased trust in recertification processes.

<![CDATA[Integration & Autonomy: Common Traits of Successful Creative Acquisitions in IT Services]]>


In recent years, many IT services firms have been aggressively acquiring new capabilities in creative areas such as design.

One of the key challenges they face in acquiring such firms is managing what can be a significant culture gap between the flexibility of small creative firms and the regimented, industrialized nature of a global IT services firm. Successful acquisitions strike a balance between maintaining the autonomy of the acquired firm, keeping the talent engaged, and integrating capabilities into the broader company to realize the value being sought through the acquisition. Rather than allowing an acquired firm to continue running fully autonomously or being fully subsumed into the broader organization, acquirers are looking at a blend: combining a common foundation, direction and values while maintaining a level of autonomy in how services are delivered.

Based on our conversations with a number of IT services firms and representatives of acquired firms, this blog looks at some of the more effective approaches being adopted to balance autonomy and integration when assimilating newly acquired capabilities.

Maintaining Autonomy

Leveraging Existing Brands

Maintaining autonomy in branding of newly acquired capabilities signals to both employees and the market that this is a unique capability within the broader IT services firm’s delivery portfolio. This is a common approach for design agencies that have developed their own brand awareness. Examples where IT services firms have maintained the brands after acquisition include Infosys’ WONGDOODY and Brilliant Basics, TCS’ W12 Studios, Tech Mahindra’s BIO Agency, and Wipro’s Designit.

Acquiring firms are also frequently allowing acquired design firms to maintain their own go-to-market initiatives. While the main drivers of these acquisitions are leveraging the acquired capabilities to support existing engagements, broadening relationships with existing clients, and reaching new clients, global IT services firms recognize that design agencies have relationships and visibility of opportunities that they themselves lack. In these instances, the design agency leads the client relationship while leveraging the broader IT services firm to offer a much broader portfolio of offerings.

Maintaining separate branding is particularly important for acquiring firms that lack a strong consulting heritage. While the companies above all come from an IT outsourcing heritage, competitors with a more consulting-led heritage have built distinct branded design business units integrated with their consulting capabilities rather than continuing stand-alone brands after acquisition. Examples include Capgemini Invent and IBM iX. Accenture’s Fjord sits somewhere in the middle: a part of Accenture Interactive with a number of design agency acquisitions rolled under the brand of its first major design agency acquisition, Fjord.

Maintaining Dedicated Delivery Locations

In M&A situations, consolidating real estate is a common source of cost synergies. However, IT services firms are instead frequently maintaining dedicated design agency space to minimize the disruption to the existing employee base and to demonstrate the acquiring firm’s commitment to the acquired firm’s culture. When TCS bought London-based W12 Studios last year, it was already in the process of building out a creative design thinking space in London. Even with that space complete, the W12 team has so far remained in its small, home-like Camden offices a few miles away. Infosys is building a design studio network, including a Design and Innovation Hub in Providence, Rhode Island, while WONGDOODY (Los Angeles, San Francisco, New York and Seattle) and Brilliant Basics (London, Dublin, Berlin and Amsterdam) have maintained their own studio locations.

Maintaining Career Trajectories and Expanding Opportunities

In order to retain their new creative employees, acquiring IT services firms are also often maintaining many of their former employer’s career development structures. Smart acquirers are actively seeking to minimize the impact on the processes and procedures that directly impact the career growth of their newly acquired talent.

Furthermore, they are leveraging their scale to emphasize to their new employees the range of new career development opportunities available in a global organization. These can include working in new geographies and/or new industries and adding adjacent skills. Allowing employees to see new opportunities to apply their expertise in new ways can provide an incentive to remain, post-acquisition.

Pursuing Integration

Building a Common Toolset

One of the key challenges of acquiring a niche firm is the application of its capabilities to a broader, global client base. This is a particular challenge when a large firm makes a series of acquisitions. Different groups across different markets, deploying different tools, processes and methodologies can mean inconsistent outcomes. Acquiring firms need to build a global toolset of common templates, assets and methodologies to act as a common denominator across geographies and delivery units.

Accenture has rapidly expanded its creative capabilities since its acquisition of Fjord in 2013. To ensure consistency of delivery across these geographically diverse units it has developed its product design kit (PDK), a bundled set of proprietary assets including style guides, front-end development accelerators, accessibility standards, design templates, code snippets, pattern libraries, and usage and content guidelines to support design engagements.

Leveraging the Best of Both Parties

As part of building that common foundation, it is important to identify and keep the best of what each organization brings. Imposing rigid structures onto an acquired firm may reduce the value that can be realized, and applying the processes of a small, nimble boutique in a global context may not work.

Successful integration requires taking the core capabilities from the acquired firm, industrializing them, and then training the broader workforce of the acquirer on these new approaches. Following its acquisition of the BIO Agency in 2016, Tech Mahindra launched ‘BIO University’, with BIO employees delivering training to employees based in Tech Mahindra delivery centers on skills, processes, and methods derived from those used by the core BIO team but tailored to a global organization. This initiative expanded the design-skilled resource pool from 150 employees based in London and New York to 650, including ~400 in India.

Accenture and Fjord undertook a similar global training program: Fjord estimates that, in addition to its ~1.1k dedicated design specialists, another ~50k Accenture employees have undertaken some training on design principles about which they can speak knowledgeably with their clients.

Building an Organization within an Organization

For serial acquirers, it may not make sense to let each newly acquired creative firm maintain its autonomy in branding and approach as described above. Rather, the acquirer can build a dedicated business unit to house each of these organizations together. This business unit balances global commonalities in delivery methodologies, tools and skills without imposing the greater standardization that comes with integrating creative groups into non-creative business units. IBM iX, Wipro’s Designit and Accenture’s Fjord all play this role as a landing spot for each new creative acquisition.

When Wipro acquired Designit in 2015, the design firm had ~300 employees based across seven countries. By 2018, Designit, now Wipro’s strategic design unit, had grown its headcount to ~500 employees across thirteen countries thanks to both organic and inorganic growth.


As IT services firms have looked to expand the breadth of services offered to their clients, they have frequently looked to grow through acquisition of niche capabilities, particularly creative firms. To make these acquisitions successful, firms must be mindful of cultural differences and understand how to balance integrating capabilities and allowing the acquired firms to maintain a level of autonomy.

<![CDATA[Cognizant Looks to Automate Testing of Connected Devices]]>


NelsonHall has been commenting recently on the future of testing, looking at how AI algorithms and RPA tools fit in the context of QA. This blog takes a different perspective by looking at how one of the tier-one software testing service vendors is approaching its testing of bots and connected devices. With the fast adoption of connected devices, automating testing of consumer or industrial IoT-based products has become a new challenge for QA.

We recently talked with Cognizant to understand how the company is addressing this challenge, and learned that it already has several projects in progress. One example is its work for a telecom service provider that sells home security services to consumers, based on devices and sensors that trigger an alarm if someone attempts to gain entry. The client has outsourced the design and manufacturing of the devices, working with around ten suppliers, focusing itself on developing the algorithms running on the devices’ firmware/embedded systems. Cognizant has been helping the client with regression testing for each new release of the client’s embedded software on the security devices. Cognizant’s approach rests on two fundamental principles: conducting testing with a lab-based approach, and leveraging automation in the testing.

Cognizant is using its TEBOT robotic testing solution to test for interoperability between firmware, devices, and OS. TEBOT automates human-to-machine and machine-to-machine interactions at the physical-digital level for IoT. It takes a UI approach, using software tools such as Selenium and Appium for its test execution needs and invoking API calls on a Raspberry Pi to trigger physical actions/movements. For readers not familiar with it, the Raspberry Pi is a nano-computer developed by the Raspberry Pi Foundation, a Cambridge, U.K.-based charity set up to help spread IT skills; its main benefits are that it is very inexpensive (prices start as low as $35) and small (the size of a credit card). Cognizant has been able to test several scenarios for the client around detection of flooding, smoke, door opening, motion in the house, light, and temperature change. TEBOT also has reporting capabilities, with a dashboard that displays test results.
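
TEBOT itself is Cognizant IP, so the following is only a simplified, hypothetical sketch of the pattern it automates: fire a physical trigger (in a real lab, a Raspberry Pi GPIO call or robot actuator; here a simulated sensor), then assert that the device’s alarm logic responded. All class and function names are illustrative, not Cognizant’s.

```python
import time

class DoorSensor:
    """Simulated security device. A real rig would toggle a Raspberry Pi
    GPIO pin or a robot arm to physically actuate the sensor."""
    def __init__(self):
        self.armed = True
        self.alarm_log = []

    def trigger(self, event):
        # The device raises an alarm only while armed
        if self.armed:
            self.alarm_log.append((event, time.time()))

def run_regression(sensor, scenarios):
    """Fire each physical scenario and record whether the alarm raised."""
    results = {}
    for name in scenarios:
        before = len(sensor.alarm_log)
        sensor.trigger(name)   # in a TEBOT-style lab: a physical actuation
        results[name] = len(sensor.alarm_log) > before
    return results

sensor = DoorSensor()
results = run_regression(sensor, ["door_open", "motion", "smoke"])
# every armed scenario should have produced an alarm entry
```

The value of the lab approach is that this loop, once scripted, can be replayed unattended against each new firmware release.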

Cognizant has also been using TEBOT to test how the client’s home security product reacts to human instructions, using TEBOT to recreate the human voice, issue instructions to virtual assistants (e.g. Amazon’s Alexa), and capture the results.

Cognizant continues to invest in TEBOT. Looking ahead, a priority is to put TEBOT into the agile process, with continuous testing. Another key priority is to keep the price of TEBOT at affordable levels while being able to replicate it in other sites. The company is currently conducting TEBOT-based testing in its Bangalore lab for one of its clients and highlights that it can replicate the lab anywhere, given the low level of investment required.

With its TEBOT IP, Cognizant is providing a lab-based approach to connected device testing. Cognizant claims this automation-based approach can deliver 30%+ reduction in test cycle times compared with manual testing and 40% reduction in cost of quality around smart device certification. Cognizant also offers real-life testing for connected devices, here using its internal crowdsourcing capabilities with its fastest offering.

<![CDATA[Sopra Steria: Building on Big Data & Analytics Initiatives]]>


We recently talked with Sopra Steria about its work and capabilities around big data and analytics. The company has created a Data Science CoE (or Technology Stream in Sopra Steria’s terminology), which brings specialized services and expertise to the various in-country systems integration (SI) business units, focusing on its key accounts and bringing vertical knowledge.

Sopra Steria’s Data Science unit has been developing AI-based use cases, focusing on the analysis of unstructured data, including through the use of computer vision technologies.

Applying AI: client examples

An example of its work in applying AI algorithms to unstructured data is an aircraft manufacturer client that Sopra Steria has been helping by developing solutions to automate the inspection of aircraft using pictures taken by drones. This approach, which is much faster than manual inspections, uses drones to take pictures which are stored and compared in real-time with a repository of pictures showing anomalies.

The process started with the drone and AI identifying simple items such as missing paint or screws on the aircraft, and is now tackling more complex anomalies as Sopra Steria grows its expertise. Sopra Steria estimates that it requires ~200 pictures to teach the ML algorithm to spot anomalies, and believes its approach is now mature enough to be applied to similar projects with other clients.

Another example is a project based on the use of satellite images. Sopra Steria has helped an electricity grid operator analyze its network, identify where it needs to prune trees, and prioritize that work. Unlike the aircraft example, this approach does not rely on edge-based computing, as flying drones in areas with many trees is a challenge. The broad principles are the same, however, and the approach helps prioritize the trees most likely to interfere with the electricity grid.

Creating IP from expertise

Looking ahead, Sopra Steria’s Data Science unit wants to create IP out of its expertise. The CoE acknowledges it is walking a fine line between AI cloud vendors that tend to offer vertical-agnostic micro-services and ISVs that are highly specialized (e.g. Sensetime and Ntechlab in China and Russia respectively, both around video surveillance). The unit is adopting two main approaches:

  • A methodology approach for use case development. For example, in drone-based aircraft inspection, it knows what images of anomalies it needs to look at to optimize the learning process of the ML. And for email automation around language utterance, the unit created a repository of terms and jargon specific to the insurance industry
  • An internal focus. The unit has been taking part in Sopra Steria’s Inner Source approach and is creating AI and ML micro-services that it wants its developers to use. Indeed, it is finding that its software developers have an appetite for using the AI/ML micro-services it has created, and the CoE now acts as a support organization for applying these micro-services to projects.

We view this approach as a positive step in Sopra Steria’s evolving IP strategy. While Sopra Steria has been investing in commercial software products (e.g. Sopra HR Software, Sopra Banking Software and real estate), the firm’s SI units have been less vocal about their IP creation. This is now changing, initially driven by Inner Source providing software developers with the software tools and environments they require. Sopra Steria is now accelerating its IP approach.

<![CDATA[HCL Technologies’ RPA Initiatives for Software Testing]]>


NelsonHall has commented several times on how vendors have been introducing AI into their QA/software testing activities; for example, to enhance defect analysis and prediction.

We have talked less about the use of RPA because it did not seem to bring much innovation on top of what testing software products already offer. Testing software products have been around for over 20 years and are gradually expanding their automation capabilities from test script-based automation to new areas including service virtualization and test data management. In this context, RPA tools, which also tend to work at the same UI level as testing software does, seemed too generic and not specific enough for testing.

But the adoption of RPA in the context of testing is changing: we see clients experimenting with RPA workflow tools to complement or even replace test execution software. We recently talked with HCL Technologies about its RPA initiatives in the context of testing.

Using RPA workflows in the context of testing services has several prerequisites

HCL Technologies posits that there are prerequisites for using RPA in testing:

  • First, around volume. As in test automation, the cost of implementing automation only has a business case if the tasks are performed often
  • Second, around the nature of the tasks. HCL has identified two main options: labor intensive tasks or end-to-end testing services.

Also, it helps if the client already has license rights and can incorporate new bot usage in its existing license agreement.

Automating labor-intensive test activities

An obvious use case for RPA in testing is automating labor-intensive tasks such as test environment provisioning, job scheduling, test data generation or data migration. As HCL Tech highlights, data migration from one format to another, and subsequent testing, is a very good candidate for RPA-based automation, largely for volume reasons.

Some of these labor-intensive tasks can be automated using non-RPA workflows, and RPA is only one of the options for automating them. What matters is that the client can use its existing RPA license agreement and therefore automate these labor-intensive tasks at a limited extra cost.
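
As a hypothetical illustration of why data migration testing suits bot-style automation (our own sketch, not HCL Tech’s implementation), the snippet below replays a format transformation over every source record and flags targets that don’t match — a check that is trivial per row but only economical when automated at volume.

```python
def validate_migration(source, target, transform, key="id"):
    """Compare every transformed source record against its migrated target;
    the sheer row count is what makes automating this check pay off."""
    target_by_key = {rec[key]: rec for rec in target}
    mismatches = []
    for rec in source:
        expected = transform(rec)
        if target_by_key.get(rec[key]) != expected:
            mismatches.append(rec[key])
    return mismatches

# Illustrative transform: dates move from DD/MM/YYYY to ISO format
def transform(rec):
    d, m, y = rec["date"].split("/")
    return {"id": rec["id"], "date": f"{y}-{m}-{d}"}

source = [{"id": 1, "date": "31/01/2019"}, {"id": 2, "date": "02/03/2019"}]
target = [{"id": 1, "date": "2019-01-31"}, {"id": 2, "date": "2019-03-02"}]
mismatches = validate_migration(source, target, transform)   # clean migration

bad_target = [{"id": 1, "date": "2019-01-31"}, {"id": 2, "date": "02/03/2019"}]
bad = validate_migration(source, bad_target, transform)      # id 2 not converted
```

In an RPA context, the same comparison would be wrapped in a bot workflow so it can run under the client’s existing license agreement.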

End-to-end testing

A second use case is end-to-end (E2E) testing, also called business process testing. HCL Tech highlights that E2E testing often requires testing different applications based on different technologies (web, mainframe, client-server) for which popular test execution tools such as Selenium won’t work. In this case, an important element of HCL Tech’s automation strategy is RPA software, initially looking at workflow use cases.

In one example, supporting a client whose business processes involve websites, client-server, and mainframe applications, testing had relied on two different test execution tools (Selenium and Micro Focus UFT) plus manual testing. HCL Tech implemented Automation Anywhere, taking a UI approach, to conduct business process test execution.

An added benefit is that the RPA scripts developed for E2E testing can be reused for UAT.

Another example is for specific technologies, such as Citrix and virtual desktops.

HCL Tech will deploy RPA across testing clients

Looking ahead, HCL Tech wants to deploy RPA across its testing clients; it currently has about seven clients that have adopted RPA for their testing needs.

HCL Tech expects to develop further RPA use cases for test automation. A recent example is HCL Tech’s Zero Touch Testing (ZTT), which combines an ML and MBT approach. ZTT helps to convert UML diagrams into manual test cases and then test scripts using ML and RPA to capture objects from the applications.

Will RPA replace testing software products in the long run? Probably not. Beyond license costs, what matters is the investment made in developing test scripts. Clients will need a very strong business case to scrap their test scripts and redevelop them as RPA scripts, unless vendors create IP for automating the migration of test scripts into RPA scripts. The positioning of RPA tools, therefore, is in orchestrating different test execution tools across multiple applications built on different technologies.

The tool ecosystem is also changing, with several RPA ISVs moving into the test automation space and test automation ISVs expanding in the high-growth RPA software market. The nature of tools in the testing space is likely to change and NelsonHall will be monitoring this space.

<![CDATA[Advance 2021: The Road Ahead for Atos]]>

It is a decade since Thierry Breton assumed the mantle of CEO and Chairman at Atos, his arrival marking the end of a troubled period for the company. In that time, the company has improved its profitability (when he arrived, its operating margin was just 4.8% and all service lines were experiencing a deterioration in profitability, making his goal of a double-digit margin seem ambitious). Also in that time, the company has grown in scale and geographic presence through a series of fairly significant acquisitions in both the Atos and Worldline businesses. These acquisitions have also led to some dramatic changes to its portfolio.

Atos recently posted its Q4 and full year 2018 results. These have been discussed in NelsonHall’s Quarterly Update on Atos, and a more comprehensive Key Vendor Assessment on the company will be published in the next few days.

But the real news had already happened two weeks before, at its Investor Day: this featured several major announcements, including the intended deconsolidation of Worldline, the next three-year plan for Atos, and also succession planning in place for Breton.

So why has Atos embarked on its next three-year plan one year before the current three-year plan is due to complete? In short, the time is right: two acquisitions, both completed in the last quarter, have provided heft and scale to both Atos and Worldline. With the additions of Syntel and SIX Payment Services, each business is now in a stronger position to pursue its own ambitions.

Five years after its carve-out, Worldline becomes a standalone company

The first major announcement at the Investor Day was Atos’ intention to reduce its stake in Worldline from 50.8% to 27.4% through a proposed distribution of 23.4% of Worldline shares to Atos shareholders (who will receive 2 Worldline shares for every 5 Atos shares held), thereby deconsolidating Worldline from the Atos Group from early May 2019 (assuming approval at the AGM on April 30). Atos will remain the largest shareholder, followed by SIX Group with 26.9%.
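
The arithmetic behind the announcement can be sanity-checked with approximate share counts (the figures below are our own estimates, not taken from the filings): distributing 2 Worldline shares per 5 Atos shares hands out roughly 23.4% of Worldline, taking Atos’ stake from 50.8% to about 27.4%.

```python
# Approximate shares outstanding (illustrative estimates only)
atos_shares = 106.9e6
worldline_shares = 182.8e6

# 2 Worldline shares distributed for every 5 Atos shares held
distributed = atos_shares * 2 / 5
stake_distributed = distributed / worldline_shares   # ~23.4% of Worldline

stake_before = 0.508
stake_after = stake_before - stake_distributed       # ~27.4% retained
```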

We were not surprised to hear of this development, coming as it does five years after the 2014 IPO of Worldline and at a time of newly expanded scale: there are clear benefits to Atos shareholders, and to both companies, in becoming standalone. Both Atos’ and Worldline’s boards of directors have unanimously supported the proposal.

If we include a full year’s contribution from SIX Payment Services, which has expanded Worldline’s revenues from merchant services by 65%, and its geographic presence in the DACH region (primarily Switzerland), Worldline generated €2.2bn pro forma revenues in 2018. It is now a major player in Europe.

Worldline has been very clear about its ambition to become the dominant consolidator in the European payment processing market. And here it is succeeding, despite the disappointment of its failed attempt to acquire Gemalto: since its IPO, revenues have doubled, through a combination of inorganic and organic growth, and adjusted operating margin has expanded from 18.7% to 21.2% (again, pro forma, including SIX PS).

As a standalone company, Worldline will have an enlarged free float (45.7% post transaction, which we think might increase) with increased stock market visibility and be in a stronger position to use stock for acquiring: its ambitions as a payment processing consolidator are if anything even stronger, with the focus moving next to potential opportunities in some of the larger European economies. Separation from Atos might also be helpful in discussions with some banking consortia over potential new outsourcing opportunities; Worldline CEO Gilles Grapinet alluded to some large deals on the horizon.

Worldline shared its three-year financial targets for 2019 to 2021. We believe the topline targets are modest in ambition, given the M&A and large outsourcing deal aspirations.

Atos and Worldline will maintain commercial, industrial and GTM relationships via arm’s length contracts between the entities. This will include joint R&D programs and purchasing agreements.

ADVANCE 2021: the road ahead for Atos

So, what about Atos on a standalone basis?

Firstly, scale: including a 12-month contribution from Syntel, Atos generated €11.3bn pro forma revenue in 2018, with an adjusted operating margin of 10.0%. It remains a double-digit margin business without Worldline.

Secondly, profile: as we have discussed before, Syntel has changed the profile of Atos in terms of both geography and portfolio. With Syntel, Atos becomes less dependent on IT infrastructure services and becomes more balanced both at a global level and in its North America business. The reverse integration of much of Atos’ global B&PS business into Syntel continues in 2019. In its next three-year plan, entitled ‘ADVANCE 2021’, B&PS becomes a more important pillar of Atos’ growth plans for the next three years.

As part of ADVANCE 2021, Atos has introduced a new initiative, RACE (Road to Agile Competitiveness & Excellence), essentially the successor to various TOP plans, with a stronger focus on reducing direct costs, rather than optimizing G&A, to achieve further margin expansion. RACE has 12 pillars. We feel that some of these, such as the Global Optimization through Automation & Lean (GOAL) initiative (which started in H2 2018 and includes leveraging Syntel IP, increasing near/offshore delivery, and setting up shared service centers for indirect functions), indicate that Atos is playing catch-up with some of its peers. In terms of divisional margin targets, the division targeting the greatest expansion is B&PS, primarily from leveraging Syntel to achieve a 60% off/nearshore rate by 2021.

The three-year plan includes just 1-2% targeted organic growth in 2019, while North America and Germany (its two largest regions) recover.

IDM will remain a flat business for the next three years

With its Information & Data Management (IDM) division, a key priority is to get back to growth following declines in 2018 in Germany and North America (primarily the U.S.), where, under its new management, the outlook for 2019 appears much better than it was a year ago.

At a global level, IDM is now back under the leadership of Eric Grall, who is also Atos’ COO. The focus over the next three years will be on hybrid cloud orchestration and IoT/edge computing, with these areas balancing revenue stagnation in other units and the ramp-down of its traditional data center services business.

IDM has more growth ambitions for its U.K. BPO unit and wants to expand its financial services BPO business into Europe. Again, we feel the targeted growth over the next three years, which we estimate at around 8% CAGR, is a modest ambition given the healthy growth in many areas of BPO.

Overall, IDM is set to remain stable at around €6.3bn.

B&PS to benefit from market momentum in digital

The Business & Platform Solutions (B&PS) division enjoyed an improved performance in 2018, benefiting from repositioning around Digital Factory offerings. Syntel brings a business growing at ~10% (NelsonHall estimate), provides vertical expertise in the U.S. banking and healthcare industries, and will help capture project and digital transformation services growth in the U.S. There is (at last, we feel) an increasing focus on developing industry-specific propositions in each of its seven targeted verticals, potentially also pulling through IDM in some opportunities. This will be an important element in the next stage of Atos’ evolution. We will be looking with interest at how Atos harnesses the industry-specific capabilities it has gained in different regions and develops a stronger cross-regional industry play. Strengthening the GTM approach is a key part of this, but on its own will not suffice.

Overall, B&PS is targeting a 5% CAGR for 2019 to 2021, which, assuming no major changes in the macro-economic conditions, is in line with our predictions for overall market growth in these services.

Atos looking to replicate the growth model of Worldline at BDS

Perhaps one surprise at the Investor Day came from Breton’s comments as to possible intentions regarding the BDS unit. BDS comprises a range of businesses, e.g. security products and services, HPC and high-end servers, mission-critical systems for the defense industry, and secure communication devices and software. The common positioning across these different activities is security, AI, and big data/analytics. The division continues to enjoy double-digit organic growth (12.0% organic in 2018) and is nicely profitable (divisional operating margin was 15.4% in 2018). But Atos is unusual as an IT services company in having businesses like these.

It appears that Atos may look to replicate at BDS what it has done with Worldline. Breton alluded to the need for BDS to be listed if it is to be a consolidator in the cybersecurity market and afford the high valuation multiples currently seen in security M&A. He did not indicate a timeline; however, we would not be surprised to see a listing before end 2020. Again, this would make obvious sense.

Atos has been an active acquirer in the last decade; significant M&A activity appears to be over for a while at least, with Atos focusing primarily on organic growth. Atos in 2021 may not be a significantly larger business, but we think it will have evolved in its profile and positioning.

By Dominique Raviart and Rachael Stormonth


Details of Atos Q4 and full-year results and financial targets in the new ‘ADVANCE 2021’ three-year program are provided in NelsonHall’s Tracking Service, Quarterly Update and Key Vendor Assessment programs. To find out more, contact Guy Saunders.

<![CDATA[Expleo, the New Home of SQS, Invests in AI Offering]]>


NelsonHall continues to examine the AI activities of major testing service vendors. In the past 18 months, many testing service vendors have expanded their AI capabilities around analytics, making sense of the wealth of data in production logs, defect-related applications, development tools, and in streamed social data.

We recently talked with Expleo with regard to its AI-related initiatives. Expleo is the new company that has resulted from the acquisition of SQS, a QA specialist, by Assystem Technologies, an engineering and technology service organization.

Expleo highlights that its expertise lies in rule-based systems, an area now considered part of ML, and over the past 12 years it has created several rule-based system use cases (e.g. defect prediction and code impact analysis). It now has around ten AI use cases, several of which are in widespread use in QA (e.g. for sentiment analysis, defect prediction, and code impact analysis).

Other use cases remain specific to Expleo. One example relates to false positives: identifying test failures that are not due to the application under test but are caused by external factors, such as a test environment not being available or a network suffering from latency during test runs. Expleo has developed an IP, automatic error analysis and recovery (EARI), that relies on error classification and a rules engine. EARI launches a remedy to the false positive by applying a ‘last-known-good action’.
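The error-classification-plus-rules-engine pattern described above can be sketched in a few lines. This is a toy illustration in the spirit of EARI, not Expleo's actual implementation: the log patterns, failure classes, and remedy names are all invented for the example.

```python
# Illustrative sketch only: a toy rules engine in the spirit of EARI
# (error classification + 'last-known-good action'). All patterns, classes,
# and remedies here are hypothetical, not Expleo's actual IP.

# Map error-log patterns to a failure class and the remedy that last worked.
RULES = [
    ("connection refused", "environment_unavailable", "restart_test_environment"),
    ("read timed out",     "network_latency",         "retry_with_backoff"),
    ("assertion failed",   "application_defect",      None),  # genuine failure: no auto-remedy
]

def classify_failure(log_line: str):
    """Return (failure_class, remedy) for the first matching rule."""
    text = log_line.lower()
    for pattern, failure_class, remedy in RULES:
        if pattern in text:
            return failure_class, remedy
    return "unclassified", None

def is_false_positive(log_line: str) -> bool:
    """A failure is a false positive if it maps to a class with a known remedy."""
    _, remedy = classify_failure(log_line)
    return remedy is not None
```

The key design point is that only failures mapped to an environmental class carry a remedy; genuine application defects fall through to the tester.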

Expleo continues to invest in developing AI use cases. The testing industry is mature in automation once test scripts have been created, but remains a manual activity before the creation of scripts. Expleo is currently working on creating test scripts from test cases written in plain English, using NLP technology. Another AI use case is a real-time assessment of software quality or script auto-generation based on user activity and behavior.

AI in QA is still in its infancy and many issues remain to be solved. Expleo is taking a consultative approach to AI and testing, educating clients about the ‘dos and don’ts’ of AI in the context of QA. The company has compiled its best practices and wants to help its clients redefine their test processes and technologies, taking account of the impact of cognitive technologies on organizations.

Data relevancy is another priority. Expleo points out that clients tend to place too little emphasis on the data being used for AI training purposes. Data may be biased, not relevant from one location to another, or just not large enough in volume for training purposes. Expleo has been working on assessing the data bias, based on data sampling and a statistical approach to AI outputs. Once complete, Expleo can identify the training data causing the output exceptions and remove it from the AI training data.
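One simple statistical check in the spirit of the bias assessment described above is to compare label distributions across data slices (e.g. per location) against the overall distribution. This is a hedged, generic sketch, not Expleo's method; the field names and the 0.2 deviation threshold are illustrative.

```python
# Hedged sketch: flag data slices whose label distribution deviates from the
# overall distribution -- one simple form of training-data bias assessment.
# Threshold and keys are illustrative assumptions, not Expleo's approach.
from collections import Counter

def label_distribution(records, slice_key, label_key):
    """Per-slice label frequencies, normalized to proportions."""
    by_slice = {}
    for r in records:
        by_slice.setdefault(r[slice_key], Counter())[r[label_key]] += 1
    return {s: {lab: n / sum(c.values()) for lab, n in c.items()}
            for s, c in by_slice.items()}

def flag_biased_slices(records, slice_key, label_key, max_gap=0.2):
    """Flag slices where any label's proportion deviates from the overall
    proportion by more than max_gap."""
    overall = label_distribution([{**r, slice_key: "_all"} for r in records],
                                 slice_key, label_key)["_all"]
    flagged = []
    for s, dist in label_distribution(records, slice_key, label_key).items():
        for lab, p in overall.items():
            if abs(dist.get(lab, 0.0) - p) > max_gap:
                flagged.append(s)
                break
    return flagged
```

Slices flagged this way are candidates for removal from (or rebalancing within) the training set, as described above.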

Expleo is also working on bringing visibility to how ML systems work, with its ‘Explainable AI’ offering. The company highlights that understanding why an ML system came to a specific outcome or decision remains a challenge for the industry. Yet, understanding an AI’s decision process will soon become a priority for compliance or security reasons. An example is around autonomous vehicles – to understand why and how vehicles will make decisions. Another example is for compliance reasons, being able to prove to authorities that an AI meets regulatory requirements.

With its new scope and size (15k personnel), Expleo is expanding its QA capabilities towards the engineering world, around embedded software, production systems, IoT and PLM, which will require further investment in AI. This is just the beginning of Expleo’s AI and testing story.

<![CDATA[Amdocs Automates Chatbot Testing with BDD]]>


NelsonHall continues to explore how cognitive is reshaping software testing services, and here I look at how Amdocs is automating chatbot testing.

The market is shifting from continuous testing to cognitive

Over the last year, for many testing services providers, the focus evolved from creating continuous testing platforms (in the context of agile and DevOps adoption), to incorporate AI with the intent of automating testing beyond test execution.

The testing industry’s next priority is to go beyond the use of AI to automate testing, to testing AI systems and bots themselves – which brings new levels of complexity. Some of this comes from the fact that AI and bot testing is new, with methodologies still to be created and automation strategies to be formulated.

Chatbot testing is one new area where the industry is getting organized, with initiatives such as making sure the training data is not biased and creating alternative phrasings (“utterances”) of the same question to a chatbot: whichever way the question is asked, the response must consistently be the same.

Amdocs’ approach to chatbot testing

To most readers, the name Amdocs is probably reminiscent of OSS/BSS and other communication service provider-specific software products, and rightly so. But Amdocs has also become an IT service company, providing services around its own products, non-Amdocs products, and custom applications.

Amdocs recently briefed NelsonHall on the work it does in chatbot training and testing. Its approach to chatbot training and testing relies on several priorities:

  • Understanding what the customer wants to achieve (the “intent”), e.g. to check the balance of its data plan
  • Finding out the context, or reason for the call, e.g. to understand what mobile data plan the customer currently has
  • Keeping the “flow” of the chat relevant, e.g. making sure the chatbot’s response remains related to the context and the intent.

Automating chatbot training and testing

Amdocs uses several technologies to help automate chatbot training and testing, for example NLP for word clustering, and ML for classification to help in understanding the intent of a customer’s interaction with a chatbot. Amdocs highlights that it can achieve an accuracy level of ~96%.
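Intent classification of the kind described above can be illustrated with a toy bag-of-words model. This is a generic sketch, not Amdocs' NLP/ML pipeline; the intents and training utterances are invented, and a production system would use proper word clustering and a trained classifier rather than this nearest-centroid shortcut.

```python
# Minimal illustration of ML-style intent classification for a chatbot:
# a toy bag-of-words nearest-centroid classifier. Intents and utterances
# are hypothetical examples, not Amdocs' actual models or data.
from collections import Counter

TRAINING = {
    "check_balance": ["how much data do I have left",
                      "check my data balance",
                      "what is my remaining data allowance"],
    "change_plan":   ["I want to upgrade my plan",
                      "switch me to a bigger data plan",
                      "change my mobile plan"],
}

def _bow(text):
    """Lower-cased word-frequency vector."""
    return Counter(text.lower().split())

# One word-frequency "centroid" per intent, summed over its training utterances.
CENTROIDS = {intent: sum((_bow(u) for u in utts), Counter())
             for intent, utts in TRAINING.items()}

def classify_intent(utterance: str) -> str:
    """Pick the intent whose centroid shares the most word mass with the input."""
    words = _bow(utterance)
    def overlap(centroid):
        return sum(min(n, centroid[w]) for w, n in words.items())
    return max(CENTROIDS, key=lambda i: overlap(CENTROIDS[i]))
```

The point of the sketch is the shape of the problem: many utterance variants must map to one intent, which is also exactly what makes testing such classifiers non-trivial.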

Amdocs relies on integration with other applications and APIs for its context needs.

For its chat flow needs, Amdocs uses the BDD approach. Under BDD, testers or business analysts write test cases (named feature files) in English (and then in the Gherkin language) and translate them into test scripts. With this approach, Amdocs creates a series of scenarios guiding the bot step-by-step on how to react to customer interactions.
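The BDD flow just described can be sketched as a Gherkin-style scenario mapped to executable steps, in the manner of frameworks such as Cucumber or behave. This is a self-contained toy interpreter, not Amdocs' Ginger implementation; the scenario text and step patterns are invented for illustration.

```python
# Hedged illustration of BDD for chat flows: a Given/When/Then scenario in
# plain English, dispatched to step functions via regex patterns. This mimics
# Cucumber/behave-style frameworks; it is not Amdocs' actual tooling.
import re

SCENARIO = """
Given the customer has a 5GB data plan
When the customer asks "how much data do I have left"
Then the chatbot replies with the remaining balance
"""

# Step definitions: pattern -> function acting on a shared context dict.
STEPS = [
    (r'Given the customer has a (\d+)GB data plan',
     lambda ctx, gb: ctx.update(plan_gb=int(gb), used_gb=2)),   # toy usage figure
    (r'When the customer asks "(.+)"',
     lambda ctx, q: ctx.update(reply=f"You have {ctx['plan_gb'] - ctx['used_gb']}GB left")),
    (r'Then the chatbot replies with the remaining balance',
     lambda ctx: ctx.update(checked="GB left" in ctx["reply"])),
]

def run_scenario(scenario: str) -> dict:
    """Interpret each Gherkin line by dispatching to its matching step."""
    ctx = {}
    for line in filter(None, map(str.strip, scenario.splitlines())):
        for pattern, step in STEPS:
            m = re.fullmatch(pattern, line)
            if m:
                step(ctx, *m.groups())
                break
    return ctx
```

Because the scenario is plain text, non-programmers can author the chat flows while the step definitions carry the automation, which is the core appeal of BDD for chatbot testing.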

Amdocs also uses open source software Botium, which relies on the same principles as BDD. The integration of Botium helps it test chatbot technologies from vendors including Facebook, Slack, Skype, Telegram, WeChat, and WhatsApp.

Amdocs has integrated the BDD approach in its main testing framework, Ginger. Ginger integrates with CI tool Jenkins, which means the BDD scripts can be run in a continuous testing mode as part of an iterative approach to training and testing. The integration with Ginger also provides back-end integration testing, including API testing, notably for its context needs.

Testing AI systems

Amdocs’ approach brings some prioritization of what to test in chatbots, and some level of automation. This is just the beginning: chatbots are currently relatively simple and, as their sophistication grows, so will their testing needs.

This, of course, raises further questions:

  • Further methodologies and governance. Testing data is usually scarce, and vendors sometimes use training data for testing purposes. Also, training and testing activities now tend to be similar, which raises the question of testing’s independence
  • Chatbots will evolve into voice bots. This raises the question of accents and local expressions
  • Chatbots are currently simple because they are deterministic, which means their output can be predicted. But how do you test AI systems where you cannot predict the output?

NelsonHall will be publishing its latest report on software testing services soon, focusing on next-gen services, including the role of AI and RPA in testing, along with mobile and UX testing.

<![CDATA[NIIT Technologies Adapts Data & Analytics Around Big Data & AI]]>


In a recent meeting with NIIT Technologies, we discussed how its Data & Analytics offering has been adapting, with a focus on developments around big data and AI. Here are the main talking points.

Overcoming tool fragmentation

One priority has been NIIT Tech’s Xpress Suite, one of Data & Analytics’ most popular offerings, which seeks to overcome the fragmentation of big data and analytics software tools.

Analytics & Data has taken a platform approach, relying on a standard architecture and pre-integration across software tools, using mostly open source tools. The approach is brownfield and Analytics & Data will integrate Xpress Suite with the client’s existing software investments.

Xpress Suite includes a series of accelerators around four main use cases: migration of data to the cloud, MDM, data lakes/big data, and insights/AI. The most popular to date have been around data migration to the cloud and data lakes/big data.

Data Lab as ideation and prototype centers

Analytics & Data’s second most popular offering is Data Lab, a set of virtual centers used for conducting ideation with clients, identifying use cases, and creating PoCs, within six weeks. The nature of projects varies significantly, with customer-360 data projects and social analytics being recurring themes.

Developing industry relevant use cases

Data & Analytics has also been working on creating industry-specific analytics use cases around customer data. The practice has adopted a start-up-like ‘fail fast’ approach and set up a dedicated team, Incubation Services, that creates reusable use cases, with ML playing an important role in these. The team talks with NIIT Tech’s industry units to identify appropriate use cases.

Development of each use case (not a product) takes around four to six weeks. Where there is client take-up, Analytics & Data will then customize it to that client’s specific needs. Most of these use cases are in the travel & transportation industry, with others in insurance and BFS.

Data & Analytics is also bringing its capabilities to enhance the analytics functionality of the industry-specific software products that NIIT Tech has created or acquired. Two examples are:

  • Airline sector revenue optimization software
  • Insurance sector products for underwriting and financial reporting.

The range of services and scope of projects varies extensively across technologies.

Example clients are:

  • A travel company that wanted to monitor the performance of its sales teams and marketing operations across geographies and brands. Data & Analytics helped create a data lake for the company’s sales data, using its Data Lake Xpress IP, and then deployed analytics. The PoC lasted six weeks and the implementation about six months. The client now has a modernized platform integrating data from various sources, providing better visibility of its sales, customers and marketing activities
  • A U.S. insurance firm, where Data & Analytics was involved in a project for image recognition of property damage, based on images taken by the client’s property damage assessors, and using AI to process the images and identify the nature and extent of the damage
  • A European wealth management firm, for which Data & Analytics set up a data lake using NIIT Tech’s DLXpress IP. The solution ingests financial data, masks it for data security purposes, and provides analytics. It is used by portfolio managers for their fund allocation needs, to understand and explain their investment strategies and risk exposure.

Data & Analytics highlights that it has set up an effective structure and intends to continue investing in use cases via its ‘fail fast’ approach.

<![CDATA[The Move to B2B Platforms: Q&A with Manuel Sevilla, Capgemini CDO]]>


Platforms have been increasingly important in B2C digital transformation in recent years and have been used to disintermediate and create a whole raft of well-known consumer business opportunities. B2B platforms have been less evident during this period outside the obvious ecosystems built up in the IT arena by the major cloud and software companies. However, with blockchain now emerging to complement the increasing power of cognitive and automation technologies, the B2B platform is now once again on the agenda of major corporations.

One IT services vendor assisting corporations in establishing B2B platforms to reimagine certain of their business processes is Capgemini, where B2B platform development is a major initiative alongside smart automation. In this interview, NelsonHall CEO John Willmott talks with Manuel Sevilla, Capgemini’s Chief Digital Officer, about the company’s B2B platform initiatives.


JW: Manuel, welcome. As Chief Digital Officer of Capgemini, what do you regard as your main goals in 2019?

MS: I have two main goals:

  • First, automation. We’re looking to automate all our clients’ businesses in a smart way, transforming their services using combinations of RPA, AI, and use of APIs to move their processes to smart automation
  • Second, to build B2B platforms that enable customers to explore new business models. I see this as a key development in the industry over the next few years, fueled by the need for third-party involvement in establishing peer-to-peer blockchain-based B2B platforms.

JW: What do you see as the keys to success in building a B2B platform?

MS: The investment required to establish a B2B platform is by nature significant and has to be viewed as long-term. This significant, long-term investment is required across three areas:

  • Obviously, building the platform requires a significant investment since, in a B2B environment, the platform must have the ability to scale and have a sufficient number of functionalities to provide enough value to the customers
  • Governance is critical to provide mechanisms for establishing direction and priorities in both the short and long-term
  • Building the ecosystem is absolutely critical for widespread platform adoption and maintaining the platform’s longevity.

JW: How do the ecosystem requirements differ for a B2B platform as opposed to a B2C platform?

MS: B2B and B2C are very different. In B2C environments, a partial solution is often sufficient for consumers to start using it. In B2B, corporates will not use a partial platform. For example, for corporates to input their private data, the platform has to be fully secured. Also, it is important to bring a service that delivers enough value either by simplifying and reducing process costs or by providing access to new markets, or both. For example, a B2B supply chain platform with a single auto manufacturer will undoubtedly fail. The big components suppliers will only join a platform that provides access to a range of auto manufacturers, not a separate platform for each manufacturer.

Building the ecosystem is perhaps the most difficult task when creating a B2B platform. The value of Capgemini is that the company is neutral and can take the lead in driving the initiatives to make the platform happen. Capgemini recognizes humbly that for a platform to scale, it needs not only a diverse range of partners but also that Capgemini cannot be the only provider; it is critical to involve Capgemini’s partners and competitors.

JW: How does governance differ for a B2B platform?

MS: In a fast-moving B2B environment, defining the governance has to proceed alongside building the ecosystem, and it is essential to have processes in place for taking decisions regarding the platform roadmap in both the short and long-term.

B2B platform governance is not the usual two-way client/vendor governance; it is much more complex. For a B2B platform, you need to have a clear definition of who is a member and how members take decisions. It then needs enough large corporates as founder members to drive initial functionalities and to ensure that the platform will bring value and will be able to scale. Once the platform has critical mass, then the governance mechanism needs to adapt itself to support the future scaling of the platform, often with an accompanying dilution of the influence of the founder members.

The governance for a B2B platform often involves creating a separate legal entity, which can be a consortium, a foundation, or even multiple legal entities.

JW: Can you give me an example of where Capgemini is currently developing a B2B platform?

MS: Capgemini is currently developing four B2B platforms, including one with the R3 consortium to build a B2B platform called KYC Trust that aims to solve the corporate KYC problem between corporates and banks. Capgemini started work on KYC Trust in early 2016 and it is expected to go into scaled production in the next 12-24 months.

JW: What is the corporate KYC problem and how is Capgemini addressing this?

MS: Corporate KYC starts with the data collection process, with, at present, each bank typically asking the corporate several hundred questions. As each bank typically asks its own unique questions, this creates a substantial workload for the corporate across banks. Typically, it takes a month to collect the information for each bank. Then, once a bank has collected the information on the corporate, it needs to check it, which means paying third-parties to validate the data. The bank then typically uses an algorithm to score the acceptability of the corporate as a customer. This process needs to be repeated regularly. Also, the corporate typically has to wait, say, 30 days for its account to be opened.

To simplify and speed up this process, Capgemini is now building the KYC Trust B2B platform. This platform incorporates a standard KYC taxonomy to remove redundancy from, and standardize, data requests and submission, and each corporate will store the documents required for KYC in its own nodes on the platform. Based on the requests received from banks, a corporate can then decide which documents will be shown to whom and when. All these transactions will be traceable in blockchain so that the usage of each document can be tracked in terms of which bank accessed it and when.
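The traceability property described above, that every document access is recorded so corporates can see which bank accessed what and when, can be illustrated with a simple hash chain. This is a generic tamper-evidence toy, not Capgemini's or R3's actual Corda-based design; the class, fields, and document names are invented.

```python
# Illustrative sketch only: a hash-chained, append-only log of document-access
# events, in the spirit of the KYC Trust traceability described above.
# Generic toy, not Capgemini's/R3's actual platform design.
import hashlib
import json

def _hash(record: dict) -> str:
    """Deterministic SHA-256 over a canonical JSON encoding of the record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class AccessLedger:
    """Append-only log of which bank accessed which document, and when."""
    def __init__(self):
        self.chain = []

    def record_access(self, bank: str, document: str, timestamp: str):
        prev = self.chain[-1]["hash"] if self.chain else "genesis"
        entry = {"bank": bank, "document": document,
                 "timestamp": timestamp, "prev": prev}
        entry["hash"] = _hash(entry)  # hash covers the body including prev-link
        self.chain.append(entry)

    def verify(self) -> bool:
        """Recompute every hash; any tampering breaks the chain."""
        prev = "genesis"
        for entry in self.chain:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev"] != prev or _hash(body) != entry["hash"]:
                return False
            prev = entry["hash"]
        return True
```

Chaining each entry to the previous one's hash is what makes the log tamper-evident: altering any recorded access invalidates every subsequent link.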

The advantage for a bank in onboarding a new corporate using this platform is that a significant proportion of the information required from a corporate will already exist, having already been supplied to another bank. The benefits to corporates include reducing the effort in submitting information and in being able to identify which information has been used by which bank and when, where, and how.

This will speed up the KYC process and simplify data collection operations. It will also simplify how corporates manage their own data such as shareholder information and information on new beneficial owners.

JW: How does governance work in the case of KYC Trust?

MS: A foundation will be established in support of the governance of KYC Trust. The governance has two main elements:

  • Establishing the basic rules, in particular, defining how a node can be operated and specifying the applications that can be run on top of the platform to create questionnaires and how the platform will integrate with banks’ own KYC platforms
  • Providing the means for corporates to submit information, enabling the mixing of data from multiple countries while respecting local regulations. This includes splitting the information submission between the various legal entities of each corporation with data potentially only hosted locally for each legal entity.

Key principles of the foundation are respect for openness and interoperability, since there cannot be a single B2B platform that meets all the business needs. In order to build scale, it is important to encourage interoperability with other B2B platforms, such as (in this case) the Global Legal Entity Identifier Foundation (GLEIF), to maximize the usefulness and adoption of the platform.

JW: How generally applicable is the approach that Capgemini has taken to developing KYC Trust?

MS: There are a lot of commonalities. Sharing of documents in support of certification & commitments is the first step in many business processes. This lends itself to a common solution that can be applied across processes and industries. Capgemini is building a structure that would allow platforms to be built in support of a wide range of B2B processes. For example, the structure used within KYC Trust could be used to support various processes within supply chain management. Starting with sourcing, it could be used to ensure, for example, that no children are being employed in a factory by asking the factory to submit a document certified by an NGO to this effect every six months. Further along the supply chain, it could also be used, for example, to support the correct use of clinical products sold by pharmaceutical companies.

And across all four B2B platforms currently being developed by Capgemini, the company is incorporating interoperability, openness, and a taxonomy as standard features.

JW: Thank you Manuel, and good luck. The emergence of B2B platforms will be a key development over the next few years as organizations seek to reimagine and digitalize their supply chains, and I look forward to hearing more about these B2B platform initiatives as they mature.

<![CDATA[DXC Bets $2bn on Recovery of Luxoft to Scale its Digital Capabilities]]>

Yesterday morning, DXC announced its intended acquisition of Luxoft in an all-cash transaction of $59 per share, around $2bn. This represents a 48% premium over Luxoft’s average closing share price over the previous ninety days (and an ~86% premium on Friday’s closing price). The deal is expected to close by end June 2019.

In recent years DXC (including as CSC) has made a number of acquisitions that have expanded its ServiceNow, Microsoft Dynamics, and recently Salesforce capabilities and formed the bedrock of its Enterprise & Cloud Apps (ECA) practices. This is different: the Luxoft transaction is closer in feel to its 2016 acquisition of Xchanging, which brought in Insurance sector capabilities, or the more recent acquisition in the U.S. of Molina Medicaid Solutions. In all three cases, DXC is acquiring a company that has specific issues and challenges but that also expands DXC’s own industry capabilities; Luxoft will in addition expand DXC’s capabilities around Agile/DevOps.

Luxoft is a company in transformation

With revenues of $907m in FY18 (the year ended March 31, 2018) and nearly 13k personnel, Luxoft is a mid-sized firm. DXC is presenting Luxoft as a “digital innovator”, but it is a company that is grappling with significant client-specific and market challenges. Until FY17, it was highly successful, enjoying revenue growth in the range of 20% to 30%. FY18 saw a slowdown, still to a very solid level of 15.4% (of which we estimate ~7% organic), but FY19 has seen flat growth.

In particular, Luxoft has been hit hard by its dependency on the investment banking/capital markets sector, in particular on two clients: UBS and Deutsche Bank. Back in FY15 they accounted for over 56% of Luxoft’s total revenues (~$294m). Since then, Luxoft has been growing its share of wallet in other key accounts, and the combined revenues from clients 3 to 10 have increased from $123m in FY15 to ~$208m in FY18, a CAGR of ~19%, with clients 5 to 10 growing at nearly 30%. In FY19 Luxoft is expecting around 13% revenue growth from these accounts (to, we estimate, ~$235m).

But while there has been very strong growth in its other top 10 accounts, Luxoft has since FY18 been impacted by declining revenues at both UBS and Deutsche Bank (the latter by 13.4%). H1 FY19 saw an 11% y/y decline, and these two accounts now represent just over 30% of total revenues. Both have been insourcing some talent. While Luxoft believes that the UBS account is now stabilizing, Deutsche Bank is more challenged, and the account remains an issue: revenues are likely to decline by ~44% in FY19 to ~$90m, or <10% of total revenue, with a further contraction in FY20.

Outside these two, Credit Suisse is also a major client and Luxoft is clearly exposed to the slowdown in the European capital markets/investment banking sector. But elsewhere in financial services, there are much stronger opportunities in the near-term in the wealth and asset management sector, particularly in the U.S. and there is the potential for DXC to help Luxoft expand its presence in the Australian banking sector.

Luxoft has been looking to diversify its sector capabilities in recent years, in particular beefing up its offerings to the automotive sector and developing relationships, mostly in Europe, with tier-one OEMs and suppliers such as Daimler, Continental, and Valeo. Automotive & Transport is a hyper-growth business for Luxoft, delivering nearly 43% growth in FY18, but for a company the size of DXC this is a small business it is picking up: FY18 revenues were $158m (FY19 revenues are likely to be ~$220m, boosted by Luxoft’s acquisition of embedded software specialist Objective Software, which has brought in some U.S. client relationships). Some of these are large accounts (four of the top 10 accounts are in the automotive sector), and one is a common account to both DXC and Luxoft.

In its Digital Enterprise unit, which services all other verticals, Luxoft has been shifting toward more digital offerings, at the same time looking to reduce its exposure to low-margin work. Revenue performance in the unit has been erratic, with a strong performance in FY18 followed by a 13% decline in H1 FY19, though Luxoft claims to be confident that it has completed the unit’s transformation.

In brief, among the capabilities that Luxoft will bring to DXC we see:

  • Significant agile development capabilities, enhancing DXC’s application services business
  • Some analytics capabilities
  • Some product engineering services capabilities in the automotive sector, plus some experience in IoT-centric projects
  • Offerings around UX design (in June 2018, Luxoft acquired Seattle-based design agency Smashing Ideas from Penguin Random House).

Luxoft has also been developing its capabilities in blockchain, an area where we suspect DXC has little experience, with pilots in the healthcare, government (e.g. e-voting in Switzerland), and automotive sectors.

And, of course, Luxoft has a sizeable nearshore delivery capability in Eastern Europe. Luxoft’s delivery network has its roots in Ukraine and Russia. In reaction to the 2014 Ukraine-Russia crisis, the company initiated its Global Upgrade program with the intent of de-risking its profile and increasing its presence in other nearshore locations, in particular Romania and Poland. Since FY14, Luxoft has decreased its headcount in Ukraine from 3.6k to 3.1k and its headcount in Russia from 2.3k to 1.9k. In parallel, Luxoft has significantly increased its presence onshore, with now 1k personnel in North America, making its delivery network far less risky for clients. DXC highlights that it will be able to help Luxoft scale its delivery footprint in the Americas and India.

DXC is betting Luxoft will help accelerate its topline growth

While Luxoft has been grappling with declining margins – partly, but not solely due to the declines at Deutsche Bank and pricing pressures in other accounts – DXC is emphasizing the topline opportunities, rather than cost synergies. Given DXC’s track record in stripping out costs, we imagine Luxoft employees will be glad to hear this.

DXC is targeting revenue growth from:

  • Luxoft achieving 15% revenue growth over the next three years
  • Revenue synergies of $300m to $400m over this period, representing 1% to 2% of additional revenue growth for DXC

To achieve this, DXC is looking to cross-sell, for example, the:

  • Product engineering capabilities of Luxoft to North American and Asian automotive clients and, as a priority, to other sectors, e.g. high-tech, manufacturing, and healthcare
  • Digital capabilities of Luxoft into DXC’s client base. DXC claims that all of Luxoft’s business is, by its definition, digital, thus adding nearly $1bn in revenues to DXC's own $4bn digital business, and expects to grow this $5bn business by another 20% annually
  • Managed cloud and digital workplace capabilities of DXC into the Luxoft base (where, however, there are typically well entrenched incumbents).

DXC is also looking to broaden the use of Luxoft assets, taking FS and automotive capabilities and applying these to industries where Luxoft has not historically had a large presence. As an example, Luxoft has developed data visualization assets for FS clients, capabilities it believes that could be applied to other sectors.

How will DXC and Luxoft Integrate?

One key question is how DXC will manage the integration. In the short term at least, Luxoft will remain an independent company, retaining its brand and senior leadership (DXC intends to have retention plans in place for key Luxoft execs). For DXC to ultimately position as an end-to-end and global IT services organization, able to offer clients a full spectrum of services ranging from digital transformation advisory and concept testing through to IT modernization in all its key geographies and target markets, there will need to at least appear to be an integrated go-to-market approach, and also a standardized global delivery operation that leverages these newly acquired assets.

David McIntire, Dominique Raviart, Rachael Stormonth

<![CDATA[Wipro’s Topcoder & The Future of Crowdsourcing]]>

We recently had a briefing with Topcoder, an entity acquired by Wipro in 2016. Topcoder is known for its crowdsourcing capabilities and its extensive community network and recognized for the wide range of services it offers across several areas of technology services, with a focus on agile development, digital design, and algorithms/analytics.

Topcoder’s four main operational principles

Topcoder has based its operations on four broad principles, which it adapts to its various activities. They are:

  • Scale. Topcoder has ~1.4m members in its community, and highlights that it needs a large community for accessing the right talent and for fast provisioning for the ~200 projects it launches per week
  • Breaking projects into pieces, largely for sourcing quickly and awarding the project to many members. The company highlights that this approach works well for application services in the context of agile projects, utilizing DevOps techniques to help with integrating the different pieces
  • Internal competition/gamification across members. This approach relies on rewarding members based on the quality of their work. To assess the quality of work (in application development and maintenance), Topcoder relies on tools such as SonarQube for static code analysis and Checkmarx for static security analysis, along with Black Duck software for open source IP infringement management. For digital product design activities, quality is assessed by the client based on its own preferences. Finally, for complex projects such as AI projects and algorithms, Topcoder will define, sometimes with the help of universities (e.g. Harvard University), what the success factors and correct level of incentives are
  • Driving work to the community. Topcoder has adopted a light structure and has a headcount of ~110, mostly focused on R&D, sales and community management. The company relies on this small team along with its community for managing projects, through its Co-Pilot approach, where co-pilots are essentially project managers. Topcoder is also using its community for development work.
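The tool-based quality gating described above can be sketched as a simple scoring and ranking function. This is a hypothetical illustration only – the weights, penalty formula, and function names are our assumptions, not Topcoder's actual scoring model; in practice the issue counts would come from SonarQube and Checkmarx scan reports.

```python
# Hypothetical sketch of scoring crowdsourced submissions by combining
# static-analysis findings, in the spirit of Topcoder's tool-based review.
# Weights and thresholds are illustrative, not Topcoder's actual formula.

def quality_score(code_issues, security_issues, max_issues=50):
    """Return a 0-100 score: start from 100 and deduct per finding,
    weighting security findings more heavily than code-quality ones."""
    penalty = code_issues * 1.0 + security_issues * 3.0
    return max(0.0, 100.0 - 100.0 * penalty / max_issues)

def rank_submissions(submissions):
    """Order member submissions by score so rewards go to the best work."""
    return sorted(
        submissions,
        key=lambda s: quality_score(s["code_issues"], s["security_issues"]),
        reverse=True,
    )
```

The heavier weight on security findings reflects the separate static security analysis step described above; a real marketplace would also factor in functional test results and reviewer input.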

Synergies between Wipro and Topcoder

Commercial expansion remains a priority and Topcoder continues to evangelize about the benefits of crowdsourcing. In response to one area of resistance to client adoption around security, Topcoder points to the work it has done for NASA as an example of its security capabilities.

Cross-selling into Wipro’s clients should help Topcoder in its commercial expansion. It is targeting the large BFSI client base of Wipro, with the intent of overcoming the traditional resistance to change in the sector. The alignment with Wipro goes beyond sales: Topcoder is using Wipro personnel for its crowdsourcing activities, targeting the 75k or so Wipro employees that are relevant to its activities. The company estimates it derives 45% of its revenues from Wipro’s clients, with Topcoder complementing Wipro for reaching out to talent, for instance. Also, Topcoder and Wipro have aligned their offerings in certain areas, notably in crowdtesting/QA as a service (QaaS). See this blog to learn more.

Managing the community

Topcoder feels it has the scale and is finding it easy to grow its community, currently by ~50k members per quarter. It is looking to promote the rise of crowd members to specialized tasks, such as co-pilot (i.e. project manager), for helping to break down projects into pieces or setting up the success criteria for a given project. Topcoder is creating a career path to help.

Further investment in IP and enabling tools

Topcoder’s most significant IP is its marketplace, which connects clients with the community and is used for managing projects, clients, and members. It currently spends ~30% of its revenues on R&D, and highlights that it needs to maintain this level of investment to stay abreast of technology innovation. Examples include application containerization to distribute applications across members, the use of AR/VR, and even quantum computing.

In the mid-term, Topcoder is looking at ML: the company has 15 years of data and 4m software assets it intends to analyze and start creating algorithms to help automate parts of the software development lifecycle. This should bring Topcoder to a whole different business model and bring IP to its human-intensive service.

We couldn’t agree more with Topcoder’s vision. The future of crowdsourcing vendors lies in bringing automation to their service activities. Automation is already there for activities such as project management, crowd member sourcing, and work analysis. Looking ahead, the future lies with ML to analyze the mass of data collected through years of work. This is an exciting time for crowdsourcing in IT services.

<![CDATA[BearingPoint Looks to Evolve Advisory Model Under New Managing Partner]]>


NelsonHall recently attended BearingPoint’s analyst event in Lisbon. As it starts its second decade with a new Managing Partner (Kiumars ‘Kiu’ Hamidian, only the second in the company’s history), the strategy that has served BearingPoint well in its first ten years is now evolving in ways that reflect significant developments in the nature of the consulting market.

In its first decade as a company since the 2009 MBO, BearingPoint has been something of a success story in the European management and IT consulting market, achieving sustained topline growth supported by geographic expansion, and steady improvement of its EBIT margin. 2017 revenues were up 13% to €712m, with growth in all geographies and service lines, and the firm is well on its way to achieving its targeted €1bn revenues by 2020.

Key elements of strategy

Elements of BearingPoint’s strategy in recent years that remain key pillars going forward include:

  • The ‘One Firm’ mindset, with a common set of offerings and consistency of delivery methodologies across geographies
  • The focus on clients headquartered in Europe, achieving a ‘global reach’ to be able to support them in projects outside Europe through an alliance ecosystem (West Monroe Partners in the U.S., ABeam Consulting in Asia, Grupo ASSA in LATAM)
  • The business model, comprising:
    • Strategy, made up of four service lines: digital & strategy, finance & regulatory, operations, IT advisory
    • Solutions: the Solutions unit, launched in 2015, has three product lines: IP in regulatory technology, in particular fintech (e.g. its Abacus suite); advanced analytics; and digital platform solutions for the CSP and entertainment sectors (based on Infonova R6, now offered on AWS)
    • Ventures, a more recent capability; e.g. an investment in Norwegian insure-tech start-up Tribe in April 2017. Also includes employee ventures, typically coming from its ‘Be an Innovator’ initiative, and client ventures, emanating from consulting projects with start-ups
  • Selective acquisitions, for example in 2017 of retail supply-chain specialist LCP Consulting in the U.K., and an automotive consulting unit in Italy
  • An increasing emphasis in recent years on innovation, e.g. the introduction of the ‘Be an Innovator’ process and of shark tank events.

Forward-looking priorities

While BearingPoint’s next five-year plan has yet to be finalized, Hamidian outlined four priorities in the following dimensions:

  • Markets
  • Portfolio
  • People
  • Culture.


Markets

BearingPoint is looking to build up capabilities in several European countries, including the U.K. (where the practice is relatively small, focusing on sectors such as financial services) and the Netherlands. In terms of headcount, BearingPoint remains very focused on Germany and France, and has product units in Austria (ex-Infonova) and Switzerland (Abacus): the ambition is to have a minimum of 300 people in each of the major European markets. Outside Europe, BearingPoint is also looking to work with its partners to expand its presence in the U.S. and China, and in Singapore, where it has a joint hub with ABeam Consulting focusing on IP-based reg-tech projects.


Portfolio

There is a very clear drive to shift from the classic process redesign work of traditional consultancy and to focus much more strongly with clients on projects that leverage IP assets and are more transformational in nature (for example, looking at new business models). The role of the Solutions unit is critical in this. Since January, the unit has had its own P&L and regional managers, encouraging, inter alia, entrepreneurialism in both product development and GTM.

In addition to some well-established assets around reg-tech (for which it is best known), the unit has also developed IP such as its Factory Navigator, which simulates production and logistics processes; LOG 360 vehicle emissions calculation, built on SAP HANA; and Active Manager, used for coaching and training front-line managers, e.g. in call centers, to be more active/effective. All are SaaS-based offerings. One of the presenting clients we spoke to is a very strong advocate of Active Manager, having implemented it at a major telco and subsequently introduced it in his next role in a different sector.

Expect to see further developments to the portfolio, including industry-specific solutions. But the strategic element lies in the intersection between Solutions and Consulting – the aim is for consulting projects and also managed services increasingly to have embedded IP. 

As well as its own IP, BearingPoint is looking to increasingly position around its abilities to orchestrate an ecosystem of technology partner alliances: having started with Salesforce (now a Platinum partner), the emphasis has expanded to RPA and AI and emerging technologies such as blockchain. The last two years have seen a large increase in the number of technology partnerships, and more are to be expected.

The role of the Ventures unit is also important here. While BearingPoint also refers to employee ventures, most coming from its ‘Be an Innovator’ initiative, and to client ventures, emanating from consulting projects with start-ups, the primary focus is on market ventures. It is working with incubators such as LeVillage in Paris and weXelerate in Vienna (see our 2017 blog here) and hosting events like the BearingPoint Insurance Dialog in Cologne that offer speed dating opportunities for early stage start-ups. A recent investment was in Insignary, a South Korean startup with a binary level open source software (OSS) security and compliance scanning solution, BearingPoint’s first investment in an Asian start-up. BearingPoint is leveraging Insignary’s Clarity solution to offer a managed SAST (static apps security testing) binary scanning service in Europe.

The expansion of IP-based services is a key element of BearingPoint’s Digital & Strategy (D&S) offering, which we note has new leadership.


People

BearingPoint’s new Managing Partner has spoken repeatedly about his desire for the firm to provide a very positive employee experience, an important element in both the recruitment and retention of younger talent. Other priorities he has expressed include increasing the firm’s diversity, of generation as well as of gender (one target is 20% female Partners by 2020), and talent development. We do not know the age or experience profile of BearingPoint personnel, but we do detect a desire to have a workforce that is perhaps more balanced in terms of age and experience, and a slight shift away from a traditional consultancy profile.

We also note an evolution in leadership style, with a stronger emphasis on transparency and communication: several personnel mentioned in conversation that Hamidian encourages colleagues to email him and is responsive when they do.


Culture

As part of its ambition to change the nature of much of its consulting work beyond operating model improvement to projects that have more radical transformation in mind, BearingPoint is looking (like many consulting and IT services firms) to nurture a culture where entrepreneurialism and innovation are encouraged (for example through initiatives such as shark tank events), and overall to become a more agile organization.

Hamidian is also looking to develop partners’ management and team leadership skills through initiatives such as new partner training programs.


In its first decade since the MBO, BearingPoint has succeeded in putting in place the strong foundation of an integrated European consulting firm that can claim, through its strategic partnerships, to have a more global reach. The next five years will be marked not by global expansion, but by an evolution in positioning, with an increasing emphasis on services that leverage its own and partners’ IP to assist clients in their digital transformation, potentially also boosting margins. Expect to see more partnership announcements around IP-based offerings; shortly after the event, for example, BearingPoint announced that its regtech product unit and IBM are partnering to offer a BPO service around regulatory reporting to smaller institutions in the DACH region.

Expect also to see an increase in tuck-in acquisitions of small firms operating in its target geographies (including the U.K.) that bring in industry domain and/or specialist capabilities. Again, shortly after the event, BearingPoint announced its acquisition of Inpuls, which brings in capabilities in data governance and analytics and also doubles its headcount in Belgium.

As a final note, there were several aspects of the analyst day that stood out from other vendor events we have attended recently:

  • The total absence of PowerPoint presentations, with a heavy focus instead on clients telling their stories and describing how BearingPoint has supported them
  • The level of female representation (roughly 50% of the speakers) – an all-too-common experience is that the only female speakers at analyst and advisory events are those from clients. Large organizations in Europe and the U.S. are increasingly demanding a level of female representation from suppliers bidding for work in certain areas of professional services; for a variety of reasons, lack of gender diversity in the talent mix will increasingly be an impediment in IT and consulting services. The level of female representation here was doubtless a deliberate move; gender diversity is clearly a high priority.
<![CDATA[Atos-Syntel: Boosts Atos North America Portfolio; a Transformation Lever for B&PS Business Globally]]>

L to R: NelsonHall's David McIntire, Rachael Stormonth, Andy Efstathiou, and Dave Mayer 

NelsonHall recently attended an Atos North America event in Dallas which focused on the newly formed Atos-Syntel organization in North America. Earlier this year we noted that Atos in the U.S. was still a work in progress (see here). The event was held just days after its acquisition of Syntel had closed and we were keen to learn about the integration plans and the strategy for future growth in North America.

We came away assured that, with a new CEO in place, several problem contracts no longer an issue, and an enlarged set of capabilities, Atos North America is in a very different position from what it was at the beginning of the year. And looking more widely, Atos can now position as offering scale end-to-end services across infrastructure and applications in all its key geographies. We also note that this is an integration that is being done with perhaps less speed than some of its previous large-scale IT services acquisitions.

The significance of the Syntel acquisition

The event was held in Atos' regional U.S. HQ in Irving. Opened last year, the facility is also its first Business & Technology Innovation Center (BTIC) in North America. A clear emphasis throughout was that Atos in North America is now in a stronger position in terms of resourcing for a broad range of application services, including developing cloud-ready applications, as well as being able to support enterprises with reducing their infrastructure spend to invest in digital. It was also apparent that, in the short term at least, the growth opportunities are in mining Syntel’s client base rather than with acquiring new logos.

In July we wrote a short piece on the significance of the Syntel acquisition both to Atos North America and to its Business & Platforms Solutions (B&PS) business globally (see here). As a reminder, among other things, Syntel brings to Atos:

  • Increased presence in North America (adding 4.5k employees and ~$825m in regional revenue, expanding it by around a third), meaning that Atos North America has a broader set of capabilities it can offer to clients in the region
  • A business that will be margin accretive to Atos
  • Three large accounts: AmEx, State Street, and FedEx (together ~45% of Syntel’s total revenues)
  • A boost to its BFS and Insurance sector businesses (approaching $420m and $140m in revenue in 2017 respectively), also a significant U.S. application services practice in the Healthcare/Life Sciences vertical
  • A large Indian delivery capability, augmented by its SyntBots Intelligent Automation platform
  • Capabilities in apps development, testing and application modernization services (‘digital’ areas of application services)
  • Its 'Customer for Life' ethos, which has been a significant factor in client loyalty.

We also noted that, given the level of reverse integration that is happening in B&PS, and the fact that Syntel had a larger presence than Atos in the U.S., the role of Syntel senior management is critical to the success of the integration. The transition so far has been seamless: former Syntel CEO Rakesh Khanna, for example, remains as CEO of Atos-Syntel, which now operates as a unit within the B&PS division, and is on Atos’ Executive Committee. He presented alongside Sean Narayanan, who heads B&PS globally, and Simon Walsh, the new head of Atos North America (an external appointment), about the capabilities of the combined entities.

Portfolio: with applications plus infrastructure services capabilities in North America, Atos can now position in the region around digital transformation

Atos freely acknowledges that until now, the only examples it could provide where its services were evidently supporting clients in their digital transformation were from Europe. It was not by accident that the event opened with Rakesh Khanna providing some case study examples of recent Atos-Syntel projects with clients outside its top three (AmEx, State Street, FedEx) where its services have helped the client catch up with large digital disruptors in their respective industries. Other examples included a blockchain initiative and supporting an online insurer impacted by a high level of technical debt by migrating ~880k lines of code from COBOL to Java.

Three insurance sector clients presented: all are mid-sized organizations and have been clients of Syntel for many years. Common strands were consistency of (quality) delivery and proactivity, e.g. in one case approaching the client with a proposition around the transformation of its underwriting process. One of the three is also a new Atos IT infrastructure services client from earlier this year, having switched from an incumbent provider after 15 years: this client referred to the relative ease and speed of sourcing, appreciating having fresh eyes looking for new opportunities, and an outcome-based pricing model (based on net new premiums) that had been agreed.

Delivery: integration of B&PS into Syntel delivery model already in progress

While little was said about the reverse integration of Atos’ large B&PS accounts into the Syntel delivery model, or of Atos’ India delivery centers into Syntel’s, work on this has already started. The integration includes:

  • Transfer of Atos’ North America and large global India-delivered B&PS contracts to Syntel, representing around $1.25bn, roughly one third of Atos’ overall B&PS business, of which $160m is from legacy Atos
  • Alignment of Atos’ B&PS India-based delivery with Syntel
  • Folding of some Atos delivery operations in Pune, Chennai and Mumbai into the larger Syntel facilities.

Any new B&PS deals incorporating global delivery will be pursued under the Syntel model.

The use of the SyntBots platform is expected to play a significant part in the ongoing delivery transformation in the RISE 2.0 program of the B&PS unit (which in our opinion had been in catch-up mode in the application of automation and AI). Atos is also assessing how and where SyntBots can play a part in its Infrastructure services business, e.g. in applying ML to incident management.

Improving sales execution & delivery performance in I&DM in North America

Three former problem contracts were terminated or expired earlier this year. The remaining few have been or are being addressed; one large problem contract has been reset, and the new North America CEO holds a major incident review call every morning: there is evidently close attention being paid to improving delivery execution, and to staying close to other I&DM clients.

Following a period of disappointing sales performance, Atos is refreshing its I&DM pre-sales and sales personnel and architects in North America. There have been some new wins recently and the net new business is apparently strong.

Syntel clients happy with the larger scale of Atos

In the two weeks following the acquisition, Atos CXOs (Thierry Breton, Sean Narayanan, Eric Grall) managed to visit all Syntel’s key clients, representing ~70% of its total revenues; most were positive in that, as part of Atos, they can potentially look to Atos-Syntel for support in other geographical operations or in other services.

Future growth: farming rather than hunting; mid-market the primary focus

Among the attributes of Syntel emphasized by clients at the event were its effectiveness in forging deep relationships with them over the years and its consistency of delivery. Nearly all of Syntel’s revenue was through its existing client base and it brings to Atos strong account management and significant presales and solution architecture capabilities in North America, albeit for relatively (for Atos) small engagements.

Atos North America intends to leverage Syntel’s model and look primarily for smaller deals to grow wallet share in existing accounts. This is a significant change in emphasis in the GTM: both cross-selling and targeting smaller engagements are new areas of emphasis for Atos. An integrated approach into the Syntel client base has already commenced. Syntel’s 'Customer for Life' ethos brings in a new and improved approach to managing customer relationships; at the event there was a clear emphasis on client-centricity and on selling to specific client needs with a strong awareness that their appetite for the pace of change may differ significantly.

We note that in North America there is little sector overlap between Atos and Syntel: for example, Atos will have few local client references in financial services that it can draw on, though for smaller opportunities this will not be as critical a factor in vendor selection as it is in large deals.

Expect to see more vertical-specific offerings mid-term

Before Syntel, Atos’ portfolio in North America was primarily horizontal IT infrastructure services, though its earlier acquisitions of Anthelio Healthcare and three small healthcare consulting firms (two from Conduent) had indicated an intention to expand its presence in the U.S. healthcare sector. Syntel now brings in some application services business in the payer sector. Developing an integrated end-to-end portfolio for targeted segments of the healthcare sector remains an ambition.

We also expect to see a stronger play in the longer term in specific sectors within FS&I, also in manufacturing & retail.

Outside its top three clients, Syntel’s client base is typically drawn from mid-sized organizations, which is not where Atos has typically played.


The integration of Syntel immediately improves Atos North America’s ability to speedily resource B&PS deals without having to use resources from other regions, something which has at times been a competitive impediment. A large deal team remains in place and the legacy Atos North America focus on larger-sized enterprises for I&DM services remains. The ambition is also to cross-sell legacy Atos services into Syntel clients and to make a broader move overall into the mid-size market, and it is here that Atos is more likely to win broad-scope (infrastructure plus applications services) deals in the short to mid-term.

The increased emphasis on client intimacy in North America is also becoming more evident in the larger I&DM business in the region, where, with a new CEO in place, we also note a stronger focus on improving delivery reliability.

As well as having an immediate impact on Atos North America's offerings portfolio, Syntel is also a powerful boost to the B&PS RISE 2.0 initiative.

<![CDATA[TCS Business 4.0: Emphasizing Location-Independent Agile & Machine-First Delivery Model]]>

A year ago, following a TCS analyst event in Boston, the theme of which was Business 4.0: Intelligent, Agile, Automated, and on the Cloud, we wrote about how TCS’ new service line structure has been designed to support the company’s emphasis and positioning around Business 4.0 (see the blog here).

Underpinning its positioning around Business 4.0, over the past year TCS has been emphasizing two key capabilities:

  • Location-independent agile
  • Machine first delivery model (MFDM).

These two provided the core themes at the company’s recent analyst event in London. In our discussions with TCS execs, we were impressed by the speed and determination with which the company is moving to achieve the bold ambition it shared in 2017 to become 100% “enterprise agile” by 2020, and by its awareness of how the nature of software engineering services will transform in the longer term.

Location-independent agile

With its stated target to be “enterprise agile” by 2020, TCS firmly placed a stake in the ground: to the best of our knowledge, no other IT services vendor has made a similar claim. TCS seems to be well on the way to achieving this objective with an estimated 250k of its 410k+ strong workforce already ‘agile ready’.

So, what is TCS doing to make this happen?

Unsurprisingly, there has been a massive retraining drive, accompanied by various efforts to nurture a culture where employees expect to continually learn (“making learning addictive”), something that becomes increasingly important as more roles become inter-disciplinary. A key asset underpinning its micro-learning platforms is its Karma gamification framework which has analytics and event-driven digital ‘nudging’ capability, used when an employee has been inactive.

Another core belief is that an employee’s contextual knowledge is more important than any specific technology skills – the aim is to train employees to become generalists rather than specialists, particularly those in roles that are next to be automated, by broadening the bar of their T-shaped skills profile into a V-shaped profile, e.g. training database managers on Hadoop, machine learning, and/or cloud systems administration. TCS estimates that its associates have on average four skills.

In line with this, performance management has moved from yearly assessments to micro targets.

Enabling tools that TCS has developed include:

  • Jile, a cloud-based, framework-agnostic agile DevOps product designed to help scale agile at the enterprise level. TCS launched Jile as a commercial product back in January (priced at $9 PU/PM)
  • An Agile maturity model, an ‘agility debt’ framework with 27 characteristics codified from its experience with 300 clients, which it is now using to assess an organization’s agile readiness levels across the dimensions of structure, workforce, technology, and culture.

But enterprise agile also demands a transformation of the workplace: Krishnan Ramanujam told us that TCS is indeed transforming some of its larger delivery centers in India, including consolidating six campuses in Mumbai, removing cubicles in existing centers, and installing large screens for interacting with team members based in other locations. We were assured that there is a “significant” investment in this workspace transformation, but that there is no pressure on margins.

TCS’ emphasis on its capabilities in location independent agile is unsurprising: distributed agile is obviously important for large enterprises, and for many, their agile teams remain pockets of excellence. But getting distributed agile to work effectively is absolutely critical for those application services providers (by far the majority) that have an off/nearshore-centric global delivery model. Offshore delivery is not going away.

Machine first delivery model (MFDM)

As we have noted before in our Quarterly Updates on TCS, MFDM is not (just) using automation and AI for operations optimization. It is about giving technology the “first right of refusal” to sense, understand, decide, and act within a networked environment equipped with analytics and AI. The human interface is used for exception handling, training the machine to reduce exceptions, and for the application of contextual (often industry) knowledge.
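The “first right of refusal” idea can be sketched as a simple dispatch loop (our illustration, not TCS’ ignio implementation): the machine acts on cases it can handle confidently, routes the rest to a human exception queue, and logs those exceptions as training data to reduce them over time. The confidence threshold and function names below are assumptions for illustration only.

```python
# Illustrative "machine-first" dispatch loop (hypothetical sketch, not
# TCS' ignio): the machine gets first right of refusal, humans handle
# exceptions, and resolved exceptions feed back into retraining.

CONFIDENCE_THRESHOLD = 0.8  # assumed cut-off for autonomous action

def machine_first_dispatch(cases, model, human_queue, training_log):
    """Route each case to the machine if it is confident, else to a human."""
    for case in cases:
        decision, confidence = model(case)
        if confidence >= CONFIDENCE_THRESHOLD:
            yield (case, decision, "machine")   # machine acts autonomously
        else:
            human_queue.append(case)            # exception handling by a person
            training_log.append(case)           # later used to train the machine
            yield (case, None, "human")
```

In this pattern, the human interface shrinks over time as the training log is used to raise the machine’s confidence on previously exceptional cases.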

The emphasis in the ‘machine-first’ philosophy is on the interplay between people and technology: how augmenting human capability can help unlock exponential value. The positioning is that MFDM can enable transformation in clients’ businesses (and, thereby, growth), for example through STP, new business models, increased speed to market, transformed CX, etc.

MFDM is thus a key element in TCS’ efforts to gain mindshare with stakeholders outside the CIO. We think there is more to be done in the articulation of the philosophy, as some clients (and, we noted, analysts) are honing in on the automation aspect rather than the more disruptive business transformation play.

Ignio: in or out of TCS?

TCS describes its intelligent automation platform ignio as “the intelligent machine” behind MFDM. As with last year, most activity to date has been around IT operations (we estimate around 75%), though there is beginning to be increasing use of ignio to support applications development activities. The application of ignio in TCS’ Cognitive Business Operations business is in its infancy: obvious use cases include working on increasing the level of STP in activities such as finance and accounting, supply chain, claims, or mortgage processing. So, there are still considerable opportunities to leverage ignio across the portfolio.

At the same time, there is increasing traction for ignio as a commercial product, sold at times by other systems integrators. TCS highlighted that in FY18, its third year of operation as a commercial product, ignio achieved revenues of $31m (substantially more than many other enterprise SaaS products), and commented that it is looking to achieve >$100m in annual revenues within the next two years.

There is some tension between these priorities, and in discussions with execs, we noted some uncertainty as to the optimum model for ignio: as one of several product units within TCS, a separate subsidiary, or a standalone ISV which can sell more easily to other IT services providers.

The service portfolio revamp is helping drive digital; work to be done on full stakeholder play

In the last 12 months, TCS’ revenues from ‘digital’ services and solutions have increased by 49% to nearly $5bn. And the rate of growth has been accelerating: in Q2 FY19, revenue from digital services and solutions was up nearly 60% and accounted for >28% of the quarter’s total revenues (see TCS Quarterly Updates for more information). Moreover, this is organic growth. There is, of course, the caveat that there is neither commonality nor clarity as to how different vendors determine what classifies as digital, and in TCS’ case it has won some very large platform-based outsourcing deals that it would classify as digital. Notwithstanding, we are not aware of any other IT services vendor enjoying this level of organic growth at this scale, in what is primarily a services, rather than solutions, business. All IT services providers are undergoing a process of reinvention. Among the larger players, TCS is unusual in succeeding so far in achieving this through internal transformation.

Among the new standalone practices within the Digital Transformation Services (DTS) group, IoT and the larger Analytics & Insights unit are enjoying very strong growth. Taking longer to take off is Blockchain, with most activity still at PoC stage, a reflection of where the market is currently, but the pipeline is building up.

Among other things, the new service delivery structure has worked in helping advance TCS’ full services play and support its ambitions for its offerings to be more business outcome focused, and “to address issues of board-level significance”. However, this is not quite the same as having direct access to CxOs. While we appreciate that TCS does have direct access in some service areas and in some clients to stakeholders such as the CFO, we think there is more work to be done around the full stakeholder play and in elevating the TCS brand from technology to business partner, both in increasing thought leadership and also in the portfolio.

What next at TCS?

We have been aware that TCS’ messaging around Business 4.0 is resonating well with clients, but, as noted above, feel that the business transformation potential of the MFDM philosophy is less well understood. Expect to see more messaging in 2019 which provides specific client examples demonstrating the benefits realized from MFDM, and how the value proposition coming from the MFDM approach supports the delivery of specific services.

While TCS has focused on organic growth in recent years, we were keen to know whether there might be any tuck-in acquisitions to augment, for example, the design capabilities of TCS Interactive (and increase TCS’ access to clients’ marketing budget stakeholders), or perhaps its managed security services capabilities. In short, we think both are possible: that the recent acquisition of a design studio in the U.K. will be followed by tuck-ins in other geographies, and that a cybersecurity specialist asset, perhaps in the U.S., is attractive.

In terms of target markets, also expect to see an increasing focus on the U.S. public sector, building, for example, on its experience in state Unemployment Insurance platform modernization.


A major focus of the event was to show why and how TCS has been redesigning and adapting organizational structures, facilities, processes and policies, and also its workforce culture to align with location-independent agile and with the MFDM. In our discussions with execs, we also picked up that the thinking is looking further ahead, to a time when there is virtually no coding, computer science skills become less relevant, and data science is the primary skill.

One presentation referred to the great Wayne Gretzky's (dad Walter’s) advice to “skate to where the puck is going” (rather than where it is). This may have become an over-used aphorism in the corporate world, but, like all good aphorisms, is effective in neatly capturing a concept or principle. In our discussions with execs, we were convinced that TCS has a clear vision of the future of information technology, and it is investing to make sure that it will remain relevant even when the nature of IT services has changed dramatically from what it is today.

<![CDATA[Infosys Testing: ML Initiatives & Autonomous Testing On the Horizon]]>


Software testing services continue to show vitality in their adoption of tools to increase automation. One of the areas in which vendors are investing heavily is AI, not just leveraging AI to increase automation through an analytics approach, but also in testing of AI systems. Back in November 2017, we reported on how Infosys was expanding its testing service portfolio in the areas of AI, chatbots, and blockchain. Infosys had created several use cases for ML, focusing on test case optimization and defect prediction. It was also exploring how to test chatbots and validate the responses of a chatbot.

We recently talked to Infosys about the progress it has been making in this area over the last year.

Adding more use cases for ML-based automation

Infosys continues to invest in developing additional AI use cases, and has expanded from use cases around test suite optimization, defect prediction, and sentiment analysis into three new areas:

  • Test scenario mining, identifying areas of applications that are most used to help prioritize testing activities
  • Defect analysis, to understand how a defect impacts functionality and identify areas of an application/code most affected by defects; also, to assess how defects may impact several applications. Infosys is also looking to identify defects that are similar and understand at what release the defects have occurred
  • Traceability, linking testing requirements with their use cases to estimate test coverage.
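The traceability use case can be illustrated with a minimal sketch (our example, not Infosys’ IP): map each test case to the requirements it exercises, then estimate coverage as the share of requirements linked to at least one test case. The identifiers below are hypothetical.

```python
# Illustrative requirement-to-test-case traceability (hypothetical example,
# not Infosys' actual model): coverage is the fraction of requirements
# linked to at least one test case.

def estimate_coverage(requirements, links):
    """links maps a test case id to the set of requirement ids it exercises."""
    covered = set()
    for req_ids in links.values():
        covered.update(req_ids)
    covered &= set(requirements)      # ignore links to unknown requirements
    return len(covered) / len(requirements) if requirements else 0.0
```

An ML-based version of this would infer the links themselves, e.g. by matching requirement text to test case descriptions, rather than relying on manually maintained mappings.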

Infosys is now systematically introducing these ML use cases with clients, and is currently at the PoC and implementation stage with 25 clients.

Infosys continues to work on additional AI use cases such as:

  • How a new release impacts test cases
  • Optimizing coverage
  • Finding performance anomalies in response time of APIs, script execution, or websites.
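In its simplest form, the response-time use case could flag anomalies statistically, e.g. points more than three standard deviations from the mean. The sketch below is our illustration of that baseline idea; Infosys has not published the method it uses.

```python
# Simple statistical anomaly detection on API response times (illustrative
# baseline only; not Infosys' disclosed approach).
from statistics import mean, stdev

def find_anomalies(response_times_ms, z_threshold=3.0):
    """Return (index, value) pairs more than z_threshold std devs from the mean."""
    if len(response_times_ms) < 2:
        return []
    mu, sigma = mean(response_times_ms), stdev(response_times_ms)
    if sigma == 0:
        return []   # all identical: nothing can be anomalous
    return [(i, t) for i, t in enumerate(response_times_ms)
            if abs(t - mu) / sigma > z_threshold]
```

In practice, an ML-based system would also need to handle seasonality and load-dependent baselines, which a plain z-score does not.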

Testing ML systems

Testing ML systems is also a priority. Infosys is initially focusing on several use cases across deterministic ML systems (i.e. those that always produce the same output for a given input), such as bots, and on non-deterministic systems (e.g. defect prediction).

For bot testing, Infosys has been working on making sure a chatbot can provide the same response to the many different ways a question can be asked. For one client, it has created an algorithm for generating text alternatives around a question. It then validates that the chatbot’s response is consistent for all question alternatives, using Selenium.
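The idea can be sketched as a simple harness (our illustration): generate paraphrases of a question, ask the bot each variant, and assert that every answer is identical. The naive variant generator and `ask_bot` callable below are hypothetical stand-ins for the client-specific algorithm and the Selenium-driven chatbot session.

```python
# Illustrative chatbot consistency check: generate paraphrases of a question
# and verify the bot answers them all identically. generate_variants and
# ask_bot are hypothetical stand-ins; in practice the answers would be
# fetched through a Selenium-driven UI session.

def generate_variants(question):
    """Naive paraphrase generator (placeholder for an NLP-based one)."""
    q = question.rstrip("?")
    return [f"{q}?",
            f"Could you tell me {q[0].lower() + q[1:]}?",
            f"I'd like to know: {q.lower()}?"]

def check_consistency(question, ask_bot):
    """Return True if the bot's answer is identical for every variant."""
    answers = {ask_bot(variant) for variant in generate_variants(question)}
    return len(answers) == 1
```

A real harness would likely compare answers semantically rather than byte-for-byte, since minor wording differences in a correct response should not fail the test.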

In addition, Infosys is working on voice bots, and image and video recognition. For image recognition, it creates alternatives to an image and validates that the ML system recognizes items on the image.

Testing MLs is only just beginning; vendors such as Infosys are working on the challenge, and in use case after use case, are creating comprehensive methodologies for ML testing.

Some autonomous testing is on the horizon

Infosys is working on several pilots around autonomous testing. This is a bold ambition. Infosys has based its first autonomous testing approach on a web crawler. The web crawler has several goals. It scans each page of a website to pick up defects and failures such as 404 errors, broken links, and HTML-related errors. More importantly, the web crawler will create paths/transactions across one or several screens/webpages, and then create Selenium-based test scripts for these paths/transactions. This is the beginning, of course, and the first test use cases are simple transactions such as user login or order-to-pay in an online store.
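A single crawl step of this kind can be sketched with the standard-library HTML parser (our illustration, not Infosys’ tool): extract the links on a page, flag those pointing at unknown pages (a stand-in for 404 detection), and record navigation paths that a generator could later turn into Selenium scripts.

```python
# Illustrative crawler step (hypothetical sketch, not Infosys' tool):
# extract links from a page, flag links to unknown pages (stand-in for
# 404 detection), and record navigation paths for later script generation.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href")

def crawl_page(url, html, known_pages):
    """Return (broken_links, paths) discovered from one page's HTML."""
    parser = LinkExtractor()
    parser.feed(html)
    broken = [link for link in parser.links if link not in known_pages]
    paths = [(url, link) for link in parser.links if link in known_pages]
    return broken, paths
```

Each recorded `(from_page, to_page)` path is the raw material for a generated test script; a real crawler would, of course, fetch pages over HTTP and follow multi-step transactions rather than single hops.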

NelsonHall will be monitoring the development of autonomous testing with interest.

<![CDATA[Sogeti’s Strategy for AI & RPA in Testing]]>


We recently caught up with Sogeti, a subsidiary of Capgemini, to discuss the use of AI and RPA in software testing.

In testing, AI use cases are focusing on making sense of the data generated by testing and from ITSM and production tools. For RPA, adoption of RPA workflows and chatbots in automating testing services has to date been minimal.

Continued investment in Cognitive QA through AI use cases and UX

Earlier this year, we commented on the 2017 launch by Sogeti of its Cognitive QA IP. Sogeti had developed several AI use cases in areas including test case optimization and prioritization, and defect prediction. Sogeti continues to invest in Cognitive QA to gain further visibility around test execution. Recent use cases include:

  • Improving test coverage through mapping test cases with test requirements, and defects with test cases
  • Predicting performance defects based on a release analysis, and identifying the code that is impacting performance
  • Conducting what-if analyses to assess the impact of a risk-based approach to defect levels and test coverage
  • Managing test projects by gaining productivity data, e.g. how long it takes to identify a defect, to share it with the development team, and to fix it.

Sogeti, like most of its peers, also continues to invest in sentiment analysis capabilities. The principle of sentiment analysis tools in the context of testing is to cluster data across several keywords, e.g. functional defect, UX defect. Sogeti is working on translating its sentiment analysis into actionable feedback to developers.

The company is finding that these AI use cases are a good door-opener with clients and open new broader discussions on test & development and data quality: with the increased adoption of agile methodologies and DevOps open source tools, the fragmentation of tools used in the SDLC is impacting the quality and comprehensiveness of data.

Bringing UX to AI use cases

While we were expecting Sogeti to maintain its focus on AI use cases, we had not expected its additional focus on UX. The first step in this journey was straightforward: making Cognitive QA accessible on mobile devices and going beyond a responsive website approach, e.g. creating automated alerts when SLAs are not met, and automatically setting up emergency meetings.

Sogeti is also bringing virtual agents into Cognitive QA. It offers access to the IP through both voice and chatbot interfaces. With this feature, test managers can access information, e.g. the number of cases to be tested for the next release of an application, which ones they are, and their level of prioritization. The solution handles interaction through the virtual agents of Microsoft (Cortana, Skype), IBM (Watson Virtual Agent), AWS (Alexa), and Google Home. Sogeti has deployed this virtual agent approach with two clients, with implementation taking between two and three months.

Another aspect of Sogeti’s investment outside of a pure AI use case approach is its project health approach. Cognitive QA integrates with financial applications/ERPs. The intention is to provide a view on the financial performance of a given testing project and integrate with HR systems to help source testing personnel across units.

Deploying RPA workflows to automate testing services

The other side of automation is RPA. We have mentioned several times the similarities between RPA tools and test execution tools, and the fact that they share the same UI approach (see the blog RPA Can Benefit from Testing Services’ Best Practices & Experience for more information). The world of testing execution software and RPA workflow tools is converging with several testing ISVs now launching their RPA software products. Several of Sogeti’s clients are using their RPA licenses to automate testing. The frontier between testing execution and RPA is about to become porous.

We have not historically seen extensive use of RPA tools to automate manual testing activities. To a large extent, the wealth of testing software tools has been comprehensive enough to avoid further investment in software product licenses. Sogeti is indicating that this is now changing: with several clients it is using RPA to automate activities related to test data management, test environment provisioning, or real-time reporting. This is all about a business case: these clients are making use of their existing RPA software licenses rather than buying additional specialized software from the likes of CA. To that end, Sogeti has been building its RPA testing-focused capabilities internally, and has ~100 test automation engineers now certified on UiPath and Blue Prism.

AI and RPA are only one of the current priorities for testing services

The future of testing services goes beyond the use of AI and RPA: there is much more. One major push currently is adopting agile methodologies and DevOps tools, and re-purposing TCoEs to become more automated and more technical. And there is also UX testing, which is itself a massive topic and requires investment in automation. The reinvention of testing services continues.

<![CDATA[Quality Engineering Becoming A Reality: TCS Launches Smart QE Platform]]>


In our last testing blog on TCS in July 2018, we discussed the work TCS has conducted around UX testing and the introduction of its CX Assurance Platform (CXAP). In this blog, I look at the IP that TCS launched in mid-2018 addressing another feature of the digital world: DevOps and AI.

TCS’ Smart QE Platform is built on existing TCS IP, including:

  • NETRA – automates the Dev to Ops process and includes test support services (e.g. environment management, service virtualization, and test data management)
  • 360 Degree Assurance – analyzes data gathered from a range of sources (e.g. ITSM tools, configuration management tools, and database and application server logs, and defect management tools).

The Smart QE Platform integrates NETRA and 360 Degree Assurance, and this integration is an acknowledgement of the fragmentation of software tools in testing. The goal of the Smart QE Platform is to drive automation and bring AI capabilities to continuous testing.

New AI use cases for automating testing

AI and analytics are a priority for TCS; it has complemented its ML use cases for test suite optimization and defect prediction, and added:

  • Dynamic root cause analysis of defects, incidents and log correlation
  • Code path analyzer, for determining the impact of code changes on test cases. The tool initially generates a knowledge base across code components at the server (e.g. classes, methods, and LoCs) and UI (e.g. HTML) levels, and links these components with test cases. For new releases, it assesses the impact of new code on its repository and, as a result, on the corresponding test cases
  • Smart QE maps, which assess the maturity of an application and map the elements for providing analytics
  • QE-bots, with the intent of automating certain tasks. Initially, TCS has automated the provisioning of test environments through a chatbot.
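The code path analyzer concept can be illustrated with a toy mapping (our sketch, not TCS’ knowledge base): given links between code components and test cases, a release’s set of changed components selects the test cases to re-run.

```python
# Toy impact analysis in the spirit of a code path analyzer (illustrative
# only, not TCS' IP): a knowledge base links code components to test
# cases, and a change set selects the test cases affected by a release.

def impacted_tests(knowledge_base, changed_components):
    """knowledge_base maps a component name to the test cases covering it."""
    tests = set()
    for component in changed_components:
        tests.update(knowledge_base.get(component, []))
    return sorted(tests)
```

The hard part in practice is building and maintaining the knowledge base itself, i.e. the component-to-test-case links across server and UI levels; the selection step above is comparatively trivial.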

Enhancements to existing functionality

Along with its investment in AI use cases, TCS continues to enhance the functionality of Smart QE Platform; e.g. static code analysis, continuous deployment, dashboards, and a self-service portal. These enhancements and new functionality are incremental. Examples include the impact of code change in testing data format and requirements, and monitoring the availability of test environments.

To a large extent, TCS is driving its testing IP towards more specialization and it aims to automate an ecosystem of services that fell outside of testing activities in the past. The good news is that by increasing the availability of test environments, for instance, TCS is removing several bottlenecks that were impacting testing activities.

Incremental features in future

TCS will maintain its investment in the Smart QE Platform:

  • It will continue to introduce incremental changes, e.g. bringing new features for provisioning of environments. TCS is expanding the number of hosting options, e.g. on AWS and Microsoft Azure, and on private clouds based on VMware technology
  • TCS is also adding features such as test environment reservation, and continues to invest in test environments through enhanced monitoring or automated health checks. Self-healing is becoming a reality, starting with specific use cases, initially in a limited manner
  • In the upcoming release, TCS wants to integrate its functional testing IP (across UI, but also for API testing) into Smart QE Platform. Once this integration is done, TCS believes that it will have a very comprehensive IP.

There has been a significant change in testing IP over the last few years, with vendors aggregating IP and accelerators into platforms around a theme: DevOps or digital testing.

What is new is that, with leading providers such as TCS, AI is now becoming a reality in the automation of test services across the testing lifecycle. With developments such as these by TCS, testing is moving closer to genuinely reflecting its new name of Quality Engineering.

<![CDATA[TestingXperts: Focusing on Next-Gen & Technical Testing Services]]>


We recently caught up with TestingXperts (Tx), a software testing/QA specialist. Tx was set up in 2013 and has a presence in Harrisburg, PA; London; and Chandigarh and Hyderabad in India. Tx's revenues in 2017 were $15m, and its current headcount is 500.

The model of the company is based on Indian delivery: currently, around 80% of its personnel are located in India, primarily in Hyderabad and Chandigarh, with the remaining staff mostly located in:

  • The U.S., where it has a sales office and onshore delivery center in Harrisburg, PA, with 25 personnel, and a sales office in NYC, with plans to open sales offices also in Texas and Toronto
  • London, U.K.
  • Melbourne
  • Amsterdam.

Portfolio specialization around IP-based offerings is TestingXperts’ priority

To drive its differentiation, Tx continues to expand its service portfolio to specialized and next-gen services such as mobile testing, UX testing, DevOps/continuous testing, and data migration testing. Newer offerings include testing of AI, blockchain, chatbots, and infrastructure-as-Code (IaC). Tx dedicates a high percentage of its revenues, ~5%, to internal R&D, and has developed eight IPs in support of these offerings.

IaC testing is unique to Tx; it tests the scripts that define servers, set up environments, balance loads, and open ports, based on tools from ISVs such as HashiCorp (Terraform), Ansible, and Puppet. Tx has created a testing framework, Tx-IaCT, for writing the Python scripts used to validate that the right cloud infrastructure has been provisioned and conforms to specific benchmarks, such as the CIS standards for AWS (around server configuration, for security purposes) and internal corporate rules. Tx-IaCT is a differentiator for Tx. It continues to invest in it, expanding from AWS and Azure to Google Cloud. Tx is also expanding its test suite to industry-specific standards such as banking’s PCI and U.S. healthcare’s HIPAA.
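As an illustration of the principle (not Tx-IaCT itself), an infrastructure validation check might assert CIS-style rules against a description of the provisioned resources, e.g. that no security group exposes SSH to the whole internet. The rule and data shapes below are our assumptions for illustration.

```python
# Illustrative IaC validation check in the spirit of CIS benchmarks
# (hypothetical rules; not part of Tx-IaCT). The security-group dicts
# mimic what a cloud API or a Terraform state file would describe.

def violates_open_ssh(security_group):
    """CIS-style rule: port 22 must not be open to 0.0.0.0/0."""
    for rule in security_group.get("ingress", []):
        if rule.get("port") == 22 and "0.0.0.0/0" in rule.get("cidrs", []):
            return True
    return False

def validate_groups(groups):
    """Return the names of groups breaking the open-SSH rule."""
    return [g["name"] for g in groups if violates_open_ssh(g)]
```

A framework like this would bundle many such rules per benchmark, run them against the live cloud account after provisioning, and fail the pipeline when any rule is violated.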

The IP that Tx currently uses the most is Tx-Automate. It is a continuous testing platform that pre-integrates DevOps software tools such as CI, test management, defect management, and static analysis tools and test support activities, such as test data management, and web services/API testing. Tx-Automate integrates with Selenium for web-based applications and Appium for mobile applications, as well as with more traditional test execution tools such as Micro Focus UFT and Microsoft’s Visual Studio.

Along with Tx-Automate, TestingXperts has created mobile device labs in Chandigarh and in Hyderabad, with a smaller one in its Harrisburg facility. TestingXperts maintains its own lab, despite the abundance of cloud-based mobile labs, for several reasons. It provides access to real devices, rather than a mix of devices and emulations. The company also highlights that owning its own labs with 300 devices allows it to offer more competitive services to its clients and brings it flexibility. Along with the test labs, Tx has developed a set of core scripts based on Appium and Selendroid.


To some extent, the arrival of digital testing and other next-gen testing offerings (such as UX testing, crowdtesting, AI automation and testing, and RPA/chat bots) is redefining what ‘state of the art’ means for software testing services. With testing becoming much more technical, NelsonHall is finding that expertise-based offerings are no longer sufficient, and more comprehensive IP-based offerings are becoming the new norm.

With this in mind, it is refreshing to see that a small testing firm can bring specialized offerings (such as IaC testing) to market that few other vendors have.

<![CDATA[Highlights from IBM & The Weather Company at Red Bull Racing (vlog)]]>


Mike Smart presents summary highlights from IBM and The Weather Company at Red Bull Racing.