NelsonHall: Cyber Resiliency Services blog feed https://research.nelson-hall.com//sourcing-expertise/it-services/cyber-resiliency-services/?avpage-views=blog Insightful Analysis to Drive Your Cyber Strategy. NelsonHall's Cyber Resiliency Program is a dedicated service for organizations evaluating, or actively engaged in, the outsourcing of all or part of their IT security activities. <![CDATA[Accenture’s Zoran Tackles Digital Identity Failings]]>


NelsonHall recently visited Accenture at its Cyber Fusion Center in Washington D.C. to discuss innovations in its cyber resiliency offerings and the recent launch of its new digital identity tool, Zoran.

Failings of existing role-based access (RBA)

Typical identity and access management (IAM) systems control users’ access to data based on their role, i.e. position, competency, authority and responsibility within the enterprise. It’s a standard best practice to keep access to systems/information at a minimum, segmenting access to prevent one user, even a C-level user, from having carte blanche to traverse the organization's operations. Not only does this reduce the risk from a single user being compromised, it also reduces the potential insider threat posed by that user.
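
To make the model concrete, here is a minimal sketch of role-based access, with all role and permission names hypothetical: roles map to fixed permission sets, and a user's access is the union of their assigned roles.

    # Minimal sketch of role-based access (hypothetical names): each role
    # carries a fixed permission set, and a user's access is the union of
    # the sets for their assigned roles.
    ROLE_PERMISSIONS = {
        "rd_engineer_widget_a": {"read:widget_a_specs", "write:widget_a_code"},
        "rd_engineer_widget_b": {"read:widget_b_specs", "write:widget_b_code"},
        "finance_analyst": {"read:invoices", "read:payroll_summary"},
    }

    def permissions_for(roles):
        """Union of the permission sets for a user's assigned roles."""
        perms = set()
        for role in roles:
            perms |= ROLE_PERMISSIONS.get(role, set())
        return perms

    def can_access(roles, permission):
        return permission in permissions_for(roles)

    # A widget A engineer should not see widget B data:
    assert not can_access(["rd_engineer_widget_a"], "read:widget_b_specs")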

While these IAM solutions can match user provisioning requests against a directory of employee job titles to automate much of this process, the setup of RBA IAM tools can break down, with roles defined too broadly as a catch-all, which in turn weakens access segmentation. For example, if a member of your team works in the R&D department developing widget A, should they receive access to data related to widget B?

Another issue with these solutions is privilege creep, where an employee who has moved between several roles or responsibilities retains the permission sets of previous positions. These and other issues render RBA systems ineffective, because they are implemented as a static picture of the organization’s employees at a single point in time. In addition, recertification is a time-consuming and wasteful exercise.

Enter Zoran

Accenture developed Zoran at The Dock, its multidisciplinary research and incubation hub in Dublin. It brought in five companies to discuss the problem of identity management, two of which stayed on for the full development cycle, handing over data to Accenture for use in building Zoran.

Zoran analyzes user access and privileges across the organization, applying data analytics to look for patterns in access, entitlements, and assignments. The patterns found by this patent-pending analytics algorithm are used to generate confidence scores indicating whether users should hold those privileges. These confidence scores can then drive automatic operations such as recertification, for example if a user’s details change after a specified period of time.
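
Accenture has not published the details of its algorithm, but as a rough illustration of the general idea, an entitlement’s confidence score could be derived from how prevalent that entitlement is among the user’s peer group. A minimal sketch, with hypothetical data:

    # Hypothetical peer-group confidence scoring (not Accenture's actual,
    # patent-pending algorithm): score each (user, entitlement) pair by how
    # prevalent the entitlement is among users in the same peer group.
    from collections import defaultdict

    def confidence_scores(users):
        """users: list of dicts with 'id', 'peer_group', 'entitlements'."""
        group_size = defaultdict(int)
        holders = defaultdict(int)  # (peer_group, entitlement) -> count
        for u in users:
            group_size[u["peer_group"]] += 1
            for e in u["entitlements"]:
                holders[(u["peer_group"], e)] += 1
        return {
            (u["id"], e): holders[(u["peer_group"], e)] / group_size[u["peer_group"]]
            for u in users
            for e in u["entitlements"]
        }

    users = [
        {"id": "alice", "peer_group": "rd_widget_a", "entitlements": {"vcs", "lab_db"}},
        {"id": "bob", "peer_group": "rd_widget_a", "entitlements": {"vcs", "payroll"}},
    ]
    # 'payroll' is held by 1 of 2 peers -> 0.5; a low score flags the
    # entitlement for review rather than automatic recertification.
    print(confidence_scores(users)[("bob", "payroll")])  # 0.5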

Zoran is not using machine learning to continuously improve confidence scores – i.e. if, for a group of users, an entitlement is always recertified, the confidence scoring algorithm is not updated to increase the confidence score. Accenture’s reason for this is that it runs the risk of being self-perpetuating, with digital identity analysts being more likely to recertify users because the confidence score has risen.

Currently, Zoran does not store which security analyst approved which certification for which user, although Accenture is in the process of adding this feature.

Will Zoran be the silver bullet for IAM?

IAM tools have been relatively slow to develop from simple automation to an ML/AI state, and this is certainly a step in the right direction. However, there will have to be some reskilling and change management around the recertification process.

While Zoran aims to reduce the uncertainty in recertifying a user’s permissions, there remains a small risk of ‘false positive’ confidence scores that could automatically recertify a user, or of a security analyst certifying a user in something akin to a box-ticking exercise out of trust in the confidence score provided.

Accenture also needs to do more to integrate Zoran with its other technologies; for example, its work with Ripjar’s Labyrinth security data analytics platform could yield some interesting results.

NelsonHall believes tools such as Zoran, combined with more traditional IAM solutions, are likely to be the current trajectory of the IAM market, with ML further segmenting groups/roles and providing increased trust in recertification processes.

]]>
<![CDATA[Quick Takeaways from IBM European Analyst & Advisory Exchange (vlog)]]>


Mike Smart reports directly from IBM’s European Analyst & Advisory Exchange 2017 with some quick takeaways regarding IBM’s transition from systems integrator to services integrator, and its business resiliency services.

]]>
<![CDATA[How IT Services Vendors Can Help Organizations Meet GDPR]]>


In this, the second of two articles on GDPR, I look at how IT services vendors can help companies meet GDPR compliance in several areas. You can read the first article, ‘The Impact & Benefits of GDPR for Organizations’, here.

Application services

Application services can help organizations ensure that new and legacy applications meet the GDPR articles pertaining to applications: notably Article 25, which requires that applications provide ‘data protection by design and by default’.

In short, application providers should be:

  • Building security by design into the early stages of the SDLC
  • Performing gap analyses on what personal data is required, and how it is collected, processed, and handled
  • Ensuring a level of security appropriate to the risk, with:
    • Encryption and/or pseudonymisation of data (illustrated in the sketch after this list)
    • The ability to restore personal data in case of a breach or technical issue
    • Regular security testing of practices and solutions to ensure confidentiality, integrity, availability, and resilience
    • Data minimisation efforts, using the prior gap analysis so that only the required data is collected, stored, and accessed (for example, does the organization really need to know users’ age to provide a non-age-restricted service?)
  • Ensuring that the principle of least privilege is applied to internal users so that they may only access required data (for example, in a telecoms provider, a customer service agent providing technical assistance need not know clients’ full payment details and history).
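
To illustrate the pseudonymisation item above: one common approach is to replace a direct identifier with a keyed hash, holding the key in a separate store so that records stay linkable for processing but cannot be re-identified without the key. A minimal sketch, not a production design:

    # Sketch of pseudonymisation: replace a direct identifier with a keyed
    # hash so records remain linkable for analytics and joins, while
    # re-identification requires the separately held key.
    import hmac, hashlib

    SECRET_KEY = b"keep-this-in-a-separate-key-store"  # illustrative only

    def pseudonymise(identifier: str) -> str:
        return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

    record = {"email": "jane@example.com", "purchase": "widget A"}
    record["email"] = pseudonymise(record["email"])
    # The same input always maps to the same token, so processing still
    # works, but the stored record no longer exposes the raw email.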

The difficulty arises with articles of the GDPR that require organizations to be able to provide data portability and the right to be forgotten. For data portability (i.e. the right of the user to take their data from one vendor to another), the regulation encourages data controllers to develop formats for the data to enable portability. However, in legacy systems, this data may be structured in a way that makes portability difficult.

Also, GDPR’s ‘right to be forgotten’ allows users to have their data deleted without a trace, but this has the potential to disrupt how organizations back up data, due to technological limitations and existing regulations. There are concerns that the right to be forgotten is not achievable while meeting existing regulations that require organizations to hold data for an extended period: under MiFID II, for example, financial institutions must record all conversations related to a financial deal for five years. GDPR’s right to be forgotten does not apply when another legal justification is in place; in such cases the other legal requirement takes precedence. Organizations in this position will need to consider carefully which data is required and which data can safely be erased.

Organizations that use data backup services must also ensure that their backups meet GDPR requirements: data restored from backups must be free of any data that the user has requested be erased. However, in some implementations it is technically impossible to delete individual pieces of data from a backup without destroying the backup entirely.

Cybersecurity

Cybersecurity vendors can help organizations meet GDPR articles that impose more stringent data security. Most cybersecurity service providers’ frameworks divide the act of becoming compliant into five standard operations:

  • Assessment – the vendor conducts privacy, governance, process, and data security assessments and gap analyses to identify personal data and how it is processed within the organization, and constructs roadmaps to make the organization GDPR compliant
  • Design – the vendor designs an implementation plan of the standards, controls, policies, and architecture to support the roadmap
  • Transformation – the embedding of tools, technologies, and processes
  • Operation – the execution of business processes and the management of data subject rights
  • Conform – monitoring and auditing the organization's compliance with GDPR.

Cybersecurity vendors’ incident response (IR) services will be well placed to handle breaches that require notification to the in-country supervisory authority; the key change to incident response protocols once GDPR is enforced is the requirement to notify the authority within 72 hours. Typical IR SLAs can currently provide off-site services within one hour, and onsite support within ~24 hours. Where no existing agreement is in place, remediation vendors are less able to commit to the 72-hour deadline and less able to guide their clients in contacting the authorities. As GDPR comes into force, we can therefore expect the number of organizations choosing IR services retainers to grow.

Other vendor initiatives

An organization need not choose a single vendor to complete all these operations. Indeed, in a number of cases vendors are being approached after the organization has already assessed its current level of compliance, either independently or with the help of another vendor; and the management of GDPR tools and auditing of compliance is expected to be rolled into existing managed security services GRC operations.

Other service providers are working to ensure that their services are GDPR compliant. Initiatives to become compliant include:

  • Cloud services providers that were previously exempt from the 1995 directive are now regulated and have been working to meet the May 2018 GDPR deadline. As most of the GDPR requirements on cloud providers are covered by ISO 27001, meeting 27001 standards will certainly help the provider demonstrate that it is working towards ‘appropriate technical and organizational measures’, as specified by GDPR
  • SaaS vendors have been mapping incoming and outgoing data flows, and how data is processed and stored, and demonstrating that they can meet users’ requirements for the right to erasure, data portability, etc.
  • ADM vendors have been performing application design services as part of an SDLC as a matter of principle for years, and will not require drastic changes beyond possibly expanding the use of pseudonymization
  • Application security vendors have been performing vulnerability and compliance testing as a core service, and have added provisions to perform GDPR gap analysis.

DPO services

A service that NelsonHall expects to grow fast is Data Protection Officer (DPO) outsourcing. The DPO role (required for data controllers and processors alike) can either be internal or outsourced (provided that the DPO can perform their duties in an independent manner and not cause a conflict of interest).

Of the vendors we have spoken to about GDPR services over the past year, none had a defined DPO outsourcing service in place, and only one (LTI) has been working towards one. LTI is currently training DPOs and investigating exactly how the service should be offered. NelsonHall expects a number of distinct DPO offerings to emerge from IT services firms and law firms very soon.

Not long now…

With the enforcement of GDPR less than 200 days away, and vendor offerings solidifying, organizations would do well to start evaluating the emerging services that can help them work towards compliance.

]]>
<![CDATA[The Impact & Benefits of GDPR for Organizations]]> In this, the first of two articles on GDPR, I look at how the regulation is set to impact companies, and at the benefits of compliance beyond simply avoiding penalties.


The EU's General Data Protection Regulation (GDPR) was adopted in April 2016 and comes into force on 25 May 2018. The unified and enforceable rules contained in the regulation replace the outdated rules (which could be interpreted differently by each member state) of the 1995 EU Data Protection Directive.

The regulation is of critical importance to organizations because of the steep fines that can be levied for failing to meet the requirements – up to €20m or 4% of global annual turnover for the preceding financial year (whichever is greater) for serious breaches, and €10m or 2% of turnover in less serious cases such as procedural failures.

It is worth noting, however, that these are the maximum levels that can be imposed by supervisory bodies within countries, and in reality fines may be much lower. The U.K. Information Commissioner, Elizabeth Denham, who will lead the enforcement of GDPR in the U.K., has stated that early talk of fines at such high levels amounts to scaremongering, and that ‘issuing fines has always been, and will continue to be, a last resort’. As a proof point, in the last financial year the U.K. ICO conducted 17k investigations, of which just 16 resulted in fines.

Additionally, staffing levels mean that authorities may struggle to handle the volume of GDPR-related cases once enforcement begins in May 2018. The U.K. ICO is comparatively well staffed, with 500 personnel and plans to add 200 new positions over the next two years to help cope with the increasing caseload; other member states have lower headcounts.

Hence, the indications are that strict enforcement may not happen from the outset when the regulation comes into force, and that organizations shown to be working towards meeting the regulation may be given some leeway. Nevertheless, organizations should be looking to start on the road to compliance as soon as possible.

The GDPR exercise should not be seen as one of solely checking boxes to avoid being fined, as there are a number of benefits to organizations in being compliant:

  • GDPR can be seen as a chance to review the company’s data handling processes, restructuring them not only to meet compliance, but also to identify potential efficiency gains or new business opportunities/revenue streams
  • Increasing the level of security of user data through encryption or pseudonymization will build trust with users, as breaches in the organization's cybersecurity are less likely to impact them
  • By performing a review of IT processes, organizations will be able to identify and eliminate ‘shadow IT’ and build proper processes that are known to the organization
  • It is a chance to improve IT systems and processes behind the scenes, e.g. through the implementation of customer identity and access management (CIAM) and backup systems.


In the second blog on GDPR, I will look at how IT services vendors can help companies meet GDPR compliance.

]]>
<![CDATA[Atos’ Use of Machine Learning for the Prescriptive SOC]]>


When NelsonHall spoke to Atos earlier in the year about its managed security services, there was a clear push to move clients away from reactive security to a predictive and prescriptive security environment, so not only monitoring the end-to-end security of a client but also performing analytics on how the business and its customers would be affected by threats. Atos’ “Security at Heart” event two weeks ago provided more information on this.

I recently blogged about IBM’s progress in applying Watson to cybersecurity; Watson ingests additional sources, such as security blogs, into its data lake and applies machine learning to speed up threat detection and investigation. At face value, the prescriptive SOC offering from Atos isn’t very different, in that it starts with a similar goal: use a wider set of security data sources and apply machine learning to better support clients.

With its prescriptive security approach, Atos has increased the amount of security data it analyzes in its data lake. This information can come from threat intelligence feeds, contextual identity information, audit trails, full packet and DNS capture, social media, and the deep and dark web.

Atos highlights its ability to leverage the analytics and big data capabilities of its bullion high-end x86 servers to apply prescriptive analytics to the data in its data lake, then use the results, through McAfee’s DXL (Data Exchange Layer) and threat defense lifecycle, to automate responses to security events.
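
Neither Atos’ nor McAfee’s actual DXL interfaces are reproduced here, but conceptually this style of automation resembles a publish/subscribe loop in which a detection event triggers a predefined containment action. A minimal sketch, with hypothetical topic and function names:

    # Conceptual publish/subscribe sketch (hypothetical names, not the
    # real DXL API): a detection event automatically triggers a
    # containment action, removing manual analyst steps.
    import queue

    events = queue.Queue()

    def publish(topic, payload):
        events.put((topic, payload))

    def quarantine(host):
        print(f"quarantining {host}")  # stand-in for an EDR/firewall call

    PLAYBOOK = {"ioc/malware-hash-match": quarantine}

    def handle_next_event():
        topic, payload = events.get()
        action = PLAYBOOK.get(topic)
        if action:
            action(payload["host"])

    publish("ioc/malware-hash-match", {"host": "10.0.0.42"})
    handle_next_event()  # -> quarantining 10.0.0.42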

Using this capability, Atos can reduce the number of manual actions analysts are required to perform from 19 to 3. The benefits are clear: cyber analysts have more time to focus on applying their knowledge to secure the client, and the speed and completeness of the service increase. Atos claims its prescriptive SOC analyzes 210 indicators of compromise, compared to 6 in the previous service, reducing the time to respond to a threat from 24 hours to under seven minutes, and the time to protect against a threat from 4.2 hours to around one minute.

Atos has been beta testing its prescriptive managed security offering with several clients, mainly in the financial services sector.

Another highlight of the event was Atos’ quantum computing capabilities, with the release of its Quantum Learning Machine (QLM) quantum computing emulator. These investments in effect future-proof some of its cybersecurity capabilities.

The general consensus is that at-scale use of quantum computing by enterprises is still around a decade away. When it arrives, quantum computing will add a powerful weapon to threat actors’ arsenals: the ability to break current encryption methods. Atos' current investment in quantum computing, and specifically its quantum computing emulator, will help organizations develop and test today the quantum applications and algorithms of tomorrow.

]]>
<![CDATA[IBM and the Road to Cognitive Security]]> Every day, the sheer amount and complexity of cybersecurity information that security analysts must sift through increases. Analysts arrive at the start of each day, catch up on recent attacks and cybersecurity news, and are thrown straight into analyzing security incidents: going down the rabbit hole of reviewing data flows, finding outliers, investigating IPs, searching through internal and external structured and unstructured data sources, etc. All this work compounds the skills shortage the cybersecurity services industry faces and the fatigue that security analysts endure.

Enter IBM’s Cognitive Security.

IBM has been training its cognitive analytics solution, Watson, to understand the ‘language’ of cybersecurity.

Watson ingests data from a massive range of material: structured data from the likes of its X-Force Exchange and partners every five minutes; unstructured data crawled from sources such as security blogs every hour; and millions of documents pulled from across the web every 1-3 days. The data is then filtered using machine learning to remove unnecessary information, after which Watson extracts the relevant information and annotates it for security analysts.
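
As a toy illustration of this extraction and annotation step (not Watson’s actual pipeline), indicators of compromise can be pulled out of unstructured text, such as a security blog post, with simple pattern matching:

    # Toy indicator extraction from unstructured text (not Watson's
    # actual pipeline): find IPs, CVE IDs, and MD5 hashes and group
    # them by type for an analyst.
    import re

    PATTERNS = {
        "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
        "cve": re.compile(r"\bCVE-\d{4}-\d{4,}\b"),
        "md5": re.compile(r"\b[a-fA-F0-9]{32}\b"),
    }

    def annotate(text):
        return {kind: pattern.findall(text) for kind, pattern in PATTERNS.items()}

    post = ("The dropper (md5 d41d8cd98f00b204e9800998ecf8427e) beacons "
            "to 203.0.113.9 and exploits CVE-2017-0144.")
    print(annotate(post))
    # {'ipv4': ['203.0.113.9'], 'cve': ['CVE-2017-0144'],
    #  'md5': ['d41d8cd98f00b204e9800998ecf8427e']}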

Building up its knowledge corpus, Watson is able to automate the analysts’ work of searching for data, linking into QRadar. Security analysts using QRadar are provided with a set of insights on a threat, and at the press of a single button can view a knowledge graph detailing the relationships between devices, threats, files, IPs, etc., and then dive into more detail.
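
A knowledge graph of this kind can be pictured as entities linked by labeled relationships. The hypothetical sketch below shows how everything connected to an initial indicator might be gathered with a simple graph walk:

    # Hypothetical threat knowledge graph: entities as nodes, labeled
    # edges as relationships, and a depth-first walk that gathers
    # everything reachable from an initial indicator.
    GRAPH = {
        "ip:203.0.113.9": [("resolves_from", "domain:evil.example")],
        "domain:evil.example": [("serves", "file:dropper.exe")],
        "file:dropper.exe": [("exploits", "cve:CVE-2017-0144"),
                             ("seen_on", "device:laptop-0042")],
    }

    def related(entity, seen=None):
        """Yield every (relation, node) reachable from the entity."""
        seen = seen if seen is not None else set()
        for relation, node in GRAPH.get(entity, []):
            if node not in seen:
                seen.add(node)
                yield relation, node
                yield from related(node, seen)

    for relation, node in related("ip:203.0.113.9"):
        print(relation, node)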

Watson’s knowledge graph and threat report

IBM has stated that the use of this cognitive power can speed up the analysis of a security threat from one hour to less than one minute, while surfacing more insights than an analyst could ever find manually.

Security analysts can rate the analytics performed by Watson, and with IBM increasing the amount of information ingested into the knowledge corpus, the quality of the insights provided is improving fast. Initial feedback from the Watson for Cybersecurity beta program, set up with 40 clients in December 2016, wasn’t completely positive, but with bi-weekly feedback calls the quality of Watson’s results improved rapidly.

By shifting analysts’ focus away from searching through multiple data sources, these cognitive solutions are reducing the time spent on L1 and L2 activities: not only is there a shortage of cyber analysts, but the deployment of Watson for Cybersecurity also takes much of the grunt work out of the job.

The difference in analyst time spent using traditional research and with Watson

Where else is IBM taking cognitive security?

By teaching Watson the ‘language’ of security, IBM has built two solutions that help its cyber analysts and clients interact.

The first is Security Services Virtual Analyst, launched in October 2016, which acts as a chatbot in the client’s MSS portal. The chatbot answers common questions from clients up to 50 times faster than waiting for a security analyst.

The second, Project Havyn, allows security analysts to talk directly to QRadar to perform actions more efficiently.

IBM has recently linked QRadar Advisor with Watson to Resilient Systems, the incident response solutions vendor it acquired in March 2016. With this, the Watson QRadar app can send threat information directly to Resilient’s QRadar app, giving Resilient the best information on which to act to stop the attack. In future, it is not beyond the realm of possibility for vendors to implement a cognitive solution that can both analyze threats and immediately perform remediation actions such as patching vulnerabilities.

With the use of IoT about to accelerate, and the increasing complexity and scale of cyberattacks already apparent, the importance of the use of these cognitive technologies in cybersecurity should not be underestimated.

]]>
<![CDATA[WannaCry and the Need for IT Spend on Cyber]]> Last Friday morning, the largest ransomware cyberattack to date infected an unprecedented number of machines across organizations worldwide. The ransomware, named WannaCry, demanded $300 in Bitcoin within three days, after which the ransom would double; if no payment was made after seven days, data would be deleted forever.

WannaCry ransom message

Each time the ransomware infected a new computer, it tried to connect to a domain; if it could not reach that domain, WannaCry continued to spread. To propagate, WannaCry utilized an exploit known as EternalBlue to identify and abuse file-sharing protocols on infected systems. EternalBlue is a hacking tool developed by the NSA, stolen by a group called the Shadow Brokers, and dumped online in April.

The vulnerability had been patched by Microsoft in its MS17-010 security bulletin, with the update pushed live to its current operating systems. Unfortunately for some, operating systems that had reached end-of-life before the attack, namely Windows XP, initially had no patch available, as Microsoft usually charges to provide custom support agreements for old versions of Windows.

Organizations hit that had in part remained on XP include Telefónica, the NHS, FedEx, Renault, and police forces and petrol stations in China.

Standard advice for responding to ransomware is to:

  • Isolate the system
  • Power off
  • Secure any data backups
  • Contact law enforcement
  • Change account passwords.

The FBI had previously released a controversial statement saying that it often advises people to pay the ransom, though it also notes that paying is no guarantee of getting data back.

The trouble with any advice to pay WannaCry lies in its mechanism for releasing infected systems: WannaCry has no process to uniquely identify which infected machines have paid the ransom, so the likelihood of the attacker releasing any infected machine is low. Nevertheless, the hackers’ bitcoin wallets have received more than 230 payments, totaling more than $65k.

So what can be done about WannaCry and other similar ransomware?

After two days, Microsoft released a patch for Windows XP to fix the vulnerability.

Before this, the attack had been slowed by a security researcher who analyzed WannaCry’s code and spotted the kill-switch domain it had been attempting to connect to. By registering this domain (for less than $11), the researcher ensured that newly infected systems reached the kill switch and did not go on to spread the ransomware. However, a second version of WannaCry has since been released without this kill switch.
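
For illustration (WannaCry itself was a Windows binary, not Python, and the domain below is a placeholder rather than the real one), the kill-switch logic amounts to a simple reachability check, which is why registering the domain halted newly infected machines:

    # Defensive illustration of the kill-switch pattern: if the
    # hard-coded domain resolves, the malware halts; registering the
    # domain therefore stopped newly infected machines from spreading.
    import socket

    KILL_SWITCH_DOMAIN = "example-killswitch-domain.test"  # placeholder

    def kill_switch_active() -> bool:
        try:
            socket.gethostbyname(KILL_SWITCH_DOMAIN)
            return True   # domain resolves: halt
        except socket.gaierror:
            return False  # unreachable: WannaCry continued to spread

    if kill_switch_active():
        raise SystemExit("kill switch engaged - doing nothing")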

Organizations should be looking towards their IT service providers to mitigate the threat.

In the immediate term, clients should look to their service providers to download and apply all applicable OS security patches and antivirus updates, and to establish what data their DR systems can restore.

Moving forwards, organizations should be looking with their IT service providers at:

  • Performing a cybersecurity vulnerability analysis to assess the current state of affairs, discover the organization's crown jewels, and close vulnerabilities
  • Developing business continuity plans to ensure that, if/when a cyberattack occurs, the organization knows how to react and reduce the impact
  • Developing cybersecurity training programs to reduce the chance staff will download malware/ransomware.

As part of a wider conversation, an enterprise with business-critical infrastructure that remains on outdated OSs should be looking at how these systems can be secured. They could be upgraded to more current OSs or, where legacy processes or applications prevent this, protected by other methods such as air-gapping the infrastructure or even paying for Microsoft’s extended service agreement.

In the case of the NHS, at end-2016 an incredible 90% of NHS Trusts were still using Windows XP in some capacity – yet last year the U.K. Government Digital Service decided not to extend a £5.5m one-year support deal that would have included security patches. We imagine there are some red faces at GDS: the decision not to extend the deal has had a huge impact on parts of the NHS, in some areas causing delays to the delivery of life-saving services. There are clearly lessons to be learned, in both the public and private sectors, about managing old estates.

]]>
<![CDATA[SOC 3.0 and Proactive Security Management: the HP Aspiration]]> NelsonHall recently attended HP’s security analyst day in London. The session provided a deep dive into HP's threat intelligence and its application across HP’s security product line.

Concerns about security issues are expanding beyond CSOs/CISOs to the rest of the C-suite, even commanding the attention of CEOs. HP highlighted:

  • That conversations with clients now focus primarily on the business issues of security, questioning the increasing cost of security versus the level of protection delivered
  • The increasing complexity, difficulty, and cost of resolving threats.

The increased importance of IT security is a consequence of:

  • Attacks on organizations becoming more deadly (recent examples include Target’s CEO being removed after malware was found to have stolen details of 40m customer credit cards, and eBay, where personal information was stolen for 233m customers)
  • The transformation of IT infrastructures to cloud and mobile devices
  • Needing to comply with increasing regulations (SOX, Basel III, GLBA, PCI etc.).

To illustrate the increasing attention being paid to cybersecurity: following the recent attack in which customer contact information was taken from 76m households and 7m small businesses, JP Morgan’s CEO stated that the bank will likely double its level of cybersecurity spend within the next five years.

HP highlighted some innovation it is looking to apply to security operations centers (SOCs). HP described three levels of SOC:

  • SOC 1.0, ‘Secure the Perimeter’: base level of security analytics currently employed today by most MSSP vendors
  • SOC 2.0, ‘Secure the Application’: HP detailed the use of monitoring DNS records within security information and event management (SIEM). Monitoring the DNS gives a much higher number of events than the classic model (21bn vs 4.5bn within HP alone); it also gives deeper insight into application security. Currently in an internal beta phase at HP, 25% of the malware found so far is new and had not been detected by traditional methods. HP also detailed a case in which this style of DNS records search was used for an external client, using historic logs to capture a number of previously unknown vulnerabilities (a sketch of one common DNS-mining technique follows this list)
  • SOC 3.0, ‘Secure the Business’: the aspirational SOC 3.0 uses predictive analytics and HP’s threat database to identify the types of threat a client experiences, then proactively works to reduce their number.
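
HP did not detail its DNS analytics, but one common technique for mining DNS logs, sketched below, is to flag algorithmically generated (DGA-like) domains by their unusually high character entropy; a real SIEM would combine this with query volumes, NXDOMAIN rates, and threat intelligence:

    # Simplified DNS-log mining sketch: flag domains whose first label
    # has high character entropy, a rough marker of machine-generated
    # (DGA-style) names used by malware for command-and-control.
    import math
    from collections import Counter

    def entropy(s: str) -> float:
        counts = Counter(s)
        return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())

    def looks_generated(domain: str, threshold: float = 3.5) -> bool:
        label = domain.split(".")[0]  # score only the first label
        return entropy(label) > threshold

    for d in ["research.nelson-hall.com", "xj4hq9tkz2vbp8wm.info"]:
        print(d, looks_generated(d))  # False, True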

HP describes its internal SOC as currently at level 1.5, as the monitoring of DNS records has not yet been rolled out across the company. Reaching level 3.0 – which is about proactive security management – will be a multi-year journey (around five years?) requiring a more sizeable threat database and a large set of use cases. HP will roll out its central threat database to more partners, receive information from as many clients as possible, then utilize big data analytics to discover trends in the billions of events monitored. And of course, the imminent break-up of HP Group into HP Enterprise and HP Inc. will add to the complexity of servicing both new HP companies.

(NelsonHall will be publishing a market assessment in managed security services in Q4, along with detailed vendor profiles on selected key vendors, including HP)

]]>